Patents by Inventor Yu Lou

Yu Lou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200334882
    Abstract: Approaches in accordance with various embodiments provide for the presentation of augmented reality (AR) content with respect to optically challenging surfaces. Such surfaces can be difficult to locate using conventional optical-based approaches that rely on visible features. Embodiments can utilize the fact that horizontal surfaces can be located relatively easily, and can determine intersections or boundaries of those horizontal surfaces that likely indicate the presence of another surface, such as a vertical wall. This boundary can be determined automatically, through user input, or using a combination of such approaches. Once such an intersection is located, a virtual plane can be determined whose location relative to a device displaying AR content can be tracked and used as a reference for displaying that content.
    Type: Application
    Filed: June 17, 2020
    Publication date: October 22, 2020
    Inventors: Jesse Chang, Jared Corso, Xing Zhang, Arnab Sanat Kumar Dhua, Yu Lou, Jason Freund
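    A minimal sketch of the idea described in the abstract above: given the boundary of an easily detected horizontal surface and the gravity direction, construct a vertical reference plane and track a device's position relative to it. The function names, the two-point boundary representation, and the NumPy dependency are illustrative assumptions, not details from the filing.

    ```python
    import numpy as np

    def vertical_plane_from_boundary(p0, p1, up=(0.0, 1.0, 0.0)):
        """Return (point, unit normal) of a vertical plane through the boundary segment p0 -> p1."""
        p0, p1, up = np.asarray(p0, float), np.asarray(p1, float), np.asarray(up, float)
        edge = p1 - p0
        normal = np.cross(edge, up)  # horizontal normal => the plane stands vertically
        norm = np.linalg.norm(normal)
        if norm == 0:
            raise ValueError("boundary segment is parallel to the up direction")
        return p0, normal / norm

    def signed_distance(device_position, plane_point, plane_normal):
        """Signed distance of the AR device from the virtual plane; usable as a
        tracking reference for anchoring content on an optically challenging wall."""
        return float(np.dot(np.asarray(device_position, float) - plane_point, plane_normal))
    ```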
  • Patent number: 10755485
    Abstract: Systems and methods for displaying 3D containers in a computer-generated environment are described. A computing device may provide a user with a catalog of objects which may be purchased. In order to view what an object may look like prior to purchasing the object, the computing device may show a 3D container that has the same dimensions as the object. As discussed herein, the 3D container may be located and oriented based on a two-dimensional marker. Moreover, some 3D containers may contain a representation of an object, which may be a 2D image of the object.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: August 25, 2020
    Assignee: A9.com, Inc.
    Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Yu Lou, Chun-Kai Wang, Sudeshna Pantham, Himanshu Arora, Xi Zhang
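    A hedged sketch of the container idea above: a box with the catalog object's dimensions, positioned and oriented in world space from the pose of a detected 2D marker. The `Container3D` class and the 4x4 marker pose are illustrative assumptions, not the patented implementation.

    ```python
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Container3D:
        width: float   # x extent, in the catalog object's units (e.g. metres)
        height: float  # y extent
        depth: float   # z extent

        def corners_local(self) -> np.ndarray:
            """The 8 corners of the box in its own coordinate frame."""
            w, h, d = self.width, self.height, self.depth
            return np.array([(x, y, z) for x in (0, w) for y in (0, h) for z in (0, d)], dtype=float)

    def place_on_marker(container: Container3D, marker_pose: np.ndarray) -> np.ndarray:
        """marker_pose is a 4x4 homogeneous transform of the detected 2D marker in
        world coordinates; returns the container's corners in world space."""
        corners = np.hstack([container.corners_local(), np.ones((8, 1))])
        return (marker_pose @ corners.T).T[:, :3]
    ```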
  • Publication number: 20200258144
    Abstract: A camera is used to capture image data of representations of a physical environment. Planes and surfaces are determined from a representation. The planes and the surfaces are analyzed using the relationships between them to obtain shapes and depth information for available spaces within the physical environment. Locations of the camera with respect to the physical environment are determined. The shapes and the depth information are analyzed using a trained neural network to determine items fitting the available spaces. A live camera view is overlaid with a selection from the items to provide an augmented reality (AR) view of the physical environment from an individual location of the locations. The AR view is enabled so that a user can port to a location different from the individual location via an input received through the AR view, while the selection from the items remains anchored to the individual location.
    Type: Application
    Filed: February 11, 2019
    Publication date: August 13, 2020
    Inventors: Rupa Chaturvedi, Xing Zhang, Frank Partalis, Yu Lou, Colin Jon Taylor, Simon Fox
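    The abstract above relies on a trained neural network to decide which items fit the available spaces; as a hedged stand-in, the sketch below shows only the simpler geometric filter one might run on bounding boxes before (or instead of) such a model. All names and fields are hypothetical.

    ```python
    from dataclasses import dataclass
    from itertools import permutations

    @dataclass
    class Box:
        w: float
        h: float
        d: float

    def fits(item: Box, space: Box, allow_rotation: bool = True) -> bool:
        """True if the item's bounding box fits inside the available space,
        optionally trying the six axis-aligned orientations."""
        orientations = permutations((item.w, item.h, item.d)) if allow_rotation \
            else [(item.w, item.h, item.d)]
        return any(w <= space.w and h <= space.h and d <= space.d
                   for w, h, d in orientations)

    # e.g. fits(Box(0.8, 1.1, 0.4), Box(1.0, 2.4, 0.5)) -> True
    ```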
  • Patent number: 10733801
    Abstract: Systems and methods for a markerless approach to displaying an image of a virtual object in an environment are described. A computing device is used to capture an image of a real-world environment, for example one including a feature-rich planar surface. One or more virtual objects which do not exist in the real-world environment are displayed in the image, such as by being positioned so that they appear to be resting on the planar surface, based at least on a sensor bias value and scale information obtained by capturing multiple image views of the real-world environment.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: August 4, 2020
    Assignee: A9.com, Inc.
    Inventors: Nicholas Corso, Michael Patrick Cutter, Yu Lou, Sean Niu, Shaun Michael Post, Colin Jon Taylor, Mark Scott Waldo
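    A hedged sketch of one way the sensor bias value and multi-view scale information mentioned above could combine: compare the bias-corrected inertial displacement between two views with the up-to-scale visual translation to recover metric scale. The bias model and function signature are assumptions for illustration, not the patented method.

    ```python
    import numpy as np

    def estimate_metric_scale(raw_imu_displacement, accel_bias, dt, visual_translation):
        """raw_imu_displacement: device translation from double-integrating raw
        acceleration over the interval dt between the two views; accel_bias: the
        estimated constant accelerometer bias; visual_translation: the camera
        translation between the two views, known only up to scale."""
        # A constant bias b integrated twice over dt contributes 0.5 * b * dt^2.
        corrected = np.asarray(raw_imu_displacement, float) - 0.5 * np.asarray(accel_bias, float) * dt ** 2
        return float(np.linalg.norm(corrected) / np.linalg.norm(visual_translation))
    ```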
  • Patent number: 10726597
    Abstract: Approaches in accordance with various embodiments provide for the presentation of augmented reality (AR) content with respect to optically challenging surfaces. Such surfaces can be difficult to locate using conventional optical-based approaches that rely on visible features. Embodiments can utilize the fact that horizontal surfaces can be located relatively easily, and can determine intersections or boundaries of those horizontal surfaces that likely indicate the presence of another surface, such as a vertical wall. This boundary can be determined automatically, through user input, or using a combination of such approaches. Once such an intersection is located, a virtual plane can be determined whose location relative to a device displaying AR content can be tracked and used as a reference for displaying that content.
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: July 28, 2020
    Assignee: A9.com, Inc.
    Inventors: Jesse Chang, Jared Corso, Xing Zhang, Arnab Sanat Kumar Dhua, Yu Lou, Jason Freund
  • Publication number: 20200160058
    Abstract: Various embodiments of the present disclosure provide systems and methods for visual search and augmented reality, in which an onscreen body of visual markers overlaid on the interface signals the current state of an image recognition process. Specifically, the body of visual markers may take on a plurality of behaviors, in which a particular behavior is indicative of a particular state. Thus, the user can tell what the current state of the scanning process is by the behavior of the body of visual markers. The behavior of the body of visual markers may also indicate to the user recommended actions that can be taken to improve the scanning condition or otherwise facilitate the process. In various embodiments, as the scanning process goes from one state to another state, the onscreen body of visual markers may move or seamlessly transition from one behavior to another accordingly.
    Type: Application
    Filed: January 27, 2020
    Publication date: May 21, 2020
    Inventors: Peiqi Tang, Andrea Zehr, Rupa Chaturvedi, Yu Lou, Colin Jon Taylor, Mark Scott Waldo, Shaun Michael Post
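    A small, hedged sketch of the state-to-behavior mapping the abstract above describes: each recognition state drives a distinct behavior of the onscreen body of visual markers. The specific states and behaviors listed here are illustrative guesses, not taken from the filing.

    ```python
    from enum import Enum, auto

    class ScanState(Enum):
        SEARCHING = auto()      # no usable features found yet
        LOW_LIGHT = auto()      # a condition the user can improve
        PROCESSING = auto()     # candidate object found, recognition running
        RECOGNIZED = auto()     # result available

    MARKER_BEHAVIOR = {
        ScanState.SEARCHING:  "drift slowly across the screen",
        ScanState.LOW_LIGHT:  "dim and pulse to prompt better lighting",
        ScanState.PROCESSING: "converge on the candidate object",
        ScanState.RECOGNIZED: "form an outline around the result",
    }

    def behavior_for(state: ScanState) -> str:
        """Pick the marker behavior that signals the current recognition state."""
        return MARKER_BEHAVIOR[state]
    ```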
  • Publication number: 20200068132
    Abstract: Various embodiments enable a computing device to perform tasks such as processing an image to recognize text or an object in the image in order to identify a particular product or related products associated with that text or object. In response to recognizing the text or the object as being associated with a product available for purchase from an electronic marketplace, one or more advertisements or product listings associated with the product can be displayed to the user. Accordingly, additional information for the associated product can be displayed, enabling the user to learn more about and purchase the product from the electronic marketplace through the computing device.
    Type: Application
    Filed: November 4, 2019
    Publication date: February 27, 2020
    Inventors: Xiaofan Lin, Arnab Sanat Kumar Dhua, Douglas Ryan Gray, Atul Kumar, Yu Lou
  • Patent number: 10558857
    Abstract: Various embodiments of the present disclosure provide systems and methods for visual search and augmented reality, in which an onscreen body of visual markers overlaid on the interface signals the current state of an image recognition process. Specifically, the body of visual markers may take on a plurality of behaviors, in which a particular behavior is indicative of a particular state. Thus, the user can tell what the current state of the scanning process is by the behavior of the body of visual markers. The behavior of the body of visual markers may also indicate to the user recommended actions that can be taken to improve the scanning condition or otherwise facilitate the process. In various embodiments, as the scanning process goes from one state to another state, the onscreen body of visual markers may move or seamlessly transition from one behavior to another accordingly.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: February 11, 2020
    Assignee: A9.com, Inc.
    Inventors: Peiqi Tang, Andrea Zehr, Rupa Chaturvedi, Yu Lou, Colin Jon Taylor, Mark Scott Waldo, Shaun Michael Post
  • Patent number: 10506168
    Abstract: Various embodiments enable a computing device to perform tasks such as processing an image to recognize text or an object in the image in order to identify a particular product or related products associated with that text or object. In response to recognizing the text or the object as being associated with a product available for purchase from an electronic marketplace, one or more advertisements or product listings associated with the product can be displayed to the user. Accordingly, additional information for the associated product can be displayed, enabling the user to learn more about and purchase the product from the electronic marketplace through the computing device.
    Type: Grant
    Filed: April 5, 2017
    Date of Patent: December 10, 2019
    Assignee: A9.com, Inc.
    Inventors: Xiaofan Lin, Arnab Sanat Kumar Dhua, Douglas Ryan Gray, Atul Kumar, Yu Lou
  • Publication number: 20190272425
    Abstract: Various embodiments of the present disclosure provide systems and methods for visual search and augmented reality, in which an onscreen body of visual markers overlaid on the interface signals the current state of an image recognition process. Specifically, the body of visual markers may take on a plurality of behaviors, in which a particular behavior is indicative of a particular state. Thus, the user can tell what the current state of the scanning process is by the behavior of the body of visual markers. The behavior of the body of visual markers may also indicate to the user recommended actions that can be taken to improve the scanning condition or otherwise facilitate the process. In various embodiments, as the scanning process goes from one state to another state, the onscreen body of visual markers may move or seamlessly transition from one behavior to another accordingly.
    Type: Application
    Filed: March 5, 2018
    Publication date: September 5, 2019
    Inventors: Peiqi Tang, Andrea Zehr, Rupa Chaturvedi, Yu Lou, Colin Jon Taylor, Mark Scott Waldo, Shaun Michael Post
  • Publication number: 20190236846
    Abstract: Systems and methods for a markerless approach to displaying an image of a virtual object in an environment are described. A computing device is used to capture an image of a real-world environment, for example one including a feature-rich planar surface. One or more virtual objects which do not exist in the real-world environment are displayed in the image, such as by being positioned so that they appear to be resting on the planar surface, based at least on a sensor bias value and scale information obtained by capturing multiple image views of the real-world environment.
    Type: Application
    Filed: April 15, 2019
    Publication date: August 1, 2019
    Inventors: Nicholas Corso, Michael Patrick Cutter, Yu Lou, Sean Niu, Shaun Michael Post, Colin Jon Taylor, Mark Scott Waldo
  • Patent number: 10339714
    Abstract: Systems and methods for a markerless approach to displaying an image of a virtual object in an environment are described. A computing device is used to capture an image of a real-world environment, for example one including a feature-rich planar surface. One or more virtual objects which do not exist in the real-world environment are displayed in the image, such as by being positioned so that they appear to be resting on the planar surface, based at least on a sensor bias value and scale information obtained by capturing multiple image views of the real-world environment.
    Type: Grant
    Filed: May 9, 2017
    Date of Patent: July 2, 2019
    Assignee: A9.com, Inc.
    Inventors: Nicholas Corso, Michael Patrick Cutter, Yu Lou, Sean Niu, Shaun Michael Post, Colin Jon Taylor, Mark Scott Waldo
  • Publication number: 20190156585
    Abstract: Systems and methods for displaying 3D containers in a computer-generated environment are described. A computing device may provide a user with a catalog of objects which may be purchased. In order to view what an object may look like prior to purchasing the object, the computing device may show a 3D container that has the same dimensions as the object. As discussed herein, the 3D container may be located and oriented based on a two-dimensional marker. Moreover, some 3D containers may contain a representation of an object, which may be a 2D image of the object.
    Type: Application
    Filed: January 28, 2019
    Publication date: May 23, 2019
    Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Yu Lou, Chun-Kai Wang, Sudeshna Pantham, Himanshu Arora, Xi Zhang
  • Patent number: 10192364
    Abstract: Systems and methods for displaying 3D containers in a computer-generated environment are described. A computing device may provide a user with a catalog of objects which may be purchased. In order to view what an object may look like prior to purchasing the object, the computing device may show a 3D container that has the same dimensions as the object. As discussed herein, the 3D container may be located and oriented based on a two-dimensional marker. Moreover, some 3D containers may contain a representation of an object, which may be a 2D image of the object.
    Type: Grant
    Filed: July 27, 2017
    Date of Patent: January 29, 2019
    Assignee: A9.com, Inc.
    Inventors: David Creighton Mott, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Yu Lou, Chun-Kai Wang, Sudeshna Pantham, Himanshu Arora, Xi Zhang
  • Patent number: 10169629
    Abstract: Various algorithms are presented that enable an image of a data matrix to be analyzed and decoded for use in obtaining information about an object or item associated with the data matrix. The algorithms can account for variations in position and/or alignment of the data matrix. In one approach, the image is analyzed to determine a connected region of pixels. The connected region can be analyzed to determine the pair of pixels within it that are separated by the greatest distance, where each pixel of the pair is associated with image coordinates. Using the image coordinates of the pair of pixels, a potential area of the image that includes the visual code can be determined, and that area can be analyzed to verify the presence of a potential data matrix.
    Type: Grant
    Filed: August 3, 2017
    Date of Patent: January 1, 2019
    Assignee: A9.com, Inc.
    Inventors: Chun-Kai Wang, Yu Lou
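    A hedged sketch of the farthest-pixel-pair step in the abstract above, using SciPy connected-component labelling as a stand-in for whatever implementation the patent contemplates; the brute-force distance search is only reasonable for the small regions a code typically occupies.

    ```python
    import numpy as np
    from scipy import ndimage

    def farthest_pixel_pair(binary_image):
        """Label connected foreground regions, take the largest, and return the two
        pixel coordinates in it that are separated by the greatest distance."""
        labels, n = ndimage.label(binary_image)
        if n == 0:
            return None
        sizes = ndimage.sum(binary_image, labels, range(1, n + 1))
        largest = int(np.argmax(sizes)) + 1
        ys, xs = np.nonzero(labels == largest)
        pts = np.stack([xs, ys], axis=1)
        # Brute-force O(k^2) pairwise distances over the region's pixels.
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        i, j = np.unravel_index(np.argmax(d), d.shape)
        return tuple(pts[i]), tuple(pts[j])  # candidate diagonal of the data matrix area
    ```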
  • Publication number: 20180330544
    Abstract: Systems and methods for a markerless approach to displaying an image of a virtual object in an environment are described. A computing device is used to capture an image of a real-world environment, for example one including a feature-rich planar surface. One or more virtual objects which do not exist in the real-world environment are displayed in the image, such as by being positioned so that they appear to be resting on the planar surface, based at least on a sensor bias value and scale information obtained by capturing multiple image views of the real-world environment.
    Type: Application
    Filed: May 9, 2017
    Publication date: November 15, 2018
    Inventors: Nicholas Corso, Michael Patrick Cutter, Yu Lou, Sean Niu, Shaun Michael Post, Colin Jon Taylor, Mark Scott Waldo
  • Patent number: 10038839
    Abstract: Various approaches provide for detecting and recognizing text to enable a user to perform various functions or tasks. For example, a user could point a camera at an object with text in order to capture an image of that object. The camera can be integrated with a portable computing device capable of capturing the image and processing it (or providing it for processing) to recognize, identify, and/or isolate the text, so that the image of the object as well as the recognized text can be sent to an application, function, or system, such as an electronic marketplace.
    Type: Grant
    Filed: June 1, 2017
    Date of Patent: July 31, 2018
    Assignee: A9.com, Inc.
    Inventors: Adam Wiggen Kraft, Kathy Wing Lam Ma, Xiaofan Lin, Arnab Sanat Kumar Dhua, Yu Lou
  • Patent number: 10013624
    Abstract: Various embodiments enable the identification of semi-structured text entities in an image. The identification of the text entities is a relatively simple problem when the text is stored in a computer and free of errors, but much more challenging if the source is the output of an optical character recognition (OCR) engine applied to a natural scene image. Accordingly, output from an OCR engine is analyzed to isolate a character string indicative of a text entity. Each character of the string is then assigned to a character class to produce a character class string, and the text entity of the string is identified based in part on a pattern of the character class string.
    Type: Grant
    Filed: December 16, 2015
    Date of Patent: July 3, 2018
    Assignee: A9.com, Inc.
    Inventors: Douglas Ryan Gray, Xiaofan Lin, Arnab Sanat Kumar Dhua, Yu Lou
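    A hedged illustration of the character-class idea above: map each character of an isolated OCR string to a coarse class, then match the resulting class string against per-entity patterns. The classes and patterns below are assumptions made for the sketch, not the classes or patterns of the patent.

    ```python
    import re

    def to_class_string(text: str) -> str:
        """Map each character to a coarse class: D digit, A letter, _ space, S other."""
        out = []
        for ch in text:
            if ch.isdigit():
                out.append("D")
            elif ch.isalpha():
                out.append("A")
            elif ch.isspace():
                out.append("_")
            else:
                out.append("S")
        return "".join(out)

    # Example patterns over class strings (illustrative only).
    PATTERNS = {
        "us_phone": re.compile(r"^D{3}SD{3}SD{4}$"),  # e.g. 555-123-4567
        "price":    re.compile(r"^SD+(SD{2})?$"),      # e.g. $19.99
    }

    def classify(text: str):
        """Return the first entity type whose pattern matches the class string, if any."""
        cls = to_class_string(text)
        return next((name for name, pat in PATTERNS.items() if pat.match(cls)), None)
    ```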
  • Patent number: 9934526
    Abstract: Various embodiments enable a process to automatically attempt to select, from an image frame, the words most relevant to products available for purchase from an electronic marketplace. For example, an image frame containing text can be obtained and analyzed with an optical character recognition (OCR) engine. The recognized words can then be preprocessed using various filtering and scoring techniques to narrow down a volume of text to a few relevant query terms. These query terms can then be sent to a search engine associated with the electronic marketplace to return relevant products to a user.
    Type: Grant
    Filed: June 27, 2013
    Date of Patent: April 3, 2018
    Assignee: A9.com, Inc.
    Inventors: Arnab Sanat Kumar Dhua, Douglas Ryan Gray, Xiaofan Lin, Yu Lou, Adam Wiggen Kraft, Sunil Ramesh
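    A hedged sketch of the filtering-and-scoring step described above, narrowing OCR output to a few query terms. The thresholds and scoring heuristic are illustrative assumptions, not the patented method.

    ```python
    from collections import Counter

    STOPWORDS = {"the", "and", "for", "with", "from", "this", "that"}

    def select_query_terms(ocr_words, confidences, top_k=3):
        """ocr_words: recognized words from the frame; confidences: per-word OCR
        confidence in [0, 1]. Returns up to top_k candidate query terms."""
        counts = Counter(w.lower() for w in ocr_words)
        scored = []
        for word, conf in zip(ocr_words, confidences):
            w = word.lower()
            if len(w) < 3 or w in STOPWORDS or conf < 0.6:
                continue  # filter: short tokens, stopwords, low-confidence OCR
            score = conf * len(w) * counts[w]  # favour confident, longer, repeated words
            scored.append((score, w))
        seen, terms = set(), []
        for _, w in sorted(scored, reverse=True):
            if w not in seen:
                seen.add(w)
                terms.append(w)
        return terms[:top_k]
    ```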
  • Patent number: 9922431
    Abstract: Approaches are described for rendering augmented reality overlays on an interface displaying the active field of view of a camera. The interface can display to a user an image or video, for example, and the overlay can be rendered over, near, or otherwise positioned with respect to any text or other such elements represented in the image. The overlay can have at least one function or item of information associated with it, and when an input associated with the overlay is selected, the function can be performed (or caused to be performed) by the portable computing device.
    Type: Grant
    Filed: September 10, 2015
    Date of Patent: March 20, 2018
    Assignee: A9.com, Inc.
    Inventors: Douglas R. Gray, Arnab S. Dhua, Yu Lou, Sunil Ramesh