Patents by Inventor Peiqi Tang

Peiqi Tang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240403772
    Abstract: A method includes assessing a semantic-based query for a user that includes user goals and assessing probability values and first goal probability values, both of which are associated with active digital actions. The method includes generating a decision engine to determine a user friction value and second goal probability values associated with the user goals using the first goal probability values and the probability values. Further, the method includes determining the user friction value and the second goal probability values using the first goal probability values and the probability values. Moreover, the method includes determining a plan of digital actions based on the user friction value, the second goal probability values, and the user goals. Furthermore, the method includes, in response to determining the user friction value exceeds a predetermined threshold, generating a query to adjust the active digital actions based on the semantic-based query for the user.
    Type: Application
    Filed: April 26, 2024
    Publication date: December 5, 2024
    Inventors: Benjamin Lafreniere, Peiqi Tang, Kashyap Todi, Tanya Renee Jonker, David Owen Driver
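    The patent itself discloses no formulas, but the flow the abstract describes — derive second goal probabilities and a friction value from the first goal probabilities and action probabilities, then either plan actions or generate an adjustment query — can be sketched as follows. This is a minimal illustrative sketch; the function names, the weighting scheme, and the friction measure are all hypothetical assumptions, not the claimed method.

    ```python
    def second_goal_probs(first_goal_probs, action_probs):
        # Hypothetical: weight each goal probability by the matching
        # active-action probability, then renormalize to sum to 1.
        weighted = [g * a for g, a in zip(first_goal_probs, action_probs)]
        total = sum(weighted)
        return [w / total for w in weighted] if total else first_goal_probs

    def user_friction(first_goal_probs, second):
        # Hypothetical friction measure: total probability mass shifted
        # away from the user's stated goals by the active actions.
        return sum(abs(s - f) for f, s in zip(first_goal_probs, second))

    def plan_actions(actions, second, friction, threshold=0.5):
        # If friction exceeds the predetermined threshold, ask to adjust
        # the active digital actions instead of committing to a plan.
        if friction > threshold:
            return {"adjust_query": True, "actions": []}
        ranked = sorted(zip(actions, second), key=lambda p: p[1], reverse=True)
        return {"adjust_query": False, "actions": [a for a, _ in ranked]}
    ```

    Under these assumptions, actions that conflict with the user's goals drive friction up until the system asks for an adjustment rather than acting.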
  • Patent number: 12125126
    Abstract: In particular embodiments, a computing system may receive an image comprising one or more virtual elements associated with a virtual environment and one or more real-world elements associated with a real-world environment. The system may determine a first metric and a second metric indicative of a measure of clutter in the virtual environment and the real-world environment, respectively. The system may determine gaze features associated with a user based on a user activity and predict, using a machine learning model, a reaction time of the user based on the gaze features. The system may determine a third metric indicative of the measure of clutter in the image based on the predicted reaction time. The system may compute an overall clutter metric based on the first, second, and third metrics. The system may perform one or more actions to manage the clutter in the image based on the overall clutter metric.
    Type: Grant
    Filed: October 23, 2023
    Date of Patent: October 22, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Naveen Sendhilnathan, Ting Zhang, Sebastian Freitag, Tanya Renee Jonker, Peiqi Tang
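    The combination step the abstract describes — three clutter metrics folded into one overall score — might look like the sketch below. The reaction-time-to-clutter mapping, the baseline, and the equal weights are assumptions for illustration; the patent does not disclose the actual combination function or the ML model that predicts reaction time.

    ```python
    def overall_clutter(virtual_clutter, real_clutter, reaction_time,
                        rt_baseline=0.3, weights=(1/3, 1/3, 1/3)):
        # Hypothetical third metric: a reaction time slower than baseline
        # (seconds) is read as more clutter, clamped to the range [0, 1].
        rt_metric = min(max((reaction_time - rt_baseline) / rt_baseline, 0.0), 1.0)
        w1, w2, w3 = weights
        # Weighted blend of virtual, real-world, and reaction-time metrics.
        return w1 * virtual_clutter + w2 * real_clutter + w3 * rt_metric
    ```

    A system could then compare the overall score against a threshold to decide which clutter-management actions (dimming, hiding, or regrouping elements) to trigger.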
  • Publication number: 20230046155
    Abstract: The disclosed computer-implemented method may include (1) identifying a trigger element within a field of view presented by a display element of an artificial reality device, (2) determining a position of the trigger element within the field of view, (3) selecting a position within the field of view for a virtual widget based on the position of the trigger element, and (4) presenting the virtual widget at the selected position via the display element. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Application
    Filed: May 18, 2022
    Publication date: February 16, 2023
    Inventors: Feiyu Lu, Mark Parent, Hiroshi Horii, Yan Xu, Peiqi Tang
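    The four steps in this abstract — find the trigger element, get its position, choose a widget position from it, present the widget — reduce to a small placement function. The offset-and-clamp strategy below is a hypothetical sketch, not the claimed selection logic; coordinates are assumed to be pixels in the display element's field of view.

    ```python
    def place_widget(trigger_pos, fov, widget_size, offset=(24, 0)):
        # Hypothetical strategy: anchor the widget just beside the trigger
        # element, then clamp so the whole widget stays inside the field of view.
        x = trigger_pos[0] + offset[0]
        y = trigger_pos[1] + offset[1]
        x = min(max(x, 0), fov[0] - widget_size[0])
        y = min(max(y, 0), fov[1] - widget_size[1])
        return (x, y)
    ```

    With a trigger near the display edge, the clamp pulls the widget back into view instead of letting it render partially off-screen.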
  • Patent number: 11403829
    Abstract: Users can view images or renderings of items placed (virtually) within a physical space. For example, a rendering of an item can be placed within a live camera view of the physical space. A snapshot of the physical space can be captured and the snapshot can be customized, shared, etc. The renderings can be represented as two-dimensional images, e.g., virtual stickers or three-dimensional models of the items. Users can have the ability to view different renderings, move those items around, and develop views of the physical space that may be desirable. The renderings can link to products offered through an electronic marketplace and those products can be consumed. Further, collaborative design is enabled through modeling the physical space and enabling users to view and move around the renderings in a virtual view of the physical space.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: August 2, 2022
    Assignee: A9.COM, INC.
    Inventors: Jason Canada, Rupa Chaturvedi, Jared Corso, Michael Patrick Cutter, Sean Niu, Shaun Michael Post, Peiqi Tang, Stefan Vant, Mark Scott Waldo, Andrea Zehr
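    The data model this abstract implies — renderings (2-D stickers or 3-D models) linked to marketplace products, placed and moved within a view of a physical space, with shareable snapshots — can be sketched as below. The class and field names are hypothetical; the patent does not disclose this structure.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Rendering:
        product_id: str   # hypothetical link into an electronic marketplace
        kind: str         # "sticker" (2-D image) or "model" (3-D model)
        position: tuple   # placement within the camera view of the space

    @dataclass
    class SpaceView:
        renderings: list = field(default_factory=list)

        def place(self, rendering):
            self.renderings.append(rendering)

        def move(self, product_id, new_pos):
            # Users can move renderings around to develop desirable views.
            for r in self.renderings:
                if r.product_id == product_id:
                    r.position = new_pos

        def snapshot(self):
            # A shareable snapshot is a frozen copy of the current layout.
            return [(r.product_id, r.kind, r.position) for r in self.renderings]
    ```

    Sharing the snapshot (or the underlying space model) is what would enable the collaborative-design use the abstract mentions: another user loads the same layout and continues moving renderings.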
  • Patent number: 11093748
    Abstract: Various embodiments of the present disclosure provide systems and methods for visual search and augmented reality, in which an onscreen body of visual markers overlaid on the interface signals the current state of an image recognition process. Specifically, the body of visual markers may take on a plurality of behaviors, in which a particular behavior is indicative of a particular state. Thus, the user can tell what the current state of the scanning process is by the behavior of the body of visual markers. The behavior of the body of visual markers may also indicate to the user recommended actions that can be taken to improve the scanning condition or otherwise facilitate the process. In various embodiments, as the scanning process goes from one state to another state, the onscreen body of visual markers may move or seamlessly transition from one behavior to another behavior, accordingly.
    Type: Grant
    Filed: January 27, 2020
    Date of Patent: August 17, 2021
    Assignee: A9.COM, INC.
    Inventors: Peiqi Tang, Andrea Zehr, Rupa Chaturvedi, Yu Lou, Colin Jon Taylor, Mark Scott Waldo, Shaun Michael Post
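    The state-to-behavior mapping this abstract describes is naturally a small state machine: each scanning state selects a marker behavior, and events move the process between states. The states, events, and behavior names below are invented for illustration; the patent does not enumerate them.

    ```python
    # Hypothetical scanning states mapped to marker behaviors.
    BEHAVIORS = {
        "searching": "drift",       # markers drift while features are gathered
        "low_light": "dim_pulse",   # prompts the user to improve the scanning condition
        "recognizing": "converge",  # markers gather on the detected object
        "matched": "outline",       # markers outline the recognized item
    }

    # (current state, event) -> next state
    TRANSITIONS = {
        ("searching", "features_found"): "recognizing",
        ("searching", "too_dark"): "low_light",
        ("low_light", "light_ok"): "searching",
        ("recognizing", "match"): "matched",
        ("recognizing", "lost"): "searching",
    }

    def step(state, event):
        # Unknown events leave the state (and marker behavior) unchanged,
        # so the markers transition seamlessly only on recognized events.
        return TRANSITIONS.get((state, event), state)
    ```

    The "low_light" state shows how a behavior can double as a recommended action: a dim pulse both reports the state and nudges the user to add light.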
  • Publication number: 20210183154
    Abstract: Users can view images or renderings of items placed (virtually) within a physical space. For example, a rendering of an item can be placed within a live camera view of the physical space. A snapshot of the physical space can be captured and the snapshot can be customized, shared, etc. The renderings can be represented as two-dimensional images, e.g., virtual stickers or three-dimensional models of the items. Users can have the ability to view different renderings, move those items around, and develop views of the physical space that may be desirable. The renderings can link to products offered through an electronic marketplace and those products can be consumed. Further, collaborative design is enabled through modeling the physical space and enabling users to view and move around the renderings in a virtual view of the physical space.
    Type: Application
    Filed: February 24, 2021
    Publication date: June 17, 2021
    Inventors: Jason Canada, Rupa Chaturvedi, Jared Corso, Michael Patrick Cutter, Sean Niu, Shaun Michael Post, Peiqi Tang, Stefan Vant, Mark Scott Waldo, Andrea Zehr
  • Patent number: 10943403
    Abstract: Users can view images or renderings of items placed (virtually) within a physical space. For example, a rendering of an item can be placed within a live camera view of the physical space. A snapshot of the physical space can be captured and the snapshot can be customized, shared, etc. The renderings can be represented as two-dimensional images, e.g., virtual stickers or three-dimensional models of the items. Users can have the ability to view different renderings, move those items around, and develop views of the physical space that may be desirable. The renderings can link to products offered through an electronic marketplace and those products can be consumed. Further, collaborative design is enabled through modeling the physical space and enabling users to view and move around the renderings in a virtual view of the physical space.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: March 9, 2021
    Assignee: A9.COM, INC.
    Inventors: Jason Canada, Rupa Chaturvedi, Jared Corso, Michael Patrick Cutter, Sean Niu, Shaun Michael Post, Peiqi Tang, Stefan Vant, Mark Scott Waldo, Andrea Zehr
  • Publication number: 20200160058
    Abstract: Various embodiments of the present disclosure provide systems and methods for visual search and augmented reality, in which an onscreen body of visual markers overlaid on the interface signals the current state of an image recognition process. Specifically, the body of visual markers may take on a plurality of behaviors, in which a particular behavior is indicative of a particular state. Thus, the user can tell what the current state of the scanning process is by the behavior of the body of visual markers. The behavior of the body of visual markers may also indicate to the user recommended actions that can be taken to improve the scanning condition or otherwise facilitate the process. In various embodiments, as the scanning process goes from one state to another state, the onscreen body of visual markers may move or seamlessly transition from one behavior to another behavior, accordingly.
    Type: Application
    Filed: January 27, 2020
    Publication date: May 21, 2020
    Inventors: Peiqi Tang, Andrea Zehr, Rupa Chaturvedi, Yu Lou, Colin Jon Taylor, Mark Scott Waldo, Shaun Michael Post
  • Patent number: 10558857
    Abstract: Various embodiments of the present disclosure provide systems and methods for visual search and augmented reality, in which an onscreen body of visual markers overlaid on the interface signals the current state of an image recognition process. Specifically, the body of visual markers may take on a plurality of behaviors, in which a particular behavior is indicative of a particular state. Thus, the user can tell what the current state of the scanning process is by the behavior of the body of visual markers. The behavior of the body of visual markers may also indicate to the user recommended actions that can be taken to improve the scanning condition or otherwise facilitate the process. In various embodiments, as the scanning process goes from one state to another state, the onscreen body of visual markers may move or seamlessly transition from one behavior to another behavior, accordingly.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: February 11, 2020
    Assignee: A9.COM, INC.
    Inventors: Peiqi Tang, Andrea Zehr, Rupa Chaturvedi, Yu Lou, Colin Jon Taylor, Mark Scott Waldo, Shaun Michael Post
  • Publication number: 20190272425
    Abstract: Various embodiments of the present disclosure provide systems and methods for visual search and augmented reality, in which an onscreen body of visual markers overlaid on the interface signals the current state of an image recognition process. Specifically, the body of visual markers may take on a plurality of behaviors, in which a particular behavior is indicative of a particular state. Thus, the user can tell what the current state of the scanning process is by the behavior of the body of visual markers. The behavior of the body of visual markers may also indicate to the user recommended actions that can be taken to improve the scanning condition or otherwise facilitate the process. In various embodiments, as the scanning process goes from one state to another state, the onscreen body of visual markers may move or seamlessly transition from one behavior to another behavior, accordingly.
    Type: Application
    Filed: March 5, 2018
    Publication date: September 5, 2019
    Inventors: Peiqi Tang, Andrea Zehr, Rupa Chaturvedi, Yu Lou, Colin Jon Taylor, Mark Scott Waldo, Shaun Michael Post
  • Publication number: 20190251753
    Abstract: Users can view images or renderings of items placed (virtually) within a physical space. For example, a rendering of an item can be placed within a live camera view of the physical space. A snapshot of the physical space can be captured and the snapshot can be customized, shared, etc. The renderings can be represented as two-dimensional images, e.g., virtual stickers or three-dimensional models of the items. Users can have the ability to view different renderings, move those items around, and develop views of the physical space that may be desirable. The renderings can link to products offered through an electronic marketplace and those products can be consumed. Further, collaborative design is enabled through modeling the physical space and enabling users to view and move around the renderings in a virtual view of the physical space.
    Type: Application
    Filed: April 29, 2019
    Publication date: August 15, 2019
    Inventors: Jason Canada, Rupa Chaturvedi, Jared Corso, Michael Patrick Cutter, Sean Niu, Shaun Michael Post, Peiqi Tang, Stefan Vant, Mark Scott Waldo, Andrea Zehr
  • Patent number: 10319150
    Abstract: Users can view images or renderings of items placed (virtually) within a physical space. For example, a rendering of an item can be placed within a live camera view of the physical space. A snapshot of the physical space can be captured and the snapshot can be customized, shared, etc. The renderings can be represented as two-dimensional images, e.g., virtual stickers or three-dimensional models of the items. Users can have the ability to view different renderings, move those items around, and develop views of the physical space that may be desirable. The renderings can link to products offered through an electronic marketplace and those products can be consumed. Further, collaborative design is enabled through modeling the physical space and enabling users to view and move around the renderings in a virtual view of the physical space.
    Type: Grant
    Filed: May 15, 2017
    Date of Patent: June 11, 2019
    Assignee: A9.COM, INC.
    Inventors: Jason Canada, Rupa Chaturvedi, Jared Corso, Michael Patrick Cutter, Sean Niu, Shaun Michael Post, Peiqi Tang, Stefan Vant, Mark Scott Waldo, Andrea Zehr