Patents by Inventor Peiqi Tang
Peiqi Tang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240403772
Abstract: A method includes assessing a semantic-based query for a user that includes user goals and assessing probability values and first goal probability values, both of which are associated with active digital actions. The method includes generating a decision engine to determine a user friction value and second goal probability values associated with the user goals using the first goal probability values and the probability values. Further, the method includes determining the user friction value and the second goal probability values using the first goal probability values and the probability values. Moreover, the method includes determining a plan of digital actions based on the user friction value, the second goal probability values, and the user goals. Furthermore, the method includes, in response to determining the user friction value exceeds a predetermined threshold, generating a query to adjust the active digital actions based on the semantic-based query for the user.
Type: Application
Filed: April 26, 2024
Publication date: December 5, 2024
Inventors: Benjamin Lafreniere, Peiqi Tang, Kashyap Todi, Tanya Renee Jonker, David Owen Driver
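The flow this abstract describes can be pictured as a small decision loop: combine per-action probabilities with first-pass goal probabilities to get a friction value and second-pass goal probabilities, then either emit a plan of actions or, when friction exceeds a threshold, issue a query to adjust the active actions. The sketch below is purely illustrative; the function names, combining formulas, and threshold are invented and are not taken from the patent claims.

```python
# Hypothetical sketch of the decision flow described in publication
# 20240403772. All names and formulas here are invented illustrations,
# not the patented method.

def friction_and_goals(first_goal_probs, action_probs):
    """Return (friction, second_goal_probs) from the input probabilities."""
    # Second-pass goal probabilities: weight each goal by the mean
    # probability of the currently active digital actions.
    mean_action = sum(action_probs.values()) / len(action_probs)
    second = {goal: p * mean_action for goal, p in first_goal_probs.items()}
    # Friction rises as even the best-served goal becomes unlikely.
    friction = 1.0 - max(second.values())
    return friction, second

def plan_or_requery(first_goal_probs, action_probs, threshold=0.5):
    """Either return a ranked plan of goals, or signal a re-query."""
    friction, second = friction_and_goals(first_goal_probs, action_probs)
    if friction > threshold:
        return ("requery", None)  # ask to adjust the active digital actions
    # Otherwise, plan: rank goals by their second-pass probability.
    plan = sorted(second, key=second.get, reverse=True)
    return ("plan", plan)

goals = {"navigate": 0.7, "message": 0.2}
actions = {"open_map": 0.9, "show_contacts": 0.4}
print(plan_or_requery(goals, actions))
```

With these example numbers the friction value (1 − 0.7 × 0.65 = 0.545) exceeds the 0.5 threshold, so the sketch returns the re-query branch rather than a plan.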
-
Patent number: 12125126
Abstract: In particular embodiments, a computing system may receive an image comprising one or more virtual elements associated with a virtual environment and one or more real-world elements associated with a real-world environment. The system may determine a first metric and a second metric indicative of a measure of clutter in the virtual environment and the real-world environment, respectively. The system may determine gaze features associated with a user based on a user activity and predict, using a machine learning model, a reaction time of the user based on the gaze features. The system may determine a third metric indicative of the measure of clutter in the image based on the predicted reaction time. The system may compute an overall clutter metric based on the first, second, and third metrics. The system may perform one or more actions to manage the clutter in the image based on the overall clutter metric.
Type: Grant
Filed: October 23, 2023
Date of Patent: October 22, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Naveen Sendhilnathan, Ting Zhang, Sebastian Freitag, Tanya Renee Jonker, Peiqi Tang
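The final step of this abstract, folding three per-environment clutter metrics into one overall score that drives declutter actions, can be sketched as a weighted combination. The weights, threshold, and element-dropping policy below are invented for illustration and are not the patented method.

```python
# Hypothetical sketch of the combining step in patent 12125126:
# virtual, real-world, and reaction-time-based clutter metrics are
# merged into an overall score. Weights and threshold are invented.

def overall_clutter(virtual_m, real_m, reaction_m, weights=(0.4, 0.3, 0.3)):
    """Weighted combination of three clutter metrics, each in [0, 1]."""
    w1, w2, w3 = weights
    return w1 * virtual_m + w2 * real_m + w3 * reaction_m

def manage_clutter(virtual_elements, virtual_m, real_m, reaction_m,
                   hide_threshold=0.6):
    """Hide low-priority virtual elements when overall clutter is high."""
    score = overall_clutter(virtual_m, real_m, reaction_m)
    if score <= hide_threshold:
        return virtual_elements  # clutter acceptable, change nothing
    # Keep only the most important half of the virtual elements.
    keep = sorted(virtual_elements, key=lambda e: e["priority"], reverse=True)
    return keep[: max(1, len(keep) // 2)]

elements = [{"name": "clock", "priority": 3},
            {"name": "ad", "priority": 1},
            {"name": "nav", "priority": 5}]
print(manage_clutter(elements, 0.9, 0.5, 0.7))
```

Here the overall score is 0.4 × 0.9 + 0.3 × 0.5 + 0.3 × 0.7 = 0.72, above the 0.6 threshold, so only the highest-priority element survives.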
-
Publication number: 20230046155
Abstract: The disclosed computer-implemented method may include (1) identifying a trigger element within a field of view presented by a display element of an artificial reality device, (2) determining a position of the trigger element within the field of view, (3) selecting a position within the field of view for a virtual widget based on the position of the trigger element, and (4) presenting the virtual widget at the selected position via the display element. Various other methods, systems, and computer-readable media are also disclosed.
Type: Application
Filed: May 18, 2022
Publication date: February 16, 2023
Inventors: Feiyu Lu, Mark Parent, Hiroshi Horii, Yan Xu, Peiqi Tang
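Steps (2) and (3) of this abstract, selecting a widget position from a trigger element's position, can be sketched as an offset-then-clamp placement. The field-of-view dimensions, offset, and coordinates below are invented illustrations, not the claimed method.

```python
# Hypothetical sketch of steps (2)-(3) in publication 20230046155:
# place a virtual widget beside a trigger element, clamped so it
# stays inside the field of view. All numbers are invented.

FOV_W, FOV_H = 1920, 1080  # assumed field-of-view size in pixels

def select_widget_position(trigger_pos, widget_size, offset=(40, 0)):
    """Offset the widget from the trigger, clamped to the field of view."""
    tx, ty = trigger_pos
    ox, oy = offset
    w, h = widget_size
    x = min(max(tx + ox, 0), FOV_W - w)
    y = min(max(ty + oy, 0), FOV_H - h)
    return (x, y)

# Trigger near the right edge: the widget is pulled back on screen.
print(select_widget_position((1900, 500), (200, 100)))  # → (1720, 500)
```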
-
Patent number: 11403829
Abstract: Users can view images or renderings of items placed (virtually) within a physical space. For example, a rendering of an item can be placed within a live camera view of the physical space. A snapshot of the physical space can be captured and the snapshot can be customized, shared, etc. The renderings can be represented as two-dimensional images, e.g., virtual stickers or three-dimensional models of the items. Users can have the ability to view different renderings, move those items around, and develop views of the physical space that may be desirable. The renderings can link to products offered through an electronic marketplace and those products can be consumed. Further, collaborative design is enabled through modeling the physical space and enabling users to view and move around the renderings in a virtual view of the physical space.
Type: Grant
Filed: February 24, 2021
Date of Patent: August 2, 2022
Assignee: A9.COM, INC.
Inventors: Jason Canada, Rupa Chaturvedi, Jared Corso, Michael Patrick Cutter, Sean Niu, Shaun Michael Post, Peiqi Tang, Stefan Vant, Mark Scott Waldo, Andrea Zehr
-
Patent number: 11093748
Abstract: Various embodiments of the present disclosure provide systems and methods for visual search and augmented reality, in which an onscreen body of visual markers overlaid on the interface signals the current state of an image recognition process. Specifically, the body of visual markers may take on a plurality of behaviors, in which a particular behavior is indicative of a particular state. Thus, the user can tell what the current state of the scanning process is by the behavior of the body of visual markers. The behavior of the body of visual markers may also indicate to the user recommended actions that can be taken to improve the scanning condition or otherwise facilitate the process. In various embodiments, as the scanning process goes from one state to another state, the onscreen body of visual markers may move or seamlessly transition from one behavior to another behavior, accordingly.
Type: Grant
Filed: January 27, 2020
Date of Patent: August 17, 2021
Assignee: A9.COM, INC.
Inventors: Peiqi Tang, Andrea Zehr, Rupa Chaturvedi, Yu Lou, Colin Jon Taylor, Mark Scott Waldo, Shaun Michael Post
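The state-to-behavior mapping this abstract describes, where each scanning state drives a distinct marker behavior and some states also prompt the user, can be pictured as a small state machine. The specific states, behaviors, and hint text below are invented for illustration; the patent does not name them.

```python
# Hypothetical sketch of the marker overlay in patent 11093748: each
# scan state maps to a marker behavior, and a state change transitions
# the markers to the new behavior. States and hints are invented.

BEHAVIORS = {
    "searching":  "drift",     # markers wander over the frame
    "candidate":  "converge",  # markers gather on a likely object
    "recognized": "outline",   # markers trace the matched object
    "low_light":  "pulse",     # markers pulse to prompt the user
}

HINTS = {"low_light": "Move to a brighter area"}  # recommended action

class MarkerOverlay:
    def __init__(self):
        self.state = "searching"
        self.behavior = BEHAVIORS[self.state]

    def update(self, new_state):
        """Transition the markers to the behavior for the new state."""
        if new_state != self.state:
            self.state = new_state
            self.behavior = BEHAVIORS[new_state]
        return self.behavior, HINTS.get(new_state)

overlay = MarkerOverlay()
print(overlay.update("candidate"))  # → ('converge', None)
print(overlay.update("low_light"))  # → ('pulse', 'Move to a brighter area')
```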
-
Publication number: 20210183154
Abstract: Users can view images or renderings of items placed (virtually) within a physical space. For example, a rendering of an item can be placed within a live camera view of the physical space. A snapshot of the physical space can be captured and the snapshot can be customized, shared, etc. The renderings can be represented as two-dimensional images, e.g., virtual stickers or three-dimensional models of the items. Users can have the ability to view different renderings, move those items around, and develop views of the physical space that may be desirable. The renderings can link to products offered through an electronic marketplace and those products can be consumed. Further, collaborative design is enabled through modeling the physical space and enabling users to view and move around the renderings in a virtual view of the physical space.
Type: Application
Filed: February 24, 2021
Publication date: June 17, 2021
Inventors: Jason Canada, Rupa Chaturvedi, Jared Corso, Michael Patrick Cutter, Sean Niu, Shaun Michael Post, Peiqi Tang, Stefan Vant, Mark Scott Waldo, Andrea Zehr
-
Patent number: 10943403
Abstract: Users can view images or renderings of items placed (virtually) within a physical space. For example, a rendering of an item can be placed within a live camera view of the physical space. A snapshot of the physical space can be captured and the snapshot can be customized, shared, etc. The renderings can be represented as two-dimensional images, e.g., virtual stickers or three-dimensional models of the items. Users can have the ability to view different renderings, move those items around, and develop views of the physical space that may be desirable. The renderings can link to products offered through an electronic marketplace and those products can be consumed. Further, collaborative design is enabled through modeling the physical space and enabling users to view and move around the renderings in a virtual view of the physical space.
Type: Grant
Filed: April 29, 2019
Date of Patent: March 9, 2021
Assignee: A9.com, Inc.
Inventors: Jason Canada, Rupa Chaturvedi, Jared Corso, Michael Patrick Cutter, Sean Niu, Shaun Michael Post, Peiqi Tang, Stefan Vant, Mark Scott Waldo, Andrea Zehr
-
Publication number: 20200160058
Abstract: Various embodiments of the present disclosure provide systems and methods for visual search and augmented reality, in which an onscreen body of visual markers overlaid on the interface signals the current state of an image recognition process. Specifically, the body of visual markers may take on a plurality of behaviors, in which a particular behavior is indicative of a particular state. Thus, the user can tell what the current state of the scanning process is by the behavior of the body of visual markers. The behavior of the body of visual markers may also indicate to the user recommended actions that can be taken to improve the scanning condition or otherwise facilitate the process. In various embodiments, as the scanning process goes from one state to another state, the onscreen body of visual markers may move or seamlessly transition from one behavior to another behavior, accordingly.
Type: Application
Filed: January 27, 2020
Publication date: May 21, 2020
Inventors: Peiqi Tang, Andrea Zehr, Rupa Chaturvedi, Yu Lou, Colin Jon Taylor, Mark Scott Waldo, Shaun Michael Post
-
Patent number: 10558857
Abstract: Various embodiments of the present disclosure provide systems and methods for visual search and augmented reality, in which an onscreen body of visual markers overlaid on the interface signals the current state of an image recognition process. Specifically, the body of visual markers may take on a plurality of behaviors, in which a particular behavior is indicative of a particular state. Thus, the user can tell what the current state of the scanning process is by the behavior of the body of visual markers. The behavior of the body of visual markers may also indicate to the user recommended actions that can be taken to improve the scanning condition or otherwise facilitate the process. In various embodiments, as the scanning process goes from one state to another state, the onscreen body of visual markers may move or seamlessly transition from one behavior to another behavior, accordingly.
Type: Grant
Filed: March 5, 2018
Date of Patent: February 11, 2020
Assignee: A9.COM, INC.
Inventors: Peiqi Tang, Andrea Zehr, Rupa Chaturvedi, Yu Lou, Colin Jon Taylor, Mark Scott Waldo, Shaun Michael Post
-
Publication number: 20190272425
Abstract: Various embodiments of the present disclosure provide systems and methods for visual search and augmented reality, in which an onscreen body of visual markers overlaid on the interface signals the current state of an image recognition process. Specifically, the body of visual markers may take on a plurality of behaviors, in which a particular behavior is indicative of a particular state. Thus, the user can tell what the current state of the scanning process is by the behavior of the body of visual markers. The behavior of the body of visual markers may also indicate to the user recommended actions that can be taken to improve the scanning condition or otherwise facilitate the process. In various embodiments, as the scanning process goes from one state to another state, the onscreen body of visual markers may move or seamlessly transition from one behavior to another behavior, accordingly.
Type: Application
Filed: March 5, 2018
Publication date: September 5, 2019
Inventors: Peiqi Tang, Andrea Zehr, Rupa Chaturvedi, Yu Lou, Colin Jon Taylor, Mark Scott Waldo, Shaun Michael Post
-
Publication number: 20190251753
Abstract: Users can view images or renderings of items placed (virtually) within a physical space. For example, a rendering of an item can be placed within a live camera view of the physical space. A snapshot of the physical space can be captured and the snapshot can be customized, shared, etc. The renderings can be represented as two-dimensional images, e.g., virtual stickers or three-dimensional models of the items. Users can have the ability to view different renderings, move those items around, and develop views of the physical space that may be desirable. The renderings can link to products offered through an electronic marketplace and those products can be consumed. Further, collaborative design is enabled through modeling the physical space and enabling users to view and move around the renderings in a virtual view of the physical space.
Type: Application
Filed: April 29, 2019
Publication date: August 15, 2019
Inventors: Jason Canada, Rupa Chaturvedi, Jared Corso, Michael Patrick Cutter, Sean Niu, Shaun Michael Post, Peiqi Tang, Stefan Vant, Mark Scott Waldo, Andrea Zehr
-
Patent number: 10319150
Abstract: Users can view images or renderings of items placed (virtually) within a physical space. For example, a rendering of an item can be placed within a live camera view of the physical space. A snapshot of the physical space can be captured and the snapshot can be customized, shared, etc. The renderings can be represented as two-dimensional images, e.g., virtual stickers or three-dimensional models of the items. Users can have the ability to view different renderings, move those items around, and develop views of the physical space that may be desirable. The renderings can link to products offered through an electronic marketplace and those products can be consumed. Further, collaborative design is enabled through modeling the physical space and enabling users to view and move around the renderings in a virtual view of the physical space.
Type: Grant
Filed: May 15, 2017
Date of Patent: June 11, 2019
Assignee: A9.COM, INC.
Inventors: Jason Canada, Rupa Chaturvedi, Jared Corso, Michael Patrick Cutter, Sean Niu, Shaun Michael Post, Peiqi Tang, Stefan Vant, Mark Scott Waldo, Andrea Zehr