Patents by Inventor Gerald V. Wright, Jr.
Gerald V. Wright, Jr. has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11922652
Abstract: An augmented reality collaboration system comprises a first system configured to display virtual content, comprising: a structure comprising a plurality of radiation emitters arranged in a predetermined pattern, and a user device comprising: one or more sensors configured to sense outputs of the plurality of radiation emitters, and one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to, for the user device: determine a pose of the user device with respect to the structure based on the sensed outputs of the plurality of radiation emitters, and generate an image of virtual content based on the pose of the user device with respect to the structure, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the structure.
Type: Grant
Filed: January 13, 2023
Date of Patent: March 5, 2024
Assignee: Campfire 3D, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
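The abstract does not disclose how the pose is computed from the sensed emitter outputs. One standard approach, assuming the sensors recover the 3D positions of the emitters in device coordinates, is the Kabsch algorithm: find the rigid transform aligning the known emitter pattern to the sensed points. A minimal NumPy sketch (function names and the emitter layout below are hypothetical, not from the patent):

```python
import numpy as np

def estimate_pose(pattern_pts, sensed_pts):
    """Rigid transform (R, t) mapping pattern coordinates to device
    coordinates, via the Kabsch algorithm (SVD of the cross-covariance)."""
    cp = pattern_pts.mean(axis=0)          # centroid of the known pattern
    cs = sensed_pts.mean(axis=0)           # centroid of the sensed points
    H = (pattern_pts - cp).T @ (sensed_pts - cs)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])             # guard against a reflection
    R = Vt.T @ D @ U.T
    t = cs - R @ cp
    return R, t
```

Real systems would typically instead solve a Perspective-n-Point problem (e.g., OpenCV's `solvePnP`) when only 2D sensor observations of the emitters are available.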
-
Patent number: 11847752
Abstract: A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device, displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the second user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
Type: Grant
Filed: January 4, 2023
Date of Patent: December 19, 2023
Assignee: Campfire 3D, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
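Anchoring virtual content at a fixed physical location, as this family of patents describes, amounts to transforming the anchor point from world coordinates into the tracked device's camera frame and projecting it onto the display. A generic sketch (the patents do not publish rendering code; the intrinsics below are hypothetical):

```python
import numpy as np

def world_to_device(T_device_in_world, p_world):
    """Map a world-space point into device (camera) coordinates.
    T_device_in_world is the tracked 4x4 device pose; invert it to go
    from world coordinates into the device frame."""
    T = np.linalg.inv(T_device_in_world)
    p = T @ np.append(p_world, 1.0)
    return p[:3]

def project(p_cam, f=800.0, cx=640.0, cy=360.0):
    """Pinhole projection to pixel coordinates (hypothetical intrinsics)."""
    return np.array([f * p_cam[0] / p_cam[2] + cx,
                     f * p_cam[1] / p_cam[2] + cy])
```

As the device pose updates each frame, re-running this transform keeps the rendered content pinned to the same physical location.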
-
Patent number: 11756225
Abstract: An augmented reality collaboration system comprises a first system configured to display virtual content, comprising: a structure comprising a plurality of radiation emitters arranged in a predetermined pattern, and a user device comprising: one or more sensors configured to sense outputs of the plurality of radiation emitters, and one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to, for the user device: determine a pose of the user device with respect to the structure based on the sensed outputs of the plurality of radiation emitters, and generate an image of virtual content based on the pose of the user device with respect to the structure, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the structure.
Type: Grant
Filed: September 16, 2020
Date of Patent: September 12, 2023
Assignee: Campfire 3D, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
-
Patent number: 11710284
Abstract: A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device, displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the second user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
Type: Grant
Filed: December 14, 2021
Date of Patent: July 25, 2023
Assignee: Campfire 3D, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
-
Patent number: 11688147
Abstract: A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device, displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the second user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
Type: Grant
Filed: December 14, 2021
Date of Patent: June 27, 2023
Assignee: Campfire 3D, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
-
Publication number: 20230154036
Abstract: An augmented reality collaboration system comprises a first system configured to display virtual content, comprising: a structure comprising a plurality of radiation emitters arranged in a predetermined pattern, and a user device comprising: one or more sensors configured to sense outputs of the plurality of radiation emitters, and one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to, for the user device: determine a pose of the user device with respect to the structure based on the sensed outputs of the plurality of radiation emitters, and generate an image of virtual content based on the pose of the user device with respect to the structure, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the structure.
Type: Application
Filed: January 13, 2023
Publication date: May 18, 2023
Applicant: Campfire 3D, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
-
Publication number: 20230143213
Abstract: A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device, displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the second user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
Type: Application
Filed: January 4, 2023
Publication date: May 11, 2023
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
-
Patent number: 11587295
Abstract: A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device, displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the second user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
Type: Grant
Filed: October 5, 2021
Date of Patent: February 21, 2023
Assignee: Meta View, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
-
Publication number: 20220108537
Abstract: A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device, displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the second user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
Type: Application
Filed: December 14, 2021
Publication date: April 7, 2022
Applicant: Campfire3D, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
-
Publication number: 20220108538
Abstract: A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device, displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the second user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
Type: Application
Filed: December 14, 2021
Publication date: April 7, 2022
Applicant: Campfire3D, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
-
Publication number: 20220083307
Abstract: In general, one aspect disclosed features a system comprising: a first user device configured to display virtual content, the first user device comprising one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to: generate a first image depicting virtual content in a virtual location corresponding to a physical location in a physical environment of the first user device, display the first image in the one or more displays of the first user device, enable a user of the first user device to create media and associate that media with the virtual content in the first image in the form of an annotation, and store the annotation and virtual content, and make it available for access by a plurality of additional user devices.
Type: Application
Filed: November 9, 2021
Publication date: March 17, 2022
Applicant: Meta View, Inc.
Inventors: Alexander Tyurin, Gerald V. Wright, Jr.
-
Publication number: 20220084235
Abstract: An augmented reality collaboration system comprises a first system configured to display virtual content, comprising: a structure comprising a plurality of radiation emitters arranged in a predetermined pattern, and a user device comprising: one or more sensors configured to sense outputs of the plurality of radiation emitters, and one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to, for the user device: determine a pose of the user device with respect to the structure based on the sensed outputs of the plurality of radiation emitters, and generate an image of virtual content based on the pose of the user device with respect to the structure, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the structure.
Type: Application
Filed: September 16, 2020
Publication date: March 17, 2022
Applicant: Meta View, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
-
Publication number: 20220084297
Abstract: A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device, displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the second user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
Type: Application
Filed: October 5, 2021
Publication date: March 17, 2022
Applicant: Meta View, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
-
Patent number: 11176756
Abstract: A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device, displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the second user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
Type: Grant
Filed: September 16, 2020
Date of Patent: November 16, 2021
Assignee: Meta View, Inc.
Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
-
Patent number: 9924102
Abstract: Techniques for managing applications associated with a mobile device are provided. The techniques disclosed herein include techniques for obtaining an image of an object in the view of a camera associated with a mobile device, identifying the object in the image based on attributes of the object extracted from the image, and determining whether one or more applications are associated with the object. If there are one or more applications associated with the real-world object, an application associated with the object can be automatically launched on the mobile device. The association between a real-world object and an application may be identified by a visual indicator, such as an icon, symbol, or other markings on the object that indicates that the object is associated with one or more applications.
Type: Grant
Filed: March 14, 2013
Date of Patent: March 20, 2018
Assignee: QUALCOMM Incorporated
Inventors: Michael Gervautz, Gerald V. Wright, Jr., Roy Lawrence Ashok Inigo
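Once the visual indicator on an object has been recognized, the association step described above reduces to a lookup from recognized labels to applications. A minimal sketch of that mapping (all labels and application identifiers below are hypothetical examples, not from the patent):

```python
# Hypothetical registry mapping recognized visual indicators to applications.
APP_REGISTRY = {
    "coffee_maker_icon": "com.example.brew_control",
    "thermostat_logo": "com.example.climate",
}

def apps_for_object(detected_labels):
    """Return the applications associated with labels extracted from the
    camera image; an empty result means no association was found, so no
    application is auto-launched."""
    return [APP_REGISTRY[label] for label in detected_labels
            if label in APP_REGISTRY]
```

In a real system the labels would come from an image-recognition pipeline and the launch itself from the mobile OS, neither of which the abstract specifies.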
-
Patent number: 9684989
Abstract: A user interface transition between a camera view and a map view displayed on a mobile platform is provided so as present a clear visual connection between the orientation in the camera view and the orientation in the map view. The user interface transition may be in response to a request to change from the camera view to the map view or vice-versa. Augmentation overlays for the camera view and map view may be produced based on, e.g., the line of sight of the camera or identifiable environmental characteristics visible in the camera view and the map view. One or more different augmentation overlays are also produced and displayed to provide the visual connection between the camera view and map view augmentation overlays. For example, a plurality of augmentation overlays may be displayed consecutively to clearly illustrate the changes between the camera view and map view augmentation overlays.
Type: Grant
Filed: June 16, 2010
Date of Patent: June 20, 2017
Assignee: QUALCOMM Incorporated
Inventors: Gerald V. Wright, Jr., Joel Simbulan Bernarte, Virginia Walker Keating
-
Patent number: 9667817
Abstract: A master device images an object device and uses the image to identify the object device. The master device determines whether the object device is capable of being interfaced with based on the image and registers with the object device for interfacing. The master device then automatically interfaces with the identified object device. The master device may receive broadcast data from the object device including information about the visual appearance of the object device and use the broadcast data in the identification of the object device. The master device may retrieve data related to the object device and display the related data, which may display the data over the displayed image of the object device. The master device may provide an interface to control the object device or be used to pass data to the object device.
Type: Grant
Filed: January 26, 2015
Date of Patent: May 30, 2017
Assignee: QUALCOMM Incorporated
Inventors: Matthew S. Grob, Serafin Diaz Spindola, Gerald V. Wright, Jr., Virginia Walker Keating
-
Patent number: 9135735
Abstract: Methods, apparatuses, and systems are provided to transition 3D space information detected in an Augmented Reality (AR) view of a mobile device to screen aligned information on the mobile device. In at least one implementation, a method includes determining augmentation information associated with an object of interest, including a Modelview (M1) matrix and a Projection (P1) matrix, displaying the augmentation information on top of a video image of the object of interest using the M1 and P1 matrices, generating a second Modelview (M2) matrix and a second Projection (P2) matrix, such that the matrices M2 and P2 represent the screen aligned final position of the augmentation information, and displaying the augmentation information using the M2 and P2 matrices.
Type: Grant
Filed: March 12, 2013
Date of Patent: September 15, 2015
Assignee: QUALCOMM Incorporated
Inventors: Scott A. Leazenby, Eunjoo Kim, Per O. Nielsen, Gerald V. Wright, Jr., Erick Mendez Mendez, Michael Gervautz
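A transition of the kind this abstract describes can be animated by blending between the tracked matrices (M1, P1) and the screen-aligned matrices (M2, P2) over several frames. A simplified NumPy sketch (the patent does not specify the interpolation; direct linear blending is an assumption, and production code would typically interpolate the rotation separately, e.g., via quaternion slerp, to avoid skewed intermediate frames):

```python
import numpy as np

def smoothstep(t):
    """Ease-in/ease-out weight for the transition, clamped to [0, 1]."""
    t = min(max(t, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def blend_matrices(M1, M2, t):
    """Per-frame blend between the AR-tracked matrix M1 and the
    screen-aligned matrix M2; t runs from 0 to 1 over the transition."""
    w = smoothstep(t)
    return (1.0 - w) * M1 + w * M2
```

Each frame of the animation would render the augmentation with `blend_matrices(M1, M2, t)` and the analogously blended projection, so the overlay glides from its world-locked position to its final screen-aligned position.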
-
Publication number: 20150138376
Abstract: A master device images an object device and uses the image to identify the object device. The master device then automatically interfaces with the identified object device, for example, by pairing with the object device. The master device interfaces with a second object device and initiates an interface between the first object device and the second object device. The master device may receive broadcast data from the object device including information about the visual appearance of the object device and use the broadcast data in the identification of the object device. The master device may retrieve data related to the object device and display the related data, which may display the data over the displayed image of the object device. The master device may provide an interface to control the object device or be used to pass data to the object device.
Type: Application
Filed: January 26, 2015
Publication date: May 21, 2015
Inventors: Matthew S. Grob, Serafin Diaz Spindola, Gerald V. Wright, Jr., Virginia Walker Keating
-
Patent number: 8971811
Abstract: A master device images an object device and uses the image to identify the object device. The master device then automatically interfaces with the identified object device, for example, by pairing with the object device. The master device interfaces with a second object device and initiates an interface between the first object device and the second object device. The master device may receive broadcast data from the object device including information about the visual appearance of the object device and use the broadcast data in the identification of the object device. The master device may retrieve data related to the object device and display the related data, which may display the data over the displayed image of the object device. The master device may provide an interface to control the object device or be used to pass data to the object device.
Type: Grant
Filed: July 18, 2014
Date of Patent: March 3, 2015
Assignee: QUALCOMM Incorporated
Inventors: Matthew S. Grob, Serafin Diaz Spindola, Gerald V. Wright, Jr., Virginia Walker Keating