Patents by Inventor Weston Welge

Weston Welge has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11974060
    Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for varied depth determination using stereo vision and phase detection auto focus (PDAF). Computer stereo vision (stereo vision) is used to extract three-dimensional information from digital images. To utilize stereo vision, two optical sensors are displaced horizontally from one another and used to capture images depicting two differing views of a real-world environment from two different vantage points. The relative depth of the objects captured in the images is determined using triangulation by comparing the relative positions of the objects in the two images. For example, the relative positions of matching objects (e.g., features) identified in the captured images are used along with the known orientation of the optical sensors (e.g., distance between the optical sensors, vantage points of the optical sensors) to estimate the depth of the objects.
    Type: Grant
    Filed: March 30, 2023
    Date of Patent: April 30, 2024
    Assignee: Snap Inc.
    Inventors: Sagi Katz, Daniel Wagner, Weston Welge
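
    The triangulation step this abstract describes can be sketched as follows. This is an illustrative minimal model, not the patented method: the function name and the focal-length and baseline values are assumptions chosen for the example, and a real system would first rectify the images and match features to obtain the disparity.

    ```python
    # Hedged sketch of depth-from-disparity triangulation: depth falls off
    # inversely with disparity, given the known sensor geometry (baseline)
    # mentioned in the abstract. All parameter values are illustrative.

    def stereo_depth(disparity_px: float, focal_length_px: float, baseline_m: float) -> float:
        """Estimate depth (meters) of a matched feature from its disparity.

        Nearby objects shift more between the two vantage points than
        distant ones, so larger disparity means smaller depth.
        """
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a valid match")
        return focal_length_px * baseline_m / disparity_px

    # A feature 50 px apart in the two views, 800 px focal length, 6 cm baseline:
    depth = stereo_depth(disparity_px=50.0, focal_length_px=800.0, baseline_m=0.06)
    # ≈ 0.96 m
    ```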
  • Publication number: 20240111156
    Abstract: A system for deformation or bending correction in an Augmented Reality (AR) system. Sensors are positioned in a frame of a head-worn AR system to sense forces or pressure acting on the frame by temple pieces attached to the frame. The sensed forces or pressure are used in conjunction with a model of the frame to determine a corrected model of the frame. The corrected model is used to correct video data captured by the AR system and to correct a video virtual overlay that is provided to a user wearing the head-worn AR system.
    Type: Application
    Filed: October 4, 2022
    Publication date: April 4, 2024
    Inventors: Matthias Kalkgruber, Tiago Miguel Pereira Torres, Weston Welge, Ramzi Zahreddine
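
    The correction loop in this abstract (sensed temple forces → corrected frame model → corrected overlay) can be sketched roughly as below. The stiffness constant and the planar counter-rotation are assumptions for illustration only; the publication's actual frame model is not specified here.

    ```python
    import math

    # Hedged sketch: map asymmetric forces sensed at the temple hinges to an
    # estimated frame bend, then counter-rotate overlay points to compensate.
    # The compliance value is an assumed placeholder, not from the patent.

    STIFFNESS_RAD_PER_N = 0.002  # assumed frame compliance: radians of bend per newton

    def bend_angle(left_force_n: float, right_force_n: float) -> float:
        """Estimate frame bend from the imbalance of the two temple forces."""
        return STIFFNESS_RAD_PER_N * (left_force_n - right_force_n)

    def correct_overlay_point(x: float, y: float, angle_rad: float) -> tuple:
        """Counter-rotate a virtual-overlay point to cancel the frame bend."""
        c, s = math.cos(-angle_rad), math.sin(-angle_rad)
        return (c * x - s * y, s * x + c * y)
    ```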
  • Publication number: 20240103631
    Abstract: A system for hand tracking for an Augmented Reality (AR) system. The AR system uses a camera of the AR system to capture tracking video frame data of a hand of a user of the AR system. The AR system generates a skeletal model based on the tracking video frame data and determines a location of the hand of the user based on the skeletal model. The AR system causes a steerable camera of the AR system to focus on the hand of the user.
    Type: Application
    Filed: July 24, 2023
    Publication date: March 28, 2024
    Inventors: Daniel Colascione, Patrick Timothy McSweeney Simons, Weston Welge, Ramzi Zahreddine
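
    The pipeline in this abstract (video frames → skeletal model → hand location → steer camera) can be sketched as below. The class and function names are hypothetical, and reducing the skeleton to a joint centroid is an assumed simplification of whatever location estimate the patent actually uses.

    ```python
    import math
    from dataclasses import dataclass

    # Hedged sketch of the tracking pipeline: a skeletal model is reduced to
    # a single hand location, which is converted to pan/tilt angles for the
    # steerable camera. Names and the centroid reduction are illustrative.

    @dataclass
    class SkeletalModel:
        joints: list  # (x, y, z) joint positions in camera coordinates

    def hand_location(model: SkeletalModel) -> tuple:
        """Estimate the hand's location as the centroid of its joints."""
        n = len(model.joints)
        xs, ys, zs = zip(*model.joints)
        return (sum(xs) / n, sum(ys) / n, sum(zs) / n)

    def steering_angles(location: tuple) -> tuple:
        """Pan/tilt (radians) that would center the hand in the steerable camera."""
        x, y, z = location
        return (math.atan2(x, z), math.atan2(y, z))
    ```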
  • Publication number: 20230367137
    Abstract: An electronic eyewear device with a shape memory alloy (SMA) actuator to apply torque to the eyewear temples. The torque presses the eyewear temples against the side of a user's head for a snug and comfortable fit. The SMA actuator allows one size of eyewear frames to fit a larger range of users, thereby reducing the number of sizes required to be manufactured for the electronic eyewear device.
    Type: Application
    Filed: May 10, 2022
    Publication date: November 16, 2023
    Inventors: Kenneth Kubala, Weston Welge
  • Patent number: 11747912
    Abstract: A system for hand tracking for an Augmented Reality (AR) system. The AR system uses a camera of the AR system to capture tracking video frame data of a hand of a user of the AR system. The AR system generates a skeletal model based on the tracking video frame data and determines a location of the hand of the user based on the skeletal model. The AR system causes a steerable camera of the AR system to focus on the hand of the user.
    Type: Grant
    Filed: September 22, 2022
    Date of Patent: September 5, 2023
    Assignee: Snap Inc.
    Inventors: Daniel Colascione, Patrick Timothy McSweeney Simons, Weston Welge, Ramzi Zahreddine
  • Patent number: 11722630
    Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for varied depth determination using stereo vision and phase detection auto focus (PDAF). Computer stereo vision (stereo vision) is used to extract three-dimensional information from digital images. To utilize stereo vision, two optical sensors are displaced horizontally from one another and used to capture images depicting two differing views of a real-world environment from two different vantage points. The relative depth of the objects captured in the images is determined using triangulation by comparing the relative positions of the objects in the two images. For example, the relative positions of matching objects (e.g., features) identified in the captured images are used along with the known orientation of the optical sensors (e.g., distance between the optical sensors, vantage points of the optical sensors) to estimate the depth of the objects.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: August 8, 2023
    Assignee: Snap Inc.
    Inventors: Sagi Katz, Daniel Wagner, Weston Welge
  • Publication number: 20230239423
    Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for varied depth determination using stereo vision and phase detection auto focus (PDAF). Computer stereo vision (stereo vision) is used to extract three-dimensional information from digital images. To utilize stereo vision, two optical sensors are displaced horizontally from one another and used to capture images depicting two differing views of a real-world environment from two different vantage points. The relative depth of the objects captured in the images is determined using triangulation by comparing the relative positions of the objects in the two images. For example, the relative positions of matching objects (e.g., features) identified in the captured images are used along with the known orientation of the optical sensors (e.g., distance between the optical sensors, vantage points of the optical sensors) to estimate the depth of the objects.
    Type: Application
    Filed: March 30, 2023
    Publication date: July 27, 2023
    Inventors: Sagi Katz, Daniel Wagner, Weston Welge
  • Publication number: 20230188691
    Abstract: A miniaturized active dual pixel stereo system and method for close range depth extraction includes a projector adapted to project a locally distinct projected pattern onto an image of a scene and a dual pixel sensor including a dual pixel sensor array that generates respective displaced images of the scene. A three-dimensional image is generated from the displaced images of the scene by projecting the locally distinct projected pattern onto the image of the scene, capturing the respective displaced images of the scene using the dual pixel sensor, generating disparity images from the respective displaced images of the scene, determining depth to each pixel of the disparity images, and generating the three-dimensional image from the determined depth to each pixel. A three-dimensional image of a user's hands generated by the active dual pixel stereo system may be processed by gesture recognition software to provide an input to an electronic eyewear device.
    Type: Application
    Filed: December 14, 2021
    Publication date: June 15, 2023
    Inventors: Robert John Hergert, Sagi Katz, Gilad Refael, Daniel Wagner, Weston Welge, Ramzi Zahreddine
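
    The per-pixel depth step in this abstract can be sketched as below. Dual-pixel stereo uses the same depth-from-disparity relation as conventional stereo but with a tiny effective baseline (here an assumed aperture-offset value), so real disparities are sub-pixel; the values and function name are illustrative only.

    ```python
    # Hedged sketch: convert a disparity image (rows of per-pixel disparities,
    # derived from the two displaced dual-pixel images) into per-pixel depth.
    # Focal length and baseline are assumed placeholder values.

    def depth_map(disparity, focal_length_px=700.0, baseline_m=0.002):
        """Return per-pixel depth in meters; non-positive disparity maps to inf."""
        return [[focal_length_px * baseline_m / d if d > 0 else float("inf")
                 for d in row]
                for row in disparity]
    ```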
  • Publication number: 20220377209
    Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for varied depth determination using stereo vision and phase detection auto focus (PDAF). Computer stereo vision (stereo vision) is used to extract three-dimensional information from digital images. To utilize stereo vision, two optical sensors are displaced horizontally from one another and used to capture images depicting two differing views of a real-world environment from two different vantage points. The relative depth of the objects captured in the images is determined using triangulation by comparing the relative positions of the objects in the two images. For example, the relative positions of matching objects (e.g., features) identified in the captured images are used along with the known orientation of the optical sensors (e.g., distance between the optical sensors, vantage points of the optical sensors) to estimate the depth of the objects.
    Type: Application
    Filed: May 17, 2022
    Publication date: November 24, 2022
    Inventors: Sagi Katz, Daniel Wagner, Weston Welge