Patents by Inventor Kevin Joseph Sheridan

Kevin Joseph Sheridan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240126381
    Abstract: A computer-implemented method, comprising accessing an image comprising a handheld device, wherein the image is captured by one or more cameras associated with a computing device, generating a cropped image that comprises a hand of a user or the handheld device from the image by processing the image, generating a vision-based six degrees of freedom (6DoF) pose estimation for the handheld device by processing the cropped image, metadata associated with the image, and first sensor data from one or more sensors associated with the handheld device, generating a map-based 6DoF pose estimation using the handheld device, and generating a final 6DoF pose estimation for the handheld device based on the vision-based 6DoF pose estimation and the map-based 6DoF pose estimation generated using the handheld device. An illustrative sketch of this pose-fusion flow appears after the listing.
    Type: Application
    Filed: October 13, 2023
    Publication date: April 18, 2024
    Inventors: Hemanth Korrapati, Kevin Joseph Sheridan, Zachary Jeremy Taylor, Andrew Melim, Sheng Shen
  • Publication number: 20230290089
    Abstract: In one embodiment, a method includes capturing images of a first user wearing a VR display device in a real-world environment. The method includes receiving a VR rendering of a VR environment. The VR rendering is from the perspective of the mobile computing device with respect to the VR display device. The method includes generating a first MR rendering of the first user in the VR environment. The first MR rendering of the first user is based on a compositing of the images of the first user and the VR rendering. The method includes receiving an indication of a user interaction with one or more elements of the VR environment in the first MR rendering. The method includes generating, in real-time responsive to the indication of the user interaction with the one or more elements, a second MR rendering of the first user in the VR environment. The one or more elements are modified according to the interaction.
    Type: Application
    Filed: May 22, 2023
    Publication date: September 14, 2023
    Inventors: Sarah Tanner Simpson, Gregory Smith, Jeffrey Witthuhn, Ying-Chieh Huang, Shuang Li, Wenliang Zhao, Peter Koch, Meghana Reddy Guduru, Ioannis Pavlidis, Xiang Wei, Kevin Xiao, Kevin Joseph Sheridan, Bodhi Keanu Donselaar, Federico Adrian Camposeco Paulsen
  • Publication number: 20230186570
    Abstract: A method includes generating a local map of a real environment, the local map being defined by first spatial relationships between first feature descriptors corresponding to visible features in the real environment captured by a device. The device receives a downloaded map defined by second spatial relationships between an anchor point and second feature descriptors corresponding to visible features captured by another device, wherein the anchor point corresponds to a location of a virtual object. The local map is updated by merging the downloaded map with the local map based on a comparison between the first feature descriptors and the second feature descriptors, and a pose of the device is determined relative to a particular feature descriptor in the updated local map. Virtual content is rendered based on the pose and one or more spatial relationships linking the particular feature descriptor and the anchor point in the updated local map. An illustrative sketch of this map-merging step appears after the listing.
    Type: Application
    Filed: December 10, 2021
    Publication date: June 15, 2023
    Inventors: Zachary Michael Moratto, Kevin Joseph Sheridan, Cheyne Mathey-Owens, Adam Ritenauer
  • Patent number: 11676348
    Abstract: In one embodiment, a method includes using one or more cameras of a mobile computing device to capture one or more images of a first user wearing a VR display device in a real-world environment. The mobile computing device transmits a pose of the mobile computing device with respect to the VR display device to a VR system. The mobile computing device receives from the VR system a VR rendering of a VR environment. The VR rendering is from the perspective of the mobile computing device with respect to the VR display device. The method includes segmenting the first user from the one or more images and generating, in real-time responsive to capturing the one or more images, an MR rendering of the first user in the VR environment. The MR rendering of the first user is based on a compositing of the segmented one or more images of the first user and the VR rendering. An illustrative sketch of this compositing step appears after the listing.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: June 13, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Sarah Tanner Simpson, Gregory Smith, Jeffrey Witthuhn, Ying-Chieh Huang, Shuang Li, Wenliang Zhao, Peter Koch, Meghana Reddy Guduru, Ioannis Pavlidis, Xiang Wei, Kevin Xiao, Kevin Joseph Sheridan, Bodhi Keanu Donselaar, Federico Adrian Camposeco Paulsen
  • Publication number: 20220392169
    Abstract: In one embodiment, a method includes using one or more cameras of a mobile computing device to capture one or more images of a first user wearing a VR display device in a real-world environment. The mobile computing device transmits a pose of the mobile computing device with respect to the VR display device to a VR system. The mobile computing device receives from the VR system a VR rendering of a VR environment. The VR rendering is from the perspective of the mobile computing device with respect to the VR display device. The method includes segmenting the first user from the one or more images and generating, in real-time responsive to capturing the one or more images, an MR rendering of the first user in the VR environment. The MR rendering of the first user is based on a compositing of the segmented one or more images of the first user and the VR rendering.
    Type: Application
    Filed: June 2, 2021
    Publication date: December 8, 2022
    Inventors: Sarah Tanner Simpson, Gregory Smith, Jeffrey Witthuhn, Ying-Chieh Huang, Shuang Li, Wenliang Zhao, Peter Koch, Meghana Reddy Guduru, Ioannis Pavlidis, Xiang Wei, Kevin Xiao, Kevin Joseph Sheridan, Bodhi Keanu Donselaar, Federico Adrian Camposeco Paulsen
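
The pose-fusion flow summarized in publication 20240126381 can be pictured with a short Python sketch. Everything in it is an assumption made for illustration only: the Pose6DoF container, the helper names, and the fixed-weight blend stand in for whatever detector, pose estimators, and filter the actual method uses.

```python
"""Hypothetical sketch of the pose-fusion flow in publication 20240126381.
None of these names or design choices come from the publication itself."""

from dataclasses import dataclass

import numpy as np


@dataclass
class Pose6DoF:
    """A 6DoF pose: translation (x, y, z) plus orientation as a quaternion (w, x, y, z)."""
    translation: np.ndarray  # shape (3,)
    rotation: np.ndarray     # shape (4,), unit quaternion


def crop_around_controller(image: np.ndarray, bbox: tuple) -> np.ndarray:
    """Crop the region containing the user's hand or the handheld device.
    A real pipeline would get the bounding box from a detector; here it is given."""
    x0, y0, x1, y1 = bbox
    return image[y0:y1, x0:x1]


def fuse_poses(vision: Pose6DoF, map_based: Pose6DoF, vision_weight: float = 0.7) -> Pose6DoF:
    """Blend the vision-based and map-based estimates into a final pose.

    A production system would more likely use a filter (e.g. an EKF) with
    per-source covariances; a fixed linear blend only shows the data flow."""
    w = vision_weight
    translation = w * vision.translation + (1.0 - w) * map_based.translation
    rotation = w * vision.rotation + (1.0 - w) * map_based.rotation
    rotation = rotation / np.linalg.norm(rotation)  # renormalise the blended quaternion
    return Pose6DoF(translation, rotation)


if __name__ == "__main__":
    image = np.zeros((480, 640, 3), dtype=np.uint8)           # stand-in headset camera frame
    crop = crop_around_controller(image, (100, 100, 300, 300))

    # Stand-in estimates; the method derives these from the cropped image, image
    # metadata, controller sensor data, and a map built using the controller.
    map_rot = np.array([0.99, 0.0, 0.1, 0.0])
    map_rot = map_rot / np.linalg.norm(map_rot)
    vision_pose = Pose6DoF(np.array([0.10, 0.02, 0.50]), np.array([1.0, 0.0, 0.0, 0.0]))
    map_pose = Pose6DoF(np.array([0.12, 0.01, 0.48]), map_rot)

    print("fused translation:", fuse_poses(vision_pose, map_pose).translation)
```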
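
Patent 11676348 and the related publications 20230290089 and 20220392169 center on compositing a segmented camera image of the user over a VR rendering drawn from the phone's pose relative to the headset. The sketch below shows only that alpha-blend arithmetic; the segmentation model, pose transmission, and VR rendering are replaced by stand-ins, and every name is hypothetical.

```python
"""Hypothetical sketch of the compositing step behind US Patent 11676348.
Only the blend arithmetic is shown; the segmentation mask is a dummy stand-in."""

import numpy as np


def segment_user(frame: np.ndarray) -> np.ndarray:
    """Return a per-pixel alpha mask (1.0 = user, 0.0 = background).

    A real system would run a person-segmentation model on the phone camera
    frame; a fixed rectangle in the frame centre stands in for that output."""
    h, w = frame.shape[:2]
    mask = np.zeros((h, w), dtype=np.float32)
    mask[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 1.0
    return mask


def composite_mixed_reality(camera_frame: np.ndarray,
                            vr_render: np.ndarray,
                            mask: np.ndarray) -> np.ndarray:
    """Composite the segmented user over the VR rendering.

    The VR rendering is assumed to already be drawn from the phone camera's
    pose with respect to the headset, so the two layers line up."""
    alpha = mask[..., None]                                # broadcast over colour channels
    blended = alpha * camera_frame + (1.0 - alpha) * vr_render
    return blended.astype(camera_frame.dtype)


if __name__ == "__main__":
    camera_frame = np.full((480, 640, 3), 200, dtype=np.uint8)  # stand-in phone camera image
    vr_render = np.full((480, 640, 3), 40, dtype=np.uint8)      # stand-in VR frame from the headset
    mr_frame = composite_mixed_reality(camera_frame, vr_render, segment_user(camera_frame))
    print("MR frame:", mr_frame.shape, mr_frame.dtype)
```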
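
The map merging in publication 20230186570 turns on matching local feature descriptors against downloaded ones and using the matches to express the downloaded anchor point in the local map's frame. The sketch below assumes brute-force nearest-neighbour matching and a translation-only alignment; the publication does not disclose these choices, and a real system would estimate a full rigid transform, typically with outlier rejection.

```python
"""Hypothetical sketch of the map-merging idea in publication 20230186570.
Matching and alignment are deliberately oversimplified for illustration."""

import numpy as np


def match_descriptors(local_desc: np.ndarray, downloaded_desc: np.ndarray) -> list:
    """Pair each downloaded descriptor with its nearest local descriptor (L2 distance)."""
    matches = []
    for j, d in enumerate(downloaded_desc):
        i = int(np.argmin(np.linalg.norm(local_desc - d, axis=1)))
        matches.append((i, j))
    return matches


def estimate_offset(local_pts: np.ndarray, downloaded_pts: np.ndarray, matches: list) -> np.ndarray:
    """Translation that moves matched downloaded-map points onto the local map.

    A production system would solve for a full rigid (or similarity) transform,
    e.g. with RANSAC; a centroid difference keeps the sketch short."""
    local_matched = np.array([local_pts[i] for i, _ in matches])
    down_matched = np.array([downloaded_pts[j] for _, j in matches])
    return local_matched.mean(axis=0) - down_matched.mean(axis=0)


if __name__ == "__main__":
    # Stand-in 3D feature positions and 4-D descriptors for the two maps.
    local_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    local_desc = np.eye(4)[:3]                       # three distinct dummy descriptors

    offset_true = np.array([0.5, -0.2, 0.0])
    downloaded_pts = local_pts - offset_true         # same features, expressed in the other map's frame
    downloaded_desc = local_desc.copy()              # identical descriptors give clean matches
    anchor_point = np.array([0.3, 0.3, 0.0])         # virtual-object location in the downloaded map

    matches = match_descriptors(local_desc, downloaded_desc)
    offset = estimate_offset(local_pts, downloaded_pts, matches)

    # Express the anchor point in the local map's frame, where virtual content is
    # rendered relative to the device pose and the linked feature descriptor.
    print("anchor in local frame:", anchor_point + offset)
```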