Patents by Inventor Panya Inversin

Panya Inversin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11914836
    Abstract: A method includes accessing an image of a physical environment of a user, the image depicting a physical input device and a physical hand of the user, determining that a contrast between the physical input device and the physical hand depicted in the image is lower than a predetermined threshold, modifying the image to increase the contrast, determining a pose of the physical input device, generating a three-dimensional model representing the physical hand of the user, generating an image mask by projecting the three-dimensional model onto an image plane, generating a cropped image depicting at least the physical hand of the user in the image, rendering, based on the perspective of the user and the pose of the physical input device, a virtual input device to represent the physical input device, and displaying the cropped image depicting at least the physical hand over the rendered virtual input device.
    Type: Grant
    Filed: December 20, 2022
    Date of Patent: February 27, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Adrian Brian Ratter, Alessia Marra, Yugeng He, Panya Inversin
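The masking and contrast steps described in the abstract above can be pictured with a short sketch. This is an illustrative Python sketch only, not the patented implementation: the pinhole-camera parameters, the point-based rasterization of the hand model, and the mean-intensity contrast measure are all assumptions introduced here for clarity.

```python
def project_point(point_3d, focal_length=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-space 3-D point to pixel coordinates."""
    x, y, z = point_3d
    return (focal_length * x / z + cx, focal_length * y / z + cy)

def hand_mask(hand_points_3d, width=640, height=480):
    """Rasterize projected hand-model vertices into a per-pixel boolean mask,
    approximating 'generating an image mask by projecting the
    three-dimensional model onto an image plane'."""
    mask = [[False] * width for _ in range(height)]
    for point in hand_points_3d:
        u, v = project_point(point)
        u, v = int(round(u)), int(round(v))
        if 0 <= u < width and 0 <= v < height:
            mask[v][u] = True
    return mask

def contrast_too_low(hand_pixels, device_pixels, threshold=30.0):
    """Compare mean intensities of the hand and input-device regions
    against a predetermined threshold."""
    mean = lambda xs: sum(xs) / len(xs)
    return abs(mean(hand_pixels) - mean(device_pixels)) < threshold
```

A real system would rasterize hand-model triangles rather than single vertices and would likely compute contrast over a perceptual color space; the structure of the check, however, is as above.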
  • Patent number: 11902288
    Abstract: Aspects of the present disclosure are directed to creating and administering artificial reality collaborative working environments and providing interaction modes for them. An XR work system can provide and control such artificial reality collaborative working environments to enable, for example, A) links between real-world surfaces and XR surfaces; B) links from multiple real-world areas to XR areas with dedicated functionality; C) maintaining access, while inside the artificial reality working environment, to real-world work tools such as the user's computer screen and keyboard; D) various hand and controller modes for different interaction and collaboration modalities; E) use-based, multi-desk collaborative room configurations; and F) context-based auto-population of users and content items into the artificial reality working environment.
    Type: Grant
    Filed: February 9, 2023
    Date of Patent: February 13, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Michael James Lebeau, Manuel Ricardo Freire Santos, Aleksejs Anpilogovs, Alexander Sorkine Hornung, Björn Wanbo, Connor Treacy, Fangwei Lee, Federico Ruiz, Jonathan Mallinson, Jonathan Richard Mayoh, Marcus Tanner, Panya Inversin, Sarthak Ray, Sheng Shen, William Arthur Hugh Steptoe, Alessia Marra, Gioacchino Noris, Derrick Readinger, Jeffrey Wai-King Lock, Jeffrey Witthuhn, Jennifer Lynn Spurlock, Larissa Heike Laich, Javier Alejandro Sierra Santos
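Feature A) in the abstract above, links between real-world surfaces and XR surfaces, can be pictured as a small registry. This is a minimal data-structure sketch using hypothetical names (`SurfaceLink`, `XRWorkspace`); the patent does not specify any such structure.

```python
from dataclasses import dataclass, field

@dataclass
class SurfaceLink:
    real_surface_id: str  # tracked real-world surface, e.g. "desk-1"
    xr_surface_id: str    # virtual surface rendered in the XR workspace
    function: str         # dedicated functionality, e.g. "whiteboard"

@dataclass
class XRWorkspace:
    links: list = field(default_factory=list)

    def link(self, real_id, xr_id, function):
        """Register a link from a real-world surface to an XR surface."""
        self.links.append(SurfaceLink(real_id, xr_id, function))

    def xr_for(self, real_id):
        """Look up the XR surface linked to a real-world surface, if any."""
        for entry in self.links:
            if entry.real_surface_id == real_id:
                return entry.xr_surface_id
        return None
```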
  • Patent number: 11887249
    Abstract: A method includes receiving video data of a user, the video data comprising a first captured image and a second captured image, generating a two-dimensional planar proxy of the user, determining a pose comprising a location and orientation of the two-dimensional planar proxy within a three-dimensional virtual environment, rendering one or more display images for one or more displays of an artificial-reality device based on the two-dimensional planar proxy having the determined pose and at least one of the first and second captured images, displaying the rendered one or more display images using the one or more displays, respectively, determining that a viewing angle of the artificial-reality device relative to the two-dimensional planar proxy exceeds a predetermined maximum threshold, and based on the determination that the viewing angle exceeds the predetermined maximum threshold, ceasing to display the one or more display images.
    Type: Grant
    Filed: December 22, 2022
    Date of Patent: January 30, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Alexander Sorkine Hornung, Panya Inversin
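The viewing-angle cutoff in the abstract above, ceasing display once the artificial-reality device's viewing angle relative to the two-dimensional planar proxy exceeds a maximum, reduces to an angle-between-vectors test. A minimal sketch with assumed function names and an assumed 60° threshold (the patent does not state a specific value):

```python
import math

def viewing_angle_deg(proxy_normal, view_dir):
    """Angle in degrees between the planar proxy's normal and the
    direction from which the device views it."""
    dot = sum(a * b for a, b in zip(proxy_normal, view_dir))
    norm = lambda v: math.sqrt(sum(c * c for c in v))
    cos_theta = dot / (norm(proxy_normal) * norm(view_dir))
    # Clamp for numerical safety before acos.
    cos_theta = max(-1.0, min(1.0, cos_theta))
    return math.degrees(math.acos(cos_theta))

def should_display(proxy_normal, view_dir, max_angle_deg=60.0):
    """Cease display once the viewing angle exceeds the maximum threshold."""
    return viewing_angle_deg(proxy_normal, view_dir) <= max_angle_deg
```

Head-on viewing (angle near 0°) keeps the proxy visible; a near-edge-on view, where the flat proxy would break the illusion of depth, disables it.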
  • Patent number: 11770384
    Abstract: Aspects of the present disclosure are directed to creating and administering artificial reality collaborative working environments and providing interaction modes for them. An XR work system can provide and control such artificial reality collaborative working environments to enable, for example, A) links between real-world surfaces and XR surfaces; B) links from multiple real-world areas to XR areas with dedicated functionality; C) maintaining access, while inside the artificial reality working environment, to real-world work tools such as the user's computer screen and keyboard; D) various hand and controller modes for different interaction and collaboration modalities; E) use-based, multi-desk collaborative room configurations; and F) context-based auto-population of users and content items into the artificial reality working environment.
    Type: Grant
    Filed: February 18, 2022
    Date of Patent: September 26, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Michael James Lebeau, Manuel Ricardo Freire Santos, Aleksejs Anpilogovs, Alexander Sorkine Hornung, Björn Wanbo, Connor Treacy, Fangwei Lee, Federico Ruiz, Jonathan Mallinson, Jonathan Richard Mayoh, Marcus Tanner, Panya Inversin, Sarthak Ray, Sheng Shen, William Arthur Hugh Steptoe, Alessia Marra, Gioacchino Noris, Derrick Readinger, Jeffrey Wai-King Lock, Jeffrey Witthuhn, Jennifer Lynn Spurlock, Larissa Heike Laich, Javier Alejandro Sierra Santos
  • Publication number: 20230259194
    Abstract: In one embodiment, a method includes capturing, by a first VR display device, one or more frames of a shared real-world environment. The first VR display device identifies one or more anchor points within the shared real-world environment from the one or more frames. The first VR display device receives localization information with respect to a second VR display device in the shared real-world environment and determines a pose of the first VR display device with respect to the second VR display device based on the localization information. A first output image is rendered for one or more displays of the first VR display device. The rendered image may comprise a proximity warning with respect to the second VR display device based on determining that the pose of the first VR display device with respect to the second VR display device is within a threshold distance.
    Type: Application
    Filed: February 16, 2022
    Publication date: August 17, 2023
    Inventors: David Frederick Geisert, Alessia Marra, Gioacchino Noris, Panya Inversin
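The proximity-warning condition in this publication, triggering when one device's pose relative to the other falls within a threshold distance, is at its core a distance test between the two device positions once both are localized against the shared anchor. A hedged sketch; the names, the identity anchor transform, and the 1 m threshold are assumptions:

```python
import math

def to_anchor_frame(device_pos, anchor_pos):
    """Express a device position relative to a shared anchor point."""
    return tuple(d - a for d, a in zip(device_pos, anchor_pos))

def proximity_warning(pos_a, pos_b, anchor_pos=(0.0, 0.0, 0.0), threshold_m=1.0):
    """True when the two devices, localized against the same anchor,
    are within the warning threshold of each other."""
    a = to_anchor_frame(pos_a, anchor_pos)
    b = to_anchor_frame(pos_b, anchor_pos)
    return math.dist(a, b) < threshold_m
```

In practice the anchor transform would involve full 6-DoF poses (rotation as well as translation); the threshold comparison itself is unchanged.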
  • Publication number: 20230188533
    Abstract: Aspects of the present disclosure are directed to creating and administering artificial reality collaborative working environments and providing interaction modes for them. An XR work system can provide and control such artificial reality collaborative working environments to enable, for example, A) links between real-world surfaces and XR surfaces; B) links from multiple real-world areas to XR areas with dedicated functionality; C) maintaining access, while inside the artificial reality working environment, to real-world work tools such as the user's computer screen and keyboard; D) various hand and controller modes for different interaction and collaboration modalities; E) use-based, multi-desk collaborative room configurations; and F) context-based auto-population of users and content items into the artificial reality working environment.
    Type: Application
    Filed: February 9, 2023
    Publication date: June 15, 2023
    Applicant: Meta Platforms Technologies, LLC
    Inventors: Michael James LEBEAU, Manuel Ricardo FREIRE SANTOS, Aleksejs ANPILOGOVS, Alexander SORKINE HORNUNG, Björn WANBO, Connor TREACY, Fangwei LEE, Federico RUIZ, Jonathan MALLINSON, Jonathan Richard MAYOH, Marcus TANNER, Panya INVERSIN, Sarthak RAY, Sheng SHEN, William Arthur Hugh STEPTOE, Alessia MARRA, Gioacchino NORIS, Derrick READINGER, Jeffrey Wai-King LOCK, Jeffrey WITTHUHN, Jennifer Lynn SPURLOCK, Larissa Heike LAICH, Javier Alejandro Sierra SANTOS
  • Publication number: 20230125961
    Abstract: A method includes receiving video data of a user, the video data comprising a first captured image and a second captured image, generating a two-dimensional planar proxy of the user, determining a pose comprising a location and orientation of the two-dimensional planar proxy within a three-dimensional virtual environment, rendering one or more display images for one or more displays of an artificial-reality device based on the two-dimensional planar proxy having the determined pose and at least one of the first and second captured images, displaying the rendered one or more display images using the one or more displays, respectively, determining that a viewing angle of the artificial-reality device relative to the two-dimensional planar proxy exceeds a predetermined maximum threshold, and based on the determination that the viewing angle exceeds the predetermined maximum threshold, ceasing to display the one or more display images.
    Type: Application
    Filed: December 22, 2022
    Publication date: April 27, 2023
    Inventors: Alexander Sorkine Hornung, Panya Inversin
  • Publication number: 20230131667
    Abstract: A method includes accessing an image of a physical environment of a user, the image depicting a physical input device and a physical hand of the user, determining that a contrast between the physical input device and the physical hand depicted in the image is lower than a predetermined threshold, modifying the image to increase the contrast, determining a pose of the physical input device, generating a three-dimensional model representing the physical hand of the user, generating an image mask by projecting the three-dimensional model onto an image plane, generating a cropped image depicting at least the physical hand of the user in the image, rendering, based on the perspective of the user and the pose of the physical input device, a virtual input device to represent the physical input device, and displaying the cropped image depicting at least the physical hand over the rendered virtual input device.
    Type: Application
    Filed: December 20, 2022
    Publication date: April 27, 2023
    Inventors: Adrian Brian Ratter, Alessia Marra, Yugeng He, Panya Inversin
  • Patent number: 11606364
    Abstract: Aspects of the present disclosure are directed to creating and administering artificial reality collaborative working environments and providing interaction modes for them. An XR work system can provide and control such artificial reality collaborative working environments to enable, for example, A) links between real-world surfaces and XR surfaces; B) links from multiple real-world areas to XR areas with dedicated functionality; C) maintaining access, while inside the artificial reality working environment, to real-world work tools such as the user's computer screen and keyboard; D) various hand and controller modes for different interaction and collaboration modalities; E) use-based, multi-desk collaborative room configurations; and F) context-based auto-population of users and content items into the artificial reality working environment.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: March 14, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Michael James LeBeau, Manuel Ricardo Freire Santos, Aleksejs Anpilogovs, Alexander Sorkine Hornung, Bjorn Wanbo, Connor Treacy, Fangwei Lee, Federico Ruiz, Jonathan Mallinson, Jonathan Richard Mayoh, Marcus Tanner, Panya Inversin, Sarthak Ray, Sheng Shen, William Arthur Hugh Steptoe, Alessia Marra, Gioacchino Noris, Derrick Readinger, Jeffrey Wai-King Lock, Jeffrey Witthuhn, Jennifer Lynn Spurlock, Larissa Heike Laich, Javier Alejandro Sierra Santos
  • Patent number: 11582245
    Abstract: Aspects of the present disclosure are directed to creating and administering artificial reality collaborative working environments and providing interaction modes for them. An XR work system can provide and control such artificial reality collaborative working environments to enable, for example, A) links between real-world surfaces and XR surfaces; B) links from multiple real-world areas to XR areas with dedicated functionality; C) maintaining access, while inside the artificial reality working environment, to real-world work tools such as the user's computer screen and keyboard; D) various hand and controller modes for different interaction and collaboration modalities; E) use-based, multi-desk collaborative room configurations; and F) context-based auto-population of users and content items into the artificial reality working environment.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: February 14, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Michael James LeBeau, Manuel Ricardo Freire Santos, Aleksejs Anpilogovs, Alexander Sorkine Hornung, Bjorn Wanbo, Connor Treacy, Fangwei Lee, Federico Ruiz, Jonathan Mallinson, Jonathan Richard Mayoh, Marcus Tanner, Panya Inversin, Sarthak Ray, Sheng Shen, William Arthur Hugh Steptoe, Alessia Marra, Gioacchino Noris, Derrick Readinger, Jeffrey Wai-King Lock, Jeffrey Witthuhn, Jennifer Lynn Spurlock, Larissa Heike Laich, Javier Alejandro Sierra Santos
  • Publication number: 20230037750
    Abstract: A method includes detecting an object of interest in a real environment and depth information of the object; determining one or more anchor locations in a three-dimensional space that correspond to a position of the object in the three-dimensional space; and generating a virtual surface anchored in the three-dimensional space. The method may further determine a pose of a camera when an image is captured and determine a region in the image that corresponds to the virtual surface. The method may further determine a first viewpoint of a first eye of the user; render a first output image based on (1) the first viewpoint relative to the virtual surface and (2) the image region corresponding to the virtual surface; and display the first output image on a first display of the computing device, the first display being configured to be viewed by the first eye of the user.
    Type: Application
    Filed: October 24, 2022
    Publication date: February 9, 2023
    Inventors: Alessia Marra, Gioacchino Noris, Panya Inversin
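Determining "a region in the image that corresponds to the virtual surface," as the abstract above puts it, can be approximated by projecting the surface's corners through the camera and taking the pixel extent they span. An illustrative sketch with an assumed pinhole camera; the patent does not prescribe this method or these parameters:

```python
def surface_region(corners_3d, focal=500.0, cx=320.0, cy=240.0):
    """Project a virtual surface's camera-space corners into the image
    and return the axis-aligned pixel region (u_min, v_min, u_max, v_max)
    that they span."""
    us, vs = [], []
    for x, y, z in corners_3d:
        us.append(focal * x / z + cx)
        vs.append(focal * y / z + cy)
    return (min(us), min(vs), max(us), max(vs))
```

That pixel region is what would then be re-rendered from each eye's viewpoint relative to the anchored surface.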
  • Patent number: 11537258
    Abstract: In one embodiment, a method includes a computer system accessing an image of a physical environment of a user, the image being associated with a perspective of the user and depicting a physical input device and a physical hand of the user, determining a pose of the physical input device, generating a three-dimensional model representing the physical hand of the user, generating an image mask by projecting the three-dimensional model onto an image plane associated with the perspective of the user, generating, by applying the image mask to the image, a cropped image depicting at least the physical hand of the user in the image, rendering, based on the perspective of the user and the pose of the physical input device, a virtual input device to represent the physical input device, and displaying the cropped image depicting at least the physical hand over the rendered virtual input device.
    Type: Grant
    Filed: October 16, 2020
    Date of Patent: December 27, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Adrian Brian Ratter, Alessia Marra, Yugeng He, Panya Inversin
  • Patent number: 11538214
    Abstract: A method includes receiving, through a network by a computing system associated with an artificial-reality device, video data of a user of a second computing system comprising a first and second image of the user. The first and second images may be captured concurrently by a first camera and a second camera of the second computing system, respectively. The computing system generates a planar proxy for displaying the user and determines a pose for the planar proxy within a three-dimensional virtual environment. The computing system renders a left image for a left-eye display and a right image for a right-eye display of the artificial-reality device based on the planar proxy having the determined pose and the first image, and the planar proxy having the determined pose and the second image, respectively. The computing system displays the rendered left image and right image using the left-eye display and right-eye display, respectively.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: December 27, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Alexander Sorkine Hornung, Panya Inversin
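The per-eye pairing in this patent, rendering the planar proxy with the first captured image for the left-eye display and the second for the right, from slightly offset viewpoints, can be sketched as below. The function names, the 64 mm interpupillary distance, and the `project` callback are assumptions introduced for illustration:

```python
def eye_positions(head_pos, ipd_m=0.064):
    """Left/right eye positions offset along the head's x-axis."""
    hx, hy, hz = head_pos
    half = ipd_m / 2.0
    return (hx - half, hy, hz), (hx + half, hy, hz)

def render_stereo(proxy_pose, images, head_pos, project):
    """Render the planar proxy once per eye, pairing the concurrently
    captured first/second images with the left/right displays."""
    left_eye, right_eye = eye_positions(head_pos)
    first_image, second_image = images
    return {
        "left": project(proxy_pose, first_image, left_eye),
        "right": project(proxy_pose, second_image, right_eye),
    }
```

Because the two source images were captured concurrently by two cameras, showing each eye its own image of the textured proxy yields a stereoscopic impression of the remote user.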
  • Patent number: 11481960
    Abstract: A method includes a computing system tracking motions performed by a hand of a user, determining one or more anchor locations in a three-dimensional space, and generating a virtual surface anchored in the three-dimensional space. An image of a real environment is captured using a camera worn by the user, and a pose of the camera when the image is captured is determined. The computing system determines a first viewpoint of a first eye of the user and a region in the image that, as viewed from the camera, corresponds to the virtual surface. The computing system renders an output image based on (1) the first viewpoint relative to the virtual surface and (2) the image region corresponding to the virtual surface, and displays the output image on a first display of the device, the first display being configured to be viewed by the first eye of the user.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: October 25, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Alessia Marra, Gioacchino Noris, Panya Inversin
  • Publication number: 20220207816
    Abstract: A method includes a computing system tracking motions performed by a hand of a user, determining one or more anchor locations in a three-dimensional space, and generating a virtual surface anchored in the three-dimensional space. An image of a real environment is captured using a camera worn by the user, and a pose of the camera when the image is captured is determined. The computing system determines a first viewpoint of a first eye of the user and a region in the image that, as viewed from the camera, corresponds to the virtual surface. The computing system renders an output image based on (1) the first viewpoint relative to the virtual surface and (2) the image region corresponding to the virtual surface, and displays the output image on a first display of the device, the first display being configured to be viewed by the first eye of the user.
    Type: Application
    Filed: December 30, 2020
    Publication date: June 30, 2022
    Inventors: Alessia Marra, Gioacchino Noris, Panya Inversin
  • Publication number: 20220172444
    Abstract: Aspects of the present disclosure are directed to creating and administering artificial reality collaborative working environments and providing interaction modes for them. An XR work system can provide and control such artificial reality collaborative working environments to enable, for example, A) links between real-world surfaces and XR surfaces; B) links from multiple real-world areas to XR areas with dedicated functionality; C) maintaining access, while inside the artificial reality working environment, to real-world work tools such as the user's computer screen and keyboard; D) various hand and controller modes for different interaction and collaboration modalities; E) use-based, multi-desk collaborative room configurations; and F) context-based auto-population of users and content items into the artificial reality working environment.
    Type: Application
    Filed: February 18, 2022
    Publication date: June 2, 2022
    Applicant: Facebook Technologies, LLC
    Inventors: Michael James LEBEAU, Manuel Ricardo FREIRE SANTOS, Aleksejs ANPILOGOVS, Alexander SORKINE HORNUNG, Björn WANBO, Connor TREACY, Fangwei LEE, Federico RUIZ, Jonathan MALLINSON, Jonathan Richard MAYOH, Marcus TANNER, Panya INVERSIN, Sarthak RAY, Sheng SHEN, William Arthur Hugh STEPTOE, Alessia MARRA, Gioacchino NORIS, Derrick READINGER, Jeffrey Wai-King LOCK, Jeffrey WITTHUHN, Jennifer Lynn SPURLOCK, Larissa Heike LAICH, Javier Alejandro Sierra SANTOS
  • Publication number: 20220148254
    Abstract: A method includes receiving, through a network by a computing system associated with an artificial-reality device, video data of a user of a second computing system comprising a first and second image of the user. The first and second images may be captured concurrently by a first camera and a second camera of the second computing system, respectively. The computing system generates a planar proxy for displaying the user and determines a pose for the planar proxy within a three-dimensional virtual environment. The computing system renders a left image for a left-eye display and a right image for a right-eye display of the artificial-reality device based on the planar proxy having the determined pose and the first image, and the planar proxy having the determined pose and the second image, respectively. The computing system displays the rendered left image and right image using the left-eye display and right-eye display, respectively.
    Type: Application
    Filed: November 9, 2020
    Publication date: May 12, 2022
    Inventors: Alexander Sorkine Hornung, Panya Inversin
  • Publication number: 20220121343
    Abstract: In one embodiment, a method includes a computer system accessing an image of a physical environment of a user, the image being associated with a perspective of the user and depicting a physical input device and a physical hand of the user, determining a pose of the physical input device, generating a three-dimensional model representing the physical hand of the user, generating an image mask by projecting the three-dimensional model onto an image plane associated with the perspective of the user, generating, by applying the image mask to the image, a cropped image depicting at least the physical hand of the user in the image, rendering, based on the perspective of the user and the pose of the physical input device, a virtual input device to represent the physical input device, and displaying the cropped image depicting at least the physical hand over the rendered virtual input device.
    Type: Application
    Filed: October 16, 2020
    Publication date: April 21, 2022
    Inventors: Adrian Brian Ratter, Alessia Marra, Yugeng He, Panya Inversin
  • Patent number: 11302085
    Abstract: Aspects of the present disclosure are directed to creating and administering artificial reality collaborative working environments and providing interaction modes for them. An XR work system can provide and control such artificial reality collaborative working environments to enable, for example, A) links between real-world surfaces and XR surfaces; B) links from multiple real-world areas to XR areas with dedicated functionality; C) maintaining access, while inside the artificial reality working environment, to real-world work tools such as the user's computer screen and keyboard; D) various hand and controller modes for different interaction and collaboration modalities; E) use-based, multi-desk collaborative room configurations; and F) context-based auto-population of users and content items into the artificial reality working environment.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: April 12, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Michael James LeBeau, Manuel Ricardo Freire Santos, Aleksejs Anpilogovs, Alexander Sorkine Hornung, Bjorn Wanbo, Connor Treacy, Fangwei Lee, Federico Ruiz, Jonathan Mallinson, Jonathan Richard Mayoh, Marcus Tanner, Panya Inversin, Sarthak Ray, Sheng Shen, William Arthur Hugh Steptoe, Alessia Marra, Gioacchino Noris, Derrick Readinger, Jeffrey Wai-King Lock, Jeffrey Witthuhn, Jennifer Lynn Spurlock, Larissa Heike Laich, Javier Alejandro Sierra Santos
  • Publication number: 20220086205
    Abstract: Aspects of the present disclosure are directed to creating and administering artificial reality collaborative working environments and providing interaction modes for them. An XR work system can provide and control such artificial reality collaborative working environments to enable, for example, A) links between real-world surfaces and XR surfaces; B) links from multiple real-world areas to XR areas with dedicated functionality; C) maintaining access, while inside the artificial reality working environment, to real-world work tools such as the user's computer screen and keyboard; D) various hand and controller modes for different interaction and collaboration modalities; E) use-based, multi-desk collaborative room configurations; and F) context-based auto-population of users and content items into the artificial reality working environment.
    Type: Application
    Filed: October 30, 2020
    Publication date: March 17, 2022
    Inventors: Michael James LeBeau, Manuel Ricardo Freire Santos, Aleksejs Anpilogovs, Alexander Sorkine Hornung, Bjorn Wanbo, Connor Treacy, Fangwei Lee, Federico Ruiz, Jonathan Mallinson, Jonathan Richard Mayoh, Marcus Tanner, Panya Inversin, Sarthak Ray, Sheng Shen, William Arthur Hugh Steptoe, Alessia Marra, Gioacchino Noris, Derrick Readinger, Jeffrey Wai-King Lock, Jeffrey Witthuhn, Jennifer Lynn Spurlock, Larissa Heike Laich, Javier Alejandro Sierra Santos