Patents by Inventor Zeyar Htet

Zeyar Htet has filed patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240296590
    Abstract: In one embodiment, a method includes receiving a first viewpoint associated with a head-mounted device from the head-mounted device, accessing a 3D mesh of a virtual scene, selecting a portion of the 3D mesh based on the first viewpoint, generating an image and a corresponding depth map of the virtual scene based on the selected portion of the 3D mesh, generating a simplified 3D mesh based on the depth map, wherein the simplified 3D mesh has fewer primitives than the selected portion of the 3D mesh of the virtual scene, generating a texture for the simplified 3D mesh based on the image, and sending the simplified 3D mesh and the texture to the head-mounted device, wherein the simplified 3D mesh and the texture are configured to be used for rendering the virtual scene from one or more viewpoints different from the first viewpoint.
    Type: Application
    Filed: December 8, 2023
    Publication date: September 5, 2024
    Inventors: Volga Aksoy, Zeyar Htet, Reza Nourai
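
The mesh-simplification step in the abstract above (publication 20240296590) lends itself to a short illustration. Below is a minimal sketch, not the patented implementation, of building a low-primitive proxy mesh from a rendered depth map, assuming a pinhole camera with intrinsics fx, fy, cx, cy and a regular sampling grid; all function and parameter names are illustrative.

```python
import numpy as np

def depth_map_to_simplified_mesh(depth, fx, fy, cx, cy, step=16):
    """Unproject a sparse grid of depth samples into 3D vertices and
    connect neighbouring samples into triangles."""
    h, w = depth.shape
    ys = np.arange(0, h, step)
    xs = np.arange(0, w, step)
    grid_v, grid_u = np.meshgrid(ys, xs, indexing="ij")
    z = depth[grid_v, grid_u]
    # Pinhole unprojection: pixel (u, v) with depth z -> camera-space point.
    x = (grid_u - cx) * z / fx
    y = (grid_v - cy) * z / fy
    vertices = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # Two triangles per grid cell: far fewer primitives than the source mesh.
    rows, cols = len(ys), len(xs)
    faces = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            faces.append([i, i + 1, i + cols])
            faces.append([i + 1, i + cols + 1, i + cols])
    return vertices, np.array(faces)

# The rendered colour image can then serve directly as the texture for this
# proxy mesh, since each vertex maps back to a known pixel coordinate.
```
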
  • Publication number: 20240249440
    Abstract: Particular embodiments described herein present a technique for compressing a 3D mesh. A computing system may access a topology-coding list and a vertex list representing a 3D mesh. The vertex list may comprise X, Y, and Z coordinates for ordered vertices in the 3D mesh. The computing system may construct a predicted vertex list based on the vertex list. The computing system may generate X, Y, and Z coordinate bit streams. Each coordinate bit stream may comprise ordered coordinate values for a corresponding coordinate in the predicted vertex list. Each coordinate value in a coordinate bit stream may be represented in a corresponding number of bits. The corresponding number of bits may be stored in a memory-size list corresponding to the coordinate bit stream. The computing system may encode the topology-coding list and memory-size lists corresponding to the X, Y, and Z coordinate bit streams using a Zstandard coder.
    Type: Application
    Filed: January 19, 2024
    Publication date: July 25, 2024
    Inventors: Zeyar Htet, Volga Aksoy, Binyamin Abramov
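
A minimal sketch of the compression pipeline described in publication 20240249440 above, assuming a simple previous-vertex predictor and the `zstandard` PyPI package as the Zstandard coder; the actual prediction scheme, bit packing, and stream layout in the application may differ, and all names are illustrative.

```python
import numpy as np
import zstandard  # PyPI package `zstandard`

def compress_mesh(topology_codes, vertices):
    """topology_codes: sequence of small ints (0-255), one per topology symbol.
    vertices: (N, 3) array of quantised integer X, Y, Z coordinates."""
    vertices = np.asarray(vertices, dtype=np.int64)

    # Predict each vertex from its predecessor; keep only the residuals.
    predicted = np.vstack([vertices[:1], vertices[:-1]])
    residuals = vertices - predicted

    coord_streams, size_lists = [], []
    for axis in range(3):  # one bit stream per coordinate: X, Y, Z
        values = residuals[:, axis]
        # Bits needed per residual: magnitude bits plus one sign bit.
        bits = bytes(int(abs(v)).bit_length() + 1 for v in values)
        size_lists.append(bits)
        coord_streams.append(values.astype(np.int32).tobytes())

    # The topology-coding list and the per-coordinate memory-size lists
    # are entropy coded with a Zstandard coder.
    cctx = zstandard.ZstdCompressor()
    encoded = cctx.compress(b"".join([bytes(topology_codes)] + size_lists))
    return encoded, coord_streams
```
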
  • Publication number: 20230245375
    Abstract: In one embodiment, a method includes a step of receiving a geometric representation of a virtual object and a texture atlas, the geometric representation comprising a plurality of geometric primitives defining a shape of the virtual object, and the texture atlas comprising a plurality of regions, each of which is allocated to include shading information of a respective geometric primitive of the plurality of geometric primitives, the shading information of the respective geometric primitive being scaled down to be smaller than the allocated region so as to create a buffer between the allocated region and adjacent regions of the plurality of regions on the texture atlas. The method further includes steps of identifying, based on a first viewpoint from which to view the virtual object, visible geometric primitives from the plurality of geometric primitives, and rendering images of the visible geometric primitives using corresponding shading information included in the texture atlas.
    Type: Application
    Filed: April 4, 2023
    Publication date: August 3, 2023
    Inventors: Reza Nourai, Volga Aksoy, Zeyar Htet
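
A minimal sketch of the gutter idea in publication 20230245375 above: each primitive owns a square atlas region, but its shading is written into a slightly smaller inset so that bilinear filtering near the edges does not bleed in texels from neighbouring regions. The fixed-grid layout and all names are illustrative assumptions, not the patented layout.

```python
def atlas_region_uv(region_index, regions_per_row, region_px, gutter_px, atlas_px):
    """Return (u0, v0, u1, v1) of the inset sub-rectangle, in normalised
    texture coordinates, where the primitive's shading is actually stored."""
    row, col = divmod(region_index, regions_per_row)
    x0 = col * region_px + gutter_px
    y0 = row * region_px + gutter_px
    x1 = (col + 1) * region_px - gutter_px
    y1 = (row + 1) * region_px - gutter_px
    return (x0 / atlas_px, y0 / atlas_px, x1 / atlas_px, y1 / atlas_px)

def primitive_uv(local_u, local_v, inset):
    """Map a primitive-local coordinate in [0, 1]^2 into its inset rectangle."""
    u0, v0, u1, v1 = inset
    return (u0 + local_u * (u1 - u0), v0 + local_v * (v1 - v0))

# Example: primitive 5 in a 4096 px atlas of 64 px regions with a 2 px gutter.
inset = atlas_region_uv(5, regions_per_row=64, region_px=64, gutter_px=2, atlas_px=4096)
print(primitive_uv(0.5, 0.5, inset))
```
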
  • Patent number: 11676324
    Abstract: In one embodiment, a method includes the steps of receiving, from a client device, a first viewpoint from which to view a virtual object, the virtual object having a shape defined by multiple geometric primitives, identifying, relative to the first viewpoint, visible geometric primitives from the multiple geometric primitives, allocating a region in a texture atlas for each of the visible geometric primitives, generating shading information for each of the visible geometric primitives, storing the shading information of each of the visible geometric primitives in a portion of the allocated region smaller than the allocated region to create a buffer around the portion of the allocated region where the shading information is stored, and sending, to the client device, the texture atlas and a list identifying the visible geometric primitives, the texture atlas being configured for rendering images of the visible geometric primitives from different viewpoints.
    Type: Grant
    Filed: March 30, 2021
    Date of Patent: June 13, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Reza Nourai, Volga Aksoy, Zeyar Htet
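
A minimal sketch (illustrative only) of the server-side steps claimed in patent 11676324 above: cull primitives facing away from the client's viewpoint, then hand each surviving primitive an atlas region index. A real system would also test frustum containment and occlusion, and the region layout with an inset gutter could follow the earlier sketch; all names are assumptions.

```python
import numpy as np

def visible_primitives(vertices, faces, viewpoint):
    """vertices: (N, 3) positions, faces: (M, 3) vertex indices, viewpoint: (3,)."""
    tri = vertices[faces]                      # (M, 3, 3) triangle corners
    normals = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    centers = tri.mean(axis=1)
    to_view = viewpoint - centers
    # A primitive is treated as visible if its normal points toward the viewpoint.
    facing = np.einsum("ij,ij->i", normals, to_view) > 0.0
    return np.nonzero(facing)[0]

def allocate_atlas_regions(visible_ids):
    """Give each visible primitive its own atlas region; the shading for each
    region is later stored inset from the region border to create the buffer."""
    return {int(pid): region for region, pid in enumerate(visible_ids)}
```
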
  • Patent number: 11544894
    Abstract: A method includes the steps of receiving training data comprising images of an object and associated camera poses from which the images are captured, training, based on the training data, a machine-learning model to take as input a given viewpoint and synthesize an image of a virtual representation of the object viewed from the given viewpoint, generating, for each of a plurality of predetermined viewpoints surrounding the virtual representation of the object, a view-dependent image of the object as viewed from that viewpoint using the trained machine-learning model, receiving, from a client device, a desired viewpoint from which to view the virtual representation of the object, selecting one or more of the predetermined viewpoints based on the desired viewpoint, and sending, to the client device, the view-dependent images associated with the selected one or more viewpoints for rendering an output image of the virtual representation of the object viewed from the desired viewpoint.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: January 3, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Brian Funt, Reza Nourai, Volga Aksoy, Zeyar Htet
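
A minimal sketch of the viewpoint-selection step in patent 11544894 above: given view-dependent images pre-rendered from a fixed set of viewpoints around the object, pick the few whose directions are closest to the client's desired viewpoint and derive blend weights. The machine-learning model that synthesises the images is out of scope here, and all names are illustrative.

```python
import numpy as np

def select_viewpoints(predetermined_dirs, desired_dir, k=3):
    """predetermined_dirs: (V, 3) unit vectors from the object to each viewpoint.
    desired_dir: (3,) unit vector for the requested viewpoint."""
    sims = predetermined_dirs @ desired_dir        # cosine similarity per viewpoint
    chosen = np.argsort(-sims)[:k]                 # k closest predetermined viewpoints
    weights = np.clip(sims[chosen], 0.0, None)
    if weights.sum() > 0:
        weights = weights / weights.sum()
    else:
        weights = np.full(len(chosen), 1.0 / len(chosen))
    return chosen, weights   # the images for `chosen` are sent with these weights
```
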
  • Publication number: 20220319094
    Abstract: In one embodiment, a method includes the steps of receiving, from a client device, a first viewpoint from which to view a virtual object, the virtual object having a shape defined by multiple geometric primitives, identifying, relative to the first viewpoint, visible geometric primitives from the multiple geometric primitives, allocating a region in a texture atlas for each of the visible geometric primitives, generating shading information for each of the visible geometric primitives, storing the shading information of each of the visible geometric primitives in a portion of the allocated region smaller than the allocated region to create a buffer around the portion of the allocated region where the shading information is stored, and sending, to the client device, the texture atlas and a list identifying the visible geometric primitives, the texture atlas being configured for rendering images of the visible geometric primitives from different viewpoints.
    Type: Application
    Filed: March 30, 2021
    Publication date: October 6, 2022
    Inventors: Reza Nourai, Volga Aksoy, Zeyar Htet
  • Publication number: 20220277510
    Abstract: A method includes the steps of receiving training data comprising images of an object and associated camera poses from which the images are captured, training, based on the training data, a machine-learning model to take as input a given viewpoint and synthesize an image of a virtual representation of the object viewed from the given viewpoint, generating, for each of a plurality of predetermined viewpoints surrounding the virtual representation of the object, a view-dependent image of the object as viewed from that viewpoint using the trained machine-learning model, receiving, from a client device, a desired viewpoint from which to view the virtual representation of the object, selecting one or more of the predetermined viewpoints based on the desired viewpoint, and sending, to the client device, the view-dependent images associated with the selected one or more viewpoints for rendering an output image of the virtual representation of the object viewed from the desired viewpoint.
    Type: Application
    Filed: February 26, 2021
    Publication date: September 1, 2022
    Inventors: Brian Funt, Reza Nourai, Volga Aksoy, Zeyar Htet
  • Publication number: 20220139026
    Abstract: In one embodiment, a method includes the steps of generating, for a virtual object defined by a geometric representation, multiple viewpoints surrounding the virtual object, generating, for each of the multiple viewpoints, a simplified geometric representation of the virtual object based on the viewpoint, wherein the simplified geometric representation has a lower resolution than the geometric representation of the virtual object, receiving, from a client device, a desired viewpoint from which to view the virtual object, selecting one or more viewpoints from the multiple viewpoints based on the desired viewpoint, and sending, to the client device, rendering data including the simplified geometric representation and an associated view-dependent texture that are associated with each of the selected one or more viewpoints, the rendering data being configured for rendering an image of the virtual object from the desired viewpoint.
    Type: Application
    Filed: November 5, 2020
    Publication date: May 5, 2022
    Inventors: Reza Nourai, Volga Aksoy, Zeyar Htet
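
A minimal sketch of the first step in publication 20220139026 above: generating a set of viewpoints that surround a virtual object, here via a Fibonacci-sphere distribution at a fixed radius. Per-viewpoint simplification of the geometry (for example, decimating primitives that are invisible or negligibly small from that viewpoint) would follow; the sampling scheme and all names are illustrative assumptions.

```python
import math

def surrounding_viewpoints(center, radius, count):
    """Return `count` camera positions roughly evenly spread on a sphere
    of the given radius around `center` (an (x, y, z) tuple)."""
    golden = math.pi * (3.0 - math.sqrt(5.0))      # golden angle in radians
    points = []
    for i in range(count):
        y = 1.0 - 2.0 * (i + 0.5) / count          # latitude parameter in [-1, 1]
        r = math.sqrt(max(0.0, 1.0 - y * y))
        theta = golden * i
        points.append((center[0] + radius * r * math.cos(theta),
                       center[1] + radius * y,
                       center[2] + radius * r * math.sin(theta)))
    return points
```
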
  • Patent number: 10469873
    Abstract: A virtual reality or augmented reality experience of a scene may be decoded for playback to a viewer through a combination of CPU and GPU processing. A video stream may be retrieved from a data store. A first viewer position and/or orientation may be received from an input device, such as the sensor package on a head-mounted display (HMD). At a processor, the video stream may be partially decoded to generate a partially-decoded bitstream. At a graphics processor, the partially-decoded bitstream may be further decoded to generate viewpoint video of the scene from a first virtual viewpoint corresponding to the first viewer position and/or orientation. The viewpoint video may be displayed on a display device, such as the screen of the HMD.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: November 5, 2019
    Assignee: Google LLC
    Inventors: Derek Pang, Colvin Pitts, Kurt Akeley, Zeyar Htet
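
A minimal sketch of the split-decoding pipeline described in patent 10469873 above: a CPU stage partially decodes each chunk of the video stream and a second stage (standing in for the GPU) finishes decoding for the current viewer pose. `cpu_partial_decode` and `gpu_finish_decode` are hypothetical stand-ins rather than APIs of any real codec; a production system would use a hardware decoder and the HMD's sensor pose.

```python
import queue
import threading

def cpu_partial_decode(chunk):
    # Stand-in for CPU-side container parsing / entropy decoding.
    return {"partial_bitstream": chunk}

def gpu_finish_decode(partial, viewer_pose):
    # Stand-in for GPU work: reconstruct viewpoint video for this pose.
    return (partial["partial_bitstream"], viewer_pose)

def run_pipeline(chunks, get_viewer_pose, display):
    handoff = queue.Queue(maxsize=4)    # CPU -> GPU handoff buffer

    def cpu_stage():
        for chunk in chunks:
            handoff.put(cpu_partial_decode(chunk))
        handoff.put(None)               # end-of-stream marker

    threading.Thread(target=cpu_stage, daemon=True).start()
    while (partial := handoff.get()) is not None:
        display(gpu_finish_decode(partial, get_viewer_pose()))

# Example usage with dummy data:
run_pipeline([b"frame0", b"frame1"], lambda: (0.0, 0.0, 0.0), print)
```
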
  • Publication number: 20180035134
    Abstract: A virtual reality or augmented reality experience of a scene may be decoded for playback to a viewer through a combination of CPU and GPU processing. A video stream may be retrieved from a data store. A first viewer position and/or orientation may be received from an input device, such as the sensor package on a head-mounted display (HMD). At a processor, the video stream may be partially decoded to generate a partially-decoded bitstream. At a graphics processor, the partially-decoded bitstream may be further decoded to generate viewpoint video of the scene from a first virtual viewpoint corresponding to the first viewer position and/or orientation. The viewpoint video may be displayed on a display device, such as the screen of the HMD.
    Type: Application
    Filed: October 11, 2017
    Publication date: February 1, 2018
    Inventors: Derek Pang, Colvin Pitts, Kurt Akeley, Zeyar Htet