Patents by Inventor Riza Alp Guler
Riza Alp Guler has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240087266
Abstract: Methods and systems are disclosed for performing real-time deforming operations. The system receives an image that includes a depiction of a real-world object. The system applies a machine learning model to the image to generate a warping field and segmentation mask, the machine learning model trained to establish a relationship between a plurality of training images depicting real-world objects and corresponding ground-truth warping fields and segmentation masks associated with a target shape. The system applies the generated warping field and segmentation mask to the image to warp the real-world object depicted in the image to the target shape.
Type: Application
Filed: October 25, 2022
Publication date: March 14, 2024
Inventors: Riza Alp Guler, Himmy Tam, Haoyang Wang, Antonios Kakolyris
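The final step of this abstract can be illustrated with a small NumPy sketch: applying a predicted warping field and segmentation mask to an image. This is an illustration only, not the patented method; the model that predicts the field and mask is out of scope, and the function names and the nearest-neighbour sampling are assumptions.

```python
import numpy as np

def apply_warp(image, warp_field, mask):
    """Warp `image` by sampling at `warp_field` coordinates, keeping only
    masked pixels warped and leaving the background untouched.

    image:      (H, W, C) float array
    warp_field: (H, W, 2) array of source (row, col) coordinates
    mask:       (H, W) array in [0, 1]; 1 = belongs to the warped object
    """
    h, w = image.shape[:2]
    rows = np.clip(np.rint(warp_field[..., 0]).astype(int), 0, h - 1)
    cols = np.clip(np.rint(warp_field[..., 1]).astype(int), 0, w - 1)
    warped = image[rows, cols]  # nearest-neighbour sampling
    return warped * mask[..., None] + image * (1 - mask[..., None])

image = np.random.rand(4, 4, 3)
# identity warp: each pixel samples itself
rr, cc = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
identity = np.stack([rr, cc], axis=-1).astype(float)
out = apply_warp(image, identity, np.ones((4, 4)))
```

With an identity field and an all-ones mask the image is returned unchanged; a real warping field would redirect each pixel toward the target shape.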
-
Patent number: 11915365
Abstract: Aspects of the present disclosure involve a system and a method for performing operations comprising: receiving a plurality of bone scale coefficients each corresponding to respective bones of a skeleton model; receiving a plurality of joint angle coefficients that collectively define a pose for the skeleton model; generating the skeleton model based on the received bone scale coefficients and the received joint angle coefficients; generating a base surface based on the plurality of bone scale coefficients; generating an identity surface by deformation of the base surface; and generating the 3D body model by mapping the identity surface onto the posed skeleton model.
Type: Grant
Filed: November 13, 2020
Date of Patent: February 27, 2024
Assignee: Snap Inc.
Inventors: Riza Alp Guler, Haoyang Wang, Iason Kokkinos, Stefanos Zafeiriou
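The idea of posing a skeleton from bone scale coefficients and joint angle coefficients can be sketched with simple planar forward kinematics; this is a minimal stand-in, assuming a single 2D kinematic chain rather than the full body skeleton the patent describes.

```python
import numpy as np

def pose_skeleton(base_lengths, bone_scales, joint_angles):
    """Forward kinematics for a planar chain of bones.

    base_lengths: rest length of each bone
    bone_scales:  per-bone scale coefficients
    joint_angles: per-joint angle coefficients (radians), accumulated
                  along the chain, defining the pose
    Returns the (num_bones + 1, 2) joint positions.
    """
    lengths = np.asarray(base_lengths) * np.asarray(bone_scales)
    joints = [np.zeros(2)]
    angle = 0.0
    for length, turn in zip(lengths, joint_angles):
        angle += turn
        step = length * np.array([np.cos(angle), np.sin(angle)])
        joints.append(joints[-1] + step)
    return np.array(joints)

# two unit bones; second bone scaled 2x and bent 90 degrees upward
pts = pose_skeleton([1.0, 1.0], [1.0, 2.0], [0.0, np.pi / 2])
```

The scales change the skeleton's proportions while the angles change only its pose, mirroring the separation the abstract draws between the two coefficient sets.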
-
Publication number: 20240046587
Abstract: Methods and systems are disclosed for performing operations comprising: receiving a video that includes a depiction of a real-world object; generating a three-dimensional (3D) body mesh associated with the real-world object that tracks movement of the real-world object across frames of the video; determining UV positions of the real-world object depicted in the video to obtain pixel values associated with the UV positions; generating an external mesh and associated augmented reality (AR) element representing the real-world object based on the pixel values associated with the UV positions; deforming the external mesh based on changes to the 3D body mesh and a deformation parameter; and modifying the video to replace the real-world object with the AR element based on the deformed external mesh.
Type: Application
Filed: October 23, 2023
Publication date: February 8, 2024
Inventors: Matan Zohar, Yanli Zhao, Brian Fulkerson, Riza Alp Guler
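Two steps of this pipeline lend themselves to a short sketch: sampling pixel values at UV positions, and deforming an external mesh to follow the tracked body mesh under a deformation parameter. All function names are illustrative, and the nearest-neighbour lookup and linear deformation rule are assumptions, not the patented formulation.

```python
import numpy as np

def sample_uv(image, uv):
    """Look up pixel values at normalised UV positions (N, 2) in [0, 1]."""
    h, w = image.shape[:2]
    rows = np.clip((uv[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    cols = np.clip((uv[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    return image[rows, cols]

def deform_external_mesh(external_verts, body_displacement, deformation):
    """Move each external vertex with the body mesh's per-vertex
    displacement, scaled by a scalar deformation parameter."""
    return external_verts + deformation * body_displacement

image = np.arange(48, dtype=float).reshape(4, 4, 3)
colors = sample_uv(image, np.array([[0.0, 0.0], [1.0, 1.0]]))
verts = np.zeros((5, 3))
moved = deform_external_mesh(verts, np.ones((5, 3)), 0.5)
```

A deformation parameter of 0 would pin the external mesh in place, while 1 would make it rigidly track the body mesh; intermediate values blend the two.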
-
Publication number: 20240029280
Abstract: This disclosure relates to reconstructing three-dimensional models of objects from two-dimensional images. According to a first aspect, this specification describes a computer implemented method for creating a three-dimensional reconstruction from a two-dimensional image, the method comprising: receiving a two-dimensional image; identifying an object in the image to be reconstructed and identifying a type of said object; spatially anchoring a pre-determined set of object landmarks within the image; extracting a two-dimensional image representation from each object landmark; estimating a respective three-dimensional representation for the respective two-dimensional image representations; and combining the respective three-dimensional representations resulting in a fused three-dimensional representation of the object.
Type: Application
Filed: September 28, 2023
Publication date: January 25, 2024
Inventors: Riza Alp Guler, Iason Kokkinos
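The landmark-driven pipeline in this abstract can be sketched end to end: extract a 2D representation around each anchored landmark, lift each to a 3D representation, and fuse the results. This is a hypothetical illustration; `lift_to_3d` is a toy stand-in for a learned regressor, and averaging is an assumed fusion rule.

```python
import numpy as np

def crop_patch(image, center, size=3):
    """Extract a small 2D representation around one landmark."""
    r, c = center
    h, w = image.shape[:2]
    half = size // 2
    return image[max(r - half, 0):min(r + half + 1, h),
                 max(c - half, 0):min(c + half + 1, w)]

def lift_to_3d(patch):
    """Placeholder per-landmark 'regressor': summary statistics standing
    in for a network that estimates a 3D representation."""
    return np.array([patch.mean(), patch.max(), patch.min()])

def reconstruct(image, landmarks):
    per_landmark = [lift_to_3d(crop_patch(image, lm)) for lm in landmarks]
    return np.mean(per_landmark, axis=0)  # fused 3D representation

image = np.random.rand(8, 8)
fused = reconstruct(image, [(2, 2), (5, 5)])
```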
-
Publication number: 20240013463
Abstract: Aspects of the present disclosure involve a system for providing virtual experiences. The system accesses, by a messaging application, an image depicting a person. The system generates, by the messaging application, a three-dimensional (3D) avatar based on the person depicted in the image. The system receives input that selects a pose for the 3D avatar and one or more fashion items to be worn by the 3D avatar and places, by the messaging application, the 3D avatar in the selected pose and wearing the one or more fashion items in an augmented reality (AR) experience.
Type: Application
Filed: August 16, 2022
Publication date: January 11, 2024
Inventors: Avihay Assouline, Itamar Berger, Riza Alp Guler, Antonios Kakolyris, Frank Lu, Haoyang Wang, Matan Zohar
-
Patent number: 11836866
Abstract: Methods and systems are disclosed for performing operations comprising: receiving a video that includes a depiction of a real-world object; generating a three-dimensional (3D) body mesh associated with the real-world object that tracks movement of the real-world object across frames of the video; determining UV positions of the real-world object depicted in the video to obtain pixel values associated with the UV positions; generating an external mesh and associated augmented reality (AR) element representing the real-world object based on the pixel values associated with the UV positions; deforming the external mesh based on changes to the 3D body mesh and a deformation parameter; and modifying the video to replace the real-world object with the AR element based on the deformed external mesh.
Type: Grant
Filed: September 20, 2021
Date of Patent: December 5, 2023
Assignee: Snap Inc.
Inventors: Matan Zohar, Yanli Zhao, Brian Fulkerson, Riza Alp Guler
-
Patent number: 11816850
Abstract: This disclosure relates to reconstructing three-dimensional models of objects from two-dimensional images. According to a first aspect, this specification describes a computer implemented method for creating a three-dimensional reconstruction from a two-dimensional image, the method comprising: receiving a two-dimensional image; identifying an object in the image to be reconstructed and identifying a type of said object; spatially anchoring a pre-determined set of object landmarks within the image; extracting a two-dimensional image representation from each object landmark; estimating a respective three-dimensional representation for the respective two-dimensional image representations; and combining the respective three-dimensional representations resulting in a fused three-dimensional representation of the object.
Type: Grant
Filed: April 19, 2021
Date of Patent: November 14, 2023
Assignee: Snap Inc.
Inventors: Riza Alp Guler, Iason Kokkinos
-
Publication number: 20230316665
Abstract: Methods and systems are disclosed for performing operations for applying augmented reality elements to a person depicted in an image. The operations include receiving an image that includes data representing a depiction of a person; generating a segmentation of the data representing the person depicted in the image; extracting a portion of the image corresponding to the segmentation of the data representing the person depicted in the image; applying a machine learning model to the portion of the image to predict a surface normal tensor for the data representing the depiction of the person, the surface normal tensor representing surface normals of each pixel within the portion of the image; and applying one or more augmented reality (AR) elements to the image based on the surface normal tensor.
Type: Application
Filed: June 16, 2022
Publication date: October 5, 2023
Inventors: Madiyar Aitbayev, Brian Fulkerson, Riza Alp Guler, Georgios Papandreou, Himmy Tam
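One way a per-pixel surface-normal tensor can drive an AR element is a simple relighting pass; the sketch below is a hedged illustration of that idea, not the patented method. The normal map is synthetic here, and Lambertian shading is an assumed choice of effect.

```python
import numpy as np

def relight(normals, light_dir):
    """Lambertian shading: dot each pixel's unit normal with a unit
    light direction, clamped to [0, 1]."""
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    return np.clip(np.einsum("hwc,c->hw", normals, light), 0.0, 1.0)

# a flat surface facing the camera: every normal is (0, 0, 1)
normals = np.zeros((4, 4, 3))
normals[..., 2] = 1.0
shading = relight(normals, [0.0, 0.0, 1.0])
```

Because the shading respects per-pixel orientation, an AR light or texture composited with it follows the person's geometry instead of looking pasted on.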
-
Publication number: 20230316666
Abstract: Methods and systems are disclosed for performing operations for applying augmented reality elements to a person depicted in an image. The operations include receiving an image that includes data representing a depiction of a person; extracting a portion of the image; applying a first machine learning model stage to the portion to predict a depth of a point of interest for the data representing the depiction of the person; applying a second machine learning model stage to the portion of the image to predict a relative depth of each pixel in the portion of the image to the predicted depth of the point of interest; generating dense depth reconstruction of the data representing the depiction of the person based on outputs of the first and second stages of the machine learning model; and applying one or more AR elements to the image based on the dense depth reconstruction.
Type: Application
Filed: June 16, 2022
Publication date: October 5, 2023
Inventors: Madiyar Aitbayev, Brian Fulkerson, Riza Alp Guler, Georgios Papandreou, Himmy Tam
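The composition of the two stages can be shown in a few lines: stage one anchors one point of interest at an absolute depth, stage two predicts per-pixel depth relative to that anchor, and the dense map is their combination. Both stages are faked with constants here; additive composition is an assumption for illustration.

```python
import numpy as np

def dense_depth(anchor_depth, relative_depth):
    """Combine an absolute anchor depth (scalar, from stage one) with a
    per-pixel relative depth map (from stage two)."""
    return anchor_depth + relative_depth

anchor = 2.5                  # metres; stand-in for stage-one output
relative = np.zeros((4, 4))   # stand-in for stage-two output: flat surface
relative[0, 0] = -0.1         # one pixel slightly closer than the anchor
depth = dense_depth(anchor, relative)
```

Splitting the problem this way lets the second stage solve only the easier relative task, with absolute scale supplied by the single anchored point.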
-
Publication number: 20230090645
Abstract: Methods and systems are disclosed for performing operations comprising: receiving a video that includes a depiction of a real-world object; generating a three-dimensional (3D) body mesh associated with the real-world object that tracks movement of the real-world object across frames of the video; determining UV positions of the real-world object depicted in the video to obtain pixel values associated with the UV positions; generating an external mesh and associated augmented reality (AR) element representing the real-world object based on the pixel values associated with the UV positions; deforming the external mesh based on changes to the 3D body mesh and a deformation parameter; and modifying the video to replace the real-world object with the AR element based on the deformed external mesh.
Type: Application
Filed: September 20, 2021
Publication date: March 23, 2023
Inventors: Matan Zohar, Yanli Zhao, Brian Fulkerson, Riza Alp Guler
-
Publication number: 20230070008
Abstract: This specification discloses methods and systems for generating three-dimensional models of deformable objects from two-dimensional images. According to one aspect of this disclosure, there is described a computer implemented method for generating a three-dimensional model of a deformable object from a two-dimensional image. The method comprises: receiving, as input to an embedding neural network, the two-dimensional image, wherein the two-dimensional image comprises an image of an object; generating, using the embedding neural network, an embedded representation of the two-dimensional image; inputting the embedded representation into a learned decoder model; and generating, using the learned decoder model, parameters of the three-dimensional model of the object from the embedded representation.
Type: Application
Filed: February 17, 2020
Publication date: March 9, 2023
Inventors: Dominik Kulon, Riza Alp Guler, Iason Kokkinos, Stefanos Zafeiriou
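The encode-then-decode structure described here can be sketched minimally with random linear maps standing in for the embedding network and the learned decoder; real models would be trained networks, and the dimensions chosen below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W_embed = rng.standard_normal((16, 64))   # stand-in "embedding network"
W_decode = rng.standard_normal((10, 16))  # stand-in "learned decoder"

def embed(image_flat):
    """Map a flattened image to an embedded representation."""
    return np.tanh(image_flat @ W_embed.T)

def decode(embedding):
    """Map the embedding to parameters of the 3D model."""
    return embedding @ W_decode.T

image = rng.standard_normal(64)           # flattened 8x8 "image"
params = decode(embed(image))             # 10 model parameters
```

The key design point the abstract describes is that the decoder outputs model *parameters* (e.g. coefficients of a deformable template) rather than raw geometry, so the 3D output is constrained to plausible shapes.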
-
Publication number: 20220375247
Abstract: Aspects of the present disclosure involve a system and a method for performing operations comprising: receiving a two-dimensional continuous surface representation of a three-dimensional object, the continuous surface comprising a plurality of landmark locations; determining a first set of soft membership functions based on a relative location of points in the two-dimensional continuous surface representation and the landmark locations; receiving a two-dimensional input image, the input image comprising an image of the object; extracting a plurality of features from the input image using a feature recognition model; generating an encoded.
Type: Application
Filed: July 15, 2022
Publication date: November 24, 2022
Inventors: Iason Kokkinos, Georgios Papandreou, Riza Alp Guler
-
Publication number: 20220358770
Abstract: This specification relates to reconstructing three-dimensional (3D) scenes from two-dimensional (2D) images using a neural network. According to a first aspect of this specification, there is described a method for creating a three-dimensional reconstruction of a scene with multiple objects from a single two-dimensional image, the method comprising: receiving a single two-dimensional image; identifying all objects in the image to be reconstructed and identifying the type of said objects; estimating a three-dimensional representation of each identified object; estimating a three-dimensional plane physically supporting all three-dimensional objects; and positioning all three-dimensional objects in space relative to the supporting plane.
Type: Application
Filed: June 17, 2020
Publication date: November 10, 2022
Inventors: Riza Alp Guler, Georgios Papandreou, Iason Kokkinos
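The last step, positioning objects relative to the supporting plane, can be illustrated with a toy placement rule: translate each object so its lowest vertex rests on the plane. This is an assumed simplification (a horizontal plane at z = 0 and vertical translation only), not the patented positioning method.

```python
import numpy as np

def rest_on_plane(verts, plane_z=0.0):
    """Translate vertices (N, 3) along z so the lowest vertex touches
    the supporting plane z = plane_z."""
    offset = plane_z - verts[:, 2].min()
    out = verts.copy()
    out[:, 2] += offset
    return out

# a reconstructed object floating above the plane
obj = np.array([[0, 0, 1.5], [1, 0, 2.5], [0, 1, 1.5], [1, 1, 2.5]], float)
placed = rest_on_plane(obj)
```

Anchoring every object to a shared supporting plane is what makes the per-object reconstructions mutually consistent as a single scene.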
-
Patent number: 11430247
Abstract: Aspects of the present disclosure involve a system and a method for performing operations comprising: receiving a two-dimensional continuous surface representation of a three-dimensional object, the continuous surface comprising a plurality of landmark locations; determining a first set of soft membership functions based on a relative location of points in the two-dimensional continuous surface representation and the landmark locations; receiving a two-dimensional input image, the input image comprising an image of the object; extracting a plurality of features from the input image using a feature recognition model; generating an encoded feature representation of the extracted features using the first set of soft membership functions; generating a dense feature representation of the extracted features from the encoded representation using a second set of soft membership functions; and processing the second set of soft membership functions and dense feature representation using a neural image decoder model to
Type: Grant
Filed: November 13, 2020
Date of Patent: August 30, 2022
Assignee: Snap Inc.
Inventors: Iason Kokkinos, Georgios Papandreou, Riza Alp Guler
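A "soft membership function" of the kind this abstract describes can be sketched as a distance-based soft assignment of surface points to landmarks; the Gaussian kernel and softmax-style normalisation below are illustrative assumptions, and the exact functional form in the patent may differ.

```python
import numpy as np

def soft_memberships(points, landmarks, temperature=1.0):
    """Soft assignment of each point to each landmark.

    points:    (N, 2) locations on the 2D continuous surface
    landmarks: (K, 2) landmark locations
    Returns an (N, K) matrix whose rows sum to 1; nearby landmarks
    receive higher weight.
    """
    d2 = ((points[:, None, :] - landmarks[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / temperature)
    return w / w.sum(axis=1, keepdims=True)

pts = np.array([[0.0, 0.0], [1.0, 1.0]])
lms = np.array([[0.0, 0.0], [2.0, 2.0]])
m = soft_memberships(pts, lms)
```

Because the weights are soft rather than hard nearest-landmark assignments, features can be pooled to and broadcast from landmarks differentiably, which is what lets such functions sit inside an encoder/decoder pipeline.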
-
Publication number: 20210241522
Abstract: This disclosure relates to reconstructing three-dimensional models of objects from two-dimensional images. According to a first aspect, this specification describes a computer implemented method for creating a three-dimensional reconstruction from a two-dimensional image, the method comprising: receiving a two-dimensional image; identifying an object in the image to be reconstructed and identifying a type of said object; spatially anchoring a pre-determined set of object landmarks within the image; extracting a two-dimensional image representation from each object landmark; estimating a respective three-dimensional representation for the respective two-dimensional image representations; and combining the respective three-dimensional representations resulting in a fused three-dimensional representation of the object.
Type: Application
Filed: April 19, 2021
Publication date: August 5, 2021
Inventors: Riza Alp Guler, Iason Kokkinos
-
Publication number: 20210150806
Abstract: Aspects of the present disclosure involve a system and a method for performing operations comprising: receiving a plurality of bone scale coefficients each corresponding to respective bones of a skeleton model; receiving a plurality of joint angle coefficients that collectively define a pose for the skeleton model; generating the skeleton model based on the received bone scale coefficients and the received joint angle coefficients; generating a base surface based on the plurality of bone scale coefficients; generating an identity surface by deformation of the base surface; and generating the 3D body model by mapping the identity surface onto the posed skeleton model.
Type: Application
Filed: November 13, 2020
Publication date: May 20, 2021
Inventors: Riza Alp Guler, Haoyang Wang, Iason Kokkinos, Stefanos Zafeiriou
-
Publication number: 20210150197
Abstract: Aspects of the present disclosure involve a system and a method for performing operations comprising: receiving a two-dimensional continuous surface representation of a three-dimensional object, the continuous surface comprising a plurality of landmark locations; determining a first set of soft membership functions based on a relative location of points in the two-dimensional continuous surface representation and the landmark locations; receiving a two-dimensional input image, the input image comprising an image of the object; extracting a plurality of features from the input image using a feature recognition model; generating an encoded feature representation of the extracted features using the first set of soft membership functions; generating a dense feature representation of the extracted features from the encoded representation using a second set of soft membership functions; and processing the second set of soft membership functions and dense feature representation using a neural image decoder model to
Type: Application
Filed: November 13, 2020
Publication date: May 20, 2021
Inventors: Iason Kokkinos, Georgios Papandreou, Riza Alp Guler