Patents by Inventor Maxim Maximov Lazarov
Maxim Maximov Lazarov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11961189
Abstract: The subject technology generates depth data using a machine learning model based at least in part on captured image data from at least one camera of a client device. The subject technology applies, to the captured image data and the generated depth data, a 3D effect based at least in part on an augmented reality content generator. The subject technology generates a depth map using at least the depth data. The subject technology generates a packed depth map based at least in part on the depth map. The subject technology converts a single channel floating point texture to a raw depth map. The subject technology generates multiple channels based at least in part on the raw depth map. The subject technology generates a segmentation mask based at least on the captured image data. The subject technology performs background inpainting and blurring of the captured image data using at least the segmentation mask to generate background inpainted image data.
Type: Grant
Filed: May 5, 2023
Date of Patent: April 16, 2024
Assignee: Snap Inc.
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
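The packed-depth-map step above (converting a single-channel floating point texture into multiple channels) can be sketched as splitting a quantized depth value into a coarse high byte and a fine low byte, a common way to fit 16-bit depth precision into 8-bit texture channels. This is a hypothetical sketch, not the patented implementation; all function names and the near/far range are assumptions.

```python
import numpy as np

def pack_depth(depth, near=0.0, far=10.0):
    """Pack a single-channel float depth map into two 8-bit channels:
    normalize to [0, 1], quantize to 16 bits, then split into a coarse
    high byte and a fine low byte."""
    d = np.clip((depth - near) / (far - near), 0.0, 1.0)
    q = np.round(d * 65535).astype(np.uint32)   # 16-bit quantization
    high = (q >> 8).astype(np.uint8)            # coarse channel
    low = (q & 0xFF).astype(np.uint8)           # fine channel
    return np.stack([high, low], axis=-1)

def unpack_depth(packed, near=0.0, far=10.0):
    """Invert pack_depth back to float depth (up to quantization error)."""
    q = packed[..., 0].astype(np.uint32) * 256 + packed[..., 1].astype(np.uint32)
    return near + (q / 65535.0) * (far - near)
```

The round trip loses at most half a quantization step, i.e. about (far − near) / 65535.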
-
Patent number: 11949527
Abstract: Methods and systems are disclosed for performing operations for providing a shared augmented reality experience in a video chat. A video chat can be established between a plurality of client devices. During the video chat, videos of users associated with the client devices can be displayed. During the video chat, a request from a first client device to activate a first AR experience can be received, and in response, body parts of users depicted in the videos are modified to include one or more AR elements associated with the first AR experience.
Type: Grant
Filed: April 25, 2022
Date of Patent: April 2, 2024
Assignee: SNAP INC.
Inventors: Nathan Richard Banks, Nathan Kenneth Boyd, Amanda Durham, Alex Edelsburg, Maxim Maximov Lazarov, Ryan Thomas
-
Patent number: 11948266
Abstract: The subject technology detects a first gesture and a second gesture, each gesture corresponding to an open trigger finger gesture. The subject technology detects a third gesture and a fourth gesture, each gesture corresponding to a closed trigger finger gesture. The subject technology selects a first virtual object in a first scene. The subject technology detects a first location and a first position of a first representation of a first finger from the third gesture and a second location and a second position of a second representation of a second finger from the fourth gesture. The subject technology detects a first change in the first location and the first position and a second change in the second location and the second position. The subject technology modifies a set of dimensions of the first virtual object to a different set of dimensions.
Type: Grant
Filed: September 9, 2022
Date of Patent: April 2, 2024
Assignee: SNAP INC.
Inventors: Kyle Goodrich, Maxim Maximov Lazarov, Andrew James McPhee, Daniel Moreno
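Resizing a virtual object from the change in two tracked finger positions is commonly implemented as uniform scaling by the ratio of fingertip distances. A minimal sketch of that idea, assuming uniform scaling and an arbitrary clamping range; the names and the clamp are illustrative, not the patent's method:

```python
import math

def pinch_scale(dims, p1_start, p2_start, p1_end, p2_end,
                min_scale=0.1, max_scale=10.0):
    """Scale an object's dimensions by the ratio of the fingertip
    distance after the gesture to the distance before it, clamped
    to a sane range. Points are (x, y, z) tuples."""
    d0 = math.dist(p1_start, p2_start)
    d1 = math.dist(p1_end, p2_end)
    if d0 == 0:
        return list(dims)  # degenerate start pose: leave dimensions unchanged
    s = max(min_scale, min(max_scale, d1 / d0))
    return [dim * s for dim in dims]
```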
-
Publication number: 20240087245
Abstract: The subject technology detects a first location and a first position of a first representation of a first finger and a second location and a second position of a second representation of a second finger. The subject technology detects a first particular location and a first particular position of a first particular representation of a first particular finger and a second particular location and a second particular position of a second particular representation of a second particular finger. The subject technology detects a first change in the first location and the first position and a second change in the second location and the second position. The subject technology detects a first particular change in the first particular location and the first particular position and a second particular change in the second particular location and the second particular position. The subject technology generates a set of virtual objects.
Type: Application
Filed: September 9, 2022
Publication date: March 14, 2024
Inventors: Kyle Goodrich, Maxim Maximov Lazarov, Andrew James McPhee, Daniel Moreno
-
Publication number: 20240087243
Abstract: The subject technology receives a set of frames. The subject technology detects a first gesture corresponding to an open trigger finger gesture. The subject technology receives a second set of frames. The subject technology detects, from the second set of frames, a second gesture corresponding to a closed trigger finger gesture. The subject technology detects a location and a position of a representation of a finger from the closed trigger finger gesture. The subject technology generates a first virtual object based at least in part on the location and the position of the representation of the finger. The subject technology renders a movement of the first virtual object along a vector away from the location and the position of the representation of the finger within a first scene. The subject technology provides for display the rendered movement of the first virtual object along the vector within the first scene.
Type: Application
Filed: September 9, 2022
Publication date: March 14, 2024
Inventors: Kyle Goodrich, Maxim Maximov Lazarov, Andrew James McPhee, Daniel Moreno
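The rendered movement along a vector away from the finger could be computed as a sequence of positions stepped along the normalized direction at a constant speed. A hypothetical sketch of that stepping, with all names and parameters assumed for illustration:

```python
def positions_along_vector(origin, direction, speed, dt, steps):
    """Return the successive positions of an object launched from
    `origin` along the normalized `direction` at constant `speed`,
    sampled every `dt` seconds for `steps` frames."""
    mag = sum(c * c for c in direction) ** 0.5
    unit = tuple(c / mag for c in direction)  # normalize the direction
    return [
        tuple(o + u * speed * dt * i for o, u in zip(origin, unit))
        for i in range(1, steps + 1)
    ]
```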
-
Publication number: 20240087609
Abstract: The subject technology receives frames of a source media content. The subject technology detects, from the frames of the source media content, a first gesture indicating a cut point at a particular frame of the source media content, the cut point associated with a trimming operation to be performed on the source media content. The subject technology selects a starting frame and an ending frame from the frames based at least in part on the cut point at the particular frame. The subject technology performs the trimming operation based on the starting frame and the ending frame to produce a third set of frames. The subject technology generates a second media content using the third set of frames. The subject technology provides for display at least a portion of the third set of frames of the second media content.
Type: Application
Filed: September 9, 2022
Publication date: March 14, 2024
Inventors: Kyle Goodrich, Maxim Maximov Lazarov, Andrew James McPhee, Daniel Moreno
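The trim itself reduces to slicing the frame sequence between the selected starting and ending frames. A minimal sketch under the assumption that the cut point is a frame index; the function name and the before/after convention are illustrative:

```python
def trim_frames(frames, cut_index, trim_before=True):
    """Trim a frame sequence at a gesture-indicated cut point.
    If trim_before is True, keep frames from the cut point onward;
    otherwise keep frames up to and including the cut point."""
    if trim_before:
        start, end = cut_index, len(frames) - 1
    else:
        start, end = 0, cut_index
    return frames[start:end + 1]
```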
-
Publication number: 20240087246
Abstract: The subject technology detects a first gesture corresponding to an open trigger finger gesture. The subject technology detects a location and a position of a representation of a finger from the open trigger finger gesture. The subject technology generates a first virtual object based at least in part on the location and the position of the representation of the finger. The subject technology detects a first collision event between the first virtual object and a second virtual object. The subject technology detects a second gesture corresponding to a closed trigger finger gesture. The subject technology selects the second virtual object. The subject technology renders the first virtual object as attached to the second virtual object in response to the selecting. The subject technology provides for display the rendered first virtual object as attached to the second virtual object within a first scene.
Type: Application
Filed: September 9, 2022
Publication date: March 14, 2024
Inventors: Kyle Goodrich, Maxim Maximov Lazarov, Andrew James McPhee, Daniel Moreno
-
Publication number: 20240087244
Abstract: The subject technology detects a location and a position of a representation of a finger in a set of frames captured by a camera of a client device. The subject technology generates a first virtual object based at least in part on the location and the position of the representation of the finger. The subject technology renders the first virtual object within a first scene. The subject technology detects a first collision event corresponding to a first collider of the first virtual object intersecting with a second collider of a second virtual object. The subject technology modifies a set of dimensions of the second virtual object to a second set of dimensions. The subject technology renders the second virtual object based on the second set of dimensions within a second scene. The subject technology provides for display the rendered second virtual object within the second scene.
Type: Application
Filed: September 9, 2022
Publication date: March 14, 2024
Inventors: Kyle Goodrich, Maxim Maximov Lazarov, Andrew James McPhee, Daniel Moreno
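A collision event between two colliders is typically detected with an overlap test. The publication does not specify the collider shape, so the axis-aligned bounding-box (AABB) test below is an assumption chosen for illustration: two boxes intersect exactly when their intervals overlap on every axis.

```python
from dataclasses import dataclass

@dataclass
class AABB:
    min_pt: tuple  # (x, y, z) minimum corner
    max_pt: tuple  # (x, y, z) maximum corner

def colliders_intersect(a: AABB, b: AABB) -> bool:
    """Axis-aligned bounding-box overlap test: the boxes intersect
    iff their [min, max] intervals overlap on all three axes."""
    return all(
        a.min_pt[i] <= b.max_pt[i] and b.min_pt[i] <= a.max_pt[i]
        for i in range(3)
    )
```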
-
Publication number: 20240087242
Abstract: The subject technology detects a location and a position of a representation of a finger. The subject technology generates a first virtual object based on the location and the position of the representation of the finger. The subject technology detects a first collision event with a second virtual object. In response to the first collision event, the subject technology modifies a set of dimensions of the second virtual object to a second set of dimensions. The subject technology detects a second location and a second position of the representation of the finger. The subject technology detects a second collision event with a third virtual object. The subject technology modifies a set of dimensions of the third virtual object to a third set of dimensions. The subject technology renders the third virtual object based on the third set of dimensions within a third scene, the third scene comprising a modified scene from a second scene.
Type: Application
Filed: September 9, 2022
Publication date: March 14, 2024
Inventors: Kyle Goodrich, Maxim Maximov Lazarov, Andrew James McPhee, Daniel Moreno
-
Publication number: 20240087264
Abstract: The subject technology detects a first gesture and a second gesture, each gesture corresponding to an open trigger finger gesture. The subject technology detects a third gesture and a fourth gesture, each gesture corresponding to a closed trigger finger gesture. The subject technology selects a first virtual object in a first scene. The subject technology detects a first location and a first position of a first representation of a first finger from the third gesture and a second location and a second position of a second representation of a second finger from the fourth gesture. The subject technology detects a first change in the first location and the first position and a second change in the second location and the second position. The subject technology modifies a set of dimensions of the first virtual object to a different set of dimensions.
Type: Application
Filed: September 9, 2022
Publication date: March 14, 2024
Inventors: Kyle Goodrich, Maxim Maximov Lazarov, Andrew James McPhee, Daniel Moreno
-
Patent number: 11908093
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and method for performing operations comprising: receiving, by a messaging application, a video feed from a camera of a user device that depicts a face; receiving a request to add a 3D caption to the video feed; identifying a graphical element that is associated with context of the 3D caption; and displaying the 3D caption and the identified graphical element in the video feed at a position in 3D space of the video feed proximate to the face depicted in the video feed.
Type: Grant
Filed: March 15, 2023
Date of Patent: February 20, 2024
Assignee: SNAP INC.
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Wentao Shang
-
Patent number: 11908082
Abstract: Systems and methods are provided for determining a location of a selection in a space viewable in a camera view on a display of a computing device, detecting movement of the computing device, and generating a path based on the location of the selection and the movement of the computing device. The systems and methods further provide for generating a three-dimensional (3D) mesh along the path, populating the 3D mesh with selected options to generate a 3D paint object, and causing the generated 3D paint object to be displayed. The systems and methods further provide for receiving a request to send a message comprising an image or video overlaid by the 3D paint object, capturing the image or video overlaid by the displayed 3D paint object, generating the message comprising the image or video overlaid by the 3D paint object, and sending the message to another computing device.
Type: Grant
Filed: February 15, 2023
Date of Patent: February 20, 2024
Assignee: Snap Inc.
Inventors: Piers George Cowburn, Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, David Li, Tony Mathew, Andrew James McPhee, Daniel Moreno, Isac Andreas Müller Sandvik, Wentao Shang
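Generating a path from device movement usually involves filtering raw position samples so the 3D mesh is not built from near-duplicate points. A hypothetical sketch of that accumulation step; the minimum-spacing filter and all names are assumptions, not the patented method:

```python
import math

def build_path(samples, min_spacing=0.01):
    """Accumulate tracked device positions into a paint path,
    skipping any sample closer than min_spacing to the last
    accepted point, so the mesh vertices stay well separated."""
    path = []
    for p in samples:
        if not path or math.dist(path[-1], p) >= min_spacing:
            path.append(p)
    return path
```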
-
Publication number: 20240048678
Abstract: The subject technology receives, at a client device, a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator including a 3D effect. The subject technology applies, to image data and depth data, the 3D effect based at least in part on the augmented reality content generator. The subject technology generates a depth map using at least the depth data, generates a segmentation mask based at least on the image data, and performs background inpainting and blurring of the image data using at least the segmentation mask to generate background inpainted image data. The subject technology generates a 3D message based at least in part on the applied 3D effect.
Type: Application
Filed: October 18, 2023
Publication date: February 8, 2024
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
-
Publication number: 20240037878
Abstract: Systems and methods are provided for capturing by a camera of a user device, a first image depicting a first environment of the user device; overlaying a first virtual object on a portion of the first image depicting the first environment; modifying a surface of the first virtual object using content captured by the user device; storing a second virtual object comprising the first virtual object with the modified surface; and generating for display the second virtual object on a portion of a second image depicting a second environment.
Type: Application
Filed: October 16, 2023
Publication date: February 1, 2024
Inventors: Samuel Edward Hare, Andrew James McPhee, Maxim Maximov Lazarov, Wentao Shang, Kyle Goodrich, Tony Mathew
-
Publication number: 20240029373
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and method for performing operations comprising: receiving, by one or more processors that implement a messaging application, a video feed from a camera of a user device; detecting, by the messaging application, a face in the video feed; in response to detecting the face in the video feed, retrieving a three-dimensional (3D) caption; modifying the video feed to include the 3D caption at a position in 3D space of the video feed proximate to the face; and displaying a modified video feed that includes the face and the 3D caption.
Type: Application
Filed: October 2, 2023
Publication date: January 25, 2024
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Wentao Shang
-
Publication number: 20230410450
Abstract: The subject technology applies, to image data and depth data, a 3D effect including at least one beautification operation based on an augmented reality content generator. The beautification operation comprises modifying image data that includes a region corresponding to a representation of a face, using a machine learning model for at least one of smoothing blemishes or preserving facial skin texture. The subject technology generates a depth map using at least the depth data. The subject technology generates a segmentation mask based at least on the image data. The subject technology performs background inpainting and blurring of the image data using at least the segmentation mask to generate background inpainted image data. The subject technology generates a 3D message based at least in part on the applied 3D effect including the at least one beautification operation.
Type: Application
Filed: August 31, 2023
Publication date: December 21, 2023
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
-
Publication number: 20230410442
Abstract: The subject technology generates a segmentation mask based on first image data. The subject technology applies the segmentation mask on first depth data to reduce a set of artifacts in a depth map based on the first depth data. The subject technology generates a packed depth map based at least in part on the depth map. The subject technology converts a single channel floating point texture to a raw depth map. The subject technology generates multiple channels. The subject technology applies, to the first image data and the first depth data, a first augmented reality content generator corresponding to a selected first selectable graphical item, the first image data and the first depth data being captured with a camera. The subject technology generates a message including the applied first augmented reality content generator to the first image data and the first depth data.
Type: Application
Filed: August 31, 2023
Publication date: December 21, 2023
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Wentao Shang
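Applying a segmentation mask to depth data to reduce artifacts can be as simple as replacing depth samples outside the segmented subject with a fill value, since background depth samples are often noisy. A minimal sketch, assuming a boolean mask the same shape as the depth map; the fill convention is an assumption:

```python
import numpy as np

def mask_depth(depth, segmentation, fill=0.0):
    """Keep depth only where the boolean segmentation mask is True,
    replacing background samples with `fill`."""
    return np.where(segmentation, depth, fill)
```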
-
Publication number: 20230386157
Abstract: The subject technology applies a three-dimensional (3D) effect to image data and depth data based at least in part on an augmented reality content generator. The subject technology generates a segmentation mask based at least on the image data. The subject technology performs background inpainting and blurring of the image data using at least the segmentation mask to generate background inpainted image data. The subject technology generates a packed depth map based at least in part on a depth map of the depth data. The subject technology generates, using a processor, a message including information related to the applied 3D effect, the image data, and the depth data.
Type: Application
Filed: August 15, 2023
Publication date: November 30, 2023
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
-
Patent number: 11823341
Abstract: Systems and methods are provided for capturing by a camera of a user device, a first image depicting a first environment of the user device; overlaying a first virtual object on a portion of the first image depicting the first environment; modifying a surface of the first virtual object using content captured by the user device; storing a second virtual object comprising the first virtual object with the modified surface; and generating for display the second virtual object on a portion of a second image depicting a second environment.
Type: Grant
Filed: August 4, 2022
Date of Patent: November 21, 2023
Assignee: Snap Inc.
Inventors: Samuel Edward Hare, Andrew James McPhee, Maxim Maximov Lazarov, Wentao Shang, Kyle Goodrich, Tony Mathew
-
Patent number: 11825065
Abstract: The subject technology receives, at a client device, a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator including a 3D effect. The subject technology applies, to image data and depth data, the 3D effect based at least in part on the augmented reality content generator. The subject technology generates a depth map using at least the depth data, generates a segmentation mask based at least on the image data, and performs background inpainting and blurring of the image data using at least the segmentation mask to generate background inpainted image data. The subject technology generates a 3D message based at least in part on the applied 3D effect.
Type: Grant
Filed: September 22, 2022
Date of Patent: November 21, 2023
Assignee: Snap Inc.
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang