Patents by Inventor Dhritiman Sagar
Dhritiman Sagar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11961189
Abstract: The subject technology generates depth data using a machine learning model based at least in part on captured image data from at least one camera of a client device. The subject technology applies, to the captured image data and the generated depth data, a 3D effect based at least in part on an augmented reality content generator. The subject technology generates a depth map using at least the depth data. The subject technology generates a packed depth map based at least in part on the depth map, the generating of the packed depth map comprising converting a single-channel floating-point texture to a raw depth map and generating multiple channels based at least in part on the raw depth map. The subject technology generates a segmentation mask based at least on the captured image data. The subject technology performs background inpainting and blurring of the captured image data using at least the segmentation mask to generate background inpainted image data.
Type: Grant
Filed: May 5, 2023
Date of Patent: April 16, 2024
Assignee: Snap Inc.
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
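The "packed depth map" step in the abstract above (converting a single-channel floating-point texture into multiple channels) can be illustrated with a minimal sketch. This is not the patented implementation: the 24-bit fixed-point layout, the three-channel split, and the function names are all assumptions for illustration.

```python
import numpy as np

def pack_depth(depth: np.ndarray) -> np.ndarray:
    """Pack a single-channel float depth map into three 8-bit channels.

    Assumes depth is normalized to [0, 1); each channel stores 8 bits
    of a 24-bit fixed-point representation (hypothetical layout).
    """
    d = np.clip(depth, 0.0, 1.0 - 1e-7)
    fixed = (d * (1 << 24)).astype(np.uint32)   # 24-bit fixed point
    r = (fixed >> 16) & 0xFF                     # most significant byte
    g = (fixed >> 8) & 0xFF
    b = fixed & 0xFF                             # least significant byte
    return np.stack([r, g, b], axis=-1).astype(np.uint8)

def unpack_depth(packed: np.ndarray) -> np.ndarray:
    """Invert pack_depth back to a float depth map."""
    r, g, b = (packed[..., i].astype(np.uint32) for i in range(3))
    fixed = (r << 16) | (g << 8) | b
    return fixed.astype(np.float64) / (1 << 24)
```

Packing into 8-bit channels like this lets a float depth map travel through image codecs and GPU texture formats that only carry bytes, at the cost of a small quantization error on round-trip.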
-
Patent number: 11961196
Abstract: A context-based augmented reality system can be used to display augmented reality elements over a live video feed on a client device. The augmented reality elements can be selected based on a number of context inputs generated by the client device. The context inputs can include location data of the client device and location data of nearby physical places that have preconfigured augmented elements. The augmented elements can be preconfigured to exhibit a design scheme of the corresponding physical place.
Type: Grant
Filed: March 17, 2023
Date of Patent: April 16, 2024
Assignee: Snap Inc.
Inventors: Ebony James Charlton, Jokubas Dargis, Eitan Pilipski, Dhritiman Sagar, Victor Shaburov
-
Publication number: 20240048678
Abstract: The subject technology receives, at a client device, a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator including a 3D effect. The subject technology applies, to image data and depth data, the 3D effect based at least in part on the augmented reality content generator. The subject technology generates a depth map using at least the depth data, generates a segmentation mask based at least on the image data, and performs background inpainting and blurring of the image data using at least the segmentation mask to generate background inpainted image data. The subject technology generates a 3D message based at least in part on the applied 3D effect.
Type: Application
Filed: October 18, 2023
Publication date: February 8, 2024
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
-
Patent number: 11886633
Abstract: Systems, devices, media, and methods are presented for displaying a virtual object on the display of a portable eyewear device using motion data gathered by a face-tracking application on a mobile device. A controller engine leverages the processing power of a mobile device to locate the face supporting the eyewear, locate the hand holding the mobile device, acquire the motion data, and calculate an apparent path of the virtual object. The virtual object is displayed in a series of locations along the apparent path, based on both the course traveled by the mobile device (in the hand) and the track traveled by the eyewear device (on the face), so that the virtual object is persistently viewable to the user.
Type: Grant
Filed: February 23, 2023
Date of Patent: January 30, 2024
Assignee: Snap Inc.
Inventors: Ilteris Canberk, Dhritiman Sagar, Mathieu Emmanuel Vignau
-
Publication number: 20230410450
Abstract: The subject technology applies, to image data and depth data, a 3D effect including at least one beautification operation based on an augmented reality content generator. The beautification operation comprises modifying image data that includes a region corresponding to a representation of a face, using a machine learning model for at least one of smoothing blemishes or preserving facial skin texture. The subject technology generates a depth map using at least the depth data. The subject technology generates a segmentation mask based at least on the image data. The subject technology performs background inpainting and blurring of the image data using at least the segmentation mask to generate background inpainted image data. The subject technology generates a 3D message based at least in part on the applied 3D effect including the at least one beautification operation.
Type: Application
Filed: August 31, 2023
Publication date: December 21, 2023
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
-
Publication number: 20230386157
Abstract: The subject technology applies a three-dimensional (3D) effect to image data and depth data based at least in part on an augmented reality content generator. The subject technology generates a segmentation mask based at least on the image data. The subject technology performs background inpainting and blurring of the image data using at least the segmentation mask to generate background inpainted image data. The subject technology generates a packed depth map based at least in part on a depth map of the depth data. The subject technology generates, using the processor, a message including information related to the applied 3D effect, the image data, and the depth data.
Type: Application
Filed: August 15, 2023
Publication date: November 30, 2023
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
-
Publication number: 20230386112
Abstract: A venue system of a client device can submit a location request to a server, which returns multiple venues that are near the client device. The client device can use one or more machine learning schemes (e.g., convolutional neural networks) to determine which specific venue, among the possible venues, the client device is located in. The venue system can further select imagery for presentation based on the venue selection. The presentation may be published as an ephemeral message on a network platform.
Type: Application
Filed: August 15, 2023
Publication date: November 30, 2023
Inventors: Ebony James Charlton, Sumant Hanumante, Zhou Ren, Dhritiman Sagar
-
Patent number: 11825065
Abstract: The subject technology receives, at a client device, a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator including a 3D effect. The subject technology applies, to image data and depth data, the 3D effect based at least in part on the augmented reality content generator. The subject technology generates a depth map using at least the depth data, generates a segmentation mask based at least on the image data, and performs background inpainting and blurring of the image data using at least the segmentation mask to generate background inpainted image data. The subject technology generates a 3D message based at least in part on the applied 3D effect.
Type: Grant
Filed: September 22, 2022
Date of Patent: November 21, 2023
Assignee: Snap Inc.
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
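The segmentation-mask-then-background-blur pipeline described in the abstract above can be sketched roughly as follows. The `box_blur` helper, the message dictionary layout, and all names are hypothetical; a production system would use GPU filters and true inpainting rather than a simple wrap-around blur.

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 5) -> np.ndarray:
    """Naive box blur: average over a k x k neighborhood (wraps at edges)."""
    out = np.zeros_like(img, dtype=float)
    r = k // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / (k * k)

def build_3d_message(image, depth, mask, k=5):
    """Keep the masked subject sharp, blur the background, bundle a payload.

    mask is 1.0 on the subject (kept sharp) and 0.0 on the background.
    """
    blurred = box_blur(image, k)
    composited = mask * image + (1.0 - mask) * blurred
    return {"image": composited, "depth": depth, "mask": mask}
```

The key ordering, visible in the abstract, is that the mask is computed from the image first, and only then used to decide which pixels receive the background treatment.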
-
Patent number: 11803992
Abstract: A venue system of a client device can submit a location request to a server, which returns multiple venues that are near the client device. The client device can use one or more machine learning schemes (e.g., convolutional neural networks) to determine which specific venue, among the possible venues, the client device is located in. The venue system can further select imagery for presentation based on the venue selection. The presentation may be published as an ephemeral message on a network platform.
Type: Grant
Filed: June 22, 2021
Date of Patent: October 31, 2023
Assignee: Snap Inc.
Inventors: Ebony James Charlton, Sumant Hanumante, Zhou Ren, Dhritiman Sagar
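One way to combine the two signals the abstract above describes, i.e. server-supplied candidate venues and a client-side classifier's scores, is a simple filter-and-rank step. This is a sketch under stated assumptions: the dictionary fields, the distance threshold, and the idea of matching on a venue "category" are all illustrative, not the patented method.

```python
def select_venue(candidates, class_scores, max_distance_m=100.0):
    """Pick the most likely venue for the client device.

    candidates: list of dicts like {"name", "category", "distance_m"}
    class_scores: mapping from category name to a classifier confidence.
    Venues beyond max_distance_m are discarded; the rest are ranked by
    the classifier's score for their category.
    """
    best, best_score = None, 0.0
    for venue in candidates:
        if venue["distance_m"] > max_distance_m:
            continue  # too far away to be the current venue
        score = class_scores.get(venue["category"], 0.0)
        if score > best_score:
            best, best_score = venue, score
    return best
```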
-
Publication number: 20230343046
Abstract: Augmented reality (AR) and virtual reality (VR) environment enhancement using an eyewear device. The eyewear device includes an image capture system, a display system, and a position detection system. The image capture system and position detection system identify feature points within a point cloud that represents captured images of an environment. The display system presents image overlays to a user, including enhancement graphics positioned at the feature points within the environment.
Type: Application
Filed: June 30, 2023
Publication date: October 26, 2023
Inventors: Ilteris Canberk, Sumant Milind Hanumante, Stanislav Minakov, Dhritiman Sagar
-
Patent number: 11776233
Abstract: The subject technology receives a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator for applying a 3D effect, the 3D effect including at least one beautification operation. The subject technology captures image data and depth data using a camera. The subject technology applies, to the image data and the depth data, the 3D effect including the at least one beautification operation based at least in part on the augmented reality content generator, the beautification operation being performed as part of applying the 3D effect. The subject technology generates a 3D message based at least in part on the applied 3D effect including the at least one beautification operation. The subject technology renders a view of the 3D message based at least in part on the applied 3D effect including the at least one beautification operation.
Type: Grant
Filed: August 1, 2022
Date of Patent: October 3, 2023
Assignee: Snap Inc.
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
-
Publication number: 20230290027
Abstract: A content display system can control which content is displayed, and how, based on viewing parameters, such as a map zoom level, and physical distance parameters, e.g., a geo-fence distance and an icon visibility distance. Different combinations of input (e.g., zoom level and physical distances) yield a myriad of pre-set content displays on the client device, thereby allowing a creator of an icon to finely tune how content is displayed or otherwise accessed.
Type: Application
Filed: May 19, 2023
Publication date: September 14, 2023
Inventors: Ebony James Charlton, Dhritiman Sagar, Daniel Vincent Grippi
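The combination of zoom level, geo-fence distance, and visibility distance described above can be sketched as a small decision function. The state names, the zoom threshold, and the idea that geo-fenced content "unlocks" at close range are assumptions for illustration only.

```python
def icon_state(zoom_level, distance_m, geofence_m, visibility_m):
    """Decide how a map icon renders from zoom and physical distances.

    Hypothetical policy: inside the geo-fence the full content unlocks;
    within the visibility radius at a close-enough zoom the icon is shown
    but its content stays locked; otherwise the icon is hidden.
    """
    if distance_m <= geofence_m:
        return "unlocked"
    if distance_m <= visibility_m and zoom_level >= 14:
        return "visible"
    return "hidden"
```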
-
Publication number: 20230281930
Abstract: The subject technology generates depth data using a machine learning model based at least in part on captured image data from at least one camera of a client device. The subject technology applies, to the captured image data and the generated depth data, a 3D effect based at least in part on an augmented reality content generator. The subject technology generates a depth map using at least the depth data. The subject technology generates a packed depth map based at least in part on the depth map, the generating of the packed depth map comprising converting a single-channel floating-point texture to a raw depth map and generating multiple channels based at least in part on the raw depth map. The subject technology generates a segmentation mask based at least on the captured image data. The subject technology performs background inpainting and blurring of the captured image data using at least the segmentation mask to generate background inpainted image data.
Type: Application
Filed: May 5, 2023
Publication date: September 7, 2023
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
-
Patent number: 11748957
Abstract: The subject technology receives image data and depth data. The subject technology selects an augmented reality content generator corresponding to a three-dimensional (3D) effect. The subject technology applies the 3D effect to the image data and the depth data based at least in part on the selected augmented reality content generator. The subject technology generates, using a processor, a message including information related to the applied 3D effect, the image data, and the depth data.
Type: Grant
Filed: November 12, 2021
Date of Patent: September 5, 2023
Assignee: Snap Inc.
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
-
Patent number: 11750682
Abstract: An example method comprises: receiving, at a server from a first client device, a request for access to a client feature on the first client device; determining, by the server, an applicable rule for the access request, the applicable rule having a plurality of nodes; determining, by the server, device capabilities needed for the determined rule; determining, by the server, nodes that can be executed and nodes that cannot be executed, based on the device capabilities, the nodes that can be executed including device hardware capabilities and the nodes that cannot be executed including real-time device capabilities; executing, by the server, the nodes that can be executed to reach a partial decision for the applicable rule; pruning the applicable rule to remove the executed nodes and generate a pruned rule that includes the nodes that cannot be executed; and transmitting the pruned rule and the partial decision to the first client device.
Type: Grant
Filed: January 24, 2022
Date of Patent: September 5, 2023
Assignee: Snap Inc.
Inventors: Michael Ronald Cieslak, Jiayao Yu, Kai Chen, Farnaz Azmoodeh, Michael David Marr, Jun Huang, Zahra Ferdowsi, Dhritiman Sagar
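The server-side partial evaluation described in the abstract above amounts to partially evaluating a boolean rule tree: nodes the server can resolve (e.g., hardware capabilities) are folded into a partial decision, while real-time nodes are left in a pruned tree for the client. A minimal sketch, with a hypothetical tuple-based rule representation:

```python
def prune_rule(node, known):
    """Partially evaluate a boolean rule tree.

    node is either ("leaf", capability_name) or (op, children) with op in
    {"and", "or"}. known maps server-resolvable capability names to booleans;
    leaves not in known are real-time checks left for the client.
    Returns True/False when fully decided, or a pruned subtree otherwise.
    """
    kind = node[0]
    if kind == "leaf":
        name = node[1]
        return known[name] if name in known else node
    op, children = node
    pruned = []
    for child in children:
        result = prune_rule(child, known)
        if result is True:
            if op == "or":
                return True        # short-circuit: whole branch decided
            continue               # True is the identity for "and"
        if result is False:
            if op == "and":
                return False       # short-circuit: whole branch decided
            continue               # False is the identity for "or"
        pruned.append(result)
    if not pruned:                 # every child was decided
        return op == "and"
    if len(pruned) == 1:
        return pruned[0]
    return (op, pruned)
```

The payoff matching the claim: the client only ever evaluates the real-time leaves that survive pruning, and the server never needs real-time state.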
-
Publication number: 20230275953
Abstract: An example method comprises: receiving, at a server from a first client device, a request for access to a client feature on the first client device; determining, by the server, an applicable rule for the access request, the applicable rule having a plurality of nodes; determining, by the server, device capabilities needed for the determined rule; determining, by the server, nodes that can be executed and nodes that cannot be executed, based on the device capabilities, the nodes that can be executed including device hardware capabilities and the nodes that cannot be executed including real-time device capabilities; executing, by the server, the nodes that can be executed to reach a partial decision for the applicable rule; pruning the applicable rule to remove the executed nodes and generate a pruned rule that includes the nodes that cannot be executed; and transmitting the pruned rule and the partial decision to the first client device.
Type: Application
Filed: May 4, 2023
Publication date: August 31, 2023
Inventors: Michael Ronald Cieslak, Jiayao Yu, Kai Chen, Farnaz Azmoodeh, Michael David Marr, Jun Huang, Zahra Ferdowsi, Dhritiman Sagar
-
Patent number: 11741679
Abstract: Augmented reality (AR) and virtual reality (VR) environment enhancement using an eyewear device. The eyewear device includes an image capture system, a display system, and a position detection system. The image capture system and position detection system identify feature points within a point cloud that represents captured images of an environment. The display system presents image overlays to a user, including enhancement graphics positioned at the feature points within the environment.
Type: Grant
Filed: September 27, 2022
Date of Patent: August 29, 2023
Assignee: Snap Inc.
Inventors: Ilteris Canberk, Sumant Hanumante, Dhritiman Sagar, Stanislav Minakov
-
Patent number: 11715223
Abstract: An active depth detection system can generate a depth map from an image and user interaction data, such as a pair of clicks. The active depth detection system can be implemented as a recurrent neural network that can receive the user interaction data as runtime inputs after training. The active depth detection system can store the generated depth map for further processing, such as image manipulation or real-world object detection.
Type: Grant
Filed: January 31, 2022
Date of Patent: August 1, 2023
Assignee: Snap Inc.
Inventors: Kun Duan, Daniel Ron, Chongyang Ma, Ning Xu, Shenlong Wang, Sumant Milind Hanumante, Dhritiman Sagar
-
Publication number: 20230222745
Abstract: A context-based augmented reality system can be used to display augmented reality elements over a live video feed on a client device. The augmented reality elements can be selected based on a number of context inputs generated by the client device. The context inputs can include location data of the client device and location data of nearby physical places that have preconfigured augmented elements. The augmented elements can be preconfigured to exhibit a design scheme of the corresponding physical place.
Type: Application
Filed: March 17, 2023
Publication date: July 13, 2023
Inventors: Ebony James Charlton, Jokubas Dargis, Eitan Pilipski, Dhritiman Sagar, Victor Shaburov
-
Publication number: 20230214639
Abstract: Techniques for training a neural network having a plurality of computational layers, with associated weights and activations for the computational layers in fixed-point formats, include: determining an optimal fractional length for the weights and activations of the computational layers; training a learned clipping level with fixed-point quantization using a PACT process for the computational layers; and quantizing effective weights that fuse a weight of a convolution layer with a weight and running variance from a batch normalization layer. A fractional length for the weights of the computational layers is determined from current values of the weights using the determined optimal fractional length. A fixed-point activation between adjacent computational layers is related using PACT quantization of the clipping level and an activation fractional length from a node in a following computational layer.
Type: Application
Filed: December 31, 2021
Publication date: July 6, 2023
Inventors: Sumant Milind Hanumante, Qing Jin, Sergei Korolev, Denys Makoviichuk, Jian Ren, Dhritiman Sagar, Patrick Timothy McSweeney Simons, Sergey Tulyakov, Yang Wen, Richard Zhuang
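The building blocks named in the abstract above (a fractional length chosen from weight magnitudes, and PACT-style clipping followed by uniform quantization) can be sketched in isolation. This is a minimal illustration of the standard techniques, not the claimed training procedure; the 8-bit width, the rounding scheme, and all function names are assumptions.

```python
import numpy as np

def optimal_frac_length(weights, bit_width=8):
    """Choose the fractional length that still covers the largest weight.

    One bit is reserved for sign; the integer bits cover the magnitude,
    and the remainder become fractional bits.
    """
    max_mag = np.max(np.abs(weights))
    int_bits = max(0, int(np.ceil(np.log2(max_mag + 1e-12))))
    return bit_width - 1 - int_bits

def quantize_fixed_point(x, frac_length, bit_width=8):
    """Round to the nearest representable fixed-point value and saturate."""
    scale = 2.0 ** frac_length
    qmax = 2 ** (bit_width - 1) - 1
    q = np.clip(np.round(x * scale), -qmax - 1, qmax)
    return q / scale

def pact_quantize_activation(x, alpha, bit_width=8):
    """PACT: clip activations to [0, alpha], then quantize uniformly.

    alpha is the learned clipping level (a trainable scalar in PACT).
    """
    y = np.clip(x, 0.0, alpha)
    levels = 2 ** bit_width - 1
    return np.round(y / alpha * levels) / levels * alpha
```

Because alpha is trainable, the quantization range adapts to each layer's activation statistics during training instead of being fixed up front, which is the core idea the abstract leans on.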