Patents by Inventor Dhritiman Sagar

Dhritiman Sagar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230195215
    Abstract: Systems, devices, media, and methods are presented for displaying a virtual object on the display of a portable eyewear device using motion data gathered by a face-tracking application on a mobile device. A controller engine leverages the processing power of a mobile device to locate the face supporting the eyewear, locate the hand holding the mobile device, acquire the motion data, and calculate an apparent path of the virtual object. The virtual object is displayed in a series of locations along the apparent path, based on both the course traveled by the mobile device (in the hand) and the track traveled by the eyewear device (on the face), so that the virtual object is persistently viewable to the user.
    Type: Application
    Filed: February 23, 2023
    Publication date: June 22, 2023
    Inventors: Ilteris Canberk, Dhritiman Sagar, Mathieu Emmanuel Vignau
  • Patent number: 11676342
    Abstract: The subject technology generates depth data using a machine learning model based at least in part on captured image data from at least one camera of a client device. The subject technology applies, to the captured image data and the generated depth data, a 3D effect based at least in part on an augmented reality content generator. The subject technology generates a depth map using at least the depth data. The subject technology generates a packed depth map based at least in part on the depth map. The subject technology converts a single channel floating point texture to a raw depth map. The subject technology generates multiple channels based at least in part on the raw depth map. The subject technology generates a segmentation mask based at least on the captured image data. The subject technology performs background inpainting and blurring of the captured image data using at least the segmentation mask to generate background inpainted image data.
    Type: Grant
    Filed: September 23, 2022
    Date of Patent: June 13, 2023
    Assignee: Snap Inc.
    Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
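    The "packed depth map" step in the abstract above converts a single-channel floating-point depth texture into multiple 8-bit channels. The patent text does not specify the packing scheme; the sketch below is a common, purely illustrative approach that packs a normalized depth value into three bytes of descending significance:

    ```python
    import numpy as np

    def pack_depth(depth):
        """Pack normalized float depth in [0, 1) into three uint8 channels,
        most-significant byte first (an assumed, illustrative scheme)."""
        d = np.clip(np.asarray(depth, dtype=np.float64), 0.0, 1.0 - 1e-9)
        channels = []
        for _ in range(3):
            d = d * 255.0
            byte = np.floor(d)       # next byte of precision
            channels.append(byte)
            d = d - byte             # carry the remainder forward
        return np.stack(channels, axis=-1).astype(np.uint8)

    def unpack_depth(packed):
        """Invert pack_depth: recombine the three bytes into a float depth."""
        p = packed.astype(np.float64)
        return (p[..., 0] + p[..., 1] / 255.0 + p[..., 2] / 255.0**2) / 255.0
    ```

    With this scheme the round-trip error stays below 1/255³ (about 6e-8), which is ample precision for shader-side reconstruction of the depth map.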
  • Patent number: 11670025
    Abstract: A content display system can control which content is displayed, and how, based on viewing parameters, such as a map zoom level, and physical distance parameters, e.g., a geo-fence distance and an icon visibility distance. Different combinations of input (e.g., zoom level and physical distances) yield a myriad of pre-set content displays on the client device, thereby allowing a creator of an icon to finely tune how content is displayed or otherwise accessed.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: June 6, 2023
    Assignee: Snap Inc.
    Inventors: Ebony James Charlton, Dhritiman Sagar, Daniel Vincent Grippi
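    The combination of viewing and distance parameters described above amounts to a small decision table. The thresholds, field names, and state names below are illustrative assumptions, not values from the patent:

    ```python
    from dataclasses import dataclass

    @dataclass
    class IconConfig:
        geofence_m: float    # inside this radius, full content is unlocked
        visibility_m: float  # inside this radius, the icon is shown at all
        min_zoom: int        # icon is hidden below this map zoom level

    def display_state(cfg: IconConfig, distance_m: float, zoom: int) -> str:
        """Resolve the pre-set display for one combination of inputs."""
        if zoom < cfg.min_zoom or distance_m > cfg.visibility_m:
            return "hidden"
        if distance_m > cfg.geofence_m:
            return "icon_only"  # visible, but content stays locked until closer
        return "full_content"
    ```

    Each distinct (zoom, distance) combination maps deterministically to one pre-set display, which is what lets a creator tune the behavior per icon.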
  • Patent number: 11670057
    Abstract: A context based augmented reality system can be used to display augmented reality elements over a live video feed on a client device. The augmented reality elements can be selected based on a number of context inputs generated by the client device. The context inputs can include location data of the client device and location data of nearby physical places that have preconfigured augmented elements. The preconfigured augmented elements can be preconfigured to exhibit a design scheme of the corresponding physical place.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: June 6, 2023
    Assignee: Snap Inc.
    Inventors: Ebony James Charlton, Jokubas Dargis, Eitan Pilipski, Dhritiman Sagar, Victor Shaburov
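    The location-matching step above can be sketched as a nearest-place lookup against the client's location. The haversine distance, the 100 m cutoff, and the place-record fields are illustrative assumptions:

    ```python
    import math

    def haversine_m(lat1, lng1, lat2, lng2):
        """Great-circle distance in meters between two lat/lng points."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lng2 - lng1)
        h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(h))

    def elements_for_location(lat, lng, places, max_m=100.0):
        """Return the preconfigured AR elements of the nearest in-range place,
        or None if no configured physical place is close enough."""
        best = min(places, key=lambda p: haversine_m(lat, lng, p["lat"], p["lng"]))
        if haversine_m(lat, lng, best["lat"], best["lng"]) <= max_m:
            return best["ar_elements"]
        return None
    ```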
  • Patent number: 11631222
    Abstract: A context based augmented reality system can be used to display augmented reality elements over a live video feed on a client device. The augmented reality elements can be selected based on a number of context inputs generated by the client device. The context inputs can include location data of the client device and location data of nearby physical places that have preconfigured augmented elements. The preconfigured augmented elements can be preconfigured to exhibit a design scheme of the corresponding physical place.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: April 18, 2023
    Assignee: Snap Inc.
    Inventors: Ebony James Charlton, Jokubas Dargis, Eitan Pilipski, Dhritiman Sagar, Victor Shaburov
  • Patent number: 11604508
    Abstract: Systems, devices, media, and methods are presented for displaying a virtual object on the display of a portable eyewear device using motion data gathered by a face-tracking application on a mobile device. A controller engine leverages the processing power of a mobile device to locate the face supporting the eyewear, locate the hand holding the mobile device, acquire the motion data, and calculate an apparent path of the virtual object. The virtual object is displayed in a series of locations along the apparent path, based on both the course traveled by the mobile device (in the hand) and the track traveled by the eyewear device (on the face), so that the virtual object is persistently viewable to the user.
    Type: Grant
    Filed: October 21, 2021
    Date of Patent: March 14, 2023
    Assignee: Snap Inc.
    Inventors: Ilteris Canberk, Dhritiman Sagar, Mathieu Emmanuel Vignau
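    Keeping the virtual object persistently viewable requires, on every update, expressing a world-space point near the phone in the eyewear's display frame. The frame conventions below (rotation matrix R and translation t of the eyewear pose in world coordinates) are assumptions for illustration, not taken from the patent:

    ```python
    import numpy as np

    def object_in_eyewear_frame(p_phone_world, R_eyewear, t_eyewear):
        """Map a world-space point (e.g., the virtual object anchored near the
        phone) into the eyewear frame: p_eye = R^T (p_world - t)."""
        p = np.asarray(p_phone_world, dtype=float)
        t = np.asarray(t_eyewear, dtype=float)
        return np.asarray(R_eyewear, dtype=float).T @ (p - t)
    ```

    Re-running this per frame with the latest phone course and eyewear track yields the series of display locations along the apparent path that the abstract describes.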
  • Publication number: 20230015522
    Abstract: Augmented reality (AR) and virtual reality (VR) environment enhancement using an eyewear device. The eyewear device includes an image capture system, a display system, and a position detection system. The image capture system and position detection system identify feature points within a point cloud that represents captured images of an environment. The display system presents image overlays to a user including enhancement graphics positioned at the feature points within the environment.
    Type: Application
    Filed: September 27, 2022
    Publication date: January 19, 2023
    Inventors: Ilteris Canberk, Sumant Hanumante, Dhritiman Sagar, Stanislav Minakov
  • Publication number: 20230017627
    Abstract: The subject technology generates depth data using a machine learning model based at least in part on captured image data from at least one camera of a client device. The subject technology applies, to the captured image data and the generated depth data, a 3D effect based at least in part on an augmented reality content generator. The subject technology generates a depth map using at least the depth data. The subject technology generates a packed depth map based at least in part on the depth map. The subject technology converts a single channel floating point texture to a raw depth map. The subject technology generates multiple channels based at least in part on the raw depth map. The subject technology generates a segmentation mask based at least on the captured image data. The subject technology performs background inpainting and blurring of the captured image data using at least the segmentation mask to generate background inpainted image data.
    Type: Application
    Filed: September 23, 2022
    Publication date: January 19, 2023
    Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
  • Publication number: 20230015194
    Abstract: The subject technology receives, at a client device, a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator including a 3D effect. The subject technology applies, to image data and depth data, the 3D effect based at least in part on the augmented reality content generator. The subject technology generates a depth map using at least the depth data, generates a segmentation mask based at least on the image data, and performs background inpainting and blurring of the image data using at least the segmentation mask to generate background inpainted image data. The subject technology generates a 3D message based at least in part on the applied 3D effect.
    Type: Application
    Filed: September 22, 2022
    Publication date: January 19, 2023
    Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
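    The mask-driven compositing at the end of the pipeline above can be sketched as an alpha blend: foreground pixels come from the captured image, everything else from a background that an earlier stage has already inpainted and blurred. That background stage is out of scope here and stubbed as a precomputed image:

    ```python
    import numpy as np

    def composite_over_background(image, mask, background):
        """Keep foreground pixels where mask == 1; elsewhere show the
        (inpainted and blurred) background supplied by an earlier stage."""
        m = mask[..., None].astype(float)  # broadcast mask over color channels
        return (m * image + (1.0 - m) * background).astype(image.dtype)
    ```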
  • Publication number: 20230005223
    Abstract: A redundant tracking system comprising multiple redundant tracking sub-systems, enabling seamless transitions between those sub-systems, merges multiple tracking approaches into a single tracking system. This system can track objects with six degrees of freedom (6DoF) and with 3DoF by combining and transitioning between multiple tracking sub-systems based on the availability of the tracking indicia each sub-system relies on. Thus, as the indicia tracked by any one sub-system become unavailable, the redundant tracking system seamlessly switches between tracking in 6DoF and 3DoF, thereby providing the user with an uninterrupted experience.
    Type: Application
    Filed: September 14, 2022
    Publication date: January 5, 2023
    Inventors: Andrew James McPhee, Samuel Edward Hare, Peicheng Yu, Robert Cornelius Murphy, Dhritiman Sagar
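    The 6DoF/3DoF switch described above amounts to a fallback on each tracking update. The class below is a minimal sketch, assuming translation is None whenever the 6DoF indicia are lost; the names and return shape are hypothetical:

    ```python
    class RedundantTracker:
        """Fall back from 6DoF to rotation-only 3DoF tracking when the
        indicia needed for translation are unavailable."""

        def __init__(self):
            self.last_translation = (0.0, 0.0, 0.0)

        def update(self, rotation, translation=None):
            if translation is not None:
                # 6DoF indicia tracked: use and remember the full pose
                self.last_translation = translation
                return ("6DoF", rotation, translation)
            # Indicia lost: keep rotating the content, freeze translation
            return ("3DoF", rotation, self.last_translation)
    ```

    Freezing the last known translation is what makes the 6DoF-to-3DoF transition appear seamless to the user rather than snapping the content away.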
  • Publication number: 20230005233
    Abstract: The subject technology receives a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator for applying a 3D effect, the 3D effect including at least one beautification operation. The subject technology captures image data and depth data using a camera. The subject technology applies, to the image data and the depth data, the 3D effect including the at least one beautification operation based at least in part on the augmented reality content generator, the beautification operation being performed as part of applying the 3D effect. The subject technology generates a 3D message based at least in part on the applied 3D effect including the at least one beautification operation. The subject technology renders a view of the 3D message based at least in part on the applied 3D effect including the at least one beautification operation.
    Type: Application
    Filed: August 1, 2022
    Publication date: January 5, 2023
    Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
  • Publication number: 20220373796
    Abstract: Augmented reality experiences of a user wearing an electronic eyewear device are captured by at least one camera on a frame of the electronic eyewear device, the at least one camera having a field of view that is larger than a field of view of a display of the electronic eyewear device. An augmented reality feature or object is applied to the captured scene. A photo or video of the augmented reality scene is captured and a first portion of the captured photo or video is displayed in the display. The display is adjusted to display a second portion of the captured photo or video with the augmented reality features as the user moves the user's head to view the second portion of the captured photo or video. The captured photo or video may be transferred to another device for viewing the larger field of view augmented reality image.
    Type: Application
    Filed: May 16, 2022
    Publication date: November 24, 2022
    Inventors: David Meisenholder, Dhritiman Sagar, Ilteris Canberk, Justin Wilder, Sumant Milind Hanumante, James Powderly
  • Patent number: 11508130
    Abstract: Augmented reality (AR) and virtual reality (VR) environment enhancement using an eyewear device. The eyewear device includes an image capture system, a display system, and a position detection system. The image capture system and position detection system identify feature points within a point cloud that represents captured images of an environment. The display system presents image overlays to a user including enhancement graphics positioned at the feature points within the environment.
    Type: Grant
    Filed: June 13, 2020
    Date of Patent: November 22, 2022
    Assignee: Snap Inc.
    Inventors: Ilteris Canberk, Sumant Hanumante, Dhritiman Sagar, Stanislav Minakov
  • Patent number: 11488359
    Abstract: The subject technology receives, at a client device, a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator including a 3D effect. The subject technology captures image data using at least one camera of the client device. The subject technology generates depth data using a machine learning model based at least in part on the captured image data. The subject technology applies, to the image data and the depth data, the 3D effect based at least in part on the augmented reality content generator.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: November 1, 2022
    Assignee: Snap Inc.
    Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
  • Patent number: 11481978
    Abstract: A redundant tracking system comprising multiple redundant tracking sub-systems, enabling seamless transitions between those sub-systems, merges multiple tracking approaches into a single tracking system. This system can track objects with six degrees of freedom (6DoF) and with 3DoF by combining and transitioning between multiple tracking sub-systems based on the availability of the tracking indicia each sub-system relies on. Thus, as the indicia tracked by any one sub-system become unavailable, the redundant tracking system seamlessly switches between tracking in 6DoF and 3DoF, thereby providing the user with an uninterrupted experience.
    Type: Grant
    Filed: September 16, 2020
    Date of Patent: October 25, 2022
    Assignee: Snap Inc.
    Inventors: Andrew James McPhee, Samuel Edward Hare, Peicheng Yu, Robert Cornelius Murphy, Dhritiman Sagar
  • Patent number: 11457196
    Abstract: The subject technology selects a set of augmented reality content generators from a plurality of available augmented reality content generators based on metadata associated with each respective augmented reality content generator. The subject technology receives, at a client device, a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator including a 3D effect. The subject technology captures image data and depth data using at least one camera of the client device. The subject technology applies, to the image data and the depth data, the 3D effect based at least in part on the augmented reality content generator.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: September 27, 2022
    Assignee: Snap Inc.
    Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
  • Publication number: 20220284682
    Abstract: The subject technology receives image data and depth data. The subject technology selects an augmented reality content generator corresponding to a three-dimensional (3D) effect. The subject technology applies the 3D effect to the image data and the depth data based at least in part on the selected augmented reality content generator. The subject technology generates, using a processor, a message including information related to the applied 3D effect, the image data, and the depth data.
    Type: Application
    Filed: November 12, 2021
    Publication date: September 8, 2022
    Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
  • Patent number: 11410401
    Abstract: The subject technology receives a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator for applying a 3D effect, the 3D effect including at least one beautification operation. The subject technology captures image data and depth data using a camera. The subject technology applies, to the image data and the depth data, the 3D effect including the at least one beautification operation based at least in part on the augmented reality content generator, the beautification operation being performed as part of applying the 3D effect. The subject technology generates a 3D message based at least in part on the applied 3D effect including the at least one beautification operation. The subject technology renders a view of the 3D message based at least in part on the applied 3D effect including the at least one beautification operation.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: August 9, 2022
    Assignee: Snap Inc.
    Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
  • Publication number: 20220156956
    Abstract: An active depth detection system can generate a depth map from an image and user interaction data, such as a pair of clicks. The active depth detection system can be implemented as a recurrent neural network that can receive the user interaction data as runtime inputs after training. The active depth detection system can store the generated depth map for further processing, such as image manipulation or real-world object detection.
    Type: Application
    Filed: January 31, 2022
    Publication date: May 19, 2022
    Inventors: Kun Duan, Daniel Ron, Chongyang Ma, Ning Xu, Shenlong Wang, Sumant Milind Hanumante, Dhritiman Sagar
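    The patented refinement is a trained recurrent network that consumes the clicks as runtime inputs; as a stand-in, the sketch below only illustrates that interaction loop, nudging a neighborhood of the depth map toward each user-supplied correction. The patch radius and blend strength are hypothetical parameters, not values from the patent:

    ```python
    import numpy as np

    def apply_click_corrections(depth_map, clicks, radius=2, strength=0.5):
        """Nudge a (2*radius+1)^2 patch around each (row, col, target_depth)
        click toward its target depth value."""
        refined = np.asarray(depth_map, dtype=float).copy()
        h, w = refined.shape
        for row, col, target in clicks:
            r0, r1 = max(0, row - radius), min(h, row + radius + 1)
            c0, c1 = max(0, col - radius), min(w, col + radius + 1)
            patch = refined[r0:r1, c0:c1]
            refined[r0:r1, c0:c1] = patch + strength * (target - patch)
        return refined
    ```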
  • Publication number: 20220150330
    Abstract: An example method comprises: receiving, at a server from a first client device, a request for access to a client feature on the first client device; determining, by the server, an applicable rule for the access request, the applicable rule having a plurality of nodes; determining, by the server, the device capabilities needed for the determined rule; determining, by the server, the nodes that can be executed and the nodes that cannot be executed, based on the device capabilities, the nodes that can be executed including device hardware capabilities and the nodes that cannot be executed including real-time device capabilities; executing, by the server, the nodes that can be executed to reach a partial decision for the applicable rule; pruning the applicable rule to remove executed nodes and generate a pruned rule that includes the nodes that cannot be executed; and transmitting the pruned rule and the partial decision to the first client device.
    Type: Application
    Filed: January 24, 2022
    Publication date: May 12, 2022
    Inventors: Michael Ronald Cieslak, Jiayao Yu, Kai Chen, Farnaz Azmoodeh, Michael David Marr, Jun Huang, Zahra Ferdowsi, Dhritiman Sagar
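    The partial-evaluation-and-pruning flow in the claim above maps naturally onto short-circuit evaluation of an AND/OR rule tree. The sketch below evaluates the leaves the server already knows (e.g., device hardware capabilities) and returns either a final decision or the pruned subtree of leaves only the client can resolve; the node names and tuple representation are hypothetical:

    ```python
    def evaluate(node, facts):
        """Partially evaluate an AND/OR rule tree against known facts.
        Returns True/False when decidable, otherwise a pruned subtree."""
        if isinstance(node, str):                 # leaf: a capability name
            return facts[node] if node in facts else node
        op, children = node[0], [evaluate(c, facts) for c in node[1:]]
        absorbing = (op == "or")                  # True absorbs OR; False absorbs AND
        if any(c is absorbing for c in children):
            return absorbing                      # decision reached early
        remaining = [c for c in children if not isinstance(c, bool)]
        if not remaining:
            return not absorbing                  # all children were neutral bools
        return remaining[0] if len(remaining) == 1 else (op, *remaining)
    ```

    The server would ship the returned subtree (the "pruned rule") together with any partial decision to the client, which finishes evaluation using its real-time capabilities.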