Patents by Inventor Dhritiman Sagar
Dhritiman Sagar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11290576
Abstract: An example method comprises: receiving, at a server from a first client device, a request for access to a client feature on the first client device; determining, by the server, an applicable rule for the access request, the applicable rule having a plurality of nodes; determining, by the server, device capabilities needed for the determined rule; determining, by the server, nodes that can be executed and nodes that cannot be executed, based on the device capabilities, the nodes that can be executed including device hardware capabilities and the nodes that cannot be executed including real-time device capabilities; executing, by the server, the nodes that can be executed to reach a partial decision for the applicable rule; pruning the applicable rule to remove executed nodes and generate a pruned rule that includes the nodes that cannot be executed; and transmitting the pruned rule and partial decision to the first client device.
Type: Grant
Filed: February 27, 2020
Date of Patent: March 29, 2022
Assignee: Snap Inc.
Inventors: Michael Ronald Cieslak, Jiayao Yu, Kai Chen, Farnaz Azmoodeh, Michael David Marr, Jun Huang, Zahra Ferdowsi, Dhritiman Sagar
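The partial-evaluation flow in this abstract can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the node structure, capability names, and the AND-combination of node results are all assumptions.

```python
# Sketch: server executes the rule nodes it can (static hardware
# capabilities), prunes them, and defers real-time nodes to the client.

def partition_nodes(rule_nodes, known_capabilities):
    """Split nodes into server-executable and client-deferred sets."""
    executable, deferred = [], []
    for node in rule_nodes:
        if node["capability"] in known_capabilities:
            executable.append(node)
        else:
            deferred.append(node)
    return executable, deferred

def evaluate_partially(rule_nodes, known_capabilities):
    executable, deferred = partition_nodes(rule_nodes, known_capabilities)
    # Reach a partial decision by AND-ing the nodes the server can execute.
    partial_decision = all(
        node["predicate"](known_capabilities[node["capability"]])
        for node in executable
    )
    # The pruned rule keeps only the nodes the client must evaluate itself.
    return {"partial_decision": partial_decision, "pruned_rule": deferred}

rule = [
    # Hardware capability: the server already knows it from device metadata.
    {"capability": "gpu_tier", "predicate": lambda v: v >= 2},
    # Real-time capability: only the client can evaluate this at runtime.
    {"capability": "battery_level", "predicate": lambda v: v > 0.2},
]
result = evaluate_partially(rule, {"gpu_tier": 3})
```

The client would then finish the decision by evaluating `pruned_rule` locally and combining it with `partial_decision`.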
-
Patent number: 11276190
Abstract: An active depth detection system can generate a depth map from an image and user interaction data, such as a pair of clicks. The active depth detection system can be implemented as a recurrent neural network that can receive the user interaction data as runtime inputs after training. The active depth detection system can store the generated depth map for further processing, such as image manipulation or real-world object detection.
Type: Grant
Filed: April 27, 2020
Date of Patent: March 15, 2022
Assignee: Snap Inc.
Inventors: Kun Duan, Daniel Ron, Chongyang Ma, Ning Xu, Shenlong Wang, Sumant Milind Hanumante, Dhritiman Sagar
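The click-driven refinement idea can be illustrated with a toy sketch. The patent describes a trained recurrent neural network; here a simple distance-weighted correction stands in for the learned model, purely to show the shape of the interaction loop (all parameters are assumptions).

```python
import math

# Toy stand-in for interactive depth refinement: each user click supplies a
# trusted depth at a pixel, and nearby estimates are pulled toward it.

def refine_depth(depth_map, clicks, radius=1.0):
    """Nudge a depth map (list of rows) toward user-provided click depths."""
    refined = [row[:] for row in depth_map]
    for cy, cx, true_depth in clicks:
        for y, row in enumerate(refined):
            for x, d in enumerate(row):
                dist2 = (y - cy) ** 2 + (x - cx) ** 2
                # Gaussian falloff: full trust at the click, fading with distance.
                w = math.exp(-dist2 / (2 * radius ** 2))
                row[x] = (1 - w) * d + w * true_depth
    return refined

depth = [[1.0] * 5 for _ in range(5)]        # flat initial depth estimate
out = refine_depth(depth, [(2, 2, 3.0)])     # one user click at pixel (2, 2)
```

In the patented system the RNN would consume such clicks as runtime inputs and produce the refined map; the weighting above merely mimics the effect.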
-
Publication number: 20220043508
Abstract: Systems, devices, media, and methods are presented for displaying a virtual object on the display of a portable eyewear device using motion data gathered by a face-tracking application on a mobile device. A controller engine leverages the processing power of a mobile device to locate the face supporting the eyewear, locate the hand holding the mobile device, acquire the motion data, and calculate an apparent path of the virtual object. The virtual object is displayed in a series of locations along the apparent path, based on both the course traveled by the mobile device (in the hand) and the track traveled by the eyewear device (on the face), so that the virtual object is persistently viewable to the user.
Type: Application
Filed: October 21, 2021
Publication date: February 10, 2022
Inventors: Ilteris Canberk, Dhritiman Sagar, Mathieu Emmanuel Vignau
-
Publication number: 20210390780
Abstract: Augmented reality (AR) and virtual reality (VR) environment enhancement using an eyewear device. The eyewear device includes an image capture system, a display system, and a position detection system. The image capture system and position detection system identify feature points within a point cloud that represents captured images of an environment. The display system presents image overlays to a user including enhancement graphics positioned at the feature points within the environment.
Type: Application
Filed: June 13, 2020
Publication date: December 16, 2021
Inventors: Ilteris Canberk, Sumant Hanumante, Dhritiman Sagar, Stanislav Minakov
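Anchoring enhancement graphics at 3D feature points implies projecting those points into display coordinates. A minimal pinhole-camera sketch of that step, with an illustrative focal length and principal point (none of these values come from the publication):

```python
# Hypothetical sketch: project camera-space 3D feature points from a point
# cloud into 2D display coordinates so graphics can be anchored at them.

def project_point(point, focal=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-space 3D point onto the display."""
    x, y, z = point
    if z <= 0:
        return None  # behind the camera: not drawable
    return (focal * x / z + cx, focal * y / z + cy)

feature_points = [(0.0, 0.0, 2.0), (1.0, 0.5, 4.0), (0.0, 0.0, -1.0)]
# Screen anchors for the enhancement graphics; points behind the camera drop out.
anchors = [p for p in map(project_point, feature_points) if p is not None]
```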
-
Publication number: 20210392465
Abstract: A venue system of a client device can submit a location request to a server, which returns multiple venues that are near the client device. The client device can use one or more machine learning schemes (e.g., convolutional neural networks) to determine that the client device is located in a specific one of the possible venues. The venue system can further select imagery for presentation based on the venue selection. The presentation may be published as an ephemeral message on a network platform.
Type: Application
Filed: June 22, 2021
Publication date: December 16, 2021
Inventors: Ebony James Charlton, Sumant Hanumante, Zhou Ren, Dhritiman Sagar
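The selection step after the server returns candidates can be sketched as follows. A lookup table of per-venue confidence scores stands in for the convolutional neural network, and the venue names and threshold are invented for illustration.

```python
# Sketch: the server returns nearby candidate venues; an on-device model
# scores the current camera frame against each; pick the confident winner.

def pick_venue(candidates, scores, threshold=0.5):
    """Return the best-scoring venue, or None if no score is confident."""
    best = max(candidates, key=lambda v: scores.get(v, 0.0))
    return best if scores.get(best, 0.0) >= threshold else None

nearby = ["Cafe Uno", "Book Shop", "Gym"]           # from the location request
model_scores = {"Cafe Uno": 0.12, "Book Shop": 0.81, "Gym": 0.33}
venue = pick_venue(nearby, model_scores)
```

The venue system would then choose imagery keyed to `venue` for the ephemeral message.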
-
Publication number: 20210383583
Abstract: A content display system can control which content is displayed, and how, based on viewing parameters, such as a map zoom level, and physical distance parameters, e.g., a geo-fence distance and an icon visibility distance. Different combinations of input (e.g., zoom level and physical distances) yield a myriad of pre-set content displays on the client device, thereby allowing the creator of an icon to finely tune how content is displayed or otherwise accessed.
Type: Application
Filed: May 18, 2021
Publication date: December 9, 2021
Inventors: Ebony James Charlton, Dhritiman Sagar, Daniel Vincent Grippi
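The combination of viewing and distance parameters can be sketched as a small decision function. The state names, thresholds, and comparison order are assumptions chosen to illustrate the idea, not the patented rules.

```python
# Sketch: an icon's display state depends on map zoom level plus the
# viewer's distance versus creator-set geo-fence and visibility distances.

def content_state(zoom, distance_m, geofence_m, visibility_m, min_zoom=10):
    if zoom < min_zoom or distance_m > visibility_m:
        return "hidden"        # zoomed too far out, or beyond visibility range
    if distance_m > geofence_m:
        return "icon_only"     # icon shows, but content stays locked until closer
    return "full_content"      # inside the geo-fence: content unlocked

state = content_state(zoom=14, distance_m=80, geofence_m=100, visibility_m=500)
```

Varying the inputs walks through the pre-set displays: the same icon can be hidden, teased, or fully unlocked depending on where and how the map is viewed.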
-
Publication number: 20210375056
Abstract: A context based augmented reality system can be used to display augmented reality elements over a live video feed on a client device. The augmented reality elements can be selected based on a number of context inputs generated by the client device. The context inputs can include location data of the client device and location data of nearby physical places that have preconfigured augmented elements. The preconfigured augmented elements can be preconfigured to exhibit a design scheme of the corresponding physical place.
Type: Application
Filed: May 17, 2021
Publication date: December 2, 2021
Inventors: Ebony James Charlton, Jokubas Dargis, Eitan Pilipski, Dhritiman Sagar, Victor Shaburov
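The location-based selection of preconfigured elements can be sketched as a proximity filter. The place names, element names, selection radius, and flat planar distance model are all illustrative assumptions.

```python
import math

# Sketch: pick preconfigured AR elements for physical places near the
# client, using client location and place locations as context inputs.

def nearby_ar_elements(client_pos, places, radius_m=150.0):
    """Return AR elements for all places within radius_m of the client."""
    cx, cy = client_pos
    selected = []
    for place in places:
        px, py = place["pos"]
        if math.hypot(px - cx, py - cy) <= radius_m:
            # Elements are preconfigured to match the place's design scheme.
            selected.extend(place["ar_elements"])
    return selected

places = [
    {"pos": (0.0, 100.0), "ar_elements": ["coffee_cup_overlay"]},
    {"pos": (0.0, 900.0), "ar_elements": ["stadium_banner"]},
]
elements = nearby_ar_elements((0.0, 0.0), places)
```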
-
Patent number: 11189104
Abstract: The subject technology receives image data and depth data. The subject technology selects an augmented reality content generator corresponding to a three-dimensional (3D) effect. The subject technology applies the 3D effect to the image data and the depth data based at least in part on the selected augmented reality content generator. The subject technology generates, using a processor, a message including information related to the applied 3D effect, the image data, and the depth data.
Type: Grant
Filed: August 28, 2020
Date of Patent: November 30, 2021
Assignee: Snap Inc.
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
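The final step, packaging the applied effect with the image and depth data into a message, can be sketched as follows. The field names and the JSON-with-hex encoding are assumptions for illustration; the patent does not specify a wire format.

```python
import json

# Sketch: bundle the applied 3D effect, the image data, and the depth data
# into one message so a recipient can re-render the effect.

def build_3d_message(effect_id, image_bytes, depth_values):
    return json.dumps({
        "effect": effect_id,         # which AR content generator was applied
        "image": image_bytes.hex(),  # raw image data, hex-encoded for JSON
        "depth": depth_values,       # per-pixel depth samples
    })

msg = build_3d_message("parallax_v1", b"\x00\x01", [0.5, 0.7])
decoded = json.loads(msg)            # recipient side: unpack the message
```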
-
Patent number: 11169600
Abstract: Systems, devices, media, and methods are presented for displaying a virtual object on the display of a portable eyewear device using motion data gathered by a face-tracking application on a mobile device. A controller engine leverages the processing power of a mobile device to locate the face supporting the eyewear, locate the hand holding the mobile device, acquire the motion data, and calculate an apparent path of the virtual object. The virtual object is displayed in a series of locations along the apparent path, based on both the course traveled by the mobile device (in the hand) and the track traveled by the eyewear device (on the face), so that the virtual object is persistently viewable to the user.
Type: Grant
Filed: November 17, 2020
Date of Patent: November 9, 2021
Assignee: Snap Inc.
Inventors: Ilteris Canberk, Dhritiman Sagar, Mathieu Vignau
-
Patent number: 11051129
Abstract: A venue system of a client device can submit a location request to a server, which returns multiple venues that are near the client device. The client device can use one or more machine learning schemes (e.g., convolutional neural networks) to determine that the client device is located in a specific one of the possible venues. The venue system can further select imagery for presentation based on the venue selection. The presentation may be published as an ephemeral message on a network platform.
Type: Grant
Filed: March 7, 2019
Date of Patent: June 29, 2021
Assignee: Snap Inc.
Inventors: Ebony James Charlton, Sumant Hanumante, Zhou Ren, Dhritiman Sagar
-
Patent number: 11037372
Abstract: A context based augmented reality system can be used to display augmented reality elements over a live video feed on a client device. The augmented reality elements can be selected based on a number of context inputs generated by the client device. The context inputs can include location data of the client device and location data of nearby physical places that have preconfigured augmented elements. The preconfigured augmented elements can be preconfigured to exhibit a design scheme of the corresponding physical place.
Type: Grant
Filed: January 16, 2020
Date of Patent: June 15, 2021
Assignee: Snap Inc.
Inventors: Ebony James Charlton, Jokubas Dargis, Eitan Pilipski, Dhritiman Sagar, Victor Shaburov
-
Patent number: 11030787
Abstract: A content display system can control which content is displayed, and how, based on viewing parameters, such as a map zoom level, and physical distance parameters, e.g., a geo-fence distance and an icon visibility distance. Different combinations of input (e.g., zoom level and physical distances) yield a myriad of pre-set content displays on the client device, thereby allowing the creator of an icon to finely tune how content is displayed or otherwise accessed.
Type: Grant
Filed: January 17, 2020
Date of Patent: June 8, 2021
Assignee: Snap Inc.
Inventors: Ebony James Charlton, Dhritiman Sagar, Daniel Vincent Grippi
-
Publication number: 20210099551
Abstract: An example method comprises: receiving, at a server from a first client device, a request for access to a client feature on the first client device; determining, by the server, an applicable rule for the access request, the applicable rule having a plurality of nodes; determining, by the server, device capabilities needed for the determined rule; determining, by the server, nodes that can be executed and nodes that cannot be executed, based on the device capabilities, the nodes that can be executed including device hardware capabilities and the nodes that cannot be executed including real-time device capabilities; executing, by the server, the nodes that can be executed to reach a partial decision for the applicable rule; pruning the applicable rule to remove executed nodes and generate a pruned rule that includes the nodes that cannot be executed; and transmitting the pruned rule and partial decision to the first client device.
Type: Application
Filed: February 27, 2020
Publication date: April 1, 2021
Inventors: Michael Ronald Cieslak, Jiayao Yu, Kai Chen, Farnaz Azmoodeh, Michael David Marr, Jun Huang, Zahra Ferdowsi, Dhritiman Sagar
-
Publication number: 20210065454
Abstract: The subject technology receives image data and depth data. The subject technology selects an augmented reality content generator corresponding to a three-dimensional (3D) effect. The subject technology applies the 3D effect to the image data and the depth data based at least in part on the selected augmented reality content generator. The subject technology generates, using a processor, a message including information related to the applied 3D effect, the image data, and the depth data.
Type: Application
Filed: August 28, 2020
Publication date: March 4, 2021
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
-
Publication number: 20210065448
Abstract: The subject technology receives, at a client device, a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator including a 3D effect. The subject technology captures image data using at least one camera of the client device. The subject technology generates depth data using a machine learning model based at least in part on the captured image data. The subject technology applies, to the image data and the depth data, the 3D effect based at least in part on the augmented reality content generator.
Type: Application
Filed: August 28, 2020
Publication date: March 4, 2021
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
-
Publication number: 20210065464
Abstract: The subject technology receives a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator for applying a 3D effect, the 3D effect including at least one beautification operation. The subject technology captures image data and depth data using a camera. The subject technology applies, to the image data and the depth data, the 3D effect including the at least one beautification operation based at least in part on the augmented reality content generator, the beautification operation being performed as part of applying the 3D effect. The subject technology generates a 3D message based at least in part on the applied 3D effect including the at least one beautification operation. The subject technology renders a view of the 3D message based at least in part on the applied 3D effect including the at least one beautification operation.
Type: Application
Filed: August 28, 2020
Publication date: March 4, 2021
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
-
Publication number: 20210067756
Abstract: The subject technology selects a set of augmented reality content generators from a plurality of available augmented reality content generators based on metadata associated with each respective augmented reality content generator. The subject technology receives, at a client device, a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator including a 3D effect. The subject technology captures image data and depth data using at least one camera of the client device.
Type: Application
Filed: August 28, 2020
Publication date: March 4, 2021
Inventors: Kyle Goodrich, Samuel Edward Hare, Maxim Maximov Lazarov, Tony Mathew, Andrew James McPhee, Daniel Moreno, Dhritiman Sagar, Wentao Shang
-
Publication number: 20200410756
Abstract: A redundant tracking system comprising multiple redundant tracking sub-systems, enabling seamless transitions between such tracking sub-systems, provides a solution to the problem of interrupted tracking by merging multiple tracking approaches into a single tracking system. This system is able to combine tracking objects with six degrees of freedom (6DoF) and 3DoF through combining and transitioning between multiple tracking systems based on the availability of tracking indicia tracked by the tracking systems. Thus, as the indicia tracked by any one tracking system become unavailable, the redundant tracking system seamlessly switches between tracking in 6DoF and 3DoF, thereby providing the user with an uninterrupted experience.
Type: Application
Filed: September 16, 2020
Publication date: December 31, 2020
Inventors: Andrew James McPhee, Samuel Edward Hare, Peicheng Yu, Robert Cornelius Murphy, Dhritiman Sagar
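The 6DoF/3DoF transition logic can be sketched as a small selector over available tracking indicia. The sensor names and mode labels are assumptions; the point is only the seamless fallback-and-resume behavior the abstract describes.

```python
# Sketch: pick the richest tracking mode the currently available
# tracking indicia support, falling back and resuming seamlessly.

def select_tracking_mode(sensors):
    if sensors.get("camera_features"):   # visual indicia enable full 6DoF
        return "6DoF"
    if sensors.get("imu"):               # inertial data still gives 3DoF rotation
        return "3DoF"
    return "untracked"

# Simulate indicia dropping out and returning across three frames.
history = [select_tracking_mode(s) for s in [
    {"camera_features": True, "imu": True},    # full indicia available
    {"camera_features": False, "imu": True},   # camera lost: seamless 3DoF fallback
    {"camera_features": True, "imu": True},    # camera back: resume 6DoF
]]
```

Because some mode is always returned while any indicia remain, the rendered experience never hard-stops when one sub-system loses its indicia.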
-
Patent number: 10803664
Abstract: A redundant tracking system comprising multiple redundant tracking sub-systems, enabling seamless transitions between such tracking sub-systems, provides a solution to the problem of interrupted tracking by merging multiple tracking approaches into a single tracking system. This system is able to combine tracking objects with six degrees of freedom (6DoF) and 3DoF through combining and transitioning between multiple tracking systems based on the availability of tracking indicia tracked by the tracking systems. Thus, as the indicia tracked by any one tracking system become unavailable, the redundant tracking system seamlessly switches between tracking in 6DoF and 3DoF, thereby providing the user with an uninterrupted experience.
Type: Grant
Filed: April 20, 2020
Date of Patent: October 13, 2020
Assignee: Snap Inc.
Inventors: Andrew James McPhee, Samuel Edward Hare, Peicheng Yu, Robert Cornelius Murphy, Dhritiman Sagar
-
Publication number: 20200258248
Abstract: An active depth detection system can generate a depth map from an image and user interaction data, such as a pair of clicks. The active depth detection system can be implemented as a recurrent neural network that can receive the user interaction data as runtime inputs after training. The active depth detection system can store the generated depth map for further processing, such as image manipulation or real-world object detection.
Type: Application
Filed: April 27, 2020
Publication date: August 13, 2020
Inventors: Kun Duan, Daniel Ron, Chongyang Ma, Ning Xu, Shenlong Wang, Sumant Milind Hanumante, Dhritiman Sagar