Patents by Inventor Sumant Milind Hanumante
Sumant Milind Hanumante has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240331244
Abstract: A venue system of a client device can submit a location request to a server, which returns multiple venues that are near the client device. The client device can use one or more machine learning schemes (e.g., convolutional neural networks) to determine that the client device is located in a specific one of the possible venues. The venue system can further select imagery for presentation based on the venue selection. The presentation may be published as an ephemeral message on a network platform.
Type: Application
Filed: June 12, 2024
Publication date: October 3, 2024
Inventors: Ebony James Charlton, Sumant Milind Hanumante, Zhou Ren, Dhritiman Sagar
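The venue-selection step this abstract describes can be sketched as follows. This is an illustrative mock, not the patented implementation: `select_venue`, `toy_score`, and the two-venue embedding table are hypothetical stand-ins for the convolutional neural network that scores the device's imagery against each server-returned candidate venue.

```python
def select_venue(image_features, candidate_venues, score_fn):
    """Pick the most likely venue for the device from server-returned candidates."""
    scores = {venue: score_fn(image_features, venue) for venue in candidate_venues}
    return max(scores, key=scores.get)

# Toy stand-in for a CNN-based scorer: dot product against a venue embedding.
venue_embeddings = {"cafe": [1.0, 0.0], "gym": [0.0, 1.0]}

def toy_score(features, venue):
    emb = venue_embeddings[venue]
    return sum(f * e for f, e in zip(features, emb))

# Imagery whose features resemble "cafe" selects that venue.
chosen = select_venue([0.9, 0.1], ["cafe", "gym"], toy_score)
```

The selected venue would then drive which overlay imagery is presented and published as the ephemeral message.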
-
Publication number: 20240272432
Abstract: Augmented reality experiences of a user wearing an electronic eyewear device are captured by at least one camera on a frame of the electronic eyewear device, the at least one camera having a field of view that is larger than a field of view of a display of the electronic eyewear device. An augmented reality feature or object is applied to the captured scene. A photo or video of the augmented reality scene is captured and a first portion of the captured photo or video is displayed in the display. The display is adjusted to display a second portion of the captured photo or video with the augmented reality features as the user moves the user's head to view the second portion of the captured photo or video. The captured photo or video may be transferred to another device for viewing the larger field of view augmented reality image.
Type: Application
Filed: April 24, 2024
Publication date: August 15, 2024
Inventors: David Meisenholder, Dhritiman Sagar, Ilteris Canberk, Justin Wilder, Sumant Milind Hanumante, James Powderly
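A minimal sketch of the display-adjustment idea in this abstract: the capture is larger than the display, so head motion selects which crop of the captured frame is shown. The function name and the [-1, 1] head-offset parameterization are assumptions for illustration, not the patent's actual interface.

```python
def visible_crop(capture_w, capture_h, display_w, display_h, yaw_frac, pitch_frac):
    """Return the (x, y, w, h) crop of the larger captured frame shown on the display.

    yaw_frac / pitch_frac in [-1, 1]: how far the user's head has turned
    relative to the capture field of view (0.0 = centered).
    """
    max_dx = (capture_w - display_w) / 2
    max_dy = (capture_h - display_h) / 2
    x = int(max_dx + yaw_frac * max_dx)
    y = int(max_dy + pitch_frac * max_dy)
    return x, y, display_w, display_h

# Centered view of a 1280x960 capture on a 640x480 display:
centered = visible_crop(1280, 960, 640, 480, 0.0, 0.0)  # (320, 240, 640, 480)
```

Turning the head fully to one side (`yaw_frac=1.0`) pans the crop to the edge of the captured frame, which is how the second portion of the photo or video becomes visible.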
-
Patent number: 11982808
Abstract: Augmented reality experiences of a user wearing an electronic eyewear device are captured by at least one camera on a frame of the electronic eyewear device, the at least one camera having a field of view that is larger than a field of view of a display of the electronic eyewear device. An augmented reality feature or object is applied to the captured scene. A photo or video of the augmented reality scene is captured and a first portion of the captured photo or video is displayed in the display. The display is adjusted to display a second portion of the captured photo or video with the augmented reality features as the user moves the user's head to view the second portion of the captured photo or video. The captured photo or video may be transferred to another device for viewing the larger field of view augmented reality image.
Type: Grant
Filed: May 16, 2022
Date of Patent: May 14, 2024
Assignee: Snap Inc.
Inventors: David Meisenholder, Dhritiman Sagar, Ilteris Canberk, Justin Wilder, Sumant Milind Hanumante, James Powderly
-
Publication number: 20240062335
Abstract: Remote distribution of multiple neural network models to various client devices over a network can be implemented by identifying a native neural network and remotely converting the native neural network to a target neural network based on a given client device operating environment. The native neural network can be configured for execution using efficient parameters, and the target neural network can use less efficient but more precise parameters.
Type: Application
Filed: November 1, 2023
Publication date: February 22, 2024
Inventors: Guohui Wang, Sumant Milind Hanumante, Ning Xu, Yuncheng Li
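The precision trade-off this abstract contrasts ("efficient" vs "less efficient but more precise" parameters) can be illustrated with float16 versus float32 weights. This is only one concrete example of the idea; the actual formats and conversion machinery are not specified in the abstract, and the function names here are hypothetical.

```python
import struct

def to_fp16(weights):
    """Round float32 weights to half precision (compact but less precise)."""
    return [struct.unpack('e', struct.pack('e', w))[0] for w in weights]

def convert_for_client(weights, client_supports_fp16):
    """Server-side conversion sketch: choose the parameter precision that
    suits the client device's operating environment before distribution."""
    return to_fp16(weights) if client_supports_fp16 else list(weights)

# 1.0 is exactly representable in fp16; 0.1 is not, so it loses precision.
compact = convert_for_client([0.1, 1.0], client_supports_fp16=True)
```

A device reporting half-precision support receives the smaller model; others receive the full-precision weights unchanged.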
-
Patent number: 11847760
Abstract: Remote distribution of multiple neural network models to various client devices over a network can be implemented by identifying a native neural network and remotely converting the native neural network to a target neural network based on a given client device operating environment. The native neural network can be configured for execution using efficient parameters, and the target neural network can use less efficient but more precise parameters.
Type: Grant
Filed: April 6, 2022
Date of Patent: December 19, 2023
Assignee: Snap Inc.
Inventors: Guohui Wang, Sumant Milind Hanumante, Ning Xu, Yuncheng Li
-
Publication number: 20230343046
Abstract: Augmented reality (AR) and virtual reality (VR) environment enhancement using an eyewear device. The eyewear device includes an image capture system, a display system, and a position detection system. The image capture system and position detection system identify feature points within a point cloud that represents captured images of an environment. The display system presents image overlays to a user including enhancement graphics positioned at the feature points within the environment.
Type: Application
Filed: June 30, 2023
Publication date: October 26, 2023
Inventors: Ilteris Canberk, Sumant Milind Hanumante, Stanislav Minakov, Dhritiman Sagar
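Positioning an enhancement graphic at a 3D feature point requires mapping the point onto display pixels. A standard pinhole projection is the textbook way to do this; the abstract does not specify the eyewear's actual camera model, so treat this as a generic sketch with assumed parameter names.

```python
def project_point(point_3d, focal, cx, cy):
    """Project a 3D feature point (camera coordinates, z > 0) onto display
    pixels so an enhancement graphic can be drawn at that location.

    focal: focal length in pixels; (cx, cy): principal point (display center).
    """
    x, y, z = point_3d
    return (focal * x / z + cx, focal * y / z + cy)

# A feature point straight ahead of the camera lands at the display center.
center = project_point((0.0, 0.0, 2.0), focal=500.0, cx=320.0, cy=240.0)
```

The position detection system would update `point_3d` (or the camera pose) each frame so the overlay stays anchored to the feature point as the wearer moves.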
-
Patent number: 11715223
Abstract: An active depth detection system can generate a depth map from an image and user interaction data, such as a pair of clicks. The active depth detection system can be implemented as a recurrent neural network that can receive the user interaction data as runtime inputs after training. The active depth detection system can store the generated depth map for further processing, such as image manipulation or real-world object detection.
Type: Grant
Filed: January 31, 2022
Date of Patent: August 1, 2023
Assignee: Snap Inc.
Inventors: Kun Duan, Daniel Ron, Chongyang Ma, Ning Xu, Shenlong Wang, Sumant Milind Hanumante, Dhritiman Sagar
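The "user interaction data as runtime inputs" idea can be sketched as an update step that nudges a depth map toward click-supplied depth constraints. This simple proportional update is a hypothetical stand-in for the recurrent network the patent describes, which learns the refinement rather than applying a fixed formula.

```python
def refine_depth(depth, clicks, strength=0.5):
    """One refinement step: move the depth map toward user-provided click
    constraints (a stand-in for the recurrent network's runtime update).

    depth:  2D list of floats (the current depth map).
    clicks: list of (row, col, true_depth) user annotations.
    """
    out = [row[:] for row in depth]  # copy; leave the input untouched
    for r, c, d in clicks:
        out[r][c] += strength * (d - out[r][c])
    return out

# A click saying "this pixel is at depth 3.0" pulls the estimate toward it.
refined = refine_depth([[1.0, 1.0], [1.0, 1.0]], [(0, 0, 3.0)])
```

In the patented system the network would also propagate each click's influence to neighboring pixels; this sketch only shows the runtime-input mechanism.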
-
Publication number: 20230214639
Abstract: Techniques for training a neural network having a plurality of computational layers with associated weights and activations for computational layers in fixed-point formats include determining an optimal fractional length for weights and activations for the computational layers; training a learned clipping-level with fixed-point quantization using a PACT process for the computational layers; and quantizing on effective weights that fuses a weight of a convolution layer with a weight and running variance from a batch normalization layer. A fractional length for weights of the computational layers is determined from current values of weights using the determined optimal fractional length for the weights of the computational layers. A fixed-point activation between adjacent computational layers is related using PACT quantization of the clipping-level and an activation fractional length from a node in a following computational layer.
Type: Application
Filed: December 31, 2021
Publication date: July 6, 2023
Inventors: Sumant Milind Hanumante, Qing Jin, Sergei Korolev, Denys Makoviichuk, Jian Ren, Dhritiman Sagar, Patrick Timothy McSweeney Simons, Sergey Tulyakov, Yang Wen, Richard Zhuang
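Two pieces of this abstract can be made concrete. First, the forward pass of PACT-style quantization: clip activations to a learned level alpha, then quantize uniformly. Second, the "effective weight" that fuses a convolution weight with a batch-norm scale and running variance. These are generic forms of the published PACT technique and the standard conv/batch-norm fold, not the patent's exact procedure; epsilon and the per-scalar treatment are simplifications.

```python
def pact_quantize(x, alpha, bits):
    """PACT-style activation quantization (forward pass): clip to [0, alpha],
    then round to one of 2^bits - 1 uniform levels. alpha is learned in training."""
    clipped = min(max(x, 0.0), alpha)
    levels = (1 << bits) - 1
    step = alpha / levels
    return round(clipped / step) * step

def fuse_conv_bn_weight(w, gamma, running_var, eps=1e-5):
    """'Effective weight': fold a batch-norm scale (gamma) and running variance
    into the preceding convolution weight before quantizing."""
    return w * gamma / (running_var + eps) ** 0.5

quantized = pact_quantize(2.7, alpha=2.0, bits=4)  # clipped down to alpha
```

A fixed-point fractional length would then be chosen so the quantization step is a power of two, which the abstract's "optimal fractional length" step addresses.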
-
Publication number: 20230024608
Abstract: Among other things, embodiments of the present disclosure improve the functionality of computer software and systems by facilitating the automatic performance optimization of a software application based on the particular platform upon which the application runs. In some embodiments, the system can automatically choose a set of parameters or methods at run-time from a design space with pre-selected optimization methods and parameters (e.g., algorithms, software libraries, and/or hardware accelerators) for a specific task.
Type: Application
Filed: August 1, 2022
Publication date: January 26, 2023
Inventors: Guohui Wang, Fenglei Tian, Samuel Edward Hare, Sumant Milind Hanumante, Tony Mathew
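Choosing a method at run-time from a design space can be sketched as benchmarking each candidate implementation on a sample workload and keeping the fastest. This micro-benchmark loop is an assumed illustration of the general idea; the patent's design space spans algorithms, libraries, and hardware accelerators, not just interchangeable functions.

```python
import functools
import time

def pick_fastest(candidates, sample_workload):
    """Run-time auto-tuning sketch: time each candidate implementation of the
    same task on a sample workload and select the fastest on this platform."""
    best_name, best_time = None, float("inf")
    for name, fn in candidates.items():
        start = time.perf_counter()
        fn(sample_workload)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_name, best_time = name, elapsed
    return best_name

# Two interchangeable "methods" from a tiny design space.
candidates = {
    "builtin_sum": sum,
    "reduce_sum": lambda xs: functools.reduce(lambda a, b: a + b, xs, 0),
}
chosen = pick_fastest(candidates, list(range(10000)))
```

The selection would typically be cached per device so the benchmark cost is paid once, not on every task invocation.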
-
Publication number: 20220373796
Abstract: Augmented reality experiences of a user wearing an electronic eyewear device are captured by at least one camera on a frame of the electronic eyewear device, the at least one camera having a field of view that is larger than a field of view of a display of the electronic eyewear device. An augmented reality feature or object is applied to the captured scene. A photo or video of the augmented reality scene is captured and a first portion of the captured photo or video is displayed in the display. The display is adjusted to display a second portion of the captured photo or video with the augmented reality features as the user moves the user's head to view the second portion of the captured photo or video. The captured photo or video may be transferred to another device for viewing the larger field of view augmented reality image.
Type: Application
Filed: May 16, 2022
Publication date: November 24, 2022
Inventors: David Meisenholder, Dhritiman Sagar, Ilteris Canberk, Justin Wilder, Sumant Milind Hanumante, James Powderly
-
Publication number: 20220230277
Abstract: Remote distribution of multiple neural network models to various client devices over a network can be implemented by identifying a native neural network and remotely converting the native neural network to a target neural network based on a given client device operating environment. The native neural network can be configured for execution using efficient parameters, and the target neural network can use less efficient but more precise parameters.
Type: Application
Filed: April 6, 2022
Publication date: July 21, 2022
Inventors: Guohui Wang, Sumant Milind Hanumante, Ning Xu, Yuncheng Li
-
Publication number: 20220156956
Abstract: An active depth detection system can generate a depth map from an image and user interaction data, such as a pair of clicks. The active depth detection system can be implemented as a recurrent neural network that can receive the user interaction data as runtime inputs after training. The active depth detection system can store the generated depth map for further processing, such as image manipulation or real-world object detection.
Type: Application
Filed: January 31, 2022
Publication date: May 19, 2022
Inventors: Kun Duan, Daniel Ron, Chongyang Ma, Ning Xu, Shenlong Wang, Sumant Milind Hanumante, Dhritiman Sagar
-
Patent number: 11315219
Abstract: Remote distribution of multiple neural network models to various client devices over a network can be implemented by identifying a native neural network and remotely converting the native neural network to a target neural network based on a given client device operating environment. The native neural network can be configured for execution using efficient parameters, and the target neural network can use less efficient but more precise parameters.
Type: Grant
Filed: May 29, 2020
Date of Patent: April 26, 2022
Assignee: Snap Inc.
Inventors: Guohui Wang, Sumant Milind Hanumante, Ning Xu, Yuncheng Li
-
Patent number: 11276190
Abstract: An active depth detection system can generate a depth map from an image and user interaction data, such as a pair of clicks. The active depth detection system can be implemented as a recurrent neural network that can receive the user interaction data as runtime inputs after training. The active depth detection system can store the generated depth map for further processing, such as image manipulation or real-world object detection.
Type: Grant
Filed: April 27, 2020
Date of Patent: March 15, 2022
Assignee: Snap Inc.
Inventors: Kun Duan, Daniel Ron, Chongyang Ma, Ning Xu, Shenlong Wang, Sumant Milind Hanumante, Dhritiman Sagar
-
Publication number: 20200294195
Abstract: Remote distribution of multiple neural network models to various client devices over a network can be implemented by identifying a native neural network and remotely converting the native neural network to a target neural network based on a given client device operating environment. The native neural network can be configured for execution using efficient parameters, and the target neural network can use less efficient but more precise parameters.
Type: Application
Filed: May 29, 2020
Publication date: September 17, 2020
Inventors: Guohui Wang, Sumant Milind Hanumante, Ning Xu, Yuncheng Li
-
Publication number: 20200258248
Abstract: An active depth detection system can generate a depth map from an image and user interaction data, such as a pair of clicks. The active depth detection system can be implemented as a recurrent neural network that can receive the user interaction data as runtime inputs after training. The active depth detection system can store the generated depth map for further processing, such as image manipulation or real-world object detection.
Type: Application
Filed: April 27, 2020
Publication date: August 13, 2020
Inventors: Kun Duan, Daniel Ron, Chongyang Ma, Ning Xu, Shenlong Wang, Sumant Milind Hanumante, Dhritiman Sagar
-
Patent number: 10713754
Abstract: Remote distribution of multiple neural network models to various client devices over a network can be implemented by identifying a native neural network and remotely converting the native neural network to a target neural network based on a given client device operating environment. The native neural network can be configured for execution using efficient parameters, and the target neural network can use less efficient but more precise parameters.
Type: Grant
Filed: February 28, 2018
Date of Patent: July 14, 2020
Assignee: Snap Inc.
Inventors: Guohui Wang, Sumant Milind Hanumante, Ning Xu, Yuncheng Li
-
Patent number: 10672136
Abstract: An active depth detection system can generate a depth map from an image and user interaction data, such as a pair of clicks. The active depth detection system can be implemented as a recurrent neural network that can receive the user interaction data as runtime inputs after training. The active depth detection system can store the generated depth map for further processing, such as image manipulation or real-world object detection.
Type: Grant
Filed: August 31, 2018
Date of Patent: June 2, 2020
Assignee: Snap Inc.
Inventors: Kun Duan, Daniel Ron, Chongyang Ma, Ning Xu, Shenlong Wang, Sumant Milind Hanumante, Dhritiman Sagar
-
Publication number: 20200074653
Abstract: An active depth detection system can generate a depth map from an image and user interaction data, such as a pair of clicks. The active depth detection system can be implemented as a recurrent neural network that can receive the user interaction data as runtime inputs after training. The active depth detection system can store the generated depth map for further processing, such as image manipulation or real-world object detection.
Type: Application
Filed: August 31, 2018
Publication date: March 5, 2020
Inventors: Kun Duan, Daniel Ron, Chongyang Ma, Ning Xu, Shenlong Wang, Sumant Milind Hanumante, Dhritiman Sagar
-
Publication number: 20190297461
Abstract: A venue system of a client device can submit a location request to a server, which returns multiple venues that are near the client device. The client device can use one or more machine learning schemes (e.g., convolutional neural networks) to determine that the client device is located in a specific one of the possible venues. The venue system can further select imagery for presentation based on the venue selection. The presentation may be published as an ephemeral message on a network platform.
Type: Application
Filed: March 7, 2019
Publication date: September 26, 2019
Inventors: Ebony James Charlton, Sumant Milind Hanumante, Zhou Ren, Dhritiman Sagar