Patents by Inventor Siddhant Jain
Siddhant Jain has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240078701
Abstract: A system and method for determining a location of a client device is described herein. In particular, a client device receives images captured by a camera at the client device. The client device identifies features in the images. The features may be line junctions, lines, curves, or any other features found in images. The client device retrieves a 3D map of the environment from a map database and compares the identified features to the 3D map of the environment, which includes map features such as map line junctions, map lines, map curves, and the like. The client device identifies a correspondence between the features identified from the images and the map features and determines a location of the client device in the real world based on the correspondence. The client device may display visual data representing a location in a virtual world corresponding to the location in the real world.
Type: Application
Filed: November 8, 2023
Publication date: March 7, 2024
Inventors: Anvith Ekkati, Rong Yuan, Siddhant Jain, Si ying Diana Hu
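The localization described above hinges on finding correspondences between features detected in camera images and features stored in a 3D map. A minimal sketch of one way such a correspondence step could look, using descriptor matching with a nearest-neighbour ratio test (all function names and data here are hypothetical illustrations, not taken from the patent):

```python
import numpy as np

def match_features(img_desc, map_desc, ratio=0.8):
    """Match image-feature descriptors against 3D-map feature descriptors.

    Returns (image_index, map_index) pairs for matches that pass a
    Lowe-style ratio test, i.e. the best match is clearly better than
    the second-best."""
    matches = []
    for i, d in enumerate(img_desc):
        dists = np.linalg.norm(map_desc - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # accept only unambiguous correspondences
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

In a full pipeline the resulting 2D-to-3D correspondences would feed a pose solver to recover the device's real-world location.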
-
Patent number: 11847792
Abstract: A system and method for determining a location of a client device is described herein. In particular, a client device receives images captured by a camera at the client device. The client device identifies features in the images. The features may be line junctions, lines, curves, or any other features found in images. The client device retrieves a 3D map of the environment from a map database and compares the identified features to the 3D map of the environment, which includes map features such as map line junctions, map lines, map curves, and the like. The client device identifies a correspondence between the features identified from the images and the map features and determines a location of the client device in the real world based on the correspondence. The client device may display visual data representing a location in a virtual world corresponding to the location in the real world.
Type: Grant
Filed: December 18, 2020
Date of Patent: December 19, 2023
Assignee: Niantic, Inc.
Inventors: Anvith Ekkati, Rong Yuan, Siddhant Jain, Si ying Diana Hu
-
Publication number: 20230401385
Abstract: A novel system is described for performing hierarchical named entity recognition ("HNER") processing that includes identifying categories at different hierarchical levels for a named entity. The HNER system uses a novel architecture comprising an encoder model and a system of trained machine learning (ML) models, where each trained model corresponds to a particular hierarchical level and is trained to extract one or more named entities and predict a category for each extracted named entity at that level. Novel techniques are also described for training the various models in the HNER system, including the encoder model and the models in the system of models.
Type: Application
Filed: October 14, 2022
Publication date: December 14, 2023
Applicant: Oracle International Corporation
Inventors: Saransh Mehta, Siddhant Jain, Pramir Sarkar
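The abstract describes a cascade in which each hierarchical level has its own trained model and the finer-level prediction is conditioned on the coarser one. A toy sketch of that control flow, with lookup tables standing in for the trained encoder and per-level models (all names and categories are purely illustrative):

```python
# Stand-ins for trained per-level models; a real HNER system would use a
# shared encoder plus one trained classifier per hierarchical level.
LEVEL1 = {"Paris": "LOCATION", "Marie Curie": "PERSON"}
LEVEL2 = {("Paris", "LOCATION"): "CITY",
          ("Marie Curie", "PERSON"): "SCIENTIST"}

def hner(entities):
    """Cascade: the level-2 category is predicted conditioned on the
    level-1 category, mirroring the hierarchical structure described."""
    results = []
    for e in entities:
        coarse = LEVEL1.get(e, "OTHER")
        fine = LEVEL2.get((e, coarse), "OTHER")
        results.append((e, coarse, fine))
    return results
```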
-
Publication number: 20230100303
Abstract: Systems and methods for fractional inference on GPU and CPU for large-scale deployment of customized transformer-based language models are disclosed herein. The method can include receiving data for use in generating a machine learning model output, ingesting the data with a first machine learning model on a Graphics Processing Unit, receiving at least one intermediate output from the first machine learning model at a temporary store, receiving the at least one intermediate output from the temporary store at a Central Processing Unit, ingesting the at least one intermediate output with a second machine learning model on the Central Processing Unit, and outputting a prediction with the second machine learning model.
Type: Application
Filed: September 28, 2021
Publication date: March 30, 2023
Applicant: Oracle International Corporation
Inventors: Siddhant Jain, Saransh Mehta, Shahid Reza
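The claimed pipeline splits one model across devices: front layers run on the GPU, an intermediate output passes through a temporary store, and the remaining layers run on the CPU. A minimal sketch of that data flow, using NumPy and a queue as the temporary store (device placement is only indicated in comments; all names are hypothetical):

```python
import numpy as np
from queue import Queue

temp_store = Queue()  # stands in for the temporary store between devices

def first_half(x, w1):
    """Front layers of the model; in the described setup this part runs
    on the GPU."""
    return np.maximum(x @ w1, 0.0)  # linear layer + ReLU

def second_half(h, w2):
    """Remaining layers; in the described setup this part runs on the CPU."""
    return int(np.argmax(h @ w2))  # classification head

def fractional_inference(x, w1, w2):
    temp_store.put(first_half(x, w1))   # GPU writes the intermediate output
    h = temp_store.get()                # CPU picks it up from the store
    return second_half(h, w2)
```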
-
Publication number: 20230048920
Abstract: Systems and methods for implementing a federated learning engine for integration of vertical and horizontal AI are disclosed herein. A method can include receiving a global model from a central aggregator communicatively connected with a plurality of user environments, the global model including a plurality of layers. The method can include training a mini model on top of the global model with data gathered within the user environment, uploading at least a portion of the mini model to the central aggregator, receiving a plurality of mini models, and creating a fusion model based on the received plurality of mini models.
Type: Application
Filed: August 11, 2021
Publication date: February 16, 2023
Applicant: Oracle International Corporation
Inventors: Rajarshi Bhose, Shahid Reza, Siddhant Jain
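The fusion step described above could be sketched as parameter averaging over the uploaded mini models, in the spirit of federated averaging. This is one plausible reading of "creating a fusion model", not necessarily the patented method; all names are hypothetical:

```python
def fuse(mini_models):
    """Create a fusion model by averaging the parameters of the mini
    models uploaded from each user environment (FedAvg-style).

    Each mini model is a dict mapping parameter name -> value."""
    keys = mini_models[0].keys()
    return {k: sum(m[k] for m in mini_models) / len(mini_models)
            for k in keys}
```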
-
Patent number: 11481619
Abstract: Techniques for incorporating a black-box function into a neural network are described. For example, an image editing function may be the black-box function and may be wrapped into a layer of the neural network. A set of parameters and a source image are provided to the black-box function, and the output image that represents the source image with the set of parameters applied is output from the black-box function. To address the issue that the black-box function may not be differentiable, a loss optimization may calculate the gradients of the function using, for example, a finite-differences calculation, and the gradients are used to train the neural network to ensure the output image is representative of an expected ground-truth image.
Type: Grant
Filed: July 10, 2019
Date of Patent: October 25, 2022
Assignee: Adobe Inc.
Inventors: Oliver Wang, Kevin Wampler, Kalyan Krishna Sunkavalli, Elya Shechtman, Siddhant Jain
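The finite-differences idea mentioned in the abstract can be sketched as a central-difference gradient estimate for a black-box scalar loss: the function is probed at slightly perturbed parameter values rather than differentiated analytically. A generic sketch, not the patented implementation:

```python
import numpy as np

def finite_diff_grad(f, params, eps=1e-4):
    """Central finite-difference gradient of a black-box scalar function f,
    usable when f cannot be differentiated by autograd.

    grad_i ~= (f(p + eps*e_i) - f(p - eps*e_i)) / (2*eps)"""
    params = np.asarray(params, dtype=float)
    grad = np.zeros_like(params)
    for i in range(params.size):
        step = np.zeros_like(params)
        step[i] = eps
        grad[i] = (f(params + step) - f(params - step)) / (2.0 * eps)
    return grad
```

These estimated gradients can then be passed backward through the rest of the network in place of an analytic derivative for the black-box layer.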
-
Publication number: 20210190538
Abstract: A system and method for determining a location of a client device is described herein. In particular, a client device receives images captured by a camera at the client device. The client device identifies features in the images. The features may be line junctions, lines, curves, or any other features found in images. The client device retrieves a 3D map of the environment from a map database and compares the identified features to the 3D map of the environment, which includes map features such as map line junctions, map lines, map curves, and the like. The client device identifies a correspondence between the features identified from the images and the map features and determines a location of the client device in the real world based on the correspondence. The client device may display visual data representing a location in a virtual world corresponding to the location in the real world.
Type: Application
Filed: December 18, 2020
Publication date: June 24, 2021
Inventors: Anvith Ekkati, Rong Yuan, Siddhant Jain, Si ying Diana Hu
-
Publication number: 20210012189
Abstract: Techniques for incorporating a black-box function into a neural network are described. For example, an image editing function may be the black-box function and may be wrapped into a layer of the neural network. A set of parameters and a source image are provided to the black-box function, and the output image that represents the source image with the set of parameters applied is output from the black-box function. To address the issue that the black-box function may not be differentiable, a loss optimization may calculate the gradients of the function using, for example, a finite-differences calculation, and the gradients are used to train the neural network to ensure the output image is representative of an expected ground-truth image.
Type: Application
Filed: July 10, 2019
Publication date: January 14, 2021
Inventors: Oliver Wang, Kevin Wampler, Kalyan Krishna Sunkavalli, Elya Shechtman, Siddhant Jain
-
Patent number: 10546557
Abstract: Overlay and screen recording techniques are described that enable separate recordings of a screen and one or more overlays that were displayed on the screen during recording. In one example, pixel values of an overlay are blended with pixel values of a screen to paint the overlay onto the screen in a transparent manner that is imperceptible to the human eye but allows the original screen pixel values to be recovered from areas of the screen that were visually occluded by the overlay. This enables a user to display recording controls and visual cues on their screen without having to worry about the overlay visually occluding any screen content during the recording. One or both of the separately recorded screen and overlay streams can then be output for playback, enabling viewing of the individual streams without loss in quality or content.
Type: Grant
Filed: December 15, 2016
Date of Patent: January 28, 2020
Assignee: Adobe Inc.
Inventors: Siddhant Jain, Renzil Leith DSouza
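The reversible blend described above can be sketched with the standard alpha-compositing equation and its algebraic inverse. Note that this float-valued sketch is exactly invertible only because no quantization occurs; coping with rounded integer pixel values is the harder problem the patent addresses. All names are hypothetical:

```python
import numpy as np

ALPHA = 0.01  # small enough that the blend is visually imperceptible

def paint_overlay(screen, overlay, alpha=ALPHA):
    """Blend the overlay onto the screen (pixel values in [0, 1])."""
    return (1.0 - alpha) * screen + alpha * overlay

def recover_screen(blended, overlay, alpha=ALPHA):
    """Invert the blend to recover the original, occluded screen pixels."""
    return (blended - alpha * overlay) / (1.0 - alpha)
```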
-
Patent number: 10475103
Abstract: Provided are methods and techniques for providing a product recommendation to a user using augmented reality. A product recommendation system determines a user viewpoint, the viewpoint including an augmented product positioned in a camera image of the user's surroundings. Based on the viewpoint, the product recommendation system determines the position of the augmented product in the viewpoint and the similarity between the augmented product and other candidate products. The product recommendation system then creates a set of recommendation images, each including an image of a candidate product substituted for the augmented product in the viewpoint. The product recommendation system can then evaluate the recommendation images based on overall color compatibility and, based on the evaluation, select a recommendation image that is provided to the user.
Type: Grant
Filed: April 20, 2017
Date of Patent: November 12, 2019
Assignee: Adobe Inc.
Inventors: Gaurush Hiranandani, Sai Varun Reddy Maram, Kumar Ayush, Chinnaobireddy Varsha, Siddhant Jain
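The evaluation step could be sketched as scoring each recommendation image by a color-compatibility measure and picking the best. Here a simple mean-color distance stands in for the patent's "overall color compatibility" metric; it is illustrative only, not the claimed scoring function:

```python
import numpy as np

def color_compatibility(scene_pixels, product_pixels):
    """Stand-in compatibility score: closeness of mean RGB colours
    (higher is more compatible)."""
    diff = scene_pixels.mean(axis=0) - product_pixels.mean(axis=0)
    return -float(np.linalg.norm(diff))

def recommend(scene_pixels, candidates):
    """Return the index of the candidate product whose recommendation
    image scores best against the user's viewpoint."""
    scores = [color_compatibility(scene_pixels, c) for c in candidates]
    return int(np.argmax(scores))
```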
-
Publication number: 20180137835
Abstract: Overlay and screen recording techniques are described that enable separate recordings of a screen and one or more overlays that were displayed on the screen during recording. In one example, pixel values of an overlay are blended with pixel values of a screen to paint the overlay onto the screen in a transparent manner that is imperceptible to the human eye but allows the original screen pixel values to be recovered from areas of the screen that were visually occluded by the overlay. This enables a user to display recording controls and visual cues on their screen without having to worry about the overlay visually occluding any screen content during the recording. One or both of the separately recorded screen and overlay streams can then be output for playback, enabling viewing of the individual streams without loss in quality or content.
Type: Application
Filed: December 15, 2016
Publication date: May 17, 2018
Applicant: Adobe Systems Incorporated
Inventors: Siddhant Jain, Renzil Leith DSouza
-
Publication number: 20180121988
Abstract: Provided are methods and techniques for providing a product recommendation to a user using augmented reality. A product recommendation system determines a user viewpoint, the viewpoint including an augmented product positioned in a camera image of the user's surroundings. Based on the viewpoint, the product recommendation system determines the position of the augmented product in the viewpoint and the similarity between the augmented product and other candidate products. The product recommendation system then creates a set of recommendation images, each including an image of a candidate product substituted for the augmented product in the viewpoint. The product recommendation system can then evaluate the recommendation images based on overall color compatibility and, based on the evaluation, select a recommendation image that is provided to the user.
Type: Application
Filed: April 20, 2017
Publication date: May 3, 2018
Inventors: Gaurush Hiranandani, Sai Varun Reddy Maram, Kumar Ayush, Chinnaobireddy Varsha, Siddhant Jain
-
Patent number: 9786055
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to operations that facilitate real-time matting using local color estimation and propagation. In accordance with embodiments described herein, an unknown region is estimated based on a set of received boundary points (a zero-level contour that separates the foreground object from the background) and additional contours at increasing distances from the zero-level contour. The background and foreground colors for each pixel in the unknown region can be estimated and utilized to propagate the foreground and background colors to the appropriate contours in the unknown region. The estimated background and foreground colors may also be utilized to determine the opacity and the true background and foreground colors for each pixel in the unknown region, which results in an image matted in real time.
Type: Grant
Filed: March 29, 2016
Date of Patent: October 10, 2017
Assignee: Adobe Systems Incorporated
Inventors: Renzil Leith Dsouza, Siddhant Jain
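The opacity computation follows from the compositing equation c = α·f + (1 − α)·b: once foreground and background colors have been estimated for a pixel, α can be solved for directly. A minimal grayscale sketch (the function name is hypothetical; the patent's method also estimates and propagates the colors themselves):

```python
import numpy as np

def estimate_alpha(c, f, b, eps=1e-8):
    """Per-pixel opacity from the compositing equation c = a*f + (1-a)*b,
    given the estimated foreground colour f and background colour b.
    eps guards against division by zero when f ~= b."""
    a = (c - b) / (f - b + eps)
    return float(np.clip(a, 0.0, 1.0))
```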
-
Publication number: 20170287136
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to operations that facilitate real-time matting using local color estimation and propagation. In accordance with embodiments described herein, an unknown region is estimated based on a set of received boundary points (a zero-level contour that separates the foreground object from the background) and additional contours at increasing distances from the zero-level contour. The background and foreground colors for each pixel in the unknown region can be estimated and utilized to propagate the foreground and background colors to the appropriate contours in the unknown region. The estimated background and foreground colors may also be utilized to determine the opacity and the true background and foreground colors for each pixel in the unknown region, which results in an image matted in real time.
Type: Application
Filed: March 29, 2016
Publication date: October 5, 2017
Inventors: Renzil Leith Dsouza, Siddhant Jain