Patents by Inventor Atishay Jain
Atishay Jain has filed patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230342258
Abstract: A method including: initiating a migration of data from a source system to a destination system, the migration of data being configured to proceed based on a checkpoint set that includes a plurality of checkpoints; detecting whether any of the checkpoints in the checkpoint set is reached; obtaining a health score for the source system, the health score being obtained in response to any of the checkpoints being reached; if the health score satisfies a condition, executing an action to prevent a loss of data that is being migrated; and if the health score does not satisfy the condition, abstaining from executing the action to prevent the loss of data.
Type: Application
Filed: April 22, 2022
Publication date: October 26, 2023
Applicant: Dell Products L.P.
Inventors: Parminder Singh Sethi, Lakshmi S. Nalam, Atishay Jain
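The checkpoint-driven flow in the abstract above can be sketched as follows. The health-score source, the 0.5 threshold, and the shape of the protective action are all invented for illustration; the filing leaves them unspecified.

```python
# Sketch of a checkpoint-driven migration with health checks. The
# threshold and the protective-action placeholder are assumptions.

def run_migration(checkpoints, get_health_score, threshold=0.5):
    """Walk the checkpoint set; when the source system's health score
    satisfies the condition (here: drops to or below `threshold`),
    execute a protective action; otherwise abstain and continue."""
    protective_actions = []
    for checkpoint in checkpoints:
        score = get_health_score()      # obtained when a checkpoint is reached
        if score <= threshold:          # condition satisfied -> risk of data loss
            protective_actions.append(f"protect-at-{checkpoint}")
        # else: abstain from executing the protective action
    return protective_actions

# A source system whose health degrades mid-migration:
scores = iter([0.9, 0.8, 0.3])
print(run_migration(["cp1", "cp2", "cp3"], lambda: next(scores)))
# → ['protect-at-cp3']
```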
-
Publication number: 20230214309
Abstract: An apparatus comprises a processing device configured to receive system state information corresponding to one or more devices, to predict a usage frequency of the system state information using one or more machine learning models, and to determine, based at least in part on the usage frequency, a compression level for storage of the system state information. The compression level is applied to the system state information to generate at least one compressed file for transmission to a database.
Type: Application
Filed: January 5, 2022
Publication date: July 6, 2023
Inventors: Parminder Singh Sethi, Lakshmi Saroja Nalam, Atishay Jain
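A minimal sketch of the final step, mapping a predicted usage frequency to a compression level. The thresholds and the use of zlib are assumptions for illustration; the abstract specifies neither the model nor the codec.

```python
# Hypothetical usage-frequency -> compression-level mapping: state that
# is read often gets light, fast compression; rarely read state gets
# the heaviest compression to save storage. Thresholds are invented.
import zlib

def compression_level(predicted_usage):
    """predicted_usage in [0, 1], from the (not shown) ML model."""
    if predicted_usage > 0.7:
        return 1   # fast to decompress
    if predicted_usage > 0.3:
        return 6   # balanced
    return 9       # smallest on disk

def compress_state(state_bytes, predicted_usage):
    return zlib.compress(state_bytes, compression_level(predicted_usage))

state = b"system state information " * 200
compressed = compress_state(state, 0.1)        # rarely used -> level 9
print(len(compressed) < len(state))            # → True
assert zlib.decompress(compressed) == state    # lossless round trip
```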
-
Patent number: 11663463
Abstract: A location-sensitive saliency prediction neural network generates location-sensitive saliency data for an image. The location-sensitive saliency prediction neural network includes, at least, a filter module, an inception module, and a location-bias module. The filter module extracts visual features at multiple contextual levels and generates a feature map of the image. The inception module generates a multi-scale semantic structure based on multiple scales of semantic content depicted in the image. In some cases, the inception module performs parallel analysis of the feature map, such as by multiple parallel layers, to determine the multiple scales of semantic content. The location-bias module generates a location-sensitive saliency map of location-dependent context of the image based on the multi-scale semantic structure and on a bias map. In some cases, the bias map indicates location-specific weights for one or more regions of the image.
Type: Grant
Filed: July 10, 2019
Date of Patent: May 30, 2023
Assignee: Adobe Inc.
Inventors: Kumar Ayush, Atishay Jain
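The location-bias step can be illustrated in a few lines of numpy: a saliency map is modulated element-wise by location-specific weights. The hand-built center-weighted bias map below is an assumption standing in for the network's learned modules.

```python
# Minimal sketch of location-bias weighting: multiply a saliency map by
# a bias map of location-specific weights, then renormalize. The
# Gaussian center prior is an illustrative stand-in for a learned bias.
import numpy as np

def apply_location_bias(saliency, bias_map, eps=1e-8):
    """Weight saliency by the bias map and rescale to [0, 1].
    Both arrays share shape (H, W)."""
    biased = saliency * bias_map
    return biased / (biased.max() + eps)

h, w = 9, 9
ys, xs = np.mgrid[0:h, 0:w]
bias = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * (h / 4) ** 2))

out = apply_location_bias(np.ones((h, w)), bias)
print(out[h // 2, w // 2] > out[0, 0])  # → True: central regions weighted up
```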
-
Patent number: 11250206
Abstract: A system and method for converting a form to an action card format for a chat-based application is described. The system accesses an unfilled form and identifies one or more converters based on a format of the unfilled form. The system then identifies fields in the unfilled form using the one or more converters. A document model is generated based on the fields and a layout of the fields. The system determines the layout based on a visual alignment and logical relation of the fields. The system forms a digital interactive workflow based on the document model.
Type: Grant
Filed: September 20, 2019
Date of Patent: February 15, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Atishay Jain, Pratik Kumar Jawanpuria, Rohit Srivastava, Purushottam Kulkarni
-
Patent number: 11238093
Abstract: Systems and methods for content-based video retrieval are described. The systems and methods may break a video into multiple frames, generate a feature vector from the frames based on the temporal relationship between them, and then embed the feature vector into a vector space along with a vector representing a search query. In some embodiments, the video feature vector is converted into a text caption prior to the embedding. In other embodiments, the video feature vector and a sentence vector are each embedded into a common space using a joint video-sentence embedding model. Once the video and the search query are embedded into a common vector space, a distance between them may be calculated. After calculating the distance between the search query and a set of videos, the distances may be used to select a subset of the videos to present as the result of the search.
Type: Grant
Filed: October 15, 2019
Date of Patent: February 1, 2022
Assignee: Adobe Inc.
Inventors: Kumar Ayush, Harnish Lakhani, Atishay Jain
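The final retrieval step described above reduces to a nearest-neighbor search once videos and the query share a common vector space. The toy 2-D embeddings below are assumptions standing in for the embedding model's output.

```python
# Sketch of distance-based ranking in a shared embedding space: compute
# the query-to-video distances and keep the closest videos.
import numpy as np

def rank_videos(query_vec, video_vecs, top_k=2):
    """Return indices of the top_k videos nearest the query."""
    dists = [float(np.linalg.norm(query_vec - v)) for v in video_vecs]
    return sorted(range(len(video_vecs)), key=dists.__getitem__)[:top_k]

query = np.array([1.0, 0.0])
videos = [np.array([0.9, 0.1]),    # near the query
          np.array([-1.0, 0.0]),   # far away
          np.array([1.0, 0.2])]    # near the query
print(rank_videos(query, videos))  # → [0, 2]
```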
-
Publication number: 20210133850
Abstract: Techniques for providing a machine learning prediction of a recommended product to a user using augmented reality include identifying at least one real-world object and a virtual product in an AR viewpoint of the user. The AR viewpoint includes a camera image of the real-world object(s) and an image of the virtual product. The image of the virtual product is inserted into the camera image of the real-world object. A candidate product is predicted from a set of recommendation images using a machine learning algorithm based on, for example, a type of the virtual product to provide a recommendation that includes both the virtual product and the candidate product. The recommendation can include different types of products that are complementary to each other, in an embodiment. An image of the selected candidate product is inserted into the AR viewpoint along with the image of the virtual product.
Type: Application
Filed: November 6, 2019
Publication date: May 6, 2021
Applicant: Adobe Inc.
Inventors: Kumar Ayush, Harnish Naresh Lakhani, Atishay Jain
-
Patent number: 10984467
Abstract: The technology described herein is directed to object compatibility-based identification and replacement of objects in digital representations of real-world environments for contextualized content delivery. In some implementations, an object compatibility and retargeting service is described that selects and analyzes a viewpoint (received from a user's client device) to identify the objects least compatible with their surroundings, in terms of style compatibility with the surrounding real-world objects and color compatibility with the background. The object compatibility and retargeting service also generates recommendations for replacing the least compatible object with objects/products having more style/design compatibility with the surrounding real-world objects and color compatibility with the background.
Type: Grant
Filed: February 21, 2019
Date of Patent: April 20, 2021
Assignee: Adobe Inc.
Inventors: Kumar Ayush, Harnish Lakhani, Atishay Jain
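A toy version of the "least compatible object" selection: each object gets style- and color-compatibility scores against its surroundings, and the lowest combined score is flagged for replacement. The scores and the equal weighting are invented for the sketch; the service derives compatibility from the camera viewpoint itself.

```python
# Hypothetical least-compatible-object selection over per-object
# style/color compatibility scores. Scores and weights are assumptions.

def least_compatible(objects):
    """objects: dict name -> (style_score, color_score), each in [0, 1].
    Returns the object with the lowest combined compatibility."""
    combined = {name: 0.5 * style + 0.5 * color
                for name, (style, color) in objects.items()}
    return min(combined, key=combined.get)

room = {"sofa": (0.9, 0.8), "lamp": (0.4, 0.3), "rug": (0.7, 0.9)}
print(least_compatible(room))  # → lamp
```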
-
Publication number: 20210109966
Abstract: Systems and methods for content-based video retrieval are described. The systems and methods may break a video into multiple frames, generate a feature vector from the frames based on the temporal relationship between them, and then embed the feature vector into a vector space along with a vector representing a search query. In some embodiments, the video feature vector is converted into a text caption prior to the embedding. In other embodiments, the video feature vector and a sentence vector are each embedded into a common space using a joint video-sentence embedding model. Once the video and the search query are embedded into a common vector space, a distance between them may be calculated. After calculating the distance between the search query and a set of videos, the distances may be used to select a subset of the videos to present as the result of the search.
Type: Application
Filed: October 15, 2019
Publication date: April 15, 2021
Inventors: Kumar Ayush, Harnish Lakhani, Atishay Jain
-
Publication number: 20210089618
Abstract: A system and method for converting a form to an action card format for a chat-based application is described. The system accesses an unfilled form and identifies one or more converters based on a format of the unfilled form. The system then identifies fields in the unfilled form using the one or more converters. A document model is generated based on the fields and a layout of the fields. The system determines the layout based on a visual alignment and logical relation of the fields. The system forms a digital interactive workflow based on the document model.
Type: Application
Filed: September 20, 2019
Publication date: March 25, 2021
Inventors: Atishay Jain, Pratik Kumar Jawanpuria, Rohit Srivastava, Purushottam Kulkarni
-
Publication number: 20210044559
Abstract: A system includes a compiler that monitors activity of a user in a chat application, the chat application including an electronic platform for one or more individuals to communicate in a group in real-time over a computer network. A processor is connected to the compiler and generates a ranked list of groups in the chat application based on the activity of the user. A display is connected to the processor and displays the groups on an interface of the user based on the ranked list, where higher ranked groups are displayed above lower ranked groups in the interface of the user.
Type: Application
Filed: August 9, 2019
Publication date: February 11, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Atishay Jain, Pratik Kumar Jawanpuria, Purushottam Madhukar Kulkarni
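The group-ranking step can be sketched by counting a user's activity events per group and ordering groups most-active first. The event representation and tie-breaking rule are assumptions for the example.

```python
# Illustrative activity-based ranking of chat groups: one event per
# user action, groups ordered by descending activity (ties by name).
from collections import Counter

def rank_groups(activity_events):
    """activity_events: iterable of group names, one per user action.
    Returns group names for display, most active first."""
    counts = Counter(activity_events)
    return sorted(counts, key=lambda g: (-counts[g], g))

events = ["design", "standup", "design", "random", "design", "standup"]
print(rank_groups(events))  # → ['design', 'standup', 'random']
```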
-
Publication number: 20210012201
Abstract: A location-sensitive saliency prediction neural network generates location-sensitive saliency data for an image. The location-sensitive saliency prediction neural network includes, at least, a filter module, an inception module, and a location-bias module. The filter module extracts visual features at multiple contextual levels and generates a feature map of the image. The inception module generates a multi-scale semantic structure based on multiple scales of semantic content depicted in the image. In some cases, the inception module performs parallel analysis of the feature map, such as by multiple parallel layers, to determine the multiple scales of semantic content. The location-bias module generates a location-sensitive saliency map of location-dependent context of the image based on the multi-scale semantic structure and on a bias map. In some cases, the bias map indicates location-specific weights for one or more regions of the image.
Type: Application
Filed: July 10, 2019
Publication date: January 14, 2021
Inventors: Kumar Ayush, Atishay Jain
-
Patent number: 10853686
Abstract: In various implementations, a computing device is configured to provide a live preview of salient contours generated on a live digital video feed. In particular, a designer can use a computing device with a camera, such as a smart phone, to view a real-time preview of salient contours generated from edges detected in frames of a live digital video feed prior to capture, thereby eliminating the unpredictability of salient contours generated from a previously captured image. In some implementations, the salient contours are overlaid on a greyscale conversion of the live digital video feed for improved processing and visual contrast. Other implementations modify aspects of edge-detecting or post-processing filters for improved performance on mobile computing devices.
Type: Grant
Filed: May 3, 2019
Date of Patent: December 1, 2020
Assignee: Adobe Inc.
Inventors: Atishay Jain, Shamit Kumar Mehta, Stakshi Jindal Garg, Geoffrey Charles Dowd, Arian Behzadi, Joseph Michael Andolina, David Ericksen
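A per-frame sketch of the preview pipeline described above: convert the frame to greyscale, detect edges, and overlay the contours on the greyscale image. The simple gradient threshold is an assumption standing in for the patent's tuned edge-detecting and post-processing filters (e.g. a Canny variant).

```python
# One frame of the live-preview pipeline: greyscale conversion, crude
# gradient-magnitude edge detection, contour overlay. Threshold is an
# illustrative assumption.
import numpy as np

def preview_frame(rgb_frame, threshold=0.2):
    """rgb_frame: (H, W, 3) float array in [0, 1]. Returns a greyscale
    frame with detected contour pixels overlaid in white."""
    grey = rgb_frame @ np.array([0.299, 0.587, 0.114])  # luma conversion
    gy, gx = np.gradient(grey)
    edges = np.hypot(gx, gy) > threshold                # contour mask
    out = grey.copy()
    out[edges] = 1.0                                    # overlay contours
    return out

frame = np.zeros((6, 6, 3))
frame[:, 3:] = 1.0                  # vertical edge in mid-frame
preview = preview_frame(frame)
print(preview[2])                   # edge columns stand out in white
```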
-
Patent number: 10803598
Abstract: A ball detection and tracking system including one or more visual sensors and a detection and tracking agent that ranks a plurality of blob detection algorithms based on a detection metric and uses a selected base detection algorithm to identify one or more candidate blobs. Based on this, the agent is able to generate a track for the candidate blobs and assign one or more subsequent candidate blobs to a best ranked one of the tracks if the assignment satisfies a cost threshold.
Type: Grant
Filed: June 19, 2018
Date of Patent: October 13, 2020
Inventors: Pankaj Chaurasia, Atishay Jain, Raghav Gupta, Nitesh Chourasia, Ansh Chaurasia
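The track-assignment step in the abstract above can be sketched as: assign each new candidate blob to the cheapest existing track only if the assignment cost stays under a threshold, otherwise start a new track. The Euclidean-distance cost and the threshold value are assumptions for illustration.

```python
# Sketch of cost-thresholded track assignment for candidate blobs.
# Cost function and threshold are illustrative assumptions.

def assign_blob(blob, tracks, cost_threshold=50.0):
    """blob: (x, y) detection; tracks: list of lists of (x, y) points.
    Appends the blob to the best (cheapest) track if the cost satisfies
    the threshold, else starts a new track. Returns the track index."""
    def cost(track):
        tx, ty = track[-1]
        return ((blob[0] - tx) ** 2 + (blob[1] - ty) ** 2) ** 0.5

    if tracks:
        best = min(range(len(tracks)), key=lambda i: cost(tracks[i]))
        if cost(tracks[best]) <= cost_threshold:
            tracks[best].append(blob)
            return best
    tracks.append([blob])
    return len(tracks) - 1

tracks = [[(0.0, 0.0)], [(100.0, 100.0)]]
print(assign_blob((5.0, 5.0), tracks))      # → 0: joins the nearby track
print(assign_blob((400.0, 400.0), tracks))  # → 2: too costly, new track
```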
-
Publication number: 20200273090
Abstract: The technology described herein is directed to object compatibility-based identification and replacement of objects in digital representations of real-world environments for contextualized content delivery. In some implementations, an object compatibility and retargeting service is described that selects and analyzes a viewpoint (received from a user's client device) to identify the objects least compatible with their surroundings, in terms of style compatibility with the surrounding real-world objects and color compatibility with the background. The object compatibility and retargeting service also generates recommendations for replacing the least compatible object with objects/products having more style/design compatibility with the surrounding real-world objects and color compatibility with the background.
Type: Application
Filed: February 21, 2019
Publication date: August 27, 2020
Inventors: Kumar Ayush, Harnish Lakhani, Atishay Jain
-
Publication number: 20190258893
Abstract: In various implementations, a computing device is configured to provide a live preview of salient contours generated on a live digital video feed. In particular, a designer can use a computing device with a camera, such as a smart phone, to view a real-time preview of salient contours generated from edges detected in frames of a live digital video feed prior to capture, thereby eliminating the unpredictability of salient contours generated from a previously captured image. In some implementations, the salient contours are overlaid on a greyscale conversion of the live digital video feed for improved processing and visual contrast. Other implementations modify aspects of edge-detecting or post-processing filters for improved performance on mobile computing devices.
Type: Application
Filed: May 3, 2019
Publication date: August 22, 2019
Inventors: Atishay Jain, Shamit Kumar Mehta, Stakshi Jindal Garg, Geoffrey Charles Dowd, Arian Behzadi, Joseph Michael Andolina, David Ericksen
-
Patent number: 10325175
Abstract: In various implementations, a computing device is configured to provide a live preview of salient contours generated on a live digital video feed. In particular, a designer can use a computing device with a camera, such as a smart phone, to view a real-time preview of salient contours generated from edges detected in frames of a live digital video feed prior to capture, thereby eliminating the unpredictability of salient contours generated from a previously captured image. In some implementations, the salient contours are overlaid on a greyscale conversion of the live digital video feed for improved processing and visual contrast. Other implementations modify aspects of edge-detecting or post-processing filters for improved performance on mobile computing devices.
Type: Grant
Filed: August 19, 2016
Date of Patent: June 18, 2019
Assignee: Adobe Inc.
Inventors: Atishay Jain, Shamit Kumar Mehta, Stakshi Jindal Garg, Geoffrey Charles Dowd, Arian Behzadi, Joseph Michael Andolina, David Ericksen
-
Publication number: 20180374217
Abstract: A ball detection and tracking system including one or more visual sensors and a detection and tracking agent that ranks a plurality of blob detection algorithms based on a detection metric and uses a selected base detection algorithm to identify one or more candidate blobs. Based on this, the agent is able to generate a track for the candidate blobs and assign one or more subsequent candidate blobs to a best ranked one of the tracks if the assignment satisfies a cost threshold.
Type: Application
Filed: June 19, 2018
Publication date: December 27, 2018
Inventors: Pankaj Chaurasia, Atishay Jain, Raghav Gupta, Nitesh Chourasia
-
Publication number: 20160358034
Abstract: In various implementations, a computing device is configured to provide a live preview of salient contours generated on a live digital video feed. In particular, a designer can use a computing device with a camera, such as a smart phone, to view a real-time preview of salient contours generated from edges detected in frames of a live digital video feed prior to capture, thereby eliminating the unpredictability of salient contours generated from a previously captured image. In some implementations, the salient contours are overlaid on a greyscale conversion of the live digital video feed for improved processing and visual contrast. Other implementations modify aspects of edge-detecting or post-processing filters for improved performance on mobile computing devices.
Type: Application
Filed: August 19, 2016
Publication date: December 8, 2016
Inventors: Atishay Jain, Shamit Kumar Mehta, Stakshi Jindal Garg, Geoffrey Charles Dowd, Arian Behzadi, Joseph Michael Andolina, David Ericksen
-
Patent number: 9449248
Abstract: In various implementations, a computing device is configured to provide a live preview of salient contours generated on a live digital video feed. In particular, a designer can use a computing device with a camera, such as a smart phone, to view a real-time preview of salient contours generated from edges detected in frames of a live digital video feed prior to capture, thereby eliminating the unpredictability of salient contours generated from a previously captured image. In some implementations, the salient contours are overlaid on a greyscale conversion of the live digital video feed for improved processing and visual contrast. Other implementations modify aspects of edge-detecting or post-processing filters for improved performance on mobile computing devices.
Type: Grant
Filed: March 12, 2015
Date of Patent: September 20, 2016
Assignee: Adobe Systems Incorporated
Inventors: Atishay Jain, Shamit Kumar Mehta, Stakshi Jindal Garg, Geoffrey Charles Dowd, Arian Behzadi, Joseph Michael Andolina, David Ericksen
-
Publication number: 20160267346
Abstract: In various implementations, a computing device is configured to provide a live preview of salient contours generated on a live digital video feed. In particular, a designer can use a computing device with a camera, such as a smart phone, to view a real-time preview of salient contours generated from edges detected in frames of a live digital video feed prior to capture, thereby eliminating the unpredictability of salient contours generated from a previously captured image. In some implementations, the salient contours are overlaid on a greyscale conversion of the live digital video feed for improved processing and visual contrast. Other implementations modify aspects of edge-detecting or post-processing filters for improved performance on mobile computing devices.
Type: Application
Filed: March 12, 2015
Publication date: September 15, 2016
Inventors: Atishay Jain, Shamit Kumar Mehta, Stakshi Jindal Garg, Geoffrey Charles Dowd, Arian Behzadi, Joseph Michael Andolina, David Ericksen