Patents by Inventor Kuldeep Kulkarni
Kuldeep Kulkarni has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240135197
Abstract: Embodiments are disclosed for expanding a seed scene using proposals from a generative model of scene graphs. The method may include clustering subgraphs according to respective one or more maximal connected subgraphs of a scene graph. The scene graph includes a plurality of nodes and edges. The method also includes generating a scene sequence for the scene graph based on the clustered subgraphs. A first machine learning model determines a predicted node in response to receiving the scene sequence. A second machine learning model determines a predicted edge in response to receiving the scene sequence and the predicted node. A scene graph is output according to the predicted node and the predicted edge.
Type: Application
Filed: October 10, 2022
Publication date: April 25, 2024
Applicant: Adobe Inc.
Inventors: Vishwa VINAY, Tirupati Saketh CHANDRA, Rishi AGARWAL, Kuldeep KULKARNI, Hiransh GUPTA, Aniruddha MAHAPATRA, Vaidehi Ramesh PATIL
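The subgraph-clustering step described above amounts to finding the maximal connected subgraphs (connected components) of the scene graph. A minimal illustrative sketch of that step only, not the patented method; the node and edge names are hypothetical:

```python
from collections import defaultdict

def maximal_connected_subgraphs(nodes, edges):
    """Cluster a scene graph into its maximal connected subgraphs
    (connected components) using depth-first search."""
    adjacency = defaultdict(set)
    for u, v in edges:
        adjacency[u].add(v)
        adjacency[v].add(u)
    seen, components = set(), []
    for node in nodes:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:
            current = stack.pop()
            if current in seen:
                continue
            seen.add(current)
            component.add(current)
            stack.extend(adjacency[current] - seen)
        components.append(component)
    return components

# Seed scene: "dog -- ball" relation plus an unconnected "tree" node.
print(maximal_connected_subgraphs(["dog", "ball", "tree"], [("dog", "ball")]))
# → [{"dog", "ball"}, {"tree"}]
```

Each component would then be serialized into the scene sequence that the two learned models extend with predicted nodes and edges; the models themselves are outside this sketch.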
-
Patent number: 11875585
Abstract: Enhanced techniques and circuitry are presented herein for providing responses to user questions from among digital documentation sources spanning various documentation formats, versions, and types. One example includes a method comprising receiving a user question directed to a subject having a documentation corpus, determining a set of passages of the documentation corpus related to the user question, ranking the set of passages according to relevance to the user question, forming semantic clusters comprising sentences extracted from ranked ones of the set of passages according to sentence similarity, and providing a response to the user question based at least on a selected semantic cluster.
Type: Grant
Filed: December 15, 2022
Date of Patent: January 16, 2024
Assignee: ADOBE INC.
Inventors: Balaji Vasan Srinivasan, Sujith Sai Venna, Kuldeep Kulkarni, Durga Prasad Maram, Dasireddy Sai Shritishma Reddy
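The passage-ranking step can be illustrated with simple lexical overlap; this is only a stand-in for the patent's relevance ranking, and the sample passages are invented:

```python
import re

def tokens(text):
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def jaccard(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def rank_passages(question, passages, top_k=3):
    """Rank documentation passages by lexical similarity to the question."""
    return sorted(passages, key=lambda p: jaccard(question, p), reverse=True)[:top_k]

passages = [
    "To export a PDF choose File then Export.",
    "Layers can be grouped in the Layers panel.",
    "Export settings control PDF quality.",
]
print(rank_passages("How do I export a PDF?", passages, top_k=2))
```

The ranked passages would then feed the sentence-extraction and semantic-clustering stages, which need a learned sentence-similarity model not shown here.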
-
Publication number: 20240012849
Abstract: Embodiments are disclosed for multichannel content recommendation. The method may include receiving an input collection comprising a plurality of images. The method may include extracting a set of feature channels from each of the images. The method may include generating, by a trained machine learning model, an intent channel of the input collection from the set of feature channels. The method may include retrieving, from a content library, a plurality of search result images that include a channel that matches the intent channel. The method may include generating a recommended set of images based on the intent channel and the set of feature channels.
Type: Application
Filed: July 11, 2022
Publication date: January 11, 2024
Applicant: Adobe Inc.
Inventors: Praneetha VADDAMANU, Nihal JAIN, Paridhi MAHESHWARI, Kuldeep KULKARNI, Vishwa VINAY, Balaji Vasan SRINIVASAN, Niyati CHHAYA, Harshit AGRAWAL, Prabhat MAHAPATRA, Rizurekh SAHA
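The retrieval step matches library images against the inferred intent channel. A toy sketch under the assumption that each channel is a fixed-length numeric vector; the library names and vectors are invented, and the trained intent model is omitted:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def retrieve(intent_channel, library, top_k=2):
    """Return the library image names whose channel vector best matches the intent channel."""
    scored = sorted(library.items(), key=lambda kv: cosine(intent_channel, kv[1]), reverse=True)
    return [name for name, _ in scored[:top_k]]

library = {"beach": [1.0, 0.0, 0.0], "forest": [0.0, 1.0, 0.0], "sunset": [0.9, 0.1, 0.0]}
print(retrieve([1.0, 0.0, 0.0], library))
# → ["beach", "sunset"]
```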
-
Publication number: 20240005587
Abstract: Systems and methods for machine learning based controllable animation of still images are provided. In one embodiment, a still image including a fluid element is obtained. Using a flow refinement machine learning model, a refined dense optical flow is generated for the still image based on a selection mask that includes the fluid element and a dense optical flow generated from a motion hint that indicates a direction of animation. The refined dense optical flow indicates a pattern of apparent motion for the fluid element. Thereafter, a plurality of video frames is generated by projecting a plurality of pixels of the still image using the refined dense optical flow.
Type: Application
Filed: July 1, 2022
Publication date: January 4, 2024
Inventors: Kuldeep KULKARNI, Aniruddha MAHAPATRA
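The final step, projecting pixels along the flow to synthesize frames, can be sketched on a toy grid. The learned flow-refinement model is omitted, and the constant rightward flow field below is invented for illustration:

```python
def project_frame(image, flow, t):
    """Forward-project each pixel of a still image along its flow vector,
    scaled by time step t, to synthesize one animation frame."""
    h, w = len(image), len(image[0])
    frame = [row[:] for row in image]  # unmoved pixels keep the source value
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y][x]
            ny, nx = y + round(dy * t), x + round(dx * t)
            if 0 <= ny < h and 0 <= nx < w:
                frame[ny][nx] = image[y][x]
    return frame

def animate(image, flow, n_frames):
    """Generate a sequence of frames from one still image and its flow field."""
    return [project_frame(image, flow, t) for t in range(n_frames)]

still = [[1, 2, 3]]                     # a 1x3 "image"
flow = [[(0, 1), (0, 1), (0, 1)]]       # every pixel drifts one column right per step
print(animate(still, flow, 2))
# → [[[1, 2, 3]], [[1, 1, 2]]]
```

Real flow projection would use sub-pixel splatting and hole filling; the nearest-pixel rounding here is the simplest possible variant.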
-
Publication number: 20230326088
Abstract: Embodiments are disclosed for user-guided variable-rate compression. A method of user-guided variable-rate compression includes receiving a request to compress an image, the request including the image, corresponding importance data, and a target bitrate; providing the image, the corresponding importance data, and the target bitrate to a compression network; generating, by the compression network, a learned importance map and a representation of the image; and generating, by the compression network, a compressed representation of the image based on the learned importance map and the representation of the image.
Type: Application
Filed: April 6, 2022
Publication date: October 12, 2023
Applicant: Adobe Inc.
Inventors: Suryateja BV, Sharmila Reddy NANGI, Rushil GUPTA, Rajat JAISWAL, Nikhil KAPOOR, Kuldeep KULKARNI
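The role of the importance map can be illustrated with a naive bit-allocation rule: spend more of the target budget on regions the map marks as important. This is only a conceptual stand-in for the learned compression network, with invented region weights:

```python
def allocate_bits(importance, target_bits):
    """Split a target bit budget across regions proportionally to importance weights."""
    total = sum(importance)
    return [target_bits * w // total for w in importance]  # integer division, may round down

importance = [1, 1, 2]  # third region (e.g. a face the user marked) is twice as important
print(allocate_bits(importance, 100))
# → [25, 25, 50]
```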
-
Publication number: 20230169632
Abstract: Certain aspects and features of this disclosure relate to semantically-aware image extrapolation. In one example, an input image is segmented to produce an input segmentation map of object instances in the input image. An object generation network is used to generate an extrapolated semantic label map for an extrapolated image. The extrapolated semantic label map includes instances in the original image and instances that will appear in an outpainted region of the extrapolated image. A panoptic label map is derived from coordinates of output instances in the extrapolated image and used to identify partial instances and boundaries. Instance-aware context normalization is used to apply one or more characteristics from the input image to the outpainted region to maintain semantic continuity. The extrapolated image includes the original image and the outpainted region and can be rendered or stored for future use.
Type: Application
Filed: November 8, 2021
Publication date: June 1, 2023
Inventors: Kuldeep Kulkarni, Soumya Dash, Hrituraj Singh, Bholeshwar Khurana, Aniruddha Mahapatra, Abhishek Bhatia
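The context-normalization idea, transferring input-image statistics onto the outpainted region, can be sketched as a mean/variance match on 1-D intensity lists. Real use would operate per instance and per channel on 2-D feature maps; the values below are invented:

```python
import statistics

def context_normalize(source, target):
    """Shift and scale target-region intensities so their mean and
    standard deviation match the source (context) region."""
    mu_s, sd_s = statistics.mean(source), statistics.pstdev(source)
    mu_t, sd_t = statistics.mean(target), statistics.pstdev(target)
    if sd_t == 0:
        return [float(mu_s)] * len(target)
    return [mu_s + (v - mu_t) * sd_s / sd_t for v in target]

# Outpainted values [0, 2] re-normalized to match context statistics of [10, 20].
print(context_normalize([10, 20], [0, 2]))
# → [10.0, 20.0]
```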
-
Publication number: 20230121355
Abstract: Enhanced techniques and circuitry are presented herein for providing responses to user questions from among digital documentation sources spanning various documentation formats, versions, and types. One example includes a method comprising receiving a user question directed to a subject having a documentation corpus, determining a set of passages of the documentation corpus related to the user question, ranking the set of passages according to relevance to the user question, forming semantic clusters comprising sentences extracted from ranked ones of the set of passages according to sentence similarity, and providing a response to the user question based at least on a selected semantic cluster.
Type: Application
Filed: December 15, 2022
Publication date: April 20, 2023
Inventors: Balaji Vasan SRINIVASAN, Sujith Sai VENNA, Kuldeep KULKARNI, Durga Prasad MARAM, Dasireddy Sai Shritishma REDDY
-
Patent number: 11556573
Abstract: Enhanced techniques and circuitry are presented herein for providing responses to questions from among digital documentation sources spanning various documentation formats, versions, and types. One example includes a method comprising receiving an indication of a question directed to a subject having a documentation corpus, determining a set of passages of the documentation corpus related to the question, ranking the set of passages according to relevance to the question, forming semantic clusters comprising sentences extracted from ranked ones of the set of passages according to sentence similarity, and providing a response to the question based at least on a selected semantic cluster.
Type: Grant
Filed: May 29, 2020
Date of Patent: January 17, 2023
Assignee: ADOBE INC.
Inventors: Balaji Vasan Srinivasan, Sujith Sai Venna, Kuldeep Kulkarni, Durga Prasad Maram, Dasireddy Sai Shritishma Reddy
-
Publication number: 20210374168
Abstract: Enhanced techniques and circuitry are presented herein for providing responses to questions from among digital documentation sources spanning various documentation formats, versions, and types. One example includes a method comprising receiving an indication of a question directed to a subject having a documentation corpus, determining a set of passages of the documentation corpus related to the question, ranking the set of passages according to relevance to the question, forming semantic clusters comprising sentences extracted from ranked ones of the set of passages according to sentence similarity, and providing a response to the question based at least on a selected semantic cluster.
Type: Application
Filed: May 29, 2020
Publication date: December 2, 2021
Inventors: Balaji Vasan Srinivasan, Sujith Sai Venna, Kuldeep Kulkarni, Durga Prasad Maram, Dasireddy Sai Shritishma Reddy
-
Patent number: 10210391
Abstract: A method and system for detecting actions of an object in a scene from a video of the scene. The video is a video sequence partitioned into chunks, and each chunk includes consecutive video frames. The method includes the following elements: acquiring the video of the scene, wherein the video includes a sequence of images; tracking the object in the video; and, for each object and each chunk of the video, determining a sequence of contour images from video frames of the video sequence to represent motion data within a bounding box located around the object, using the bounding box to produce cropped contour images and cropped images for one or more images in each chunk, and passing the cropped contour images and the cropped images to a recurrent neural network (RNN) that outputs a relative score for each action of interest.
Type: Grant
Filed: August 7, 2017
Date of Patent: February 19, 2019
Assignee: Mitsubishi Electric Research Laboratories, Inc.
Inventors: Michael Jones, Tim Marks, Kuldeep Kulkarni
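The chunking and bounding-box cropping steps are straightforward to sketch on nested lists; the contour-image computation and the RNN scoring are omitted, and the frame values are invented:

```python
def chunk(frames, size):
    """Partition a video (list of frames) into chunks of consecutive frames."""
    return [frames[i:i + size] for i in range(0, len(frames), size)]

def crop(frame, box):
    """Crop a 2-D frame (list of rows) to a bounding box (y0, x0, y1, x1), end-exclusive."""
    y0, x0, y1, x1 = box
    return [row[x0:x1] for row in frame[y0:y1]]

frame = [[1, 2, 3],
         [4, 5, 6]]
print(crop(frame, (0, 1, 2, 3)))   # → [[2, 3], [5, 6]]  (right two columns)
print(chunk(list(range(5)), 2))    # → [[0, 1], [2, 3], [4]]
```

Per the abstract, the cropped images and cropped contour images from each chunk would then be fed to the RNN, which emits a score per action of interest.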
-
Publication number: 20190042850
Abstract: A method and system for detecting actions of an object in a scene from a video of the scene. The video is a video sequence partitioned into chunks, and each chunk includes consecutive video frames. The method includes the following elements: acquiring the video of the scene, wherein the video includes a sequence of images; tracking the object in the video; and, for each object and each chunk of the video, determining a sequence of contour images from video frames of the video sequence to represent motion data within a bounding box located around the object, using the bounding box to produce cropped contour images and cropped images for one or more images in each chunk, and passing the cropped contour images and the cropped images to a recurrent neural network (RNN) that outputs a relative score for each action of interest.
Type: Application
Filed: August 7, 2017
Publication date: February 7, 2019
Inventors: Michael Jones, Tim Marks, Kuldeep Kulkarni
-
Publication number: 20180039637
Abstract: The disclosed embodiments illustrate methods and systems for multimedia processing to identify concepts in multimedia content. The method includes receiving the multimedia content and at least one annotation of the multimedia content at a computing device from another computing device. The received at least one annotation includes a plurality of keywords that is representative of at least a plurality of concepts in the received multimedia content. The method further includes extracting a plurality of features from the received multimedia content by performing a statistical analysis of the multimedia content, based on the plurality of keywords in the at least one annotation. The method further includes identifying the plurality of concepts in a set of frames of the multimedia content by use of one or more classifiers. The one or more classifiers are trained based on the extracted plurality of features.
Type: Application
Filed: August 2, 2016
Publication date: February 8, 2018
Inventors: Ankit Gandhi, Arijit Biswas, Om D. Deshmukh, Sohil Shah, Kuldeep Kulkarni
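The final identification step, running trained classifiers over frames, can be sketched with classifiers as plain scoring callables. The toy "classifiers" below are keyword checks on string frames, not trained models, and the concept names are invented:

```python
def identify_concepts(frames, classifiers, threshold=0.5):
    """Keep, for each frame, every concept whose classifier score clears the threshold."""
    return [
        {concept for concept, clf in classifiers.items() if clf(frame) >= threshold}
        for frame in frames
    ]

classifiers = {
    "cat": lambda frame: 1.0 if "cat" in frame else 0.0,
    "dog": lambda frame: 1.0 if "dog" in frame else 0.0,
}
print(identify_concepts(["a cat on a mat", "a dog runs"], classifiers))
# → [{"cat"}, {"dog"}]
```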