Patents by Inventor Kuldeep Kulkarni

Kuldeep Kulkarni has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135197
    Abstract: Embodiments are disclosed for expanding a seed scene using proposals from a generative model of scene graphs. The method may include clustering subgraphs according to respective one or more maximal connected subgraphs of a scene graph. The scene graph includes a plurality of nodes and edges. The method also includes generating a scene sequence for the scene graph based on the clustered subgraphs. A first machine learning model determines a predicted node in response to receiving the scene sequence. A second machine learning model determines a predicted edge in response to receiving the scene sequence and the predicted node. A scene graph is output according to the predicted node and the predicted edge.
    Type: Application
    Filed: October 10, 2022
    Publication date: April 25, 2024
    Applicant: Adobe Inc.
    Inventors: Vishwa VINAY, Tirupati Saketh CHANDRA, Rishi AGARWAL, Kuldeep KULKARNI, Hiransh GUPTA, Aniruddha MAHAPATRA, Vaidehi Ramesh PATIL
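The clustering step described in the abstract above, grouping a scene graph into its maximal connected subgraphs, can be sketched in plain Python. This is only an illustration of the graph operation, not the patent's method; the scene-graph data below is invented.

```python
from collections import defaultdict

def connected_subgraphs(nodes, edges):
    """Cluster a scene graph into its maximal connected subgraphs
    using breadth-first search over an adjacency list."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, clusters = set(), []
    for node in nodes:
        if node in seen:
            continue
        component, frontier = set(), [node]
        while frontier:
            cur = frontier.pop()
            if cur in component:
                continue
            component.add(cur)
            frontier.extend(adj[cur] - component)
        seen |= component
        clusters.append(component)
    return clusters

# A toy scene graph: two disconnected object groups.
nodes = ["person", "dog", "leash", "tree", "bird"]
edges = [("person", "leash"), ("leash", "dog"), ("tree", "bird")]
clusters = connected_subgraphs(nodes, edges)
```

Each cluster could then be serialized into the scene sequence that the abstract's node- and edge-prediction models consume.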
  • Patent number: 11875585
    Abstract: Enhanced techniques and circuitry are presented herein for providing responses to user questions from among digital documentation sources spanning various documentation formats, versions, and types. One example includes a method comprising receiving a user question directed to a subject having a documentation corpus, determining a set of passages of the documentation corpus related to the user question, ranking the set of passages according to relevance to the user question, forming semantic clusters comprising sentences extracted from ranked ones of the set of passages according to sentence similarity, and providing a response to the user question based at least on a selected semantic cluster.
    Type: Grant
    Filed: December 15, 2022
    Date of Patent: January 16, 2024
    Assignee: ADOBE INC.
    Inventors: Balaji Vasan Srinivasan, Sujith Sai Venna, Kuldeep Kulkarni, Durga Prasad Maram, Dasireddy Sai Shritishma Reddy
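The retrieve-rank-cluster pipeline the abstract outlines can be illustrated with a toy sketch. Token overlap stands in for the patent's unspecified relevance and similarity models, and all passages and sentences below are invented.

```python
def tokens(text):
    return set(text.lower().split())

def rank_passages(question, passages):
    """Rank passages by token overlap with the question (a crude
    stand-in for a learned relevance model)."""
    q = tokens(question)
    return sorted(passages, key=lambda p: len(q & tokens(p)), reverse=True)

def cluster_sentences(sentences, threshold=0.3):
    """Greedy semantic clustering: a sentence joins the first cluster
    whose seed sentence has Jaccard similarity above the threshold."""
    clusters = []
    for s in sentences:
        for cluster in clusters:
            seed = tokens(cluster[0])
            sim = len(seed & tokens(s)) / len(seed | tokens(s))
            if sim >= threshold:
                cluster.append(s)
                break
        else:
            clusters.append([s])
    return clusters

question = "how do I export a pdf"
passages = [
    "To export a pdf choose File then Export.",
    "Layers can be merged from the Layers panel.",
]
ranked = rank_passages(question, passages)
sentences = ["export the pdf file", "export a pdf document", "merge all layers"]
clusters = cluster_sentences(sentences)
```

A response would then be drawn from a selected cluster, per the abstract's final step.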
  • Publication number: 20240012849
    Abstract: Embodiments are disclosed for multichannel content recommendation. The method may include receiving an input collection comprising a plurality of images. The method may include extracting a set of feature channels from each of the images. The method may include generating, by a trained machine learning model, an intent channel of the input collection from the set of feature channels. The method may include retrieving, from a content library, a plurality of search result images that include a channel that matches the intent channel. The method may include generating a recommended set of images based on the intent channel and the set of feature channels.
    Type: Application
    Filed: July 11, 2022
    Publication date: January 11, 2024
    Applicant: Adobe Inc.
    Inventors: Praneetha VADDAMANU, Nihal JAIN, Paridhi MAHESHWARI, Kuldeep KULKARNI, Vishwa VINAY, Balaji Vasan SRINIVASAN, Niyati CHHAYA, Harshit AGRAWAL, Prabhat MAHAPATRA, Rizurekh SAHA
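The retrieval step in the abstract above, deriving an intent channel from an input collection and matching it against a content library, can be sketched with a trivial frequency heuristic in place of the patent's trained model. All channel names and image identifiers below are invented.

```python
from collections import Counter

def intent_channel(collection):
    """Pick the feature channel shared most often across the input
    collection (a crude stand-in for the learned intent model)."""
    counts = Counter(ch for channels in collection for ch in channels)
    return counts.most_common(1)[0][0]

def retrieve(library, channel):
    """Return library images whose feature channels include the
    intent channel."""
    return [name for name, channels in library.items() if channel in channels]

# Toy data: each image is described by a list of feature channels.
collection = [
    ["color:warm", "object:beach"],
    ["color:warm", "object:sunset"],
    ["color:warm", "object:beach"],
]
library = {
    "img_a": {"color:warm", "object:mountain"},
    "img_b": {"color:cool", "object:city"},
    "img_c": {"color:warm", "object:beach"},
}
intent = intent_channel(collection)
results = retrieve(library, intent)
```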
  • Publication number: 20240005587
    Abstract: Systems and methods for machine learning based controllable animation of still images are provided. In one embodiment, a still image including a fluid element is obtained. Using a flow refinement machine learning model, a refined dense optical flow is generated for the still image based on a selection mask that includes the fluid element and a dense optical flow generated from a motion hint that indicates a direction of animation. The refined dense optical flow indicates a pattern of apparent motion for the fluid element. Thereafter, a plurality of video frames is generated by projecting a plurality of pixels of the still image using the refined dense optical flow.
    Type: Application
    Filed: July 1, 2022
    Publication date: January 4, 2024
    Inventors: Kuldeep KULKARNI, Aniruddha MAHAPATRA
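The final step the abstract describes, projecting the still image's pixels along a dense optical flow to synthesize frames, can be illustrated on a tiny one-row "image" with forward warping. The flow refinement network itself is not reproduced; the data below is purely illustrative.

```python
def project_frame(image, flow, t):
    """Forward-warp moving pixels by t times their flow vector to
    build frame t; positions nothing moves into fall back to the
    still image's value."""
    h, w = len(image), len(image[0])
    frame = [row[:] for row in image]  # holes fall back to the source
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y][x]
            if (dy, dx) == (0, 0):
                continue  # static pixel: already copied
            ny, nx = round(y + t * dy), round(x + t * dx)
            if 0 <= ny < h and 0 <= nx < w:
                frame[ny][nx] = image[y][x]
    return frame

# A 1x4 strip whose "fluid" pixel (value 9) drifts right one pixel
# per frame, per a (dy, dx) flow field.
image = [[9, 0, 0, 0]]
flow = [[(0, 1), (0, 0), (0, 0), (0, 0)]]
frames = [project_frame(image, flow, t) for t in range(3)]
```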
  • Publication number: 20230326088
    Abstract: Embodiments are disclosed for user-guided variable-rate compression. A method of user-guided variable-rate compression includes receiving a request to compress an image, the request including the image, corresponding importance data, and a target bitrate, providing the image, the corresponding importance data, and the target bitrate to a compression network, generating, by the compression network, a learned importance map and a representation of the image, and generating, by the compression network, a compressed representation of the image based on the learned importance map and the representation of the image.
    Type: Application
    Filed: April 6, 2022
    Publication date: October 12, 2023
    Applicant: Adobe Inc.
    Inventors: Suryateja BV, Sharmila Reddy NANGI, Rushil GUPTA, Rajat JAISWAL, Nikhil KAPOOR, Kuldeep KULKARNI
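The role of the importance map in the abstract above, steering more of a fixed bit budget toward important regions while meeting a target bitrate, can be sketched with a simple proportional allocation. The compression network is not reproduced; this only illustrates the budgeting idea, with invented region weights.

```python
def allocate_bits(importance_map, target_bits):
    """Distribute a total bit budget across image regions in
    proportion to their importance weights, rounding down and giving
    any leftover bits to the most important region."""
    total = sum(importance_map)
    bits = [target_bits * w // total for w in importance_map]
    bits[importance_map.index(max(importance_map))] += target_bits - sum(bits)
    return bits

# Four image regions; the most important region (weight 6) gets the
# bulk of a 1000-bit budget.
importance = [6, 2, 1, 1]
budget = allocate_bits(importance, 1000)
```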
  • Publication number: 20230169632
    Abstract: Certain aspects and features of this disclosure relate to semantically-aware image extrapolation. In one example, an input image is segmented to produce an input segmentation map of object instances in the input image. An object generation network is used to generate an extrapolated semantic label map for an extrapolated image. The extrapolated semantic label map includes instances in the original image and instances that will appear in an outpainted region of the extrapolated image. A panoptic label map is derived from coordinates of output instances in the extrapolated image and used to identify partial instances and boundaries. Instance-aware context normalization is used to apply one or more characteristics from the input image to the outpainted region to maintain semantic continuity. The extrapolated image includes the original image and the outpainted region and can be rendered or stored for future use.
    Type: Application
    Filed: November 8, 2021
    Publication date: June 1, 2023
    Inventors: Kuldeep Kulkarni, Soumya Dash, Hrituraj Singh, Bholeshwar Khurana, Aniruddha Mahapatra, Abhishek Bhatia
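The "instance-aware context normalization" the abstract mentions, carrying the input image's statistics over to the outpainted region, amounts to a mean/variance transfer. A minimal single-channel sketch, with invented pixel values standing in for real image regions:

```python
def mean_std(values):
    """Population mean and standard deviation of a list of values."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m, var ** 0.5

def context_normalize(outpainted, reference):
    """Shift and scale the outpainted pixels so their mean and std
    match the reference (input-image) region, preserving continuity
    across the extrapolation boundary."""
    m_o, s_o = mean_std(outpainted)
    m_r, s_r = mean_std(reference)
    scale = s_r / s_o if s_o else 1.0
    return [(v - m_o) * scale + m_r for v in outpainted]

reference = [100, 110, 120, 130]   # pixels from the input image
outpainted = [0, 10, 20, 30]       # raw generated pixels, wrong exposure
harmonized = context_normalize(outpainted, reference)
```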
  • Publication number: 20230121355
    Abstract: Enhanced techniques and circuitry are presented herein for providing responses to user questions from among digital documentation sources spanning various documentation formats, versions, and types. One example includes a method comprising receiving a user question directed to a subject having a documentation corpus, determining a set of passages of the documentation corpus related to the user question, ranking the set of passages according to relevance to the user question, forming semantic clusters comprising sentences extracted from ranked ones of the set of passages according to sentence similarity, and providing a response to the user question based at least on a selected semantic cluster.
    Type: Application
    Filed: December 15, 2022
    Publication date: April 20, 2023
    Inventors: Balaji Vasan SRINIVASAN, Sujith Sai VENNA, Kuldeep KULKARNI, Durga Prasad MARAM, Dasireddy Sai Shritishma REDDY
  • Patent number: 11556573
    Abstract: Enhanced techniques and circuitry are presented herein for providing responses to questions from among digital documentation sources spanning various documentation formats, versions, and types. One example includes a method comprising receiving an indication of a question directed to a subject having a documentation corpus, determining a set of passages of the documentation corpus related to the question, ranking the set of passages according to relevance to the question, forming semantic clusters comprising sentences extracted from ranked ones of the set of passages according to sentence similarity, and providing a response to the question based at least on a selected semantic cluster.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: January 17, 2023
    Assignee: ADOBE INC.
    Inventors: Balaji Vasan Srinivasan, Sujith Sai Venna, Kuldeep Kulkarni, Durga Prasad Maram, Dasireddy Sai Shritishma Reddy
  • Publication number: 20210374168
    Abstract: Enhanced techniques and circuitry are presented herein for providing responses to questions from among digital documentation sources spanning various documentation formats, versions, and types. One example includes a method comprising receiving an indication of a question directed to a subject having a documentation corpus, determining a set of passages of the documentation corpus related to the question, ranking the set of passages according to relevance to the question, forming semantic clusters comprising sentences extracted from ranked ones of the set of passages according to sentence similarity, and providing a response to the question based at least on a selected semantic cluster.
    Type: Application
    Filed: May 29, 2020
    Publication date: December 2, 2021
    Inventors: Balaji Vasan Srinivasan, Sujith Sai Venna, Kuldeep Kulkarni, Durga Prasad Maram, Dasireddy Sai Shritishma Reddy
  • Patent number: 10210391
    Abstract: A method and system for detecting actions of an object in a scene from a video of the scene. The video is a video sequence partitioned into chunks, and each chunk includes consecutive video frames. The method includes the following elements. Acquiring the video of the scene, wherein the video includes a sequence of images. Tracking the object in the video, and for each object and each chunk of the video, further comprising: determining a sequence of contour images from video frames of the video sequence to represent motion data within a bounding box located around the object. Using the bounding box to produce cropped contour images and cropped images for one or more images in each chunk. Passing the cropped contour images and the cropped images to a recurrent neural network (RNN) that outputs a relative score for each action of interest.
    Type: Grant
    Filed: August 7, 2017
    Date of Patent: February 19, 2019
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Michael Jones, Tim Marks, Kuldeep Kulkarni
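The per-chunk preprocessing the abstract walks through, contour (motion) images computed between consecutive frames and then cropped to the tracked object's bounding box, can be sketched as follows. The RNN scorer is omitted, and the frame data is invented; a frame difference stands in for whatever contour operator the patent specifies.

```python
def contour_image(prev, cur):
    """Absolute per-pixel frame difference, a simple stand-in for the
    motion 'contour image' between consecutive frames."""
    return [[abs(c - p) for p, c in zip(pr, cr)] for pr, cr in zip(prev, cur)]

def crop(image, box):
    """Crop an image to a (top, left, bottom, right) bounding box."""
    top, left, bottom, right = box
    return [row[left:right] for row in image[top:bottom]]

# Two 3x4 frames in which a bright object (value 5) moves one pixel right.
frames = [
    [[0, 0, 0, 0],
     [0, 5, 0, 0],
     [0, 0, 0, 0]],
    [[0, 0, 0, 0],
     [0, 0, 5, 0],
     [0, 0, 0, 0]],
]
box = (0, 1, 3, 3)  # bounding box tracked around the object
motion = contour_image(frames[0], frames[1])
cropped_motion = crop(motion, box)
cropped_image = crop(frames[1], box)
```

The cropped contour images and cropped images would then be passed, per chunk, to the RNN that scores each action of interest.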
  • Publication number: 20190042850
    Abstract: A method and system for detecting actions of an object in a scene from a video of the scene. The video is a video sequence partitioned into chunks, and each chunk includes consecutive video frames. The method includes the following elements. Acquiring the video of the scene, wherein the video includes a sequence of images. Tracking the object in the video, and for each object and each chunk of the video, further comprising: determining a sequence of contour images from video frames of the video sequence to represent motion data within a bounding box located around the object. Using the bounding box to produce cropped contour images and cropped images for one or more images in each chunk. Passing the cropped contour images and the cropped images to a recurrent neural network (RNN) that outputs a relative score for each action of interest.
    Type: Application
    Filed: August 7, 2017
    Publication date: February 7, 2019
    Inventors: Michael Jones, Tim Marks, Kuldeep Kulkarni
  • Publication number: 20180039637
    Abstract: The disclosed embodiments illustrate methods and systems for multimedia processing to identify concepts in multimedia content. The method includes receiving the multimedia content and at least one annotation of the multimedia content at a computing device from another computing device. The received at least one annotation includes a plurality of keywords that is representative of at least a plurality of concepts in the received multimedia content. The method further includes extracting a plurality of features from the received multimedia content by performing a statistical analysis of the multimedia content, based on the plurality of keywords in the at least one annotation. The method further includes identifying the plurality of concepts in a set of frames of the multimedia content by use of one or more classifiers. The one or more classifiers are trained based at least on the extracted plurality of features.
    Type: Application
    Filed: August 2, 2016
    Publication date: February 8, 2018
    Inventors: Ankit Gandhi, Arijit Biswas, Om D. Deshmukh, Sohil Shah, Kuldeep Kulkarni
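The identification step in the abstract above, tagging frames of the content with concepts named by the annotation's keywords, can be illustrated with a trivial keyword matcher standing in for the trained classifiers. All keywords and frame features below are invented.

```python
def identify_concepts(keywords, frames):
    """Tag each frame with the annotation keywords found among its
    features -- a toy stand-in for the patent's trained classifiers."""
    hits = {}
    for i, feats in enumerate(frames):
        found = [k for k in keywords if k in feats]
        if found:
            hits[i] = found
    return hits

# Annotation keywords describe concepts expected in the video.
keywords = ["whiteboard", "lecture", "diagram"]
frames = [
    {"whiteboard", "person"},
    {"hallway"},
    {"diagram", "whiteboard"},
]
concepts = identify_concepts(keywords, frames)
```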