Patents by Inventor Subham Gupta

Subham Gupta has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11947983
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for customizing digital content tutorials for a user within a digital editing application based on user experience with editing tools. The disclosed system determines proficiency levels for a plurality of different portions of a digital content tutorial corresponding to a digital content editing task. The disclosed system generates tool proficiency scores associated with the user in a digital editing application in connection with the portions of the digital content tutorial. Specifically, the disclosed system generates the tool proficiency scores based on usage of tools corresponding to the portions. Additionally, the disclosed system generates a mapping for the user based on the tool proficiency scores associated with the user and the proficiency levels of the portions of the digital content tutorial. The disclosed system provides a customized digital content tutorial for display at a client device according to the mapping.
    Type: Grant
    Filed: September 7, 2022
    Date of Patent: April 2, 2024
    Assignee: Adobe Inc.
    Inventors: Subham Gupta, Padmassri Chandrashekar, Ankur Murarka
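
The abstract above describes scoring a user's proficiency with each editing tool and mapping those scores against the proficiency levels of the tutorial's portions. The following is a minimal sketch of that mapping idea; the scoring formula, the thresholds, and every name (`TutorialPortion`, `tool_proficiency_scores`, `map_tutorial`) are hypothetical illustrations, not Adobe's implementation.

```python
# Illustrative sketch only: hypothetical names and thresholds, not Adobe's implementation.
# Maps a user's tool-proficiency scores onto tutorial portions, skipping or condensing
# portions whose required proficiency the user already meets.

from dataclasses import dataclass

@dataclass
class TutorialPortion:
    title: str
    tool: str                    # editing tool the portion teaches
    required_proficiency: float  # proficiency level of this portion (0..1)

def tool_proficiency_scores(usage_counts: dict[str, int], cap: int = 50) -> dict[str, float]:
    """Derive a 0..1 proficiency score per tool from how often the user applied it."""
    return {tool: min(count, cap) / cap for tool, count in usage_counts.items()}

def map_tutorial(portions: list[TutorialPortion], scores: dict[str, float]) -> list[tuple[str, str]]:
    """Return (portion title, treatment) pairs forming the customized tutorial."""
    mapping = []
    for p in portions:
        user_score = scores.get(p.tool, 0.0)
        if user_score >= p.required_proficiency:
            mapping.append((p.title, "skip"))      # user already proficient in this tool
        elif user_score >= p.required_proficiency / 2:
            mapping.append((p.title, "condense"))  # show a shortened version
        else:
            mapping.append((p.title, "full"))      # show the full portion
    return mapping

if __name__ == "__main__":
    portions = [
        TutorialPortion("Crop the image", "crop", 0.2),
        TutorialPortion("Mask the sky", "masking", 0.7),
    ]
    scores = tool_proficiency_scores({"crop": 40, "masking": 5})
    print(map_tutorial(portions, scores))
```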
  • Patent number: 11941049
    Abstract: A system identifies a video comprising frames associated with content tags. The system detects features for each frame of the video. The system identifies, based on the detected features, scenes of the video. The system determines, for each frame for each scene, a frame score that indicates a number of content tags that match the other frames within the scene. The system selects, for each scene, a set of key frames that represent the scene based on the determined frame scores. The system receives a search query comprising a keyword. The system generates, for display, search results responsive to the search query including a dynamic preview of the video. The dynamic preview comprises an arrangement of frames of the video corresponding to each scene of the video. Each of the arrangement of frames is selected from the selected set of key frames representing the respective scene of the video.
    Type: Grant
    Filed: September 2, 2022
    Date of Patent: March 26, 2024
    Assignee: Adobe Inc.
    Inventors: Amol Jindal, Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
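
The frame-scoring idea in this entry (a frame's score counts how many of its content tags match the other frames in its scene, and the top-scoring frames become that scene's key frames) can be sketched briefly. The data model and function names below are hypothetical and stand in for whatever tagging and scene detection the patented system actually uses.

```python
# Illustrative sketch only: hypothetical data model, not the patented implementation.
# Scores each frame by how many of its content tags also appear in the other frames of
# the same scene, then keeps the top-scoring frames as that scene's key frames.

def frame_score(tags: set[str], other_frames: list[set[str]]) -> int:
    """Count tag matches between this frame and every other frame in the scene."""
    return sum(len(tags & other) for other in other_frames)

def key_frames_per_scene(scenes: list[list[set[str]]], k: int = 2) -> list[list[int]]:
    """For each scene (a list of per-frame tag sets), return indices of the k key frames."""
    result = []
    for frames in scenes:
        scores = [
            frame_score(tags, frames[:i] + frames[i + 1:])
            for i, tags in enumerate(frames)
        ]
        ranked = sorted(range(len(frames)), key=lambda i: scores[i], reverse=True)
        result.append(ranked[:k])
    return result

if __name__ == "__main__":
    scenes = [
        [{"dog", "park"}, {"dog", "ball"}, {"sky"}],            # scene 0
        [{"beach", "sunset"}, {"beach"}, {"beach", "people"}],  # scene 1
    ]
    print(key_frames_per_scene(scenes, k=1))
```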
  • Publication number: 20240086212
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for customizing digital content tutorials for a user within a digital editing application based on user experience with editing tools. The disclosed system determines proficiency levels for a plurality of different portions of a digital content tutorial corresponding to a digital content editing task. The disclosed system generates tool proficiency scores associated with the user in a digital editing application in connection with the portions of the digital content tutorial. Specifically, the disclosed system generates the tool proficiency scores based on usage of tools corresponding to the portions. Additionally, the disclosed system generates a mapping for the user based on the tool proficiency scores associated with the user and the proficiency levels of the portions of the digital content tutorial. The disclosed system provides a customized digital content tutorial for display at a client device according to the mapping.
    Type: Application
    Filed: September 7, 2022
    Publication date: March 14, 2024
    Inventors: Subham Gupta, Padmassri Chandrashekar, Ankur Murarka
  • Publication number: 20240078730
    Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for generating object-specific-preset edits to be later applied to other digital images depicting a same object type or applying a previously generated object-specific-preset edit to an object of the same object type within a target digital image. For example, in some cases, the disclosed systems generate an object-specific-preset edit by determining a region of a particular localized edit in an edited digital image, identifying an edited object corresponding to the localized edit, and storing in a digital-image-editing document an object tag for the edited object and instructions for the localized edit. In certain implementations, the disclosed systems further apply such an object-specific-preset edit to a target object in a target digital image by determining transformed-positioning parameters for a localized edit from the object-specific-preset edit to the target object.
    Type: Application
    Filed: November 8, 2023
    Publication date: March 7, 2024
    Inventors: Subham Gupta, Arnab Sil, Anuradha
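
As a rough illustration of the object-specific preset described above (and in the related granted patent 11854131 further down this list), the sketch below stores an edit region relative to the edited object's bounding box and later transforms those positioning parameters onto a same-type object in a target image. All structures and names are hypothetical.

```python
# Illustrative sketch only: hypothetical structures, not the claimed implementation.
# Saves a localized edit as an object-specific preset (object tag plus an edit region
# expressed relative to the edited object's bounding box), then re-applies it to a
# same-type object in a target image by transforming the positioning parameters.

from dataclasses import dataclass

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float

@dataclass
class ObjectPreset:
    object_tag: str    # e.g. "face", "sky"
    rel_region: Box    # edit region normalized to the edited object's box
    instructions: dict # e.g. {"exposure": 0.4}

def make_preset(tag: str, edit_region: Box, obj_box: Box, instructions: dict) -> ObjectPreset:
    rel = Box((edit_region.x - obj_box.x) / obj_box.w,
              (edit_region.y - obj_box.y) / obj_box.h,
              edit_region.w / obj_box.w,
              edit_region.h / obj_box.h)
    return ObjectPreset(tag, rel, instructions)

def apply_preset(preset: ObjectPreset, target_obj_box: Box) -> tuple[Box, dict]:
    """Transform the preset's relative region into the target object's coordinates."""
    r = preset.rel_region
    region = Box(target_obj_box.x + r.x * target_obj_box.w,
                 target_obj_box.y + r.y * target_obj_box.h,
                 r.w * target_obj_box.w,
                 r.h * target_obj_box.h)
    return region, preset.instructions

if __name__ == "__main__":
    preset = make_preset("face", Box(110, 60, 40, 20), Box(100, 50, 80, 80), {"exposure": 0.4})
    print(apply_preset(preset, Box(400, 200, 160, 160)))
```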
  • Publication number: 20240062431
    Abstract: In implementations of systems for generating and propagating personal masking edits, a computing device implements a mask system to detect a face of a person depicted in a digital image displayed in a user interface of an application for editing digital content. The mask system determines an identifier for the person based on an identifier for the face. Edit data is received describing properties of an editing operation and a type of mask used to modify a particular portion of the person depicted in the digital image. The mask system edits an additional digital image identified based on the identifier of the person using the type of mask and the properties of the editing operation to modify the particular portion of the person as depicted in the additional digital image.
    Type: Application
    Filed: August 18, 2022
    Publication date: February 22, 2024
    Applicant: Adobe Inc.
    Inventors: Subham Gupta, Arnab Sil, Anuradha
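
A short sketch of the propagation step described above: a stored masking edit is keyed by a person identifier and replayed on every other image known to depict that person. The face-identification and masking hooks are assumed to exist elsewhere; all names are hypothetical.

```python
# Illustrative sketch only: hypothetical face-ID and catalog APIs are assumed.
# Propagates a person-specific masking edit (e.g. a skin-smoothing mask) from one
# photo to other photos in which the same person is identified.

from dataclasses import dataclass

@dataclass
class MaskEdit:
    person_id: str    # stable identifier derived from the detected face
    mask_type: str    # e.g. "face_skin", "hair"
    properties: dict  # e.g. {"smoothing": 30}

def propagate_edit(edit: MaskEdit, catalog: dict[str, list[str]], apply_fn) -> list[str]:
    """Apply the stored edit to every catalog image that depicts the same person.

    catalog maps person_id -> image paths; apply_fn(image, mask_type, properties)
    is the (assumed) masking routine of the editing application.
    """
    edited = []
    for image in catalog.get(edit.person_id, []):
        apply_fn(image, edit.mask_type, edit.properties)
        edited.append(image)
    return edited

if __name__ == "__main__":
    edit = MaskEdit("person-42", "face_skin", {"smoothing": 30})
    catalog = {"person-42": ["IMG_0001.jpg", "IMG_0007.jpg"]}
    print(propagate_edit(edit, catalog, lambda img, m, p: None))  # no-op stand-in
```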
  • Patent number: 11854131
    Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for generating object-specific-preset edits to be later applied to other digital images depicting a same object type or applying a previously generated object-specific-preset edit to an object of the same object type within a target digital image. For example, in some cases, the disclosed systems generate an object-specific-preset edit by determining a region of a particular localized edit in an edited digital image, identifying an edited object corresponding to the localized edit, and storing in a digital-image-editing document an object tag for the edited object and instructions for the localized edit. In certain implementations, the disclosed systems further apply such an object-specific-preset edit to a target object in a target digital image by determining transformed-positioning parameters for a localized edit from the object-specific-preset edit to the target object.
    Type: Grant
    Filed: January 13, 2021
    Date of Patent: December 26, 2023
    Assignee: Adobe Inc.
    Inventors: Subham Gupta, Arnab Sil, Anuradha
  • Publication number: 20230385980
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating marked digital images with content adaptive watermarks. In particular, in one or more embodiments, the disclosed systems intelligently evaluate a plurality of watermark configurations to select one or more content adaptive watermarks for one or more target digital images and generate one or more marked digital images by adding the selected content adaptive watermarks to the one or more target digital images.
    Type: Application
    Filed: May 27, 2022
    Publication date: November 30, 2023
    Inventors: Ankur Murarka, Padmassri Chandrashekar, Subham Gupta
  • Patent number: 11816147
    Abstract: Embodiments of the present invention are directed towards providing contextual tags for an image based on a contextual analysis of associated images captured in the same environment as the image. To determine contextual tags, content tags can be determined for images. The determined content tags can be associated with categories based on a contextual classification of the content tags. These associated content tags can then be designated as contextual tags for a respective category. To associate these contextual tags with the images, the images can be iterated through based on how the images relate to the contextual tags. For instance, when an image is associated with a category, the contextual tags classified into that category can be assigned to that image.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: November 14, 2023
    Assignee: Adobe Inc.
    Inventors: Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
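
The pooling-and-propagation idea in this abstract can be sketched as follows: content tags from images captured in the same environment are grouped by category, and each image then inherits the pooled tags of the categories it belongs to. The category rules below are a hypothetical stand-in for the contextual classification.

```python
# Illustrative sketch only: a fixed tag-to-category table stands in for the contextual
# classification described above. Content tags are pooled per category across the
# session's images and then propagated back to matching images as contextual tags.

from collections import defaultdict

# Assumed mapping from content tags to contextual categories (hypothetical).
TAG_TO_CATEGORY = {
    "sand": "beach", "wave": "beach", "surfboard": "beach",
    "tent": "camping", "campfire": "camping",
}

def contextual_tags(image_tags: dict[str, set[str]]) -> dict[str, set[str]]:
    """Assign each image every tag collected for the categories it belongs to."""
    # Pool tags by category across all images captured in the same environment.
    pooled = defaultdict(set)
    for tags in image_tags.values():
        for tag in tags:
            category = TAG_TO_CATEGORY.get(tag)
            if category:
                pooled[category].add(tag)
    # Iterate through the images and attach the pooled tags of matching categories.
    result = {}
    for image, tags in image_tags.items():
        categories = {TAG_TO_CATEGORY[t] for t in tags if t in TAG_TO_CATEGORY}
        extra = set().union(*(pooled[c] for c in categories)) if categories else set()
        result[image] = tags | extra
    return result

if __name__ == "__main__":
    session = {"a.jpg": {"sand", "dog"}, "b.jpg": {"wave", "surfboard"}}
    print(contextual_tags(session))
```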
  • Publication number: 20230244368
    Abstract: In implementations of systems for generating and applying editing presets, a computing device implements a preset system to detect objects depicted in a digital image that is displayed in a user interface of an application for editing digital content. Input data is received describing an edited region of the digital image and properties of an editing operation performed in the edited region. The preset system identifies a particular detected object of the detected objects based on a bounding box of the particular detected object and an area of the edited region. An additional digital image is edited by applying the properties of the editing operation to a detected object that is depicted in the additional digital image based on a classification of the detected object and a classification of the particular detected object.
    Type: Application
    Filed: February 3, 2022
    Publication date: August 3, 2023
    Applicant: Adobe Inc.
    Inventors: Arnab Sil, Subham Gupta, Anuradha
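
The matching step described above (pick the detected object whose bounding box best overlaps the edited region, then reapply the edit to same-class objects elsewhere) is easy to sketch. The detector output format and all names below are hypothetical.

```python
# Illustrative sketch only: hypothetical detector output is assumed. The edited
# region is matched to the detected object whose bounding box it overlaps most,
# and the edit properties are then reapplied to same-class objects in another image.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str                              # object classification
    box: tuple[float, float, float, float]  # x1, y1, x2, y2

def overlap_area(a, b) -> float:
    w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    return w * h

def object_for_edit(detections: list[Detection], edited_region) -> Detection | None:
    """Pick the detection whose bounding box overlaps the edited region the most."""
    best = max(detections, key=lambda d: overlap_area(d.box, edited_region), default=None)
    if best and overlap_area(best.box, edited_region) > 0:
        return best
    return None

def apply_to_same_class(source: Detection, properties: dict, targets: list[Detection]):
    """Return (detection, properties) pairs for target objects of the matching class."""
    return [(d, properties) for d in targets if d.label == source.label]

if __name__ == "__main__":
    detections = [Detection("sky", (0, 0, 100, 40)), Detection("person", (30, 40, 60, 100))]
    src = object_for_edit(detections, (0, 0, 90, 35))  # edit drawn over the sky
    print(apply_to_same_class(src, {"dehaze": 20}, [Detection("sky", (0, 0, 200, 80))]))
```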
  • Publication number: 20230121539
    Abstract: Some implementations include methods for communicating features of images to visually impaired users. An image to be displayed on a touch sensitive screen of a computing device may include one or more objects. Each of the one or more objects may be associated with a bounding box. A contact with the image may be detected via the touch sensitive screen. The contact may be determined to be within a bounding box associated with a first object of the one or more objects. Responsive to detecting the contact to be within the bounding box associated with the first object, a caption of the first object may be caused to become audible and the touch sensitive screen may be caused to vibrate based on a vibration pattern unique to the first object.
    Type: Application
    Filed: October 19, 2021
    Publication date: April 20, 2023
    Inventors: Subham Gupta, Poonam Bhalla, Krishna Singh, Ajay Bedi
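
A compact sketch of the interaction flow described in this abstract: a touch point is hit-tested against per-object bounding boxes, and a match triggers the object's spoken caption plus its unique vibration pattern. The speak/vibrate calls below are hypothetical platform hooks, not a real mobile API.

```python
# Illustrative sketch only: the speak/vibrate calls are hypothetical platform hooks.
# When a touch lands inside an object's bounding box, the object's caption is read
# aloud and a vibration pattern unique to that object is played.

from dataclasses import dataclass

@dataclass
class TaggedObject:
    caption: str
    box: tuple[int, int, int, int]  # x1, y1, x2, y2
    vibration: tuple[int, ...]      # per-object pattern, e.g. on/off durations in ms

def hit_test(objects: list[TaggedObject], x: int, y: int) -> TaggedObject | None:
    """Return the first object whose bounding box contains the touch point."""
    for obj in objects:
        x1, y1, x2, y2 = obj.box
        if x1 <= x <= x2 and y1 <= y <= y2:
            return obj
    return None

def on_touch(objects: list[TaggedObject], x: int, y: int, speak, vibrate) -> None:
    obj = hit_test(objects, x, y)
    if obj is not None:
        speak(obj.caption)      # assumed text-to-speech hook
        vibrate(obj.vibration)  # assumed haptics hook

if __name__ == "__main__":
    objects = [TaggedObject("a dog catching a frisbee", (10, 10, 120, 90), (50, 30, 50))]
    on_touch(objects, 40, 40, speak=print, vibrate=lambda pattern: print("vibrate", pattern))
```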
  • Publication number: 20220414149
    Abstract: A system identifies a video comprising frames associated with content tags. The system detects features for each frame of the video. The system identifies, based on the detected features, scenes of the video. The system determines, for each frame for each scene, a frame score that indicates a number of content tags that match the other frames within the scene. The system selects, for each scene, a set of key frames that represent the scene based on the determined frame scores. The system receives a search query comprising a keyword. The system generates, for display, search results responsive to the search query including a dynamic preview of the video. The dynamic preview comprises an arrangement of frames of the video corresponding to each scene of the video. Each of the arrangement of frames is selected from the selected set of key frames representing the respective scene of the video.
    Type: Application
    Filed: September 2, 2022
    Publication date: December 29, 2022
    Inventors: Amol Jindal, Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
  • Patent number: 11514102
    Abstract: Embodiments provide systems, methods, and non-transitory computer storage media for providing search result images based on associations of keywords and depth-levels of an image. In embodiments, depth-levels of an image are identified using depth-map information of the image to identify depth-segments of the image. The depth-segments are analyzed to determine keywords associated with each depth-segment based on objects, features, or content in each depth-segment. An image depth-level data structure is generated by matching keywords generated for the entire image with the keywords at each depth-level and assigning the depth-level to the keyword in the image depth-level data structure for the entire image. The image depth-level data structure may be queried for images that contain keywords and depth-level information that match the keywords and depth-level information specified in a search query.
    Type: Grant
    Filed: August 14, 2019
    Date of Patent: November 29, 2022
    Assignee: Adobe Inc.
    Inventors: Subham Gupta, Anuradha, Arnab Sil
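
The data structure at the heart of this abstract, a per-image index from keywords to depth levels that can be queried with a keyword plus a depth level, can be sketched as follows. Depth segmentation and keyword tagging are assumed to happen upstream, and all names are hypothetical.

```python
# Illustrative sketch only: depth segmentation and tagging are assumed to be done
# elsewhere; this shows the keyword-to-depth-level index and how a query that pairs a
# keyword with a depth level ("person in the foreground") is matched against it.

def build_depth_index(segments: dict[int, set[str]]) -> dict[str, int]:
    """Map each keyword found in a depth segment to that segment's depth level
    (0 = nearest). Keywords seen at several depths keep the nearest level here."""
    index: dict[str, int] = {}
    for depth in sorted(segments):
        for keyword in segments[depth]:
            index.setdefault(keyword, depth)
    return index

def matches(index: dict[str, int], keyword: str, depth: int) -> bool:
    """True if the image contains the keyword at the requested depth level."""
    return index.get(keyword) == depth

if __name__ == "__main__":
    # Per-image depth segments: depth level -> keywords detected in that segment.
    images = {
        "a.jpg": build_depth_index({0: {"person"}, 1: {"car"}, 2: {"mountain"}}),
        "b.jpg": build_depth_index({0: {"car"}, 1: {"person"}}),
    }
    # Query: images with a person in the foreground (depth level 0).
    print([name for name, idx in images.items() if matches(idx, "person", 0)])
```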
  • Patent number: 11500927
    Abstract: Certain embodiments involve adaptive search results for multimedia search queries to provide dynamic previews. For instance, a computing system receives a search query that includes a keyword. The computing system identifies, based on the search query, a video file having keyframes with content tags that match the search query. The computing system determines matching scores for respective keyframes of the identified video file. The computing system generates a dynamic preview from at least two keyframes having the highest matching scores.
    Type: Grant
    Filed: October 3, 2019
    Date of Patent: November 15, 2022
    Assignee: Adobe Inc.
    Inventors: Amol Jindal, Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
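
The keyframe-matching idea here is straightforward to sketch: each keyframe of the matched video is scored against the query keywords, and the highest-scoring keyframes are assembled into the dynamic preview. Tag extraction and indexing are assumed upstream; names are hypothetical.

```python
# Illustrative sketch only: tag extraction is assumed upstream. Each keyframe gets a
# matching score against the query keywords, and the highest-scoring keyframes form
# the dynamic preview returned with the search result.

def matching_score(frame_tags: set[str], query_keywords: set[str]) -> int:
    return len(frame_tags & query_keywords)

def dynamic_preview(keyframes: list[tuple[str, set[str]]], query: str, size: int = 2) -> list[str]:
    """Return the ids of the `size` keyframes that best match the query."""
    keywords = set(query.lower().split())
    ranked = sorted(keyframes, key=lambda kf: matching_score(kf[1], keywords), reverse=True)
    return [frame_id for frame_id, _ in ranked[:size]]

if __name__ == "__main__":
    keyframes = [
        ("t=01s", {"dog", "park"}),
        ("t=12s", {"dog", "ball", "park"}),
        ("t=30s", {"car"}),
    ]
    print(dynamic_preview(keyframes, "dog park"))  # the two best-matching keyframes
```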
  • Patent number: 11468786
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that generate dynamic tool-based animated tutorials. In particular, in one or more embodiments, the disclosed systems generate an animated tutorial in response to receiving a request associated with an image editing tool. The disclosed systems then extract steps from existing general tutorials that pertain to the image editing tool to generate tool-specific animated tutorials. In at least one embodiment, the disclosed systems utilize a clustering algorithm in conjunction with image parameters to provide a set of these generated animated tutorials that showcase diverse features and/or attributes of the image editing tool based on measured aesthetic gains resulting from application of the image editing tool within the animated tutorials.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: October 11, 2022
    Assignee: Adobe Inc.
    Inventors: Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
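
One way to read the selection step in this abstract: steps that use the requested tool are pulled out of existing general tutorials, grouped into clusters, and the highest aesthetic-gain example from each cluster is kept so the resulting set shows diverse uses of the tool. The sketch below assumes cluster labels and aesthetic-gain values are precomputed; all field names are hypothetical.

```python
# Illustrative sketch only: tutorial parsing, rendering, and the aesthetic-gain metric
# are assumed upstream. Steps that use the requested tool are extracted from general
# tutorials, grouped by (precomputed) cluster, and the highest-gain example is taken
# from each cluster to yield a diverse set of tool-specific tutorials.

from collections import defaultdict

def tool_steps(tutorials: list[dict], tool: str) -> list[dict]:
    """Extract only the steps of each tutorial that apply the requested tool."""
    return [step for tut in tutorials for step in tut["steps"] if step["tool"] == tool]

def diverse_animated_tutorials(tutorials: list[dict], tool: str, per_cluster: int = 1) -> list[dict]:
    steps = tool_steps(tutorials, tool)
    clusters = defaultdict(list)
    for step in steps:
        clusters[step["cluster"]].append(step)   # cluster label assumed precomputed
    selected = []
    for members in clusters.values():
        members.sort(key=lambda s: s["aesthetic_gain"], reverse=True)
        selected.extend(members[:per_cluster])   # keep the highest-gain example(s)
    return selected

if __name__ == "__main__":
    tutorials = [
        {"steps": [{"tool": "dehaze", "cluster": "landscape", "aesthetic_gain": 0.7},
                   {"tool": "crop", "cluster": "landscape", "aesthetic_gain": 0.2}]},
        {"steps": [{"tool": "dehaze", "cluster": "cityscape", "aesthetic_gain": 0.5}]},
    ]
    print(diverse_animated_tutorials(tutorials, "dehaze"))
```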
  • Publication number: 20220222875
    Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for generating object-specific-preset edits to be later applied to other digital images depicting a same object type or applying a previously generated object-specific-preset edit to an object of the same object type within a target digital image. For example, in some cases, the disclosed systems generate an object-specific-preset edit by determining a region of a particular localized edit in an edited digital image, identifying an edited object corresponding to the localized edit, and storing in a digital-image-editing document an object tag for the edited object and instructions for the localized edit. In certain implementations, the disclosed systems further apply such an object-specific-preset edit to a target object in a target digital image by determining transformed-positioning parameters for a localized edit from the object-specific-preset edit to the target object.
    Type: Application
    Filed: January 13, 2021
    Publication date: July 14, 2022
    Inventors: Subham Gupta, Arnab Sil, Anuradha
  • Patent number: 11361526
    Abstract: An image editing program can include a content-aware selection system. The content-aware selection system can enable a user to select an area of an image using a label or a tag that identifies an object in the image, rather than having to make a selection area based on coordinates and/or pixel values. The program can receive a digital image and metadata that describes an object in the image. The program can further receive a label, and can determine from the metadata that the label is associated with the object. The program can then select a bounding box for the object, and identify, in the bounding box, pixels that represent the object. The program can then output a selection area that surrounds the pixels.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: June 14, 2022
    Assignee: Adobe Inc.
    Inventors: Subham Gupta, Ajay Bedi, Poonam Bhalla, Krishna Singh Karki
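
The label-to-selection flow in this abstract (label, metadata lookup, bounding box, then the object's pixels) is sketched below. A simple color-difference rule stands in for the real object segmentation, and the metadata format and names are hypothetical.

```python
# Illustrative sketch only: a simple background-color rule stands in for the real
# object segmentation. Given a label, look up the object's bounding box in the image
# metadata, then collect the pixels inside the box that belong to the object.

def select_by_label(image, metadata: dict, label: str, background=(255, 255, 255)):
    """Return the set of (x, y) pixels making up the labeled object's selection area.

    image is a 2-D list of RGB tuples; metadata maps labels to bounding boxes.
    """
    box = metadata.get(label)
    if box is None:
        return set()
    x1, y1, x2, y2 = box
    selection = set()
    for y in range(y1, y2):
        for x in range(x1, x2):
            if image[y][x] != background:  # crude stand-in for object-vs-background
                selection.add((x, y))
    return selection

if __name__ == "__main__":
    white, red = (255, 255, 255), (200, 30, 30)
    image = [[white] * 4 for _ in range(4)]
    image[1][1] = image[1][2] = red         # a small "balloon"
    metadata = {"balloon": (0, 0, 4, 3)}    # label -> x1, y1, x2, y2
    print(select_by_label(image, metadata, "balloon"))
```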
  • Publication number: 20220108506
    Abstract: Methods, systems, and computer storage media for providing tool tutorials based on tutorial information that is dynamically integrated into tool tutorial shells using graphics editing system operations in a graphics editing systems. In operation, an image is received in association with a graphics editing application. Tool parameters (e.g., image-specific tool parameters) are generated based on processing the image. The tool parameters are generated for a graphics editing tool of the graphics editing application. The graphics editing tool (e.g., object removal tool or spot healing tool) can be a premium version of a simplified version of the graphics editing tool in a freemium application service. Based on the tool parameters and the image, a tool tutorial data file is generated by incorporating the tool parameters and the image into a tool tutorial shell. The tool tutorial data file can be selectively rendered in an integrated interface of the graphics editing application.
    Type: Application
    Filed: October 6, 2020
    Publication date: April 7, 2022
    Inventors: Subham Gupta, Krishna Singh Karki, Poonam Bhalla, Ajay Bedi
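
The shell idea in this abstract, a reusable tutorial template whose placeholders are filled with the user's own image and image-specific tool parameters, is sketched below. The parameter-generation step is stubbed out, and the shell format and names are hypothetical.

```python
# Illustrative sketch only: the parameter-generation step (e.g. finding spots for a
# spot-healing tutorial in the user's own image) is stubbed out. Shown here is the
# shell idea: a reusable tutorial template whose placeholders are filled with the
# image and the image-specific tool parameters.

def generate_tool_parameters(image_path: str, tool: str) -> dict:
    """Stand-in for analyzing the image to produce image-specific tool parameters."""
    if tool == "spot_healing":
        return {"spots": [(120, 88), (301, 214)], "brush_size": 18}
    return {}

def fill_tutorial_shell(shell: dict, image_path: str, tool: str) -> dict:
    """Build a tool tutorial data file by inserting the image and tool parameters."""
    return {
        "title": shell["title"].format(tool=tool),
        "image": image_path,
        "tool": tool,
        "parameters": generate_tool_parameters(image_path, tool),
        "steps": shell["steps"],
    }

if __name__ == "__main__":
    shell = {
        "title": "Try the {tool} tool on your photo",
        "steps": ["Open the tool", "Apply it at the suggested locations", "Compare before/after"],
    }
    print(fill_tutorial_shell(shell, "my_photo.jpg", "spot_healing"))
```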
  • Patent number: 11276153
    Abstract: Methods and systems are provided for generating auto-complete image suggestions. In embodiments described herein, a user image having an edit state is obtained. An edit state can indicate any edits applied by the user to the user image. For the user image, an auto-complete image suggestion is generated. The auto-complete image suggestion includes a representation of the user image with the user-applied edits as well as a set of supplemental edits. Such supplemental edits can be determined from a pre-edited image identified as similar to the user image.
    Type: Grant
    Filed: January 14, 2020
    Date of Patent: March 15, 2022
    Assignee: Adobe Inc.
    Inventors: Subham Gupta, Anuradha, Arnab Sil, Shatrunjay Pathare, Mustansir Bartanwala
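
A small sketch of the suggestion step described above: the user's current edit state is kept, and the extra edits found on the most similar pre-edited image are added as supplemental edits. The similarity search is reduced here to shared tags, and all names are hypothetical.

```python
# Illustrative sketch only: the similarity search is reduced to shared tags. The
# suggestion keeps the user's own edits and adds the remaining edits found on the
# most similar pre-edited image as supplemental edits.

def most_similar(user_tags: set[str], library: list[dict]) -> dict | None:
    """Pick the pre-edited library image sharing the most tags with the user image."""
    return max(library, key=lambda item: len(user_tags & item["tags"]), default=None)

def autocomplete_suggestion(user_edits: dict, user_tags: set[str], library: list[dict]) -> dict:
    match = most_similar(user_tags, library)
    supplemental = {} if match is None else {
        name: value for name, value in match["edits"].items() if name not in user_edits
    }
    return {**user_edits, **supplemental}  # user's edit state plus supplemental edits

if __name__ == "__main__":
    library = [
        {"tags": {"sunset", "beach"}, "edits": {"exposure": 0.3, "warmth": 15, "vignette": 0.2}},
        {"tags": {"portrait"}, "edits": {"skin_smooth": 20}},
    ]
    print(autocomplete_suggestion({"exposure": 0.1}, {"sunset", "sea"}, library))
```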
  • Patent number: 11188784
    Abstract: Methods and systems are provided for determining intelligent people-groups based on relationships between people. In embodiments, a photo dataset is processed to represent photos of the photo dataset using vectors. These vectors include the importance of people in the photos. The photos are analyzed to determine similarity between the photos. Similarity is indicative of relationships between the photos of the photo dataset. The similarity is based on the people in the photos. The photos are clustered based on the similarity. In clustering the photos, clustering parameters determined from location information associated with the photos of the photo dataset are used.
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: November 30, 2021
    Assignee: Adobe Inc.
    Inventors: Subham Gupta, Ajay Bedi
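
As a minimal sketch of the similarity-and-clustering idea above, the code below compares photos by cosine similarity over per-person "importance" vectors and groups them with a greedy threshold rule. This stands in for the actual clustering, which also derives its parameters from photo location data; all names and the threshold are hypothetical.

```python
# Illustrative sketch only: cosine similarity over per-person importance vectors plus
# a greedy threshold grouping stand in for the clustering described above.

import math

def similarity(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity between two photos' person-importance vectors."""
    people = set(a) | set(b)
    dot = sum(a.get(p, 0.0) * b.get(p, 0.0) for p in people)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_photos(photos: dict[str, dict[str, float]], threshold: float = 0.5) -> list[list[str]]:
    """Greedily place each photo into the first group it is similar enough to."""
    groups: list[list[str]] = []
    for name, vector in photos.items():
        for group in groups:
            if similarity(vector, photos[group[0]]) >= threshold:
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

if __name__ == "__main__":
    photos = {
        "p1.jpg": {"alice": 0.9, "bob": 0.4},
        "p2.jpg": {"alice": 0.8, "bob": 0.5},
        "p3.jpg": {"carol": 1.0},
    }
    print(group_photos(photos))
```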
  • Patent number: 11119727
    Abstract: Digital tutorial generation techniques and systems are described in which a digital tutorial is generated automatically and without user intervention. History data is generated describing a sequence of user inputs provided as part of user interaction with an application and audio data is received capturing user utterances, e.g., speech, from a microphone of the computing device. A step-identification module of the tutorial generation system identifies a plurality of tutorial steps based on a sequence of user inputs described by the history data. A segmentation module of the tutorial generation system then generates a plurality of audio segments from the audio data corresponding to respective ones of the plurality of tutorial steps. The digital tutorial is then generated by a synchronization module of the tutorial generation system by synchronizing the plurality of audio segments as part of the plurality of tutorial steps, which is then output.
    Type: Grant
    Filed: June 25, 2020
    Date of Patent: September 14, 2021
    Assignee: Adobe Inc.
    Inventors: Subham Gupta, Sudhir Tubegere Shankaranarayana, Jaideep Jeyakar, Ashutosh Dwivedi
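
Finally, the synchronization step in this last entry can be sketched as pairing each identified tutorial step with the audio segment recorded over the same time range. Step detection from the input history and speech capture are assumed to happen elsewhere; all names are hypothetical.

```python
# Illustrative sketch only: step detection and speech capture are assumed to be done
# elsewhere. Each tutorial step carries a time range, and the recorded audio is cut
# into segments matching those ranges and paired with the steps.

from dataclasses import dataclass

@dataclass
class TutorialStep:
    action: str   # e.g. "Select the Crop tool"
    start: float  # seconds into the recording
    end: float

def segment_audio(steps: list[TutorialStep], audio_duration: float) -> list[tuple[float, float]]:
    """Return one (start, end) audio segment per step, clamped to the recording."""
    return [(max(0.0, s.start), min(audio_duration, s.end)) for s in steps]

def build_tutorial(steps: list[TutorialStep], audio_duration: float) -> list[dict]:
    """Pair every step with its audio segment to form the synchronized tutorial."""
    segments = segment_audio(steps, audio_duration)
    return [
        {"step": step.action, "audio": segment}
        for step, segment in zip(steps, segments)
    ]

if __name__ == "__main__":
    steps = [
        TutorialStep("Open the image", 0.0, 4.2),
        TutorialStep("Apply the Healing Brush", 4.2, 11.8),
    ]
    print(build_tutorial(steps, audio_duration=12.0))
```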