Patents by Inventor Ajay Bedi

Ajay Bedi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220101578
    Abstract: Methods, systems, and non-transitory computer readable media are disclosed for generating a composite image comprising objects in positions from two or more different digital images. In one or more embodiments, the disclosed system receives a sequence of images and identifies objects within the sequence of images. In one example, the disclosed system determines a target position for a first object based on detecting user selection of the first object in the target position from a first image. The disclosed system can generate a fixed object image comprising the first object in the target position. The disclosed system can generate preview images comprising the fixed object image with a second object sequencing through a plurality of positions as seen in the sequence of images. Based on a second user selection of a desired preview image, the disclosed system can generate the composite image.
    Type: Application
    Filed: September 30, 2020
    Publication date: March 31, 2022
    Inventors: Ajay Bedi, Ajay Jain, Jingwan Lu, Anugrah Prakash, Prasenjit Mondal, Sachin Soni, Sanjeev Tagra
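    A minimal sketch of the compositing step described above, assuming a segmentation mask for the user-pinned object is already available from an upstream detector; the frames, pinned_frame_idx, and pinned_mask inputs are hypothetical, and this is an illustration rather than Adobe's implementation:

    ```python
    # Sketch: composite a user-pinned ("fixed") object over every frame of a
    # captured sequence, so each preview shows the other object at a different
    # position. Requires Pillow.
    from PIL import Image

    def build_previews(frames, pinned_frame_idx, pinned_mask):
        """frames: list of PIL RGB images from the captured sequence.
        pinned_mask: PIL 'L' (8-bit) mask of the object the user selected in
        frames[pinned_frame_idx]; white where that object is."""
        fixed = frames[pinned_frame_idx]
        previews = []
        for frame in frames:
            preview = frame.copy()  # the second object moves frame to frame
            # Pin the selected object from its chosen frame on top.
            preview.paste(fixed, (0, 0), pinned_mask)
            previews.append(preview)
        return previews
    ```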
  • Publication number: 20220068258
    Abstract: A media edit point selection process can include a media editing software application programmatically converting speech to text and storing a timestamp-to-text map. The map correlates text corresponding to speech extracted from an audio track for the media clip to timestamps for the media clip. The timestamps correspond to words and some gaps in the speech from the audio track. The probability of identified gaps corresponding to a grammatical pause by the speaker is determined using the timestamp-to-text map and a semantic model. Potential edit points corresponding to grammatical pauses in the speech are stored for display or for additional use by the media editing software application. Text can optionally be displayed to a user during media editing.
    Type: Application
    Filed: August 31, 2020
    Publication date: March 3, 2022
    Inventors: Amol Jindal, Somya Jain, Ajay Bedi
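    The gap-detection step lends itself to a short sketch, assuming word-level timestamps from a speech-to-text engine; the duration-based score below is a crude stand-in for the semantic model the abstract describes:

    ```python
    # Sketch: propose edit points at gaps between spoken words. A real system
    # would score gaps with a semantic model; a duration heuristic stands in.
    def candidate_edit_points(words, min_gap_s=0.35):
        """words: list of (text, start_s, end_s) tuples in playback order,
        i.e. the timestamp-to-text map. Returns one (gap_start, gap_end,
        score) tuple per candidate edit point."""
        candidates = []
        for (_, _, prev_end), (_, next_start, _) in zip(words, words[1:]):
            gap = next_start - prev_end
            if gap >= min_gap_s:
                score = gap / (gap + min_gap_s)  # longer gap -> closer to 1.0
                candidates.append((prev_end, next_start, round(score, 3)))
        return candidates
    ```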
  • Patent number: 11256907
    Abstract: Described herein are a system and techniques for classification of subjects within image information. In some embodiments, a set of subjects may be identified within image data obtained at two different points in time. For each of the subjects in the set of subjects, facial landmark relationships may be assessed at the two different points in time to determine a difference in facial expression. That difference may be compared to a threshold value. Additionally, contours of each of the subjects in the set of subjects may be assessed at the two different points in time to determine a difference in body position. That difference may be compared to a different threshold value. Each of the subjects in the set of subjects may then be classified based on the comparison between the differences and the threshold values.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: February 22, 2022
    Assignee: Adobe Inc.
    Inventors: Sourabh Gupta, Saurabh Gupta, Ajay Bedi
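    The two-threshold comparison reads naturally as code. A sketch assuming landmark and body-mask inputs from upstream detectors; the thresholds, the class labels, and the NumPy formulation are illustrative, not taken from the patent:

    ```python
    # Sketch: classify a subject by comparing expression change and body-
    # position change, each against its own threshold.
    import numpy as np

    def classify_subject(landmarks_t0, landmarks_t1, mask_t0, mask_t1,
                         expr_thresh=0.05, pose_thresh=0.15):
        """landmarks_*: (k, 2) arrays of facial landmarks at two times;
        mask_*: boolean (H, W) body silhouettes at the same two times."""
        l0 = np.asarray(landmarks_t0, float)
        l1 = np.asarray(landmarks_t1, float)

        # Mean landmark displacement, normalized by face size, approximates
        # the change in facial expression.
        face_size = np.linalg.norm(l0.max(axis=0) - l0.min(axis=0))
        expr_delta = np.linalg.norm(l1 - l0, axis=1).mean() / face_size

        # 1 - IoU of the body masks approximates the change in body position.
        inter = np.logical_and(mask_t0, mask_t1).sum()
        union = np.logical_or(mask_t0, mask_t1).sum()
        pose_delta = 1.0 - inter / max(int(union), 1)

        # Each difference is compared to its own threshold, as in the abstract.
        moved = expr_delta > expr_thresh or pose_delta > pose_thresh
        return "moving" if moved else "still"
    ```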
  • Publication number: 20220012278
    Abstract: In implementations of search input generation for an image search, a computing device can capture image data of an environment scene that includes multiple objects. The computing device implements a search input module that can detect the multiple objects in the image data, and initiate a display of a selectable indication for each of the multiple objects. The search input module can then determine a subject object from the detected multiple objects, and generate the subject object as the search input for the image search.
    Type: Application
    Filed: September 24, 2021
    Publication date: January 13, 2022
    Applicant: Adobe Inc.
    Inventors: Ajay Bedi, Sunil Rawat, Rishu Aggarwal
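    One way to picture the "determine a subject object" step is a scoring heuristic over detected boxes; the size/centrality weighting below is an assumption for illustration, not the claimed method:

    ```python
    # Sketch: among detected objects, pick the one most likely intended as
    # the search subject, favoring large and centered detections.
    def pick_subject(boxes, frame_w, frame_h):
        """boxes: list of (x0, y0, x1, y1) detections in pixel coordinates.
        Returns the index of the most subject-like box."""
        cx, cy = frame_w / 2, frame_h / 2

        def score(box):
            x0, y0, x1, y1 = box
            area = (x1 - x0) * (y1 - y0) / (frame_w * frame_h)  # bigger is better
            bx, by = (x0 + x1) / 2, (y0 + y1) / 2
            dist = ((bx - cx) ** 2 + (by - cy) ** 2) ** 0.5
            centrality = 1 - dist / ((cx ** 2 + cy ** 2) ** 0.5)  # central is better
            return 0.5 * area + 0.5 * centrality

        return max(range(len(boxes)), key=lambda i: score(boxes[i]))
    ```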
  • Patent number: 11188784
    Abstract: Methods and systems are provided for determining intelligent people-groups based on relationships between people. In embodiments, a photo dataset is processed to represent photos of the photo dataset using vectors. These vectors encode the importance of the people in the photos. The photos are analyzed to determine similarity between the photos. Similarity is indicative of relationships between the photos of the photo dataset. The similarity is based on the people in the photos. The photos are clustered based on the similarity. In clustering the photos, clustering parameters determined from location information associated with the photos of the photo dataset are used.
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: November 30, 2021
    Assignee: Adobe Inc.
    Inventors: Subham Gupta, Ajay Bedi
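    The vector-and-cluster pipeline can be sketched with scikit-learn, assuming each photo is reduced to a per-person importance vector (e.g., relative face sizes) and that every photo contains at least one person; DBSCAN and the eps value are illustrative substitutes for the patent's location-derived clustering parameters:

    ```python
    # Sketch: cluster photos by who appears in them. Cosine distance between
    # per-person importance vectors captures people overlap between photos.
    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.metrics.pairwise import cosine_distances

    def group_photos(photo_vectors, eps=0.3):
        """photo_vectors: (n_photos, n_people) array; entry [i, j] is how
        prominent person j is in photo i (0 if absent). Assumes each row has
        at least one nonzero entry. Returns a cluster label per photo."""
        dist = cosine_distances(np.asarray(photo_vectors, dtype=float))
        labels = DBSCAN(eps=eps, min_samples=2,
                        metric="precomputed").fit_predict(dist)
        return labels  # -1 marks photos that join no group
    ```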
  • Patent number: 11176193
    Abstract: In implementations of search input generation for an image search, a computing device can capture image data of an environment scene that includes multiple objects. The computing device implements a search input module that can detect the multiple objects in the image data, and initiate a display of a selectable indication for each of the multiple objects. The search input module can then determine a subject object from the detected multiple objects, and generate the subject object as the search input for the image search.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: November 16, 2021
    Assignee: Adobe Inc.
    Inventors: Ajay Bedi, Sunil Rawat, Rishu Aggarwal
  • Publication number: 20210304456
    Abstract: In some embodiments, contextual image variations are generated for an input image. For example, a contextual composite image depicting a variation is generated based on an input image and a synthetic image component. The synthetic image component includes contextual features of a target object from the input image, such as shading, illumination, or depth that are depicted on the target object. The synthetic image component also includes a pattern from an additional image, such as a fabric pattern. In some cases, a mesh is determined for the target object. Illuminance values are determined for each mesh block. An adjusted mesh is determined based on the illuminance values. The synthetic image component is based on a combination of the adjusted mesh and the pattern from the additional image, such as a depiction of the fabric pattern with stretching, folding, or other contextual features from the target image.
    Type: Application
    Filed: June 10, 2021
    Publication date: September 30, 2021
    Inventors: Tushar Rajvanshi, Sourabh Gupta, Ajay Bedi
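    The per-block illuminance transfer can be sketched in NumPy, assuming the new pattern has already been warped onto the garment region; the block size and the straight multiplicative shading are simplifications of the adjusted-mesh step:

    ```python
    # Sketch: measure mean luminance per mesh block of the target object and
    # modulate the replacement pattern by it, so shading and folds carry over.
    import numpy as np

    def relight_pattern(target_gray, pattern_rgb, block=16):
        """target_gray: (H, W) float array in [0, 1] of the garment region.
        pattern_rgb: (H, W, 3) float array in [0, 1], pre-warped to the region.
        Returns the pattern shaded by the target's per-block illuminance."""
        h, w = target_gray.shape
        shade = np.empty_like(target_gray)
        for y in range(0, h, block):
            for x in range(0, w, block):
                shade[y:y+block, x:x+block] = \
                    target_gray[y:y+block, x:x+block].mean()
        # The 2.0 scale keeps mid-gray shading brightness-neutral; this is an
        # illustrative choice, not the patented transfer function.
        return np.clip(pattern_rgb * shade[..., None] * 2.0, 0.0, 1.0)
    ```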
  • Publication number: 20210241499
    Abstract: A method of automatically computing tool properties of a virtual paint brush includes obtaining an edge map from a digital image, wherein boundaries of objects in the image are detected, computing contours for the edge map, receiving a brush stroke of the virtual paint brush from a user at a point on a contour, finding an object boundary that corresponds to the contour that received the brush stroke, computing a tangential slope of the object boundary at the point of the brush stroke, adjusting tool properties of the virtual paint brush based on a change in the tangential slope of the object boundary, and visually displaying the adjusted tool properties dynamically while the user moves brush strokes around the object boundaries.
    Type: Application
    Filed: February 4, 2020
    Publication date: August 5, 2021
    Inventors: Rishu Aggarwal, Ajay Bedi
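    The tangent computation at the stroke point is compact enough to sketch; the finite-difference step size is an illustrative choice, and the mapping from slope change to other tool properties (size, hardness) is left out:

    ```python
    # Sketch: estimate the object boundary's tangential slope at the stroke
    # point by finite differences over neighboring contour points, and derive
    # a brush-tip angle from it.
    import math

    def brush_angle_at(contour, i, step=3):
        """contour: ordered list of (x, y) boundary points forming a loop;
        i: index of the point nearest the user's stroke. Returns the tangent
        angle in degrees for orienting the brush tip."""
        n = len(contour)
        (x0, y0) = contour[(i - step) % n]
        (x1, y1) = contour[(i + step) % n]
        return math.degrees(math.atan2(y1 - y0, x1 - x0))
    ```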
  • Publication number: 20210233140
    Abstract: Systems and techniques for a design-aware image search are described. The design-aware image search techniques described herein capture a design on an item to determine additional items with similar or matching designs. An image of the item is used to create an edge image of a design, and shape descriptors are generated describing features of the edge image. These shape descriptors are compared to shape descriptors associated with other images to locate images that have similar or matching designs as compared with the input image. The design-aware image search system may use these relationships to generate a search result with images or products having a design similar to the design on the input image.
    Type: Application
    Filed: January 29, 2020
    Publication date: July 29, 2021
    Applicant: Adobe Inc.
    Inventors: Rishu Aggarwal, Sunil Rawat, Ajay Bedi
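    A rough analogue of the edge-image and shape-descriptor comparison, using OpenCV's Canny edges and Hu moments as stand-ins for the system's descriptors; the thresholds and the log-scaled distance are conventional choices, not the patented ones:

    ```python
    # Sketch: derive a shape descriptor from a design's edge image and compare
    # two designs by descriptor distance.
    import cv2
    import numpy as np

    def design_descriptor(gray_img):
        """gray_img: single-channel uint8 image of the item's design."""
        edges = cv2.Canny(gray_img, 100, 200)           # edge image of the design
        hu = cv2.HuMoments(cv2.moments(edges)).ravel()  # 7 shape-robust values
        # Log-scale to compress the huge dynamic range of Hu moments.
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

    def design_distance(gray_a, gray_b):
        """Smaller distance suggests more similar designs."""
        return float(np.abs(design_descriptor(gray_a) -
                            design_descriptor(gray_b)).sum())
    ```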
  • Patent number: 11074676
    Abstract: An eye correction system and related techniques are described herein. The eye correction system can automatically detect and correct one or more misaligned eyes in an image. For example, an image of a person can be analyzed to determine whether one or both eyes of the person are misaligned with respect to each other and/or with respect to the entire face of the person. If an eye is determined to be misaligned, the image can be modified so that the eye is adjusted accordingly. For example, an iris of a misaligned eye can be adjusted to align with the face and/or the other eye.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: July 27, 2021
    Assignee: Adobe Inc.
    Inventors: Sunil Rawat, Rishu Aggarwal, Ajay Bedi
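    The misalignment test can be sketched geometrically, assuming eye-contour landmarks and iris centers from a face detector; the 10% tolerance is an illustrative assumption:

    ```python
    # Sketch: flag a misaligned eye by comparing each iris's offset within its
    # eye; eyes looking the same way should have similar offsets.
    import numpy as np

    def iris_offset(eye_landmarks, iris_center):
        """eye_landmarks: (k, 2) points outlining one eye; iris_center: (x, y).
        Returns the iris offset as a fraction of the eye width."""
        eye = np.asarray(eye_landmarks, dtype=float)
        width = eye[:, 0].max() - eye[:, 0].min()
        return (np.asarray(iris_center, dtype=float) - eye.mean(axis=0)) / width

    def eyes_misaligned(left_eye, left_iris, right_eye, right_iris, tol=0.10):
        # A large difference between the two offsets suggests one iris should
        # be adjusted toward the other's gaze direction.
        return np.linalg.norm(iris_offset(left_eye, left_iris) -
                              iris_offset(right_eye, right_iris)) > tol
    ```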
  • Patent number: 11069093
    Abstract: In some embodiments, contextual image variations are generated for an input image. For example, a contextual composite image depicting a variation is generated based on an input image and a synthetic image component. The synthetic image component includes contextual features of a target object from the input image, such as shading, illumination, or depth that are depicted on the target object. The synthetic image component also includes a pattern from an additional image, such as a fabric pattern. In some cases, a mesh is determined for the target object. Illuminance values are determined for each mesh block. An adjusted mesh is determined based on the illuminance values. The synthetic image component is based on a combination of the adjusted mesh and the pattern from the additional image, such as a depiction of the fabric pattern with stretching, folding, or other contextual features from the target image.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: July 20, 2021
    Assignee: Adobe Inc.
    Inventors: Tushar Rajvanshi, Sourabh Gupta, Ajay Bedi
  • Publication number: 20210149947
    Abstract: Embodiments of the present invention are directed towards providing contextual tags for an image based on a contextual analysis of associated images captured in the same environment as the image. To determine contextual tags, content tags can be determined for images. The determined content tags can be associated with categories based on a contextual classification of the content tags. These associated content tags can then be designated as contextual tags for a respective category. To associate these contextual tags with the images, the system iterates through the images based on how they relate to the contextual tags. For instance, when an image is associated with a category, the contextual tags classified into that category can be assigned to that image.
    Type: Application
    Filed: November 14, 2019
    Publication date: May 20, 2021
    Inventors: Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
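    The tag-propagation step can be sketched with plain dictionaries; the small category map below is a hypothetical stand-in for the contextual classification of content tags:

    ```python
    # Sketch: pool category-related tags detected anywhere in a same-environment
    # photo group, then assign the pooled tags to every photo in that category.
    from collections import defaultdict

    CATEGORY_OF = {"cake": "birthday", "balloons": "birthday",
                   "snow": "winter", "ski": "winter"}  # illustrative only

    def contextual_tags(photo_content_tags):
        """photo_content_tags: {photo_id: set of detected content tags}.
        Returns {photo_id: set of contextual tags inherited via categories}."""
        pooled = defaultdict(set)
        for tags in photo_content_tags.values():
            for tag in tags:
                if tag in CATEGORY_OF:
                    pooled[CATEGORY_OF[tag]].add(tag)
        out = {}
        for pid, tags in photo_content_tags.items():
            cats = {CATEGORY_OF[t] for t in tags if t in CATEGORY_OF}
            out[pid] = set().union(*(pooled[c] for c in cats)) if cats else set()
        return out
    ```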
  • Publication number: 20210141867
    Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods that can generate contextual identifiers indicating context for frames of a video and utilize those contextual identifiers to generate translations of text corresponding to such video frames. By analyzing a digital video file, the disclosed systems can identify video frames corresponding to a scene and a term sequence corresponding to a subset of the video frames. Based on image features of the video frames corresponding to the scene, the disclosed systems can utilize a contextual neural network to generate a contextual identifier (e.g., a contextual tag) indicating context for the video frames. Based on the contextual identifier, the disclosed systems can subsequently apply a translation neural network to generate a translation of the term sequence from a source language to a target language. In some cases, the translation neural network also generates affinity scores for the translation.
    Type: Application
    Filed: November 8, 2019
    Publication date: May 13, 2021
    Inventors: Mahika Wason, Amol Jindal, Ajay Bedi
  • Publication number: 20210133861
    Abstract: Digital image ordering based on object position and aesthetics is leveraged in a digital medium environment. According to various implementations, an image analysis system is implemented to identify visual objects in digital images and determine aesthetics attributes of the digital images. The digital images can then be arranged in a way that prioritizes digital images that include relevant visual objects and that exhibit optimum visual aesthetics.
    Type: Application
    Filed: November 4, 2019
    Publication date: May 6, 2021
    Applicant: Adobe Inc.
    Inventors: Vikas Kumar, Sourabh Gupta, Nandan Jha, Ajay Bedi
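    The prioritization reduces to a weighted sort; the input fields and the 60/40 weighting below are assumptions for illustration:

    ```python
    # Sketch: order images by combining a relevance term (overlap with the
    # wanted objects) and an aesthetics term.
    def order_images(images, wanted_objects, w_relevance=0.6, w_aesthetics=0.4):
        """images: list of dicts like {"id": ..., "objects": set of detected
        object labels, "aesthetics": score in [0, 1]}. Returns ids, best first."""
        def score(img):
            overlap = (len(img["objects"] & wanted_objects)
                       / max(len(wanted_objects), 1))
            return w_relevance * overlap + w_aesthetics * img["aesthetics"]
        return [img["id"] for img in sorted(images, key=score, reverse=True)]
    ```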
  • Patent number: 10998007
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can generate a context-aware-video-progress bar including a video-scene-proportionate timeline with time-interval sections sized according to relative scene proportions within time intervals of a video. In some implementations, for instance, the disclosed systems determine relative proportions of scenes within a video across time intervals of the video and generate a video-scene-proportionate timeline comprising time-interval sections sized proportionate to the relative proportions of scenes across the time intervals. By integrating the video-scene-proportionate timeline within a video-progress bar, the disclosed systems generate a context-aware-video-progress bar for a video.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: May 4, 2021
    Assignee: Adobe Inc.
    Inventors: Ajay Bedi, Amol Jindal
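    One plausible reading of the proportionate timeline, sketched in Python: fixed time intervals become progress-bar sections, and intervals spanning more scenes get more width. The inputs and sizing rule are illustrative assumptions rather than the patented formula:

    ```python
    # Sketch: size each time-interval section of the progress bar by how many
    # scenes that interval contains, so scene-dense stretches get more room.
    def interval_widths(scene_starts, duration_s, interval_s=60.0, bar_px=600):
        """scene_starts: sorted scene start times in seconds. Returns a pixel
        width for each consecutive interval of the video-progress bar."""
        n_intervals = int(duration_s // interval_s) + 1
        counts = [1] * n_intervals              # every interval shows >= 1 scene
        for t in scene_starts:
            if 0 < t < duration_s:
                counts[int(t // interval_s)] += 1  # a scene change adds weight
        total = sum(counts)
        return [bar_px * c / total for c in counts]
    ```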
  • Publication number: 20210124774
    Abstract: In implementations of search input generation for an image search, a computing device can capture image data of an environment scene that includes multiple objects. The computing device implements a search input module that can detect the multiple objects in the image data, and initiate a display of a selectable indication for each of the multiple objects. The search input module can then determine a subject object from the detected multiple objects, and generate the subject object as the search input for the image search.
    Type: Application
    Filed: October 24, 2019
    Publication date: April 29, 2021
    Applicant: Adobe Inc.
    Inventors: Ajay Bedi, Sunil Rawat, Rishu Aggarwal
  • Publication number: 20210124911
    Abstract: Described herein are a system and techniques for classification of subjects within image information. In some embodiments, a set of subjects may be identified within image data obtained at two different points in time. For each of the subjects in the set of subjects, facial landmark relationships may be assessed at the two different points in time to determine a difference in facial expression. That difference may be compared to a threshold value. Additionally, contours of each of the subjects in the set of subjects may be assessed at the two different points in time to determine a difference in body position. That difference may be compared to a different threshold value. Each of the subjects in the set of subjects may then be classified based on the comparison between the differences and the threshold values.
    Type: Application
    Filed: October 25, 2019
    Publication date: April 29, 2021
    Inventors: Sourabh Gupta, Saurabh Gupta, Ajay Bedi
  • Publication number: 20210118325
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that generate dynamic tool-based animated tutorials. In particular, in one or more embodiments, the disclosed systems generate an animated tutorial in response to receiving a request associated with an image editing tool. The disclosed systems then extract steps from existing general tutorials that pertain to the image editing tool to generate tool-specific animated tutorials. In at least one embodiment, the disclosed systems utilize a clustering algorithm in conjunction with image parameters to provide a set of these generated animated tutorials that showcase diverse features and/or attributes of the image editing tool based on measured aesthetic gains resulting from application of the image editing tool within the animated tutorials.
    Type: Application
    Filed: October 16, 2019
    Publication date: April 22, 2021
    Inventors: Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
  • Publication number: 20210103615
    Abstract: Certain embodiments involve adaptive search results for multimedia search queries to provide dynamic previews. For instance, a computing system receives a search query that includes a keyword. The computing system identifies, based on the search query, a video file having keyframes with content tags that match the search query. The computing system determines matching scores for respective keyframes of the identified video file. The computing system generates a dynamic preview from at least two keyframes having the highest matching scores.
    Type: Application
    Filed: October 3, 2019
    Publication date: April 8, 2021
    Inventors: Amol Jindal, Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
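    The keyframe-scoring step can be sketched as a tag-overlap ranking; the inputs and the two-frame preview length are illustrative assumptions:

    ```python
    # Sketch: score each keyframe by how many of its content tags match the
    # search query, then keep the best few in playback order for the preview.
    def dynamic_preview(keyframes, query_terms, n=2):
        """keyframes: list of (timestamp_s, set of content tags).
        query_terms: set of query keywords. Returns timestamps of the n
        best-matching keyframes, in playback order."""
        scored = [(len(tags & query_terms), ts) for ts, tags in keyframes]
        best = sorted(scored, reverse=True)[:n]
        return sorted(ts for _, ts in best)
    ```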
  • Publication number: 20210098026
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can generate a context-aware-video-progress bar including a video-scene-proportionate timeline with time-interval sections sized according to relative scene proportions within time intervals of a video. In some implementations, for instance, the disclosed systems determine relative proportions of scenes within a video across time intervals of the video and generate a video-scene-proportionate timeline comprising time-interval sections sized proportionate to the relative proportions of scenes across the time intervals. By integrating the video-scene-proportionate timeline within a video-progress bar, the disclosed systems generate a context-aware-video-progress bar for a video.
    Type: Application
    Filed: September 30, 2019
    Publication date: April 1, 2021
    Inventors: Ajay Bedi, Amol Jindal