Patents by Inventor Ajay Bedi

Ajay Bedi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10945040
    Abstract: The present disclosure relates to methods, systems, and non-transitory computer-readable media for generating a topic visual element for a portion of a digital video based on audio content and visual content of the digital video. For example, the disclosed systems can generate a map between words of the audio content and their corresponding timestamps from the digital video and then modify the map by associating importance weights with one or more of the words. Further, the disclosed systems can generate an additional map by associating words embedded in one or more video frames of the visual content with their corresponding timestamps. Based on these maps, the disclosed systems can identify a topic for a portion of the digital video (e.g., a portion currently previewed on a computing device), generate a topic visual element that includes the topic, and provide the topic visual element for display on a computing device.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: March 9, 2021
    Assignee: Adobe Inc.
    Inventors: Ajay Bedi, Sunil Rawat, Rishu Aggarwal
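    The topic-labeling approach above can be illustrated with a minimal sketch: it builds a word-to-timestamp map from a speech transcript, assigns simple frequency-based importance weights (an assumption standing in for the patent's weighting), folds in words OCR'd from video frames, and picks the highest-scoring word for a previewed time window. All names and data here are hypothetical, not the patented implementation.
    ```python
    from collections import Counter

    def importance_weights(word_map):
        """Simple frequency weighting; an assumption standing in for the patent's
        importance weights."""
        counts = Counter(w for w, _ in word_map)
        return {w: c / len(word_map) for w, c in counts.items()}

    def topic_for_window(word_map, ocr_map, weights, start, end, ocr_boost=2.0):
        """Pick the highest-weighted word spoken or shown on screen inside [start, end]."""
        scores = Counter()
        for w, t in word_map:                 # (word, timestamp) pairs from speech-to-text
            if start <= t <= end:
                scores[w] += weights.get(w, 0.0)
        for w, t in ocr_map:                  # (word, timestamp) pairs OCR'd from frames
            if start <= t <= end:
                scores[w.lower()] += ocr_boost * weights.get(w.lower(), 0.1)
        return scores.most_common(1)[0][0] if scores else None

    # Hypothetical transcript and on-frame text for a short clip.
    word_map = [("neural", 3.2), ("networks", 3.6), ("training", 12.0), ("loss", 14.5)]
    ocr_map = [("Training", 11.8)]
    weights = importance_weights(word_map)
    print(topic_for_window(word_map, ocr_map, weights, 10.0, 20.0))   # -> "training"
    ```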
  • Publication number: 20210067701
    Abstract: A system obtains an image from a digital image stream captured by an imaging component. Both a foreground region of interest and a background region of interest present in the obtained image are identified, and the imaging component is zoomed out as appropriate to maintain a margin (a number of pixels) around both the foreground region of interest and the background region of interest. Additionally, a position of the regions of interest (e.g., the background region of interest and the foreground region of interest) that would improve the composition or aesthetics of the image is determined, along with a composition score indicating how good the determined position is from an aesthetics point of view. A zoom adjustment value is determined based on the position of the regions of interest, and the imaging component is caused to zoom in or out in accordance with the zoom adjustment value.
    Type: Application
    Filed: August 27, 2019
    Publication date: March 4, 2021
    Applicant: Adobe Inc.
    Inventors: Sanjeev Tagra, Sachin Soni, Ajay Jain, Ajay Bedi
  • Patent number: 10939044
    Abstract: A system obtains an image from a digital image stream captured by an imaging component. Both a foreground region of interest and a background region of interest present in the obtained image are identified, and the imaging component is zoomed out as appropriate to maintain a margin (a number of pixels) around both the foreground region of interest and the background region of interest. Additionally, a position of the regions of interest (e.g., the background region of interest and the foreground region of interest) that would improve the composition or aesthetics of the image is determined, along with a composition score indicating how good the determined position is from an aesthetics point of view. A zoom adjustment value is determined based on the position of the regions of interest, and the imaging component is caused to zoom in or out in accordance with the zoom adjustment value.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: March 2, 2021
    Assignee: Adobe Inc.
    Inventors: Sanjeev Tagra, Sachin Soni, Ajay Jain, Ajay Bedi
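    The two entries above describe the same auto-framing approach. The sketch below is a rough illustration, not the patented method: it keeps a pixel margin around the union of the foreground and background regions of interest and returns a zoom factor, using a simple slack heuristic in place of the patent's aesthetics-driven composition score. Box format and thresholds are assumptions.
    ```python
    def union_box(boxes):
        """boxes: (x0, y0, x1, y1) regions of interest in pixel coordinates."""
        xs0, ys0, xs1, ys1 = zip(*boxes)
        return min(xs0), min(ys0), max(xs1), max(ys1)

    def margin_pixels(box, frame_w, frame_h):
        """Smallest distance (in pixels) from the box to any frame edge."""
        x0, y0, x1, y1 = box
        return min(x0, y0, frame_w - x1, frame_h - y1)

    def zoom_adjustment(fg_box, bg_box, frame_w, frame_h, required_margin=40):
        """Return a zoom factor: < 1 zooms out when the regions of interest crowd the
        frame edge, > 1 zooms in modestly when there is generous slack (a crude
        stand-in for a composition-score-driven adjustment)."""
        box = union_box([fg_box, bg_box])
        margin = margin_pixels(box, frame_w, frame_h)
        if margin < required_margin:
            return 1.0 - (required_margin - margin) / frame_w   # zoom out
        slack = margin - required_margin
        return 1.0 + min(slack / frame_w, 0.2)                  # mild zoom in

    # Hypothetical foreground/background boxes in a 1280x720 frame.
    print(zoom_adjustment((100, 80, 700, 500), (650, 60, 1200, 400), 1280, 720))
    ```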
  • Publication number: 20210056667
    Abstract: An eye correction system and related techniques are described herein. The eye correction system can automatically detect and correct one or more misaligned eyes in an image. For example, an image of a person can be analyzed to determine whether one or both eyes of the person are misaligned with respect to each other and/or with respect to the entire face of the person. If an eye is determined to be misaligned, the image can be modified so that the eye is adjusted accordingly. For example, an iris of a misaligned eye can be adjusted to align with the face and/or the other eye.
    Type: Application
    Filed: August 22, 2019
    Publication date: February 25, 2021
    Inventors: Sunil Rawat, Rishu Aggarwal, Ajay Bedi
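    A minimal sketch of the alignment check described above, assuming eye-corner and iris coordinates from some face-landmark detector: each iris centre is compared to the geometric centre of its eye corners, and offsets beyond a tolerance flag a misaligned eye that could then be adjusted. The tolerance and landmark layout are assumptions, not the patented technique.
    ```python
    import numpy as np

    def center(points):
        return np.mean(np.asarray(points, dtype=float), axis=0)

    def misalignment(left_eye_corners, right_eye_corners, left_iris, right_iris, tol_px=3.0):
        """Compare each detected iris centre to the geometric centre of its eye corners.
        Returns per-eye offsets; offsets larger than tol_px suggest a misaligned eye."""
        offsets = {}
        for name, corners, iris in (("left", left_eye_corners, left_iris),
                                    ("right", right_eye_corners, right_iris)):
            delta = np.asarray(iris, float) - center(corners)
            offsets[name] = delta if np.linalg.norm(delta) > tol_px else np.zeros(2)
        return offsets

    # Hypothetical (x, y) landmark coordinates from a face-landmark detector.
    print(misalignment([(100, 50), (130, 50)], [(170, 50), (200, 50)],
                       (110, 51), (185, 50)))
    ```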
  • Patent number: 10929684
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for intelligently merging handwritten content and digital audio from a digital video based on monitored presentation flow. In particular, the disclosed systems can apply an edge detection algorithm to intelligently detect distinct sections of the digital video and locations of handwritten content entered onto a writing surface over time. Moreover, the disclosed systems can generate a transcription of handwritten content utilizing digital audio. For instance, the disclosed systems can utilize an audio text transcript as input to an optical character recognition algorithm and auto-correct text utilizing the audio text transcript. Further, the disclosed systems can analyze short form text from handwritten script and generate long form text from audio text transcripts.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: February 23, 2021
    Assignee: Adobe Inc.
    Inventors: Pooja, Sourabh Gupta, Ajay Bedi
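    One piece of the entry above, transcript-guided correction of handwriting OCR, can be sketched with standard-library fuzzy matching; difflib stands in for whatever matching the patent actually uses, and the cutoff is an arbitrary assumption.
    ```python
    import difflib

    def correct_ocr_with_transcript(ocr_words, transcript_words, cutoff=0.75):
        """Replace each OCR'd handwritten word with the closest word from the audio
        transcript when the match is strong enough; otherwise keep the OCR output."""
        vocab = list({w.lower() for w in transcript_words})
        corrected = []
        for w in ocr_words:
            match = difflib.get_close_matches(w.lower(), vocab, n=1, cutoff=cutoff)
            corrected.append(match[0] if match else w)
        return corrected

    ocr = ["grad1ent", "descnt", "lr"]
    spoken = ["gradient", "descent", "learning", "rate"]
    print(correct_ocr_with_transcript(ocr, spoken))   # ['gradient', 'descent', 'lr']
    ```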
  • Publication number: 20210012144
    Abstract: Methods and systems are provided for determining intelligent people-groups based on relationships between people. In embodiments, a photo dataset is processed to represent photos of the photo dataset using vectors. These vectors encode the importance of people in the photos. The photos are analyzed to determine similarity between the photos; the similarity is indicative of relationships between the photos of the photo dataset and is based on the people in the photos. The photos are clustered based on the similarity, and in clustering the photos, clustering parameters determined from location information associated with the photos of the photo dataset are used.
    Type: Application
    Filed: July 12, 2019
    Publication date: January 14, 2021
    Inventors: Subham Gupta, Ajay Bedi
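    A rough sketch of the grouping idea above: photos become per-person vectors (with a placeholder importance weight), cosine similarity measures how related two photos are, and a simple greedy pass clusters them. The greedy clustering and the 0.6 threshold are assumptions; the patent additionally derives clustering parameters from location information, which is omitted here.
    ```python
    import numpy as np

    def photo_vector(people_in_photo, all_people, importance=None):
        """One dimension per known person; the value is that person's importance in
        the photo (importance weighting is an assumption, e.g. relative face size)."""
        importance = importance or {}
        return np.array([importance.get(p, 1.0) if p in people_in_photo else 0.0
                         for p in all_people])

    def cosine(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    def cluster_photos(vectors, threshold=0.6):
        """Greedy single-pass clustering: a photo joins the first cluster whose
        representative it resembles, otherwise it starts a new cluster."""
        clusters = []          # list of (representative_vector, [photo indices])
        for i, v in enumerate(vectors):
            for rep, members in clusters:
                if cosine(rep, v) >= threshold:
                    members.append(i)
                    break
            else:
                clusters.append((v, [i]))
        return [members for _, members in clusters]

    people = ["alice", "bob", "carol"]
    photos = [{"alice", "bob"}, {"alice", "bob", "carol"}, {"carol"}]
    vecs = [photo_vector(p, people) for p in photos]
    print(cluster_photos(vecs))   # -> [[0, 1], [2]]
    ```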
  • Publication number: 20210012468
    Abstract: Systems and methods for removing objects from images are disclosed. An image processing application identifies a boundary of each object of a set of objects in an image. The image processing application identifies a completed boundary for each object of the set of objects by providing the object to a trained model. The image processing application determines a set of masks. Each mask corresponds to an object of the set of objects and represents a region of the image defined by an intersection of the boundary of the object and the boundary of a target object to be removed from the image. The image processing application updates each mask by separately performing content filling on the corresponding region. The image processing application creates an output image by merging each of the updated masks with portions of the image.
    Type: Application
    Filed: September 30, 2020
    Publication date: January 14, 2021
    Inventors: Sanjeev Tagra, Ajay Jain, Sachin Soni, Ajay Bedi
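    The mask-and-fill pipeline described above (and in the related entries further down this list) can be sketched as follows, with OpenCV's generic inpainting standing in for the patent's content-filling step. Masks are boolean arrays; every name here is illustrative, not the patented implementation.
    ```python
    import numpy as np
    import cv2  # OpenCV inpainting used as a stand-in for content filling

    def removal_masks(object_masks, target_mask):
        """One mask per remaining object: the region where that object's area
        overlaps the object being removed."""
        return [np.logical_and(m, target_mask) for m in object_masks]

    def remove_object(image, object_masks, target_mask):
        """Fill each overlap region separately, then composite the fills back
        over the original image."""
        output = image.copy()
        remaining = target_mask.copy()
        for region in removal_masks(object_masks, target_mask):
            if not region.any():
                continue
            filled = cv2.inpaint(output, region.astype(np.uint8) * 255, 3, cv2.INPAINT_TELEA)
            output[region] = filled[region]
            remaining &= ~region
        if remaining.any():   # parts of the target not covered by any other object
            filled = cv2.inpaint(output, remaining.astype(np.uint8) * 255, 3, cv2.INPAINT_TELEA)
            output[remaining] = filled[remaining]
        return output

    img = np.full((64, 64, 3), 200, np.uint8)
    obj = np.zeros((64, 64), bool); obj[10:40, 10:40] = True       # occluding object
    target = np.zeros((64, 64), bool); target[20:50, 20:50] = True  # object to remove
    print(remove_object(img, [obj], target).shape)
    ```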
  • Patent number: 10878024
    Abstract: Dynamic thumbnails are described. Dynamic thumbnails provide a convenient and automated approach for providing thumbnails that are contextually relevant to a user. In at least some implementations, an input image is analyzed to generate tags describing objects or points of interest within the image, and to generate rectangles that describe the locations within the image that correspond to the generated tags. Various combinations of generated tags are analyzed to determine the smallest bounding rectangle that contains every rectangle associated with the tags in the respective combination, and a thumbnail is created. A user input is received and compared to the tags associated with the generated thumbnails, and a thumbnail that is most relevant to the user input is selected and output to the user.
    Type: Grant
    Filed: April 20, 2017
    Date of Patent: December 29, 2020
    Assignee: Adobe Inc.
    Inventors: Srijan Sandilya, Vikas Kumar, Sourabh Gupta, Nandan Jha, Ajay Bedi
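    A compact sketch of the thumbnail construction described above: each tag has a rectangle, every tag combination gets the smallest rectangle covering its members, and a user query is matched to the combination with the most overlapping tags. Tag and rectangle detection is assumed to have happened elsewhere; names and thresholds are illustrative.
    ```python
    from itertools import combinations

    def bounding_rect(rects):
        """Smallest rectangle containing every (x0, y0, x1, y1) rectangle."""
        xs0, ys0, xs1, ys1 = zip(*rects)
        return (min(xs0), min(ys0), max(xs1), max(ys1))

    def build_thumbnails(tagged_rects, max_tags=2):
        """tagged_rects: {tag: rect}. For each tag combination, record the crop that
        covers all of its rectangles."""
        thumbs = {}
        for r in range(1, max_tags + 1):
            for combo in combinations(sorted(tagged_rects), r):
                thumbs[combo] = bounding_rect([tagged_rects[t] for t in combo])
        return thumbs

    def best_thumbnail(thumbs, query_words):
        """Pick the thumbnail whose tag set overlaps the query the most."""
        query = {w.lower() for w in query_words}
        return max(thumbs.items(), key=lambda kv: len(set(kv[0]) & query))

    tags = {"dog": (40, 60, 200, 220), "ball": (180, 200, 260, 260)}
    thumbs = build_thumbnails(tags)
    print(best_thumbnail(thumbs, ["dog", "playing"]))
    ```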
  • Publication number: 20200401831
    Abstract: An image editing program can include a content-aware selection system. The content-aware selection system can enable a user to select an area of an image using a label or a tag that identifies an object in the image, rather than having to define a selection area based on coordinates and/or pixel values. The program can receive a digital image and metadata that describes an object in the image. The program can further receive a label, and can determine from the metadata that the label is associated with the object. The program can then select a bounding box for the object and identify, in the bounding box, pixels that represent the object. The program can then output a selection area that surrounds the pixels.
    Type: Application
    Filed: September 4, 2020
    Publication date: December 24, 2020
    Inventors: Subham Gupta, Ajay Bedi, Poonam Bhalla, Krishna Singh Karki
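    A minimal sketch of label-driven selection under stated assumptions: metadata supplies a label and bounding box per object, and a crude brightness threshold inside the box stands in for the patent's pixel-level object identification. The metadata layout is hypothetical.
    ```python
    import numpy as np

    def select_by_label(image, metadata, label):
        """metadata: list of {"label": str, "box": (x0, y0, x1, y1)} entries describing
        objects in the image. Returns a boolean selection mask for the requested label."""
        entry = next((m for m in metadata if m["label"].lower() == label.lower()), None)
        if entry is None:
            return np.zeros(image.shape[:2], bool)
        x0, y0, x1, y1 = entry["box"]
        mask = np.zeros(image.shape[:2], bool)
        crop = image[y0:y1, x0:x1]
        gray = crop.mean(axis=-1) if crop.ndim == 3 else crop
        mask[y0:y1, x0:x1] = gray > gray.mean()   # keep the brighter (assumed object) pixels
        return mask

    img = np.random.randint(0, 255, (120, 160, 3), np.uint8)
    meta = [{"label": "cat", "box": (30, 20, 100, 90)}]
    print(select_by_label(img, meta, "cat").sum())
    ```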
  • Patent number: 10872232
    Abstract: Systems and methods are disclosed herein for providing people categorization within electronic images. One embodiment involves retrieving an input image from memory. The embodiment further involves determining one or more facial regions in the input image using a facial detection algorithm. The embodiment further involves identifying a number of features for each facial region. The embodiment further involves determining a subject face and a crowd face in the input image based on at least the number of features for the subject face being more than the number of features for the crowd face. The embodiment further involves displaying, on a display device, the subject face identified within the image.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: December 22, 2020
    Assignee: Adobe Inc.
    Inventors: Subham Gupta, Ajay Bedi
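    The subject-versus-crowd decision above can be sketched with a simple ratio rule: faces whose detected feature count falls well below the best-detected face are treated as crowd faces. The 0.5 ratio is an assumption, not the patented threshold.
    ```python
    def categorize_faces(face_features):
        """face_features: {face_id: number of facial features detected}. Faces with far
        fewer detected features than the best-detected face are labeled as crowd faces."""
        if not face_features:
            return {}
        best = max(face_features.values())
        return {fid: ("subject" if n >= 0.5 * best else "crowd")
                for fid, n in face_features.items()}

    print(categorize_faces({"face_a": 68, "face_b": 21, "face_c": 64}))
    ```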
  • Publication number: 20200364463
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for intelligently merging handwritten content and digital audio from a digital video based on monitored presentation flow. In particular, the disclosed systems can apply an edge detection algorithm to intelligently detect distinct sections of the digital video and locations of handwritten content entered onto a writing surface over time. Moreover, the disclosed systems can generate a transcription of handwritten content utilizing digital audio. For instance, the disclosed systems can utilize an audio text transcript as input to an optical character recognition algorithm and auto-correct text utilizing the audio text transcript. Further, the disclosed systems can analyze short form text from handwritten script and generate long form text from audio text transcripts.
    Type: Application
    Filed: May 17, 2019
    Publication date: November 19, 2020
    Inventors: Pooja, Sourabh Gupta, Ajay Bedi
  • Patent number: 10825148
    Abstract: Systems and methods for removing objects from images are disclosed. An image processing application identifies a boundary of each object of a set of objects in an image. In some cases, the identification uses deep learning. The image processing application identifies a completed boundary for each object of the set of objects by providing the object to a trained model. The image processing application determines a set of masks. Each mask corresponds to an object of the set of objects and represents a region of the image defined by an intersection of the boundary of the object and the boundary of a target object to be removed from the image. The image processing application updates each mask by separately performing content filling on the corresponding region. The image processing application creates an output image by merging each of the updated masks with portions of the image.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: November 3, 2020
    Assignee: Adobe Inc.
    Inventors: Sanjeev Tagra, Ajay Jain, Sachin Soni, Ajay Bedi
  • Publication number: 20200342635
    Abstract: In some embodiments, contextual image variations are generated for an input image. For example, a contextual composite image depicting a variation is generated based on an input image and a synthetic image component. The synthetic image component includes contextual features of a target object from the input image, such as shading, illumination, or depth that are depicted on the target object. The synthetic image component also includes a pattern from an additional image, such as a fabric pattern. In some cases, a mesh is determined for the target object. Illuminance values are determined for each mesh block. An adjusted mesh is determined based on the illuminance values. The synthetic image component is based on a combination of the adjusted mesh and the pattern from the additional image, such as a depiction of the fabric pattern with stretching, folding, or other contextual features from the target image.
    Type: Application
    Filed: April 26, 2019
    Publication date: October 29, 2020
    Inventors: Tushar Rajvanshi, Sourabh Gupta, Ajay Bedi
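    A rough sketch of the shading-preserving pattern transfer described above, using a uniform block grid in place of the patent's adjusted mesh: each block of the new pattern is scaled by the original block's relative luminance so folds and shadows carry over. Block size and the luminance heuristic are assumptions.
    ```python
    import numpy as np

    def apply_pattern_with_shading(image, region_mask, pattern, block=8):
        """Tile `pattern` over the masked target object, scaling each block by the
        original block's relative luminance so shading shows through."""
        out = image.astype(float).copy()
        h, w = image.shape[:2]
        tiled = np.tile(pattern, (h // pattern.shape[0] + 1, w // pattern.shape[1] + 1, 1))[:h, :w]
        lum = image.mean(axis=-1)
        for y in range(0, h, block):
            for x in range(0, w, block):
                blk = region_mask[y:y+block, x:x+block]
                if not blk.any():
                    continue
                shade = lum[y:y+block, x:x+block][blk].mean() / max(lum[region_mask].mean(), 1e-6)
                out[y:y+block, x:x+block][blk] = tiled[y:y+block, x:x+block][blk] * shade
        return np.clip(out, 0, 255).astype(np.uint8)

    img = np.random.randint(60, 200, (64, 64, 3), np.uint8)
    mask = np.zeros((64, 64), bool); mask[16:48, 16:48] = True
    pattern = np.random.randint(0, 255, (16, 16, 3), np.uint8)
    print(apply_pattern_with_shading(img, mask, pattern).shape)
    ```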
  • Patent number: 10817739
    Abstract: An image editing program can include a content-aware selection system. The content-aware selection system can enable a user to select an area of an image using a label or a tag that identifies an object in the image, rather than having to define a selection area based on coordinates and/or pixel values. The program can receive a digital image and metadata that describes an object in the image. The program can further receive a label, and can determine from the metadata that the label is associated with the object. The program can then select a bounding box for the object and identify, in the bounding box, pixels that represent the object. The program can then output a selection area that surrounds the pixels.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: October 27, 2020
    Assignee: Adobe Inc.
    Inventors: Subham Gupta, Ajay Bedi, Poonam Bhalla, Krishna Singh Karki
  • Publication number: 20200320761
    Abstract: Structural modifications to a person's face in a reference image are captured and automatically applied to the person's face in another image. The reference image is processed to compute landmark information on the person's face and apply a mesh to the reference image. When structural modifications are made to the person's face in the reference image, the mesh is modified, and the modified mesh is stored in association with the landmark information. Another image is analyzed to compute landmark information on the person's face in that image and apply a mesh to the image. A transformation matrix is computed using the landmark information from the reference image and current image, and the modified mesh from the reference image is transformed using the transformation matrix. The mesh in the current image is modified using the transformed mesh, thereby applying the structural modification to the person's face in the current image.
    Type: Application
    Filed: April 5, 2019
    Publication date: October 8, 2020
    Inventors: Rekha Agarwal, Sunil Rawat, Rishu Aggarwal, Ajay Bedi
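    The landmark-driven transfer above can be sketched with a least-squares affine fit: landmarks shared by the reference and current images define a transform, and the edited reference mesh is mapped through it. Using an affine transform is an assumption; the abstract speaks only of a transformation matrix.
    ```python
    import numpy as np

    def affine_from_landmarks(src_pts, dst_pts):
        """Least-squares affine transform mapping reference-image landmarks to the
        current image's landmarks."""
        src = np.asarray(src_pts, float)
        dst = np.asarray(dst_pts, float)
        A = np.hstack([src, np.ones((len(src), 1))])     # rows of [x, y, 1]
        M, *_ = np.linalg.lstsq(A, dst, rcond=None)       # 3x2 matrix
        return M

    def transfer_mesh(modified_mesh, ref_landmarks, cur_landmarks):
        """Apply the landmark-derived transform to the edited reference mesh so the same
        structural edit lands on the face in the current image."""
        M = affine_from_landmarks(ref_landmarks, cur_landmarks)
        mesh = np.asarray(modified_mesh, float)
        return np.hstack([mesh, np.ones((len(mesh), 1))]) @ M

    ref = [(100, 100), (200, 100), (150, 180)]          # landmarks in reference image
    cur = [(110, 90), (215, 95), (160, 175)]            # same landmarks in current image
    edited_mesh = [(120, 120), (180, 120), (150, 160)]  # mesh vertices after the edit
    print(transfer_mesh(edited_mesh, ref, cur))
    ```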
  • Patent number: 10755087
    Abstract: Methods and systems are provided for performing automated capture of images based on emotion detection. In embodiments, a selection of an emotion class is received from among a set of emotion classes presented via a graphical user interface. The emotion class can indicate an emotion exhibited by a subject desired to be captured in an image. A set of images corresponding with a video is analyzed to identify at least one image in which a subject exhibits the emotion associated with the selected emotion class. The set of images can be analyzed using at least one neural network that classifies images in association with emotion exhibited in the images. Thereafter, the image can be presented in association with the selected emotion class.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: August 25, 2020
    Assignee: Adobe Inc.
    Inventors: Tushar Rajvanshi, Sourabh Gupta, Ajay Bedi
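    A minimal sketch of the capture loop described above; `classify_emotion` is a hypothetical stand-in for the patent's neural-network classifier, and the confidence threshold is an assumption.
    ```python
    def capture_by_emotion(frames, target_emotion, classify_emotion, min_confidence=0.8):
        """Scan video frames and keep those where the classifier sees the requested
        emotion. `classify_emotion(frame) -> (label, confidence)` is hypothetical."""
        hits = []
        for index, frame in enumerate(frames):
            label, confidence = classify_emotion(frame)
            if label == target_emotion and confidence >= min_confidence:
                hits.append((index, frame))
        return hits

    # Toy classifier for demonstration only.
    fake_classifier = lambda frame: ("happy", 0.9) if frame % 3 == 0 else ("neutral", 0.7)
    print([i for i, _ in capture_by_emotion(range(10), "happy", fake_classifier)])
    ```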
  • Publication number: 20200250453
    Abstract: An image editing program can include a content-aware selection system. The content-aware selection system can enable a user to select an area of an image using a label or a tag that identifies an object in the image, rather than having to define a selection area based on coordinates and/or pixel values. The program can receive a digital image and metadata that describes an object in the image. The program can further receive a label, and can determine from the metadata that the label is associated with the object. The program can then select a bounding box for the object and identify, in the bounding box, pixels that represent the object. The program can then output a selection area that surrounds the pixels.
    Type: Application
    Filed: January 31, 2019
    Publication date: August 6, 2020
    Inventors: Subham Gupta, Ajay Bedi, Poonam Bhalla, Krishna Singh Karki
  • Publication number: 20200236421
    Abstract: A seek content extraction system analyzes frames of video content and identifies locations in the frames where session information is displayed. This session information refers to information that is displayed as part of video content and that describes, for a particular location in the video content, what is currently happening in the video content at that particular location. This session information is extracted from each of multiple frames, and for a given frame the extracted session information is associated with the frame. While the user is seeking forward or backward through the video content, a thumbnail of the frame at a given location in the video content is displayed along with the extracted session information associated with the frame.
    Type: Application
    Filed: January 21, 2019
    Publication date: July 23, 2020
    Applicant: Adobe Inc.
    Inventors: Amol Jindal, Ajay Bedi
  • Patent number: 10701434
    Abstract: A seek content extraction system analyzes frames of video content and identifies locations in the frames where session information is displayed. This session information refers to information that is displayed as part of video content and that describes, for a particular location in the video content, what is currently happening in the video content at that particular location. This session information is extracted from each of multiple frames, and for a given frame the extracted session information is associated with the frame. While the user is seeking forward or backward through the video content, a thumbnail of the frame at a given location in the video content is displayed along with the extracted session information associated with the frame.
    Type: Grant
    Filed: January 21, 2019
    Date of Patent: June 30, 2020
    Assignee: Adobe Inc.
    Inventors: Amol Jindal, Ajay Bedi
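    The two entries above describe the same seek-overlay idea. A rough sketch, assuming a hypothetical `read_overlay_text` OCR helper: session information is indexed per sampled frame, and scrubbing looks up the latest entry at or before the seek position.
    ```python
    import bisect

    def index_session_info(frame_times, read_overlay_text):
        """Build a sorted (timestamp, session_info) index. `read_overlay_text(t)` is a
        hypothetical helper that reads the on-screen overlay (score, game clock, ...)
        from the frame at time t."""
        entries = [(t, read_overlay_text(t)) for t in frame_times]
        return sorted(e for e in entries if e[1])

    def session_info_at(index, seek_time):
        """While scrubbing, show the session info from the latest indexed frame at or
        before the seek position."""
        times = [t for t, _ in index]
        i = bisect.bisect_right(times, seek_time) - 1
        return index[i][1] if i >= 0 else None

    fake_ocr = {0: "Q1 00:00 0-0", 600: "Q2 12:00 21-17", 1800: "Q3 12:00 45-40"}.get
    idx = index_session_info([0, 600, 1800], fake_ocr)
    print(session_info_at(idx, 75 * 60))   # seeking to 75 minutes in -> "Q3 12:00 45-40"
    ```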
  • Publication number: 20200175654
    Abstract: Systems and methods for removing objects from images are disclosed. An image processing application identifies a boundary of each object of a set of objects in an image. In some cases, the identification uses deep learning. The image processing application identifies a completed boundary for each object of the set of objects by providing the object to a trained model. The image processing application determines a set of masks. Each mask corresponds to an object of the set of objects and represents a region of the image defined by an intersection of the boundary of the object and the boundary of a target object to be removed from the image. The image processing application updates each mask by separately performing content filling on the corresponding region. The image processing application creates an output image by merging each of the updated masks with portions of the image.
    Type: Application
    Filed: November 29, 2018
    Publication date: June 4, 2020
    Inventors: Sanjeev Tagra, Ajay Jain, Sachin Soni, Ajay Bedi