Patents by Inventor Sourabh Gupta
Sourabh Gupta has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11087514
Abstract: Techniques for automatically synchronizing poses of objects in an image or between multiple images. An automatic pose synchronization functionality is provided by an image editor. The image editor identifies or enables a user to select objects (e.g., people) whose poses are to be synchronized and the image editor then performs processing to automatically synchronize the poses of the identified objects. For two objects whose poses are to be synchronized, a reference object is identified as one whose associated pose is to be used as a reference pose. A target object is identified as one whose associated pose is to be modified to match the reference pose of the reference object. An output image is generated by the image editor in which the position of a part of the target object is modified such that the pose associated with the target object matches the reference pose of the reference object.
Type: Grant
Filed: June 11, 2019
Date of Patent: August 10, 2021
Assignee: Adobe Inc.
Inventors: Sankalp Shukla, Sourabh Gupta, Angad Kumar Gupta
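As a minimal sketch of the synchronization idea in this abstract (joint names, coordinates, and the single-limb scope are all hypothetical simplifications), a target limb can be rotated about its pivot joint so that its angle matches the corresponding limb in the reference pose, while its original length is preserved:

```python
import math

def limb_angle(pose, joint_a, joint_b):
    """Angle of the vector from joint_a to joint_b, in radians."""
    (xa, ya), (xb, yb) = pose[joint_a], pose[joint_b]
    return math.atan2(yb - ya, xb - xa)

def sync_limb(target_pose, reference_pose, pivot, end):
    """Rotate the target's pivot->end limb to match the reference limb angle."""
    ref_angle = limb_angle(reference_pose, pivot, end)
    (px, py), (ex, ey) = target_pose[pivot], target_pose[end]
    length = math.hypot(ex - px, ey - py)  # preserve the limb's length
    synced = dict(target_pose)
    synced[end] = (px + length * math.cos(ref_angle),
                   py + length * math.sin(ref_angle))
    return synced

reference = {"shoulder": (0.0, 0.0), "wrist": (0.0, 10.0)}   # arm pointing down
target    = {"shoulder": (5.0, 5.0), "wrist": (15.0, 5.0)}   # arm pointing right
synced = sync_limb(target, reference, "shoulder", "wrist")
print(synced["wrist"])  # wrist moved so the arm points down, length preserved
```

A full implementation would detect the keypoints automatically and warp the underlying pixels, which this sketch leaves out.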
-
Patent number: 11069093
Abstract: In some embodiments, contextual image variations are generated for an input image. For example, a contextual composite image depicting a variation is generated based on an input image and a synthetic image component. The synthetic image component includes contextual features of a target object from the input image, such as shading, illumination, or depth that are depicted on the target object. The synthetic image component also includes a pattern from an additional image, such as a fabric pattern. In some cases, a mesh is determined for the target object. Illuminance values are determined for each mesh block. An adjusted mesh is determined based on the illuminance values. The synthetic image component is based on a combination of the adjusted mesh and the pattern from the additional image, such as a depiction of the fabric pattern with stretching, folding, or other contextual features from the target image.
Type: Grant
Filed: April 26, 2019
Date of Patent: July 20, 2021
Assignee: ADOBE INC.
Inventors: Tushar Rajvanshi, Sourabh Gupta, Ajay Bedi
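The per-block illuminance step can be sketched as follows (the grid, block layout, and the simple scaling rule are illustrative assumptions, not the patented method): compute each mesh block's mean illuminance from a grayscale view of the target, then modulate the pattern by it so the composite keeps the target's shading.

```python
def mean_illuminance(gray, r0, r1, c0, c1):
    """Mean gray value over the block [r0:r1, c0:c1] of a 2D list."""
    vals = [gray[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(vals) / len(vals)

def composite_block(pattern_value, illuminance, max_value=255.0):
    """Scale a flat pattern value by the block's relative illuminance."""
    return pattern_value * (illuminance / max_value)

gray = [[255, 255, 64, 64],
        [255, 255, 64, 64]]
bright = mean_illuminance(gray, 0, 2, 0, 2)   # left block is fully lit
dark   = mean_illuminance(gray, 0, 2, 2, 4)   # right block is shaded
print(composite_block(200.0, bright))  # pattern unchanged where lit
print(composite_block(200.0, dark))    # pattern darkened where shaded
```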
-
Publication number: 20210133861
Abstract: Digital image ordering based on object position and aesthetics is leveraged in a digital medium environment. According to various implementations, an image analysis system is implemented to identify visual objects in digital images and determine aesthetics attributes of the digital images. The digital images can then be arranged in a way that prioritizes digital images that include relevant visual objects and that exhibit optimum visual aesthetics.
Type: Application
Filed: November 4, 2019
Publication date: May 6, 2021
Applicant: Adobe Inc.
Inventors: Vikas Kumar, Sourabh Gupta, Nandan Jha, Ajay Bedi
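A minimal sketch of such an ordering, assuming simple per-image relevance and aesthetics scores and hypothetical weights (the actual system derives these from object detection and aesthetics models):

```python
def rank_images(images, relevance_weight=0.6, aesthetics_weight=0.4):
    """Sort images so high-relevance, high-aesthetics images come first."""
    def score(img):
        return (relevance_weight * img["relevance"]
                + aesthetics_weight * img["aesthetics"])
    return sorted(images, key=score, reverse=True)

images = [
    {"name": "a.jpg", "relevance": 0.2, "aesthetics": 0.9},
    {"name": "b.jpg", "relevance": 0.9, "aesthetics": 0.8},
    {"name": "c.jpg", "relevance": 0.9, "aesthetics": 0.3},
]
print([img["name"] for img in rank_images(images)])  # ['b.jpg', 'c.jpg', 'a.jpg']
```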
-
Publication number: 20210124911
Abstract: Described herein is a system and techniques for classification of subjects within image information. In some embodiments, a set of subjects may be identified within image data obtained at two different points in time. For each of the subjects in the set of subjects, facial landmark relationships may be assessed at the two different points in time to determine a difference in facial expression. That difference may be compared to a threshold value. Additionally, contours of each of the subjects in the set of subjects may be assessed at the two different points in time to determine a difference in body position. That difference may be compared to a different threshold value. Each of the subjects in the set of subjects may then be classified based on the comparison between the differences and the threshold values.
Type: Application
Filed: October 25, 2019
Publication date: April 29, 2021
Inventors: Sourabh Gupta, Saurabh Gupta, Ajay Bedi
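The threshold comparison at the core of the abstract can be sketched like this (the class names and threshold values are invented for illustration; the publication does not specify them):

```python
# Illustrative thresholds, not values from the publication.
EXPRESSION_THRESHOLD = 0.3
POSITION_THRESHOLD = 0.5

def classify(expression_diff, position_diff):
    """Classify a subject from its facial-expression and body-position deltas."""
    if expression_diff > EXPRESSION_THRESHOLD and position_diff > POSITION_THRESHOLD:
        return "active"       # both face and body changed between captures
    if expression_diff > EXPRESSION_THRESHOLD:
        return "expressive"   # face changed, body held still
    if position_diff > POSITION_THRESHOLD:
        return "moving"       # body changed, face held still
    return "static"           # neither changed meaningfully

print(classify(0.6, 0.1))  # expressive
print(classify(0.1, 0.9))  # moving
```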
-
Patent number: 10929684
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for intelligently merging handwritten content and digital audio from a digital video based on monitored presentation flow. In particular, the disclosed systems can apply an edge detection algorithm to intelligently detect distinct sections of the digital video and locations of handwritten content entered onto a writing surface over time. Moreover, the disclosed systems can generate a transcription of handwritten content utilizing digital audio. For instance, the disclosed systems can utilize an audio text transcript as input to an optical character recognition algorithm and auto-correct text utilizing the audio text transcript. Further, the disclosed systems can analyze short form text from handwritten script and generate long form text from audio text transcripts.
Type: Grant
Filed: May 17, 2019
Date of Patent: February 23, 2021
Assignee: ADOBE INC.
Inventors: Pooja, Sourabh Gupta, Ajay Bedi
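The transcript-based auto-correction step might look roughly like this sketch, which uses the standard library's `difflib` as a stand-in for the disclosed correction logic: an uncertain OCR word is replaced with the closest word from the audio transcript.

```python
import difflib

def correct_with_transcript(ocr_word, transcript_words):
    """Replace an OCR word with the most similar word from the transcript."""
    match = difflib.get_close_matches(ocr_word, transcript_words, n=1, cutoff=0.0)
    return match[0] if match else ocr_word

transcript = ["gradient", "descent", "converges", "quickly"]
print(correct_with_transcript("gradiert", transcript))  # gradient
```

A real system would only substitute when the similarity is high enough and the words are temporally aligned; this sketch skips both checks.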
-
Patent number: 10878024
Abstract: Dynamic thumbnails are described. Dynamic thumbnails provide a convenient and automated approach for providing thumbnails that are contextually relevant to a user. In at least some implementations, an input image is analyzed to generate tags describing objects or points of interest within the image, and to generate rectangles that describe the locations within the image that correspond to the generated tags. Various combinations of generated tags are analyzed to determine the smallest bounding rectangle that contains every rectangle associated with the tags in the respective combination, and a thumbnail is created. A user input is received and compared to the tags associated with the generated thumbnails, and a thumbnail that is most relevant to the user input is selected and output to the user.
Type: Grant
Filed: April 20, 2017
Date of Patent: December 29, 2020
Assignee: Adobe Inc.
Inventors: Srijan Sandilya, Vikas Kumar, Sourabh Gupta, Nandan Jha, Ajay Bedi
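The smallest-bounding-rectangle computation for a tag combination is straightforward to sketch (representing rectangles as `(x0, y0, x1, y1)` tuples is an assumed convention):

```python
def bounding_rectangle(rects):
    """Smallest rectangle containing every (x0, y0, x1, y1) rectangle."""
    x0 = min(r[0] for r in rects)
    y0 = min(r[1] for r in rects)
    x1 = max(r[2] for r in rects)
    y1 = max(r[3] for r in rects)
    return (x0, y0, x1, y1)

# Hypothetical tag rectangles for one tag combination.
tag_rects = {"dog": (10, 20, 50, 60), "ball": (40, 5, 80, 35)}
print(bounding_rectangle(list(tag_rects.values())))  # (10, 5, 80, 60)
```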
-
Publication number: 20200394828
Abstract: Techniques for automatically synchronizing poses of objects in an image or between multiple images. An automatic pose synchronization functionality is provided by an image editor. The image editor identifies or enables a user to select objects (e.g., people) whose poses are to be synchronized and the image editor then performs processing to automatically synchronize the poses of the identified objects. For two objects whose poses are to be synchronized, a reference object is identified as one whose associated pose is to be used as a reference pose. A target object is identified as one whose associated pose is to be modified to match the reference pose of the reference object. An output image is generated by the image editor in which the position of a part of the target object is modified such that the pose associated with the target object matches the reference pose of the reference object.
Type: Application
Filed: June 11, 2019
Publication date: December 17, 2020
Inventors: Sankalp Shukla, Sourabh Gupta, Angad Kumar Gupta
-
Publication number: 20200364463
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for intelligently merging handwritten content and digital audio from a digital video based on monitored presentation flow. In particular, the disclosed systems can apply an edge detection algorithm to intelligently detect distinct sections of the digital video and locations of handwritten content entered onto a writing surface over time. Moreover, the disclosed systems can generate a transcription of handwritten content utilizing digital audio. For instance, the disclosed systems can utilize an audio text transcript as input to an optical character recognition algorithm and auto-correct text utilizing the audio text transcript. Further, the disclosed systems can analyze short form text from handwritten script and generate long form text from audio text transcripts.
Type: Application
Filed: May 17, 2019
Publication date: November 19, 2020
Inventors: Pooja, Sourabh Gupta, Ajay Bedi
-
Publication number: 20200342635
Abstract: In some embodiments, contextual image variations are generated for an input image. For example, a contextual composite image depicting a variation is generated based on an input image and a synthetic image component. The synthetic image component includes contextual features of a target object from the input image, such as shading, illumination, or depth that are depicted on the target object. The synthetic image component also includes a pattern from an additional image, such as a fabric pattern. In some cases, a mesh is determined for the target object. Illuminance values are determined for each mesh block. An adjusted mesh is determined based on the illuminance values. The synthetic image component is based on a combination of the adjusted mesh and the pattern from the additional image, such as a depiction of the fabric pattern with stretching, folding, or other contextual features from the target image.
Type: Application
Filed: April 26, 2019
Publication date: October 29, 2020
Inventors: Tushar Rajvanshi, Sourabh Gupta, Ajay Bedi
-
Patent number: 10755087
Abstract: Methods and systems are provided for performing automated capture of images based on emotion detection. In embodiments, a selection of an emotion class from among a set of emotion classes presented, via a graphical user interface, is received. The emotion class can indicate an emotion exhibited by a subject desired to be captured in an image. A set of images corresponding with a video is analyzed to identify at least one image in which a subject exhibits the emotion associated with the selected emotion class. The set of images can be analyzed using at least one neural network that classifies images in association with emotion exhibited in the images. Thereafter, the image can be presented in association with the selected emotion class.
Type: Grant
Filed: October 25, 2018
Date of Patent: August 25, 2020
Assignee: ADOBE INC.
Inventors: Tushar Rajvanshi, Sourabh Gupta, Ajay Bedi
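A minimal sketch of the frame-selection step, with a toy stand-in for the patent's neural-network classifier (the frame fields and the `smile_score` feature are invented for illustration):

```python
def select_frames(frames, classify, wanted_emotion):
    """Keep frames whose predicted emotion matches the selected class."""
    return [f for f in frames if classify(f) == wanted_emotion]

def toy_classifier(frame):
    """Stand-in for a neural network: thresholds a fake per-frame feature."""
    return "joy" if frame["smile_score"] > 0.5 else "neutral"

frames = [{"t": 0, "smile_score": 0.2},
          {"t": 1, "smile_score": 0.8},
          {"t": 2, "smile_score": 0.9}]
print([f["t"] for f in select_frames(frames, toy_classifier, "joy")])  # [1, 2]
```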
-
Patent number: 10650290
Abstract: Sketch completion using machine learning in a digital medium environment is described. Initially, a user sketches a digital image, e.g., using a stylus in a digital sketch application. A model trained using machine learning is leveraged to identify and describe visual characteristics of the user sketch. The visual characteristics describing the user sketch are compared to clusters of data generated by the model and that describe visual characteristics of a set of digital sketch images. Based on the comparison, digital sketch images having visual characteristics similar to the user sketch are identified from similar clusters. The similar images are returned for presentation as selectable suggestions for sketch completion of the sketched object.
Type: Grant
Filed: June 4, 2018
Date of Patent: May 12, 2020
Assignee: Adobe Inc.
Inventors: Piyush Singh, Vikas Kumar, Sourabh Gupta, Nandan Jha, Nishchey Arya, Rachit Gupta
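The nearest-cluster lookup can be sketched as follows (the feature vectors and centroids are illustrative; in the patent they would come from the trained model):

```python
import math

def nearest_cluster(features, centroids):
    """Name of the centroid closest to the feature vector (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda name: dist(features, centroids[name]))

# Hypothetical 2-D feature space with two sketch clusters.
centroids = {"cat": (0.9, 0.1), "house": (0.1, 0.9)}
user_sketch_features = (0.8, 0.2)
print(nearest_cluster(user_sketch_features, centroids))  # cat
```

Completion suggestions would then be drawn from the images assigned to the winning cluster.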
-
Publication number: 20200134296
Abstract: Methods and systems are provided for performing automated capture of images based on emotion detection. In embodiments, a selection of an emotion class from among a set of emotion classes presented, via a graphical user interface, is received. The emotion class can indicate an emotion exhibited by a subject desired to be captured in an image. A set of images corresponding with a video is analyzed to identify at least one image in which a subject exhibits the emotion associated with the selected emotion class. The set of images can be analyzed using at least one neural network that classifies images in association with emotion exhibited in the images. Thereafter, the image can be presented in association with the selected emotion class.
Type: Application
Filed: October 25, 2018
Publication date: April 30, 2020
Inventors: Tushar Rajvanshi, Sourabh Gupta, Ajay Bedi
-
Patent number: 10582419
Abstract: A mobile communication device generates a respective request (such as a wireless communication) to access a network. An access point supporting communications over multiple carrier frequency bands receives the request from the mobile communication device to establish a wireless connection. A connection manager associated with the access point analyzes current load conditions associated with other mobile communication devices communicating with the access point over the multiple carrier frequency bands. Based at least in part on the current load conditions, the connection manager selects a carrier frequency band from the multiple carrier frequency bands. The connection manager initiates notification to the mobile communication device to connect to the access point using the selected carrier frequency band.
Type: Grant
Filed: July 22, 2014
Date of Patent: March 3, 2020
Assignee: Time Warner Cable Enterprises LLC
Inventors: Hussain Zaheer Syed, Praveen C. Srivistava, Rajesh M. Gangadhar, Sourabh Gupta
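A least-loaded selection policy is one plausible sketch of the band choice (the patent's actual load analysis may weigh more than device counts, e.g. throughput or signal quality):

```python
def select_band(band_loads):
    """Pick the carrier frequency band with the fewest connected devices."""
    return min(band_loads, key=band_loads.get)

# Hypothetical current load: connected-device count per band.
band_loads = {"2.4GHz": 17, "5GHz": 4}
print(select_band(band_loads))  # 5GHz
```

The access point would then notify the requesting device to associate on the returned band.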
-
Patent number: 10573033
Abstract: An image editing application selectively edits a brushstroke in an image, based on a direction of the brushstroke. In some cases, the brushstroke is selectively edited based on a similarity between the direction of the brushstroke and a direction of an editing tool. Additionally or alternatively, directional data for each pixel of the brushstroke is compared to directional data for each position of the editing tool. Data structures capable of storing directional information for one or more of a pixel, a brushstroke, or a motion of an editing tool are disclosed.
Type: Grant
Filed: December 19, 2017
Date of Patent: February 25, 2020
Assignee: Adobe Inc.
Inventors: Sourabh Gupta, Ajay Bedi
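The direction comparison can be sketched with cosine similarity between a pixel's stored stroke direction and the tool's motion direction (the similarity threshold is an assumption):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two 2-D direction vectors."""
    dot = a[0] * b[0] + a[1] * b[1]
    return dot / (math.hypot(*a) * math.hypot(*b))

def should_edit(pixel_dir, tool_dir, threshold=0.9):
    """Edit the pixel only if its direction roughly matches the tool's."""
    return cosine_similarity(pixel_dir, tool_dir) >= threshold

print(should_edit((1.0, 0.0), (1.0, 0.1)))  # True  (nearly parallel)
print(should_edit((1.0, 0.0), (0.0, 1.0)))  # False (perpendicular)
```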
-
Patent number: 10572751
Abstract: Techniques for converting mechanical markings on hardcopy textual content into digital annotations in a digital document file. In accordance with some embodiments, the techniques include identifying at least one block of text in a digital (scanned) image of a hardcopy document, and identifying at least one mechanical marking in the digital image of the hardcopy document. The mechanical marking, such as an underline, strike-through, highlight or circle, covers or lies adjacent to the corresponding block of text. An annotated digital document file is generated from the digital image of the hardcopy document. The annotated digital document file includes computer-executable instructions representing the original text of the hardcopy document and at least one annotation corresponding to the mechanical marking in the hardcopy document.
Type: Grant
Filed: March 1, 2017
Date of Patent: February 25, 2020
Assignee: ADOBE INC.
Inventors: Vijay Kumar Sharma, Sourabh Gupta, Ajay Bedi
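The marking-to-annotation mapping can be sketched as a lookup over the marking types the abstract names (the annotation names, and the choice to render circles as highlights, are assumptions):

```python
# Marking types are from the abstract; annotation names are assumed.
MARKING_TO_ANNOTATION = {
    "underline": "underline",
    "strike-through": "strikeout",
    "highlight": "highlight",
    "circle": "highlight",   # assumption: circled text becomes a highlight
}

def annotate(text_block, marking_type):
    """Attach the digital annotation implied by a detected marking."""
    return {"text": text_block, "annotation": MARKING_TO_ANNOTATION[marking_type]}

print(annotate("due by Friday", "strike-through"))
```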
-
Publication number: 20190370617
Abstract: Sketch completion using machine learning in a digital medium environment is described. Initially, a user sketches a digital image, e.g., using a stylus in a digital sketch application. A model trained using machine learning is leveraged to identify and describe visual characteristics of the user sketch. The visual characteristics describing the user sketch are compared to clusters of data generated by the model and that describe visual characteristics of a set of digital sketch images. Based on the comparison, digital sketch images having visual characteristics similar to the user sketch are identified from similar clusters. The similar images are returned for presentation as selectable suggestions for sketch completion of the sketched object.
Type: Application
Filed: June 4, 2018
Publication date: December 5, 2019
Applicant: Adobe Inc.
Inventors: Piyush Singh, Vikas Kumar, Sourabh Gupta, Nandan Jha, Nishchey Arya, Rachit Gupta
-
Patent number: 10467739
Abstract: A user identifies an unwanted object in a source image. Related images are identified on the basis of timestamp and/or geolocation metadata. Matching masks are identified in the source image, wherein each of the matching masks is adjacent to the selection mask. Features in the selection and matching masks which also appear in one of the related images are identified. The related image having a maximum of features which are tracked to a source image matching mask, but also a minimum of features which are tracked to the source image selection mask, is identified as a best-match related image. By mapping the source image matching masks onto the best-match related image, a seed region can be located in the best-match related image. This seed region is used for filling in the source image. This allows the unwanted object to be replaced with a visually plausible background having a reasonable appearance.
Type: Grant
Filed: June 14, 2017
Date of Patent: November 5, 2019
Assignee: Adobe Inc.
Inventors: Ajay Bedi, Sourabh Gupta, Saurabh Gupta
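The best-match scoring can be sketched as maximizing features tracked into the matching masks (usable background context) while minimizing features tracked into the selection mask (the unwanted object); the simple difference used here is an assumption about how the two criteria combine:

```python
def best_match(related_images):
    """Related image with the most matching-mask and fewest selection-mask features."""
    def score(img):
        return img["matching_features"] - img["selection_features"]
    return max(related_images, key=score)

related = [
    {"name": "r1.jpg", "matching_features": 40, "selection_features": 30},
    {"name": "r2.jpg", "matching_features": 35, "selection_features": 2},
]
print(best_match(related)["name"])  # r2.jpg
```

The winning image then supplies the seed region used to fill over the unwanted object.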
-
Patent number: 10462359
Abstract: In implementations of image composition instruction based on reference image perspective, a template image of a desired object is captured. Reference images including a similar or same object as the desired object are obtained. Directions from a first location where the template image was captured to a second location where a selected reference image was captured are obtained and exposed in a user interface to allow a user to move to the second location. At the second location, interactive instructions are generated based on a live video stream of the desired object to move a camera capturing the video stream until a composition of the video stream is aligned with a composition of the reference image. The camera is configured with settings based on the reference image to capture a composed image having a same composition as the reference image.
Type: Grant
Filed: April 13, 2018
Date of Patent: October 29, 2019
Assignee: Adobe Inc.
Inventors: Tushar Rajvanshi, Sourabh Gupta, Ajay Bedi
-
Publication number: 20190320113
Abstract: In implementations of image composition instruction based on reference image perspective, a template image of a desired object is captured. Reference images including a similar or same object as the desired object are obtained. Directions from a first location where the template image was captured to a second location where a selected reference image was captured are obtained and exposed in a user interface to allow a user to move to the second location. At the second location, interactive instructions are generated based on a live video stream of the desired object to move a camera capturing the video stream until a composition of the video stream is aligned with a composition of the reference image. The camera is configured with settings based on the reference image to capture a composed image having a same composition as the reference image.
Type: Application
Filed: April 13, 2018
Publication date: October 17, 2019
Applicant: Adobe Inc.
Inventors: Tushar Rajvanshi, Sourabh Gupta, Ajay Bedi
-
Patent number: 10430920
Abstract: Content-conforming stamp tool techniques are described. In one or more embodiments, a selection of an object in a digital image is received. An indication of a location in the digital image where the object is to be reproduced is also received. To reproduce and conform the object at the reproduction location, adjustments to a shape of the object are computed to conform a reproduction of the object to image content proximate the reproduction location. The adjustments are computed based on both the geometry of the image content at the source location and the geometry of the image content at the reproduction location. The adjustments are then applied to the shape of the object when it is reproduced at the reproduction location.
Type: Grant
Filed: April 7, 2015
Date of Patent: October 1, 2019
Assignee: Adobe Inc.
Inventors: Ajay Bedi, Sourabh Gupta, Saurabh Gupta