Patents by Inventor Angad Kumar Gupta
Angad Kumar Gupta has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230353701
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for removing objects from an image stream at capture time of a digital image. For example, the disclosed system contemporaneously detects and segments objects from a digital image stream being previewed in a camera viewfinder graphical user interface of a client device. The disclosed system removes selected objects from the image stream and fills a hole left by the removed object with a content aware fill. Moreover, the disclosed system displays the image stream with the removed object and content fill as the image stream is previewed by a user prior to capturing a digital image from the image stream.
Type: Application
Filed: April 27, 2022
Publication date: November 2, 2023
Inventors: Sankalp Shukla, Angad Kumar Gupta, Sourabh Gupta
-
Patent number: 11682149
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that utilize simultaneous, multi-mesh deformation to implement edge aware transformations of digital images. In particular, in one or more embodiments, the disclosed systems generate a transformation handle that targets an edge portrayed in a digital image. In some cases, the disclosed systems provide the transformation handle for display over the digital image. Additionally, in one or more embodiments, the disclosed systems generate vector splines and meshes for the edge and one or more influenced regions adjacent to the edge. In response to detecting a user interaction with the transformation handle, the disclosed systems can modify the edge and the influenced regions by modifying the corresponding vector splines and meshes.
Type: Grant
Filed: February 9, 2021
Date of Patent: June 20, 2023
Assignee: Adobe Inc.
Inventors: Angad Kumar Gupta, Ashwani Chandil
-
Patent number: 11593979
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for implementing part-level semantic aware transformations when editing digital images. For example, the disclosed systems identify a user selection designating an active region of a subpart (e.g., an object portion) to modify in a shape-constrained manner. Additionally, in certain implementations, the disclosed systems identify another user selection to designate an influenced region comprising adjoining areas connected to the active region. In some embodiments, the disclosed systems generate a boundary vector path outlining the active region and the influenced region. Furthermore, the disclosed systems can determine transformation constraints corresponding to specific path segments of the boundary vector path.
Type: Grant
Filed: April 28, 2021
Date of Patent: February 28, 2023
Assignee: Adobe Inc.
Inventors: Angad Kumar Gupta, Ashwani Chandil
-
Patent number: 11574388
Abstract: Methods, systems, and non-transitory computer readable media are disclosed for automatically, accurately, and efficiently correcting eye region artifacts, including dark eye regions and wrinkles, within a digital image portraying a human face. In particular, in one or more embodiments, the disclosed systems localize areas within a digital image to identify eye region artifacts including dark eye regions and wrinkles. In one or more embodiments, the disclosed systems generate a corrected color image by correcting dark eye regions in a low frequency layer of the digital image by replacing the dark eye regions with candidate eye regions. Furthermore, in one or more embodiments, the disclosed systems generate a corrected texture image by correcting wrinkles in a high frequency layer by processing the digital image utilizing a smoothing algorithm. The disclosed systems further generate a corrected digital image by combining the corrected color image and the corrected texture image.
Type: Grant
Filed: December 29, 2020
Date of Patent: February 7, 2023
Assignee: Adobe Inc.
Inventors: Angad Kumar Gupta, Sourabh Gupta
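The frequency-layer split behind this approach can be illustrated with a small Python sketch on a 1-D row of pixel values. The function names and the box-blur choice are illustrative assumptions, not taken from the patent, which does not commit to a particular smoothing filter here.

```python
def box_blur(row, radius=1):
    """Box blur of a 1-D row of pixel values: the low-frequency layer."""
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        window = row[lo:hi]
        out.append(sum(window) / len(window))
    return out

def split_layers(row):
    """Split a row into a low-frequency layer (smooth colour, where dark
    regions would be corrected) and a high-frequency layer (fine texture,
    where wrinkles would be smoothed)."""
    low = box_blur(row)
    high = [p - l for p, l in zip(row, low)]
    return low, high

def recombine(low, high):
    """Summing the (corrected) layers yields the corrected image."""
    return [l + h for l, h in zip(low, high)]
```

Because the split is exact (low + high reconstructs the input), colour repair in the low layer and texture smoothing in the high layer can proceed independently before the layers are recombined.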
-
Publication number: 20220366623
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for implementing part-level semantic aware transformations when editing digital images. For example, the disclosed systems identify a user selection designating an active region of a subpart (e.g., an object portion) to modify in a shape-constrained manner. Additionally, in certain implementations, the disclosed systems identify another user selection to designate an influenced region comprising adjoining areas connected to the active region. In some embodiments, the disclosed systems generate a boundary vector path outlining the active region and the influenced region. Furthermore, the disclosed systems can determine transformation constraints corresponding to specific path segments of the boundary vector path.
Type: Application
Filed: April 28, 2021
Publication date: November 17, 2022
Inventors: Angad Kumar Gupta, Ashwani Chandil
-
Patent number: 11475544
Abstract: Methods and systems are provided for facilitating automated braces removal from individuals in images. In embodiments described herein, an indication to remove the braces from an individual wearing braces in an image is obtained. Based on receiving the indication to remove the braces, automatically, without user intervention, a teeth region is identified in the image that includes teeth of the individual, and a braces region is identified that includes braces visible in the teeth region. The teeth region and braces region are used to generate an edited image that includes the individual without braces.
Type: Grant
Filed: October 16, 2020
Date of Patent: October 18, 2022
Assignee: Adobe Inc.
Inventors: Angad Kumar Gupta, Sourabh Gupta
-
Publication number: 20220254078
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that utilize simultaneous, multi-mesh deformation to implement edge aware transformations of digital images. In particular, in one or more embodiments, the disclosed systems generate a transformation handle that targets an edge portrayed in a digital image. In some cases, the disclosed systems provide the transformation handle for display over the digital image. Additionally, in one or more embodiments, the disclosed systems generate vector splines and meshes for the edge and one or more influenced regions adjacent to the edge. In response to detecting a user interaction with the transformation handle, the disclosed systems can modify the edge and the influenced regions by modifying the corresponding vector splines and meshes.
Type: Application
Filed: February 9, 2021
Publication date: August 11, 2022
Inventors: Angad Kumar Gupta, Ashwani Chandil
-
Publication number: 20220207662
Abstract: Methods, systems, and non-transitory computer readable media are disclosed for automatically, accurately, and efficiently correcting eye region artifacts, including dark eye regions and wrinkles, within a digital image portraying a human face. In particular, in one or more embodiments, the disclosed systems localize areas within a digital image to identify eye region artifacts including dark eye regions and wrinkles. In one or more embodiments, the disclosed systems generate a corrected color image by correcting dark eye regions in a low frequency layer of the digital image by replacing the dark eye regions with candidate eye regions. Furthermore, in one or more embodiments, the disclosed systems generate a corrected texture image by correcting wrinkles in a high frequency layer by processing the digital image utilizing a smoothing algorithm. The disclosed systems further generate a corrected digital image by combining the corrected color image and the corrected texture image.
Type: Application
Filed: December 29, 2020
Publication date: June 30, 2022
Inventors: Angad Kumar Gupta, Sourabh Gupta
-
Publication number: 20220138950
Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for detecting and indicating modifications between a digital image and a modified version of a digital image. For example, the disclosed systems generate an ordered collection of change records in response to detecting modifications to the digital image. The disclosed systems determine one or more non-contiguous modified regions of pixels in the digital image based on the change records. The disclosed systems generate an edited region indicator corresponding to the non-contiguous modified regions. The disclosed systems can further color-code the edited region indicator at an object level based on objects in the modified version of the digital image.
Type: Application
Filed: November 2, 2020
Publication date: May 5, 2022
Inventors: Angad Kumar Gupta, Gagan Singhal
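The step of deriving non-contiguous modified regions from an ordered collection of change records can be sketched by unioning overlapping dirty rectangles. The rectangle representation and function name are hypothetical simplifications of whatever record format the disclosed systems actually use.

```python
def merge_change_records(records):
    """records: list of (x0, y0, x1, y1) dirty rectangles, in edit order.

    Returns the non-contiguous modified regions: any rectangles that
    overlap or touch are unioned into one bounding rectangle; disjoint
    rectangles stay separate."""
    regions = []
    for rect in records:
        rect = list(rect)
        merged = True
        while merged:
            merged = False
            for r in regions:
                # Overlap/touch test: rectangles are NOT disjoint.
                if not (rect[2] < r[0] or r[2] < rect[0] or
                        rect[3] < r[1] or r[3] < rect[1]):
                    rect = [min(rect[0], r[0]), min(rect[1], r[1]),
                            max(rect[2], r[2]), max(rect[3], r[3])]
                    regions.remove(r)
                    merged = True
                    break
        regions.append(tuple(rect))
    return regions
```

Each resulting region could then be drawn as one edited-region indicator, color-coded per object as the abstract describes.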
-
Publication number: 20220122231
Abstract: Methods and systems are provided for facilitating automated braces removal from individuals in images. In embodiments described herein, an indication to remove the braces from an individual wearing braces in an image is obtained. Based on receiving the indication to remove the braces, automatically, without user intervention, a teeth region is identified in the image that includes teeth of the individual, and a braces region is identified that includes braces visible in the teeth region. The teeth region and braces region are used to generate an edited image that includes the individual without braces.
Type: Application
Filed: October 16, 2020
Publication date: April 21, 2022
Inventors: Angad Kumar Gupta, Sourabh Gupta
-
Patent number: 11132821
Abstract: Methods, systems, and non-transitory computer readable media are disclosed for generating a modified vector drawing based on user input in a magnified view. The disclosed system presents a vector drawing comprising anchor points in a drawing view. In one or more embodiments, the disclosed system determines that a user interaction (e.g., a user touch gesture) with the vector drawing results in an ambiguous selection of two or more anchor points. Based on this determination, the disclosed system can determine a magnification level for a magnified view in which the two or more anchor points are spaced at least a touch diameter apart. The disclosed system may receive a selection of an anchor point in the magnified view and user input indicating an operation to be performed on the selected anchor point. The disclosed system can generate a modified vector drawing by performing the operation.
Type: Grant
Filed: May 26, 2020
Date of Patent: September 28, 2021
Assignee: Adobe Inc.
Inventors: Angad Kumar Gupta, Taniya Vij
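The magnification-level computation lends itself to a short sketch: zoom just enough that the two closest ambiguous anchor points sit at least one touch diameter apart. The function name is hypothetical and the patent may choose the level differently.

```python
import math

def magnification_for_touch(anchor_points, touch_diameter):
    """Return a zoom factor for the magnified view such that the two
    closest anchors end up at least touch_diameter apart on screen.

    anchor_points: list of (x, y) positions of the ambiguously
    selected anchors, in screen units at 1x zoom."""
    min_dist = min(
        math.dist(p, q)
        for i, p in enumerate(anchor_points)
        for q in anchor_points[i + 1:]
    )
    if min_dist >= touch_diameter:
        return 1.0  # already unambiguous at the current zoom
    return touch_diameter / min_dist
```

Scaling distances linearly with zoom, a factor of `touch_diameter / min_dist` is the smallest magnification that makes every pair of anchors individually selectable by touch.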
-
Patent number: 11087514
Abstract: Techniques for automatically synchronizing poses of objects in an image or between multiple images. An automatic pose synchronization functionality is provided by an image editor. The image editor identifies or enables a user to select objects (e.g., people) whose poses are to be synchronized, and the image editor then performs processing to automatically synchronize the poses of the identified objects. For two objects whose poses are to be synchronized, a reference object is identified as one whose associated pose is to be used as a reference pose. A target object is identified as one whose associated pose is to be modified to match the reference pose of the reference object. An output image is generated by the image editor in which the position of a part of the target object is modified such that the pose associated with the target object matches the reference pose of the reference object.
Type: Grant
Filed: June 11, 2019
Date of Patent: August 10, 2021
Assignee: Adobe Inc.
Inventors: Sankalp Shukla, Sourabh Gupta, Angad Kumar Gupta
-
Publication number: 20200394828
Abstract: Techniques for automatically synchronizing poses of objects in an image or between multiple images. An automatic pose synchronization functionality is provided by an image editor. The image editor identifies or enables a user to select objects (e.g., people) whose poses are to be synchronized, and the image editor then performs processing to automatically synchronize the poses of the identified objects. For two objects whose poses are to be synchronized, a reference object is identified as one whose associated pose is to be used as a reference pose. A target object is identified as one whose associated pose is to be modified to match the reference pose of the reference object. An output image is generated by the image editor in which the position of a part of the target object is modified such that the pose associated with the target object matches the reference pose of the reference object.
Type: Application
Filed: June 11, 2019
Publication date: December 17, 2020
Inventors: Sankalp Shukla, Sourabh Gupta, Angad Kumar Gupta
-
Patent number: 10740925
Abstract: Object tracking verification techniques are described as implemented by a computing device. In one example, feature points are selected on and along a boundary of an object to be tracked, e.g., in an initial frame of a digital video. Tracking of the feature points is verified by the computing device between frames. If the feature points have been found to deviate from the object, the feature points are reselected. To verify the feature points, the number of tracked feature points in a subsequent frame is compared, with respect to a threshold, to the number of feature points used to initiate tracking. Based on this comparison, if a number of feature points greater than the threshold is “lost” in the subsequent frame, the feature points are reselected for tracking the object in subsequent frames of the video.
Type: Grant
Filed: August 29, 2018
Date of Patent: August 11, 2020
Assignee: Adobe Inc.
Inventors: Angad Kumar Gupta, Abhishek Shah
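The verification rule in the abstract, reselecting when more feature points are lost than a threshold allows, reduces to a small predicate. The function name and argument shapes are illustrative, not from the patent.

```python
def should_reselect(initial_count, tracked_count, loss_threshold):
    """Decide whether the tracker should reselect feature points.

    initial_count: number of feature points used to initiate tracking.
    tracked_count: number of those points still tracked in the
    subsequent frame.
    loss_threshold: maximum tolerated number of lost points before the
    points are assumed to have deviated from the object."""
    lost = initial_count - tracked_count
    return lost > loss_threshold
```

A tracking loop would call this each frame and, on `True`, reselect feature points on and along the object boundary before continuing.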
-
Publication number: 20200074673
Abstract: Object tracking verification techniques are described as implemented by a computing device. In one example, feature points are selected on and along a boundary of an object to be tracked, e.g., in an initial frame of a digital video. Tracking of the feature points is verified by the computing device between frames. If the feature points have been found to deviate from the object, the feature points are reselected. To verify the feature points, the number of tracked feature points in a subsequent frame is compared, with respect to a threshold, to the number of feature points used to initiate tracking. Based on this comparison, if a number of feature points greater than the threshold is “lost” in the subsequent frame, the feature points are reselected for tracking the object in subsequent frames of the video.
Type: Application
Filed: August 29, 2018
Publication date: March 5, 2020
Applicant: Adobe Inc.
Inventors: Angad Kumar Gupta, Abhishek Shah
-
Patent number: 10558849
Abstract: Depicted skin selection is described. An image processing system selects portions of a digital image that correspond to exposed skin of persons depicted in the digital image without selecting other portions. Initially, the image processing system determines a bounding box for each person depicted in the digital image. Based solely on the portion of the digital image within the bounding box, the image processing system generates an object mask indicative of the pixels of the digital image corresponding to a respective person. Portions of the digital image outside the bounding box are not used for generating this object mask. The image processing system then identifies the pixels of the digital image indicated by the object mask and having a similar color to a range of exposed skin colors determined for the respective person. The image processing system generates skin selection data describing the identified pixels and enabling the exposed skin selection.
Type: Grant
Filed: December 11, 2017
Date of Patent: February 11, 2020
Assignee: Adobe Inc.
Inventors: Angad Kumar Gupta, Gagan Singhal
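The final color-matching step can be sketched as follows, assuming the image is a row-major grid of RGB tuples and the per-person skin color range has already been determined. The function name, data shapes, and per-channel range test are hypothetical simplifications of the patent's mask-plus-color procedure.

```python
def select_skin_pixels(image, bbox, skin_range):
    """Select pixels inside a person's bounding box whose colour falls
    within that person's exposed-skin colour range.

    image: row-major list of rows of (r, g, b) tuples.
    bbox: (x0, y0, x1, y1), exclusive on x1/y1; pixels outside it are
    never considered, mirroring the bounding-box restriction above.
    skin_range: ((r_lo, r_hi), (g_lo, g_hi), (b_lo, b_hi))."""
    x0, y0, x1, y1 = bbox
    selected = set()
    for y in range(y0, y1):
        for x in range(x0, x1):
            pixel = image[y][x]
            if all(lo <= c <= hi
                   for c, (lo, hi) in zip(pixel, skin_range)):
                selected.add((x, y))
    return selected
```

The returned coordinate set plays the role of the skin selection data: downstream edits would apply only to those pixels.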
-
Publication number: 20190180083
Abstract: Depicted skin selection is described. An image processing system selects portions of a digital image that correspond to exposed skin of persons depicted in the digital image without selecting other portions. Initially, the image processing system determines a bounding box for each person depicted in the digital image. Based solely on the portion of the digital image within the bounding box, the image processing system generates an object mask indicative of the pixels of the digital image corresponding to a respective person. Portions of the digital image outside the bounding box are not used for generating this object mask. The image processing system then identifies the pixels of the digital image indicated by the object mask and having a similar color to a range of exposed skin colors determined for the respective person. The image processing system generates skin selection data describing the identified pixels and enabling the exposed skin selection.
Type: Application
Filed: December 11, 2017
Publication date: June 13, 2019
Applicant: Adobe Inc.
Inventors: Angad Kumar Gupta, Gagan Singhal
-
Patent number: 9858296
Abstract: A technique for selecting a representative image from a group of digital images includes extracting data representing an image of a face of a person from each image in the group using a face recognition algorithm, determining a score for each image based on one or more quality parameters that are satisfied for the respective image, and selecting the image having the highest score as the representative image for the group. The quality parameters may be based on any quantifiable characteristics of the data. Each of these quality parameters may be uniquely weighted, so as to define the relative importance of one parameter with respect to another. The score for determining the representative image of the group may be obtained by adding together the weights corresponding to each quality parameter that is satisfied for a given image. Once selected, the representative image may be displayed in a graphical user interface.
Type: Grant
Filed: March 31, 2016
Date of Patent: January 2, 2018
Assignee: Adobe Systems Incorporated
Inventors: Angad Kumar Gupta, Alok Kumar Singh, Ram Prasad Purumala
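The scoring scheme described, summing the weights of whichever quality parameters an image satisfies and taking the maximum, can be sketched directly. The predicates and weights below are invented examples; the patent leaves the concrete quality parameters open.

```python
def score_image(image_features, weighted_params):
    """weighted_params: list of (predicate, weight) pairs. The score is
    the sum of weights for predicates the image satisfies."""
    return sum(w for pred, w in weighted_params if pred(image_features))

def pick_representative(images, weighted_params):
    """Return the image with the highest quality score as the
    representative image for the group."""
    return max(images, key=lambda img: score_image(img, weighted_params))
```

Unequal weights encode the relative importance of one parameter over another, exactly as the abstract describes.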
-
Patent number: 9785699
Abstract: Photograph organization based on facial recognition is described. In one or more embodiments, a photograph organization module obtains multiple photographs having images of faces and recognizes the faces in the multiple photographs. The module builds a population by attempting to distinguish individual persons among the faces in the multiple photographs, with each person of the population corresponding to a group of multiple groups. After a first pass through the faces, the population includes a number of duplicative persons. With a second pass, the photograph organization module reduces the number of duplicative persons in the population by merging two or more groups of the multiple groups to produce a reduced number of groups. The merging is performed based on comparisons of the faces corresponding to the two or more groups. The multiple photographs are organized based on the reduced number of groups. Organization can include tagging or displaying grouped photographs.
Type: Grant
Filed: February 4, 2016
Date of Patent: October 10, 2017
Assignee: Adobe Systems Incorporated
Inventors: Angad Kumar Gupta, Ram Prasad Purumala, Nitish Singla, Alok Kumar Singh
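The second-pass merge can be sketched as follows, with `same_person()` standing in for the patent's face comparison. The data shapes (a group as a list of face descriptors) are illustrative assumptions.

```python
def merge_duplicative_groups(groups, same_person):
    """Second pass over the first-pass groups: fold each group into the
    first already-merged group that same_person() says it matches,
    reducing the number of duplicative persons in the population."""
    merged = []
    for group in groups:
        for existing in merged:
            if same_person(existing, group):
                existing.extend(group)
                break
        else:
            merged.append(list(group))
    return merged
```

The reduced list of groups is then what the photographs are organized by, e.g., for tagging or grouped display.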
-
Publication number: 20170286452
Abstract: A technique for selecting a representative image from a group of digital images includes extracting data representing an image of a face of a person from each image in the group using a face recognition algorithm, determining a score for each image based on one or more quality parameters that are satisfied for the respective image, and selecting the image having the highest score as the representative image for the group. The quality parameters may be based on any quantifiable characteristics of the data. Each of these quality parameters may be uniquely weighted, so as to define the relative importance of one parameter with respect to another. The score for determining the representative image of the group may be obtained by adding together the weights corresponding to each quality parameter that is satisfied for a given image. Once selected, the representative image may be displayed in a graphical user interface.
Type: Application
Filed: March 31, 2016
Publication date: October 5, 2017
Applicant: Adobe Systems Incorporated
Inventors: Angad Kumar Gupta, Alok Kumar Singh, Ram Prasad Purumala