Patents by Inventor Krishna Singh Karki
Krishna Singh Karki has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11941049
Abstract: A system identifies a video comprising frames associated with content tags. The system detects features for each frame of the video. The system identifies, based on the detected features, scenes of the video. The system determines, for each frame for each scene, a frame score that indicates a number of content tags that match the other frames within the scene. The system selects, for each scene, a set of key frames that represent the scene based on the determined frame scores. The system receives a search query comprising a keyword. The system generates, for display, search results responsive to the search query including a dynamic preview of the video. The dynamic preview comprises an arrangement of frames of the video corresponding to each scene of the video. Each of the arrangement of frames is selected from the selected set of key frames representing the respective scene of the video.
Type: Grant
Filed: September 2, 2022
Date of Patent: March 26, 2024
Assignee: Adobe Inc.
Inventors: Amol Jindal, Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
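The scoring step described in this abstract can be illustrated with a minimal sketch. This is not the patented implementation; it assumes each frame is represented simply as a set of content-tag strings, and the scene/tag names are invented for illustration.

```python
from typing import List, Set

def frame_score(tags: Set[str], other_frames: List[Set[str]]) -> int:
    """Count this frame's content tags that also appear in the
    other frames of the same scene."""
    others = set().union(*other_frames) if other_frames else set()
    return len(tags & others)

def select_key_frames(scene: List[Set[str]], k: int = 2) -> List[int]:
    """Return indices of the k highest-scoring frames in a scene,
    i.e. the frames most representative of the scene's content."""
    scores = [
        frame_score(tags, scene[:i] + scene[i + 1:])
        for i, tags in enumerate(scene)
    ]
    return sorted(range(len(scene)), key=lambda i: scores[i], reverse=True)[:k]
```

A dynamic preview would then be assembled by taking the selected key frames from each scene in order.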
-
Patent number: 11816147
Abstract: Embodiments of the present invention are directed towards providing contextual tags for an image based on a contextual analysis of associated images captured in the same environment as the image. To determine contextual tags, content tags can be determined for images. The determined content tags can be associated with categories based on a contextual classification of the content tags. These associated content tags can then be designated as contextual tags for a respective category. To associate these contextual tags with the images, the images can be iterated through based on how the images relate to the contextual tags. For instance, when an image is associated with a category, the contextual tags classified into that category can be assigned to that image.
Type: Grant
Filed: November 14, 2019
Date of Patent: November 14, 2023
Assignee: Adobe Inc.
Inventors: Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
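The grouping-and-assignment flow in the abstract can be sketched as follows. This is an illustrative reading, not the patented method: it assumes a precomputed mapping from each content tag to a category, with all tag and category names hypothetical.

```python
from collections import defaultdict

def contextual_tags(image_tags: dict, tag_category: dict) -> dict:
    """Group content tags by category, then give each image every
    contextual tag from the categories its own tags fall into."""
    # Designate each category's pooled content tags as its contextual tags.
    by_category = defaultdict(set)
    for tags in image_tags.values():
        for tag in tags:
            by_category[tag_category[tag]].add(tag)
    # Assign the pooled contextual tags back to images by category.
    result = {}
    for image, tags in image_tags.items():
        categories = {tag_category[t] for t in tags}
        result[image] = set().union(*(by_category[c] for c in categories))
    return result
```

An image tagged only "cake" would thereby also pick up "balloon" if another image from the same party environment contributed that tag to the shared category.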
-
Publication number: 20220414149
Abstract: A system identifies a video comprising frames associated with content tags. The system detects features for each frame of the video. The system identifies, based on the detected features, scenes of the video. The system determines, for each frame for each scene, a frame score that indicates a number of content tags that match the other frames within the scene. The system selects, for each scene, a set of key frames that represent the scene based on the determined frame scores. The system receives a search query comprising a keyword. The system generates, for display, search results responsive to the search query including a dynamic preview of the video. The dynamic preview comprises an arrangement of frames of the video corresponding to each scene of the video. Each of the arrangement of frames is selected from the selected set of key frames representing the respective scene of the video.
Type: Application
Filed: September 2, 2022
Publication date: December 29, 2022
Inventors: Amol Jindal, Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
-
Patent number: 11500927
Abstract: Certain embodiments involve adaptive search results for multimedia search queries to provide dynamic previews. For instance, a computing system receives a search query that includes a keyword. The computing system identifies, based on the search query, a video file having keyframes with content tags that match the search query. The computing system determines matching scores for respective keyframes of the identified video file. The computing system generates a dynamic preview from at least two keyframes having the highest matching scores.
Type: Grant
Filed: October 3, 2019
Date of Patent: November 15, 2022
Assignee: Adobe Inc.
Inventors: Amol Jindal, Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
-
Patent number: 11468786
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that generate dynamic tool-based animated tutorials. In particular, in one or more embodiments, the disclosed systems generate an animated tutorial in response to receiving a request associated with an image editing tool. The disclosed systems then extract steps from existing general tutorials that pertain to the image editing tool to generate tool-specific animated tutorials. In at least one embodiment, the disclosed systems utilize a clustering algorithm in conjunction with image parameters to provide a set of these generated animated tutorials that showcase diverse features and/or attributes of the image editing tool based on measured aesthetic gains resulting from application of the image editing tool within the animated tutorials.
Type: Grant
Filed: October 16, 2019
Date of Patent: October 11, 2022
Assignee: Adobe Inc.
Inventors: Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
-
Patent number: 11361526
Abstract: An image editing program can include a content-aware selection system. The content-aware selection system can enable a user to select an area of an image using a label or a tag that identifies an object in the image, rather than having to make a selection area based on coordinates and/or pixel values. The program can receive a digital image and metadata that describes an object in the image. The program can further receive a label, and can determine from the metadata that the label is associated with the object. The program can then select a bounding box for the object, and identify in the bounding box, pixels that represent the object. The program can then output a selection area that surrounds the pixels.
Type: Grant
Filed: September 4, 2020
Date of Patent: June 14, 2022
Assignee: Adobe Inc.
Inventors: Subham Gupta, Ajay Bedi, Poonam Bhalla, Krishna Singh Karki
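The label-to-selection pipeline the abstract describes can be sketched with NumPy boolean masks. This is only an illustration of the idea; the metadata schema (`"objects"`, `"labels"`, `"bbox"`) and the precomputed per-pixel object mask are assumptions, not taken from the patent.

```python
import numpy as np

def select_by_label(label: str, metadata: dict,
                    object_mask: np.ndarray) -> np.ndarray:
    """Resolve a label to an object's bounding box via image metadata,
    then return a selection mask limited to object pixels in that box."""
    # Find the metadata record whose labels include the requested label.
    obj = next(o for o in metadata["objects"] if label in o["labels"])
    x0, y0, x1, y1 = obj["bbox"]
    # Keep only object pixels that fall inside the bounding box.
    selection = np.zeros_like(object_mask, dtype=bool)
    selection[y0:y1, x0:x1] = object_mask[y0:y1, x0:x1]
    return selection
```

The returned boolean mask plays the role of the "selection area that surrounds the pixels" in the abstract.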
-
Publication number: 20220108506
Abstract: Methods, systems, and computer storage media for providing tool tutorials based on tutorial information that is dynamically integrated into tool tutorial shells using graphics editing system operations in a graphics editing system. In operation, an image is received in association with a graphics editing application. Tool parameters (e.g., image-specific tool parameters) are generated based on processing the image. The tool parameters are generated for a graphics editing tool of the graphics editing application. The graphics editing tool (e.g., object removal tool or spot healing tool) can be a premium version or a simplified version of the graphics editing tool in a freemium application service. Based on the tool parameters and the image, a tool tutorial data file is generated by incorporating the tool parameters and the image into a tool tutorial shell. The tool tutorial data file can be selectively rendered in an integrated interface of the graphics editing application.
Type: Application
Filed: October 6, 2020
Publication date: April 7, 2022
Inventors: Subham Gupta, Krishna Singh Karki, Poonam Bhalla, Ajay Bedi
-
Publication number: 20210149947
Abstract: Embodiments of the present invention are directed towards providing contextual tags for an image based on a contextual analysis of associated images captured in the same environment as the image. To determine contextual tags, content tags can be determined for images. The determined content tags can be associated with categories based on a contextual classification of the content tags. These associated content tags can then be designated as contextual tags for a respective category. To associate these contextual tags with the images, the images can be iterated through based on how the images relate to the contextual tags. For instance, when an image is associated with a category, the contextual tags classified into that category can be assigned to that image.
Type: Application
Filed: November 14, 2019
Publication date: May 20, 2021
Inventors: Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
-
Publication number: 20210118325
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that generate dynamic tool-based animated tutorials. In particular, in one or more embodiments, the disclosed systems generate an animated tutorial in response to receiving a request associated with an image editing tool. The disclosed systems then extract steps from existing general tutorials that pertain to the image editing tool to generate tool-specific animated tutorials. In at least one embodiment, the disclosed systems utilize a clustering algorithm in conjunction with image parameters to provide a set of these generated animated tutorials that showcase diverse features and/or attributes of the image editing tool based on measured aesthetic gains resulting from application of the image editing tool within the animated tutorials.
Type: Application
Filed: October 16, 2019
Publication date: April 22, 2021
Inventors: Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
-
Publication number: 20210103615
Abstract: Certain embodiments involve adaptive search results for multimedia search queries to provide dynamic previews. For instance, a computing system receives a search query that includes a keyword. The computing system identifies, based on the search query, a video file having keyframes with content tags that match the search query. The computing system determines matching scores for respective keyframes of the identified video file. The computing system generates a dynamic preview from at least two keyframes having the highest matching scores.
Type: Application
Filed: October 3, 2019
Publication date: April 8, 2021
Inventors: Amol Jindal, Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
-
Publication number: 20200401831
Abstract: An image editing program can include a content-aware selection system. The content-aware selection system can enable a user to select an area of an image using a label or a tag that identifies an object in the image, rather than having to make a selection area based on coordinates and/or pixel values. The program can receive a digital image and metadata that describes an object in the image. The program can further receive a label, and can determine from the metadata that the label is associated with the object. The program can then select a bounding box for the object, and identify in the bounding box, pixels that represent the object. The program can then output a selection area that surrounds the pixels.
Type: Application
Filed: September 4, 2020
Publication date: December 24, 2020
Inventors: Subham Gupta, Ajay Bedi, Poonam Bhalla, Krishna Singh Karki
-
Patent number: 10652472
Abstract: Embodiments relate to automatic perspective and horizon correction. Generally, a camera captures an image as an image file. Capture-time orientation data from one or more sensors is used to determine the camera's attitude with respect to a defined reference frame. The orientation data and/or attitude can be registered into metadata of the image file and used to generate axis lines representative of the camera's reference frame. A reference line such as a horizon can be automatically identified from detected line segments in the image that align with one of the axis lines within a predetermined angular threshold. The reference line can be used to generate a camera transformation from a starting orientation reflected by the camera attitude to a transformed orientation that aligns the reference line with the reference frame. The transformation can be applied to the image to automatically correct perspective distortion and/or horizon tilt in the image.
Type: Grant
Filed: February 22, 2018
Date of Patent: May 12, 2020
Assignee: Adobe Inc.
Inventors: Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
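The reference-line matching step can be reduced to a small angular-threshold search. This sketch is an illustration only; it represents detected segments and the sensor-derived axis as angles in degrees, and the threshold value is an assumed default, not one stated in the patent.

```python
from typing import List, Optional

def find_reference_angle(segment_angles: List[float], axis_angle: float,
                         threshold_deg: float = 5.0) -> Optional[float]:
    """Pick the detected line segment whose angle lies within an angular
    threshold of a sensor-derived axis line, preferring the closest."""
    candidates = [a for a in segment_angles
                  if abs(a - axis_angle) <= threshold_deg]
    if not candidates:
        return None
    return min(candidates, key=lambda a: abs(a - axis_angle))

def correction_rotation(reference_angle: float, axis_angle: float) -> float:
    """Rotation (degrees) that aligns the reference line with the axis,
    i.e. the horizon-tilt correction to apply to the image."""
    return axis_angle - reference_angle
```

In the full method this rotation would be one component of a camera transformation that also corrects perspective distortion.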
-
Patent number: 10586308
Abstract: Techniques for removal of obstructions in a digital image scene are described, in which target and source digital images that exhibit parallax, one to another, are obtained that were captured together by an image capture device at a similar point in time using two different lenses of the image capture device. A foreground obstruction is identified based on displacement in apparent position of objects in the target and source digital images. The foreground obstruction is removed from the target digital image, such as by generating an obstruction mask that represents the location of the foreground obstruction and copying pixels from the source digital image to the target digital image based on the locations identified in the obstruction mask. The target digital image with the obstruction removed is output to a user interface or service provider system, for example.
Type: Grant
Filed: May 9, 2017
Date of Patent: March 10, 2020
Assignee: Adobe Inc.
Inventors: Krishna Singh Karki, Subham Gupta, Poonam Bhalla, Ajay Bedi
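The mask-and-copy step of this abstract maps naturally onto array operations. The sketch below assumes a precomputed per-pixel displacement map and a simple threshold to flag foreground parallax; both the threshold and the grayscale image shape are illustrative assumptions.

```python
import numpy as np

def obstruction_mask(displacement: np.ndarray,
                     threshold: float) -> np.ndarray:
    """Foreground obstructions show large apparent displacement between
    the two lens views (parallax); flag those pixels."""
    return displacement > threshold

def remove_obstruction(target: np.ndarray, source: np.ndarray,
                       mask: np.ndarray) -> np.ndarray:
    """Replace masked (obstructed) pixels in the target image with the
    corresponding pixels from the second-lens source image."""
    result = target.copy()
    result[mask] = source[mask]
    return result
```

The key assumption is that the obstructed region is visible in the source view, which holds when the obstruction is close to one lens.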
-
Patent number: 10440276
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating focused preview images that include the subjects of interest of digital images. For example, in one or more embodiments, the disclosed system utilizes a machine-learning model to generate a saliency map for a digital image to indicate one or more salient objects portrayed within the digital image. Additionally, in one or more embodiments, the system identifies a focus region based on focus information captured by a camera device at the time of capturing the digital image. Furthermore, the system can then utilize the saliency map and the focus region to generate a focused preview image. For instance, in one or more embodiments, the system crops the digital image based on an overlapping portion of the saliency map and the focus region to generate a focused preview image.
Type: Grant
Filed: November 2, 2017
Date of Patent: October 8, 2019
Assignee: Adobe Inc.
Inventors: Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
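The crop-on-overlap step can be sketched as intersecting two masks and cropping to the bounding box of the result. This is only one reading of the abstract: the saliency threshold, the boolean focus-region representation, and the fallback to the full image are assumptions for illustration.

```python
import numpy as np

def focused_crop(image: np.ndarray, saliency: np.ndarray,
                 focus_region: np.ndarray,
                 saliency_threshold: float = 0.5) -> np.ndarray:
    """Crop the image to the bounding box where the saliency map
    overlaps the camera's capture-time focus region."""
    overlap = (saliency > saliency_threshold) & focus_region
    ys, xs = np.nonzero(overlap)
    if ys.size == 0:
        # No overlap: fall back to the uncropped image (an assumption).
        return image
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

The resulting crop serves as the "focused preview image" that centers on the subject the photographer actually focused on.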
-
Publication number: 20190260939
Abstract: Embodiments relate to automatic perspective and horizon correction. Generally, a camera captures an image as an image file. Capture-time orientation data from one or more sensors is used to determine the camera's attitude with respect to a defined reference frame. The orientation data and/or attitude can be registered into metadata of the image file and used to generate axis lines representative of the camera's reference frame. A reference line such as a horizon can be automatically identified from detected line segments in the image that align with one of the axis lines within a predetermined angular threshold. The reference line can be used to generate a camera transformation from a starting orientation reflected by the camera attitude to a transformed orientation that aligns the reference line with the reference frame. The transformation can be applied to the image to automatically correct perspective distortion and/or horizon tilt in the image.
Type: Application
Filed: February 22, 2018
Publication date: August 22, 2019
Inventors: Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
-
Publication number: 20190132520
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating focused preview images that include the subjects of interest of digital images. For example, in one or more embodiments, the disclosed system utilizes a machine-learning model to generate a saliency map for a digital image to indicate one or more salient objects portrayed within the digital image. Additionally, in one or more embodiments, the system identifies a focus region based on focus information captured by a camera device at the time of capturing the digital image. Furthermore, the system can then utilize the saliency map and the focus region to generate a focused preview image. For instance, in one or more embodiments, the system crops the digital image based on an overlapping portion of the saliency map and the focus region to generate a focused preview image.
Type: Application
Filed: November 2, 2017
Publication date: May 2, 2019
Inventors: Subham Gupta, Poonam Bhalla, Krishna Singh Karki, Ajay Bedi
-
Patent number: 10176616
Abstract: Various embodiments receive frames as a stream captured during a camera session. During the camera session, faces and facial features of each face are detected from the frames. Then, each face in each frame is assigned a score based on the detected facial features. Using the scores, a candidate frame is selected for each individual face to represent a "best" representation of that face. In addition, an overall score is calculated for each frame based on a combination of assigned scores for the faces in the frame. Then, a reference frame is located from the frames based on the overall score for a respective frame. Faces from the candidate frames are then merged onto the reference frame, and an output image is generated for display.
Type: Grant
Filed: January 19, 2017
Date of Patent: January 8, 2019
Assignee: Adobe Inc.
Inventors: Krishna Singh Karki, Vaibhav Jain, Subham Gupta, Poonam Bhalla, Ajay Bedi
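The frame-selection logic in this abstract is a pair of argmax passes, sketched below. It assumes face detection and scoring have already produced a `face_scores[frame][face]` table with the same faces tracked across frames; how scores are computed from facial features is left out, as the abstract does not specify it.

```python
from typing import Dict, List, Tuple

def best_frames(face_scores: List[List[float]]) -> Tuple[int, Dict[int, int]]:
    """Given face_scores[frame][face], return the reference frame index
    (best overall score) and, per face, the frame where it looks best."""
    n_faces = len(face_scores[0])
    # Candidate frame per face: where that face scored highest.
    candidates = {
        face: max(range(len(face_scores)),
                  key=lambda f: face_scores[f][face])
        for face in range(n_faces)
    }
    # Reference frame: highest combined score across all faces.
    reference = max(range(len(face_scores)),
                    key=lambda f: sum(face_scores[f]))
    return reference, candidates
```

The merge step would then composite each face's candidate-frame crop onto the reference frame to produce the output image.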
-
Publication number: 20180330470
Abstract: Techniques for removal of obstructions in a digital image scene are described, in which target and source digital images that exhibit parallax, one to another, are obtained that were captured together by an image capture device at a similar point in time using two different lenses of the image capture device. A foreground obstruction is identified based on displacement in apparent position of objects in the target and source digital images. The foreground obstruction is removed from the target digital image, such as by generating an obstruction mask that represents the location of the foreground obstruction and copying pixels from the source digital image to the target digital image based on the locations identified in the obstruction mask. The target digital image with the obstruction removed is output to a user interface or service provider system, for example.
Type: Application
Filed: May 9, 2017
Publication date: November 15, 2018
Applicant: Adobe Systems Incorporated
Inventors: Krishna Singh Karki, Subham Gupta, Poonam Bhalla, Ajay Bedi
-
Publication number: 20180204097
Abstract: Various embodiments receive frames as a stream captured during a camera session. During the camera session, faces and facial features of each face are detected from the frames. Then, each face in each frame is assigned a score based on the detected facial features. Using the scores, a candidate frame is selected for each individual face to represent a "best" representation of that face. In addition, an overall score is calculated for each frame based on a combination of assigned scores for the faces in the frame. Then, a reference frame is located from the frames based on the overall score for a respective frame. Faces from the candidate frames are then merged onto the reference frame, and an output image is generated for display.
Type: Application
Filed: January 19, 2017
Publication date: July 19, 2018
Applicant: Adobe Systems Incorporated
Inventors: Krishna Singh Karki, Vaibhav Jain, Subham Gupta, Poonam Bhalla, Ajay Bedi
-
Patent number: 9900503
Abstract: A reflection removal system is capable of automatically removing from digital images reflections related to photographic illumination sources ("flash"), at the time of capture, without user intervention. In an embodiment, the reflection removal system receives a set of digital images taken at substantially the same time from a camera, where one of the images is affected by flash. The images are divided into blocks, and a threshold is determined for each block, indicating the brightest pixel value related to the content of the block. The reflection removal system compares each pixel in the flash-affected image to the corresponding threshold, and generates a digital mask based on the comparison. The reflection removal system creates a corrected image based on the digital mask, such that pixels affected by flash are modified based on corresponding pixels in the other images. In an embodiment, the corrected image is color blended.
Type: Grant
Filed: January 12, 2017
Date of Patent: February 20, 2018
Assignee: Adobe Systems Incorporated
Inventors: Ajay Bedi, Subham Gupta, Poonam Bhalla, Krishna Singh Karki
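The block-threshold-and-mask procedure can be sketched as follows on grayscale arrays. The patent does not give the threshold rule, so the median-plus-margin estimate of a block's "content brightness" here is an assumption, as are the block size and the single non-flash companion image.

```python
import numpy as np

def flash_mask(flash_img: np.ndarray, block: int = 8,
               margin: float = 0.1) -> np.ndarray:
    """Divide the flash-affected image into blocks and flag pixels far
    brighter than their block's content-level brightness."""
    mask = np.zeros(flash_img.shape, dtype=bool)
    h, w = flash_img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            region = flash_img[y:y + block, x:x + block]
            # Assumed threshold: block median plus a brightness margin.
            threshold = np.median(region) + margin
            mask[y:y + block, x:x + block] = region > threshold
    return mask

def remove_reflection(flash_img: np.ndarray, no_flash_img: np.ndarray,
                      mask: np.ndarray) -> np.ndarray:
    """Replace masked flash pixels with corresponding non-flash pixels."""
    out = flash_img.copy()
    out[mask] = no_flash_img[mask]
    return out
```

A production version would, per the abstract, also color-blend the corrected region so the pasted pixels match their surroundings.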