Patents by Inventor Abhishek Shah

Abhishek Shah has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190394350
    Abstract: Technologies for video-based document scanning are disclosed. The video scanning system may divide a video into segments, each segment containing frames with a common feature. For each segment, the system is configured to rank the frames, e.g., based on the motion, zoom, aesthetic, and quality characteristics of the frames. Accordingly, the system can generate a scan from a frame selected in a segment, e.g., based on that frame's rank within the segment. (An illustrative sketch of this ranking follows this entry.)
    Type: Application
    Filed: June 25, 2018
    Publication date: December 26, 2019
    Inventors: Ankit Pangasa, Abhishek Shah
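
A minimal sketch of the per-segment ranking described above, assuming each frame carries simple numeric characteristics. The metric names, signs, and weights are illustrative assumptions, not the patented ranking:

```python
def rank_segment_frames(frames, weights=None):
    """Rank a segment's frames best-first by a weighted sum of per-frame
    characteristics; higher motion counts against a frame."""
    weights = weights or {"motion": -1.0, "zoom": 0.5,
                          "aesthetics": 1.0, "quality": 1.0}
    return sorted(frames,
                  key=lambda f: sum(w * f[k] for k, w in weights.items()),
                  reverse=True)

segment = [
    {"id": 0, "motion": 0.8, "zoom": 0.2, "aesthetics": 0.5, "quality": 0.6},
    {"id": 1, "motion": 0.1, "zoom": 0.6, "aesthetics": 0.7, "quality": 0.9},
]
print(rank_segment_frames(segment)[0]["id"])  # -> 1, the scan source frame
```
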
  • Publication number: 20190362471
    Abstract: Certain embodiments involve a model for enhancing text in electronic content. For example, a system obtains electronic content comprising input text and converts the electronic content into a grayscale image. The system also converts the grayscale image into a binary image using a grid-based grayscale-conversion filter, which can include: generating a grid of pixels on the grayscale image; determining a plurality of grid-pixel threshold values at intersection points in the grid of pixels; determining a plurality of estimated pixel threshold values based on the plurality of grid-pixel threshold values; and converting the grayscale image into the binary image using the plurality of grid-pixel threshold values and the plurality of estimated pixel threshold values. The system also generates an interpolated image based on the electronic content and the binary image. The interpolated image includes output text that is darker than the input text. The system can then output the interpolated image. (A sketch of the grid-based thresholding follows this entry.)
    Type: Application
    Filed: May 24, 2018
    Publication date: November 28, 2019
    Inventors: Ram Bhushan Agrawal, Ankit Pangasa, Abhishek Shah
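
The grid-based filter maps naturally onto adaptive thresholding. The sketch below, using NumPy, samples a local-mean threshold at grid intersections and estimates per-pixel thresholds by separable linear interpolation of the grid values; the window size and the local-mean choice are assumptions, not the patented filter:

```python
import numpy as np

def grid_binarize(gray, step=16):
    """Sample a local-mean threshold at each grid intersection, estimate
    per-pixel thresholds by separable linear interpolation of the grid
    values, and binarize the grayscale image against them."""
    h, w = gray.shape
    ys, xs = np.arange(0, h, step), np.arange(0, w, step)
    # grid-pixel thresholds: local mean around each grid intersection
    grid = np.array([[gray[max(y - step, 0):y + step,
                           max(x - step, 0):x + step].mean()
                      for x in xs] for y in ys])
    # estimated pixel thresholds: interpolate along x, then along y
    rows = np.array([np.interp(np.arange(w), xs, r) for r in grid])
    thresh = np.array([np.interp(np.arange(h), ys, rows[:, c])
                       for c in range(w)]).T
    return (gray > thresh).astype(np.uint8)   # 1 = background, 0 = text

gray = np.random.rand(64, 64) * 255.0
print(grid_binarize(gray).shape)  # -> (64, 64)
```
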
  • Patent number: 10482610
    Abstract: An automated motion-blur detection process can detect frames in digital videos where only a part of the frame exhibits motion blur. Certain embodiments programmatically identify a plurality of feature points within a video clip, and calculate a speed of each feature point within the video clip. A collective speed of the plurality of feature points is determined based on the speed of each feature point. A selection factor is compared to a selection threshold for each video frame. The selection factor is based at least in part on the collective speed of the plurality of feature points. Based on this comparison, at least one video frame from within the video clip is selected. In some aspects, the selected video frame is relatively free of motion blur, even motion blur that occurs in only a part of the image. (A sketch of the selection logic follows this entry.)
    Type: Grant
    Filed: November 1, 2017
    Date of Patent: November 19, 2019
    Assignee: Adobe Inc.
    Inventors: Sagar Tandon, Abhishek Shah
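
A compact sketch of the selection logic, assuming feature-point speeds have already been tracked per frame. Using the mean as the collective speed, using it directly as the selection factor, and the threshold value are all illustrative simplifications:

```python
def select_sharp_frames(per_frame_speeds, threshold=2.0):
    """per_frame_speeds holds the pixel speeds of the feature points tracked
    in each frame; frames whose selection factor (here, the mean speed of
    all feature points) stays below the threshold are kept as blur-free."""
    selected = []
    for i, speeds in enumerate(per_frame_speeds):
        collective_speed = sum(speeds) / len(speeds)
        if collective_speed < threshold:
            selected.append(i)
    return selected

print(select_sharp_frames([[5.1, 4.8], [0.9, 1.2], [2.5, 3.0]]))  # -> [1]
```
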
  • Publication number: 20190343248
    Abstract: A gemstone setting that creates jewelry pieces of the same footprint at reduced cost. The disclosed gemstone composition has a brilliant, single-center look. The gemstones are mounted on a base portion, said base portion having a central cavity, said central cavity surrounded by a first set of round prong members. Said base portion comprises a first plurality of retaining cavities surrounding said central cavity, defining a second layer. Said second layer is surrounded by a second set of prong members, said second set of prong members comprising at least one split prong.
    Type: Application
    Filed: May 11, 2018
    Publication date: November 14, 2019
    Applicant: KBS Creations
    Inventor: Abhishek Shah
  • Patent number: 10474903
    Abstract: Systems and methods for segmenting video. A segmentation application executing on a computing device receives a video including video frames. The segmentation application calculates, using a predictive model trained to evaluate quality of video frames, a first aesthetic score for a first video frame and a second aesthetic score for a second video frame. The segmentation application determines that the first aesthetic score and the second aesthetic score differ by a quality threshold and that a number of frames between the first video frame and the second video frame exceeds a duration threshold. The segmentation application creates a video segment by merging a subset of video frames ranging from the first video frame to a segment-end frame preceding the second video frame. (A sketch of the splitting rule follows this entry.)
    Type: Grant
    Filed: January 25, 2018
    Date of Patent: November 12, 2019
    Assignee: Adobe Inc.
    Inventors: Sagar Tandon, Abhishek Shah
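
A sketch of the segment-splitting rule under stated assumptions: per-frame aesthetic scores are given, and both threshold values are arbitrary illustrative choices:

```python
def segment_video(scores, quality_threshold=0.2, duration_threshold=30):
    """Start a new segment when a frame's aesthetic score differs from the
    current segment's first frame by the quality threshold and enough
    frames have elapsed; each segment ends at the frame preceding the
    frame that triggered the split."""
    segments, start = [], 0
    for i in range(1, len(scores)):
        if (abs(scores[i] - scores[start]) >= quality_threshold
                and i - start > duration_threshold):
            segments.append((start, i - 1))
            start = i
    segments.append((start, len(scores) - 1))
    return segments

scores = [0.9] * 40 + [0.5] * 40
print(segment_video(scores))  # -> [(0, 39), (40, 79)]
```
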
  • Patent number: 10475222
    Abstract: Systems and techniques are disclosed for automatically creating a group shot image by intelligently selecting a best frame of a video clip to use as a base frame and then intelligently merging features of other frames into the base frame. In an embodiment, this involves determining emotional alignment scores and eye scores for the individual frames of the video clip. The emotional alignment scores for the frames are determined by assessing the faces in each of the frames with respect to an emotional characteristic (e.g., happy, sad, neutral, etc.). The eye scores for the frames are determined based on assessing the states of the eyes (e.g., fully open, partially open, closed, etc.) of the faces in individual frames. Comprehensive scores for the individual frames are determined based on the emotional alignment scores and the eye scores, and the frame having the best comprehensive score is selected as the base frame. (A sketch of the scoring follows this entry.)
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: November 12, 2019
    Assignee: Adobe Inc.
    Inventors: Abhishek Shah, Andaleeb Fatima
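
A minimal sketch of the comprehensive scoring, assuming per-face emotion-alignment and eye-openness scores in [0, 1] are already computed; the per-frame averaging and the 0.6/0.4 weighting are assumptions:

```python
def pick_base_frame(frames, w_emotion=0.6, w_eyes=0.4):
    """Average each frame's per-face scores, combine them into a
    comprehensive score, and return the index of the best frame."""
    def comprehensive(frame):
        faces = frame["faces"]
        emotion = sum(f["emotion_alignment"] for f in faces) / len(faces)
        eyes = sum(f["eye_openness"] for f in faces) / len(faces)
        return w_emotion * emotion + w_eyes * eyes
    return max(range(len(frames)), key=lambda i: comprehensive(frames[i]))

frames = [
    {"faces": [{"emotion_alignment": 0.9, "eye_openness": 0.2},
               {"emotion_alignment": 0.8, "eye_openness": 1.0}]},
    {"faces": [{"emotion_alignment": 0.8, "eye_openness": 1.0},
               {"emotion_alignment": 0.9, "eye_openness": 0.9}]},
]
print(pick_base_frame(frames))  # -> 1 (everyone's eyes are open)
```
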
  • Patent number: 10467788
    Abstract: Automatic frame selection and action shot generation techniques in a digital medium environment are described. A computing device identifies an object in a foreground of video data. A determination is then made by the computing device as to motion of the object exhibited between frames of the video data. A subset of frames is then selected by the computing device based on a determined motion of the identified object depicting an action sequence. An action shot is generated by the computing device by overlaying the identified objects in the selected frames on a background. (A sketch of the frame selection follows this entry.)
    Type: Grant
    Filed: May 3, 2017
    Date of Patent: November 5, 2019
    Assignee: Adobe Inc.
    Inventors: Vibha Tanda, Sagar Tandon, Abhishek Shah
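
One way to sketch the frame-subset selection: keep a frame only after the tracked object has moved far enough since the last kept frame, so the overlaid copies in the action shot are spread out. The displacement rule and its threshold are illustrative stand-ins for the patented motion analysis:

```python
def select_action_frames(positions, min_displacement=40):
    """Keep the first frame, then keep a frame whenever the tracked object
    has moved at least min_displacement pixels since the last kept frame."""
    selected, last = [0], positions[0]
    for i in range(1, len(positions)):
        dx, dy = positions[i][0] - last[0], positions[i][1] - last[1]
        if (dx * dx + dy * dy) ** 0.5 >= min_displacement:
            selected.append(i)
            last = positions[i]
    return selected  # composite these frames' objects over the background

path = [(0, 0), (10, 0), (30, 0), (50, 0), (90, 0)]
print(select_action_frames(path))  # -> [0, 3, 4]
```
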
  • Publication number: 20190228231
    Abstract: Systems and methods for segmenting video. A segmentation application executing on a computing device receives a video including video frames. The segmentation application calculates, using a predictive model trained to evaluate quality of video frames, a first aesthetic score for a first video frame and a second aesthetic score for a second video frame. The segmentation application determines that the first aesthetic score and the second aesthetic score differ by a quality threshold and that a number of frames between the first video frame and the second video frame exceeds a duration threshold. The segmentation application creates a video segment by merging a subset of video frames ranging from the first video frame to a segment-end frame preceding the second video frame.
    Type: Application
    Filed: January 25, 2018
    Publication date: July 25, 2019
    Inventors: Sagar Tandon, Abhishek Shah
  • Patent number: 10289291
    Abstract: A method for editing nested video sequences includes receiving a selection, by a user in a graphical user interface (GUI), of a video clip that corresponds to a nested video sequence of a parent video sequence. In response to the selection, each layer of the parent video sequence above the layer that contains the video clip is disabled from being rendered in a monitor view of the GUI. An image of the parent video sequence is rendered in the monitor view while each higher layer is disabled from being rendered. Also while each higher layer is disabled, a manipulation by the user of a GUI element that corresponds to a graphical object from the nested video sequence is received, and the manipulation is applied to that graphical object. (A sketch of the layer-disabling step follows this entry.)
    Type: Grant
    Filed: April 5, 2016
    Date of Patent: May 14, 2019
    Assignee: Adobe Inc.
    Inventors: Abhishek Shah, Shailesh Kumar, Subbiah Muthuswamy Pillai
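
A small sketch of the layer-disabling step, assuming layers are plain records with an index field; rendering and GUI plumbing are omitted, and the record layout is an assumption:

```python
def monitor_view_layers(layers, clip_layer_index):
    """While a nested sequence is edited, render only the layer holding the
    selected clip and the layers below it; higher layers are disabled."""
    return [layer for layer in layers if layer["index"] <= clip_layer_index]

layers = [{"index": 0, "name": "background"},
          {"index": 1, "name": "nested sequence"},
          {"index": 2, "name": "title overlay"}]
print([l["name"] for l in monitor_view_layers(layers, 1)])
# -> ['background', 'nested sequence']
```
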
  • Publication number: 20190130585
    Abstract: An automated motion-blur detection process can detect frames in digital videos where only a part of the frame exhibits motion blur. Certain embodiments programmatically identify a plurality of feature points within a video clip, and calculate a speed of each feature point within the video clip. A collective speed of the plurality of feature points is determined based on the speed of each feature point. A selection factor is compared to a selection threshold for each video frame. The selection factor is based at least in part on the collective speed of the plurality of feature points. Based on this comparison, at least one video frame from within the video clip is selected. In some aspects, the selected video frame is relatively free of motion blur, even motion blur that occurs in only a part of the image.
    Type: Application
    Filed: November 1, 2017
    Publication date: May 2, 2019
    Inventors: Sagar Tandon, Abhishek Shah
  • Patent number: 10276213
    Abstract: Systems and methods disclosed herein provide automatic and intelligent video sorting in the context of creating video compositions. A computing device sorts a media bin of videos in the user's work area based on similarity to the videos included in the video composition being created. When a user selects or includes a particular video on the composition's timeline, the video is compared against the entire video collection to change the display of videos in the media bin. In one example, videos that have similar tags to a selected video are prioritized at the top. Only a subset of frames from each video is used to identify video tags. Intelligently selecting tags using a subset of frames from each video, rather than all frames, enables more efficient and accurate tagging of videos, which facilitates quicker and more accurate comparison of video similarities. (A sketch of the similarity sort follows this entry.)
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: April 30, 2019
    Assignee: Adobe Inc.
    Inventors: Sagar Tandon, Andaleeb Fatima, Abhishek Shah
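
A sketch of the media-bin sort, assuming each video already carries tags inferred from a sampled frame subset; Jaccard overlap is an illustrative similarity measure, not necessarily the patented one:

```python
def sort_media_bin(bin_videos, timeline_videos):
    """Sort the media bin so videos sharing the most tags with the videos
    already on the timeline come first."""
    timeline_tags = set().union(*(v["tags"] for v in timeline_videos))
    def similarity(video):
        tags = set(video["tags"])
        return len(tags & timeline_tags) / max(len(tags | timeline_tags), 1)
    return sorted(bin_videos, key=similarity, reverse=True)

bin_videos = [{"name": "dog.mp4", "tags": ["dog", "park"]},
              {"name": "beach.mp4", "tags": ["beach", "sunset"]}]
timeline = [{"name": "waves.mp4", "tags": ["beach", "ocean"]}]
print([v["name"] for v in sort_media_bin(bin_videos, timeline)])
# -> ['beach.mp4', 'dog.mp4']
```
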
  • Publication number: 20190073811
    Abstract: Systems and techniques are disclosed for automatically creating a group shot image by intelligently selecting a best frame of a video clip to use as a base frame and then intelligently merging features of other frames into the base frame. In an embodiment, this involves determining emotional alignment scores and eye scores for the individual frames of the video clip. The emotional alignment scores for the frames are determined by assessing the faces in each of the frames with respect to an emotional characteristic (e.g., happy, sad, neutral, etc.). The eye scores for the frames are determined based on assessing the states of the eyes (e.g., fully open, partially open, closed, etc.) of the faces in individual frames. Comprehensive scores for the individual frames are determined based on the emotional alignment scores and the eye scores, and the frame having the best comprehensive score is selected as the base frame.
    Type: Application
    Filed: September 5, 2017
    Publication date: March 7, 2019
    Applicant: Adobe Systems Incorporated
    Inventors: Abhishek Shah, Andaleeb Fatima
  • Patent number: 10192582
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to operations to facilitate generation of time-lapse videos. In accordance with embodiments described herein, frames of a photographic input are analyzed to detect activity occurring across frame pairs. The photographic input, such as video input, is the input for which a time-lapse video is to be generated. Activity detected across frame pairs is used to automatically select a plurality of the frames for use in generating the time-lapse video. At least a portion of the frames selected in accordance with the activity detected across frame pairs is used to generate the time-lapse video. (A sketch of the activity-based selection follows this entry.)
    Type: Grant
    Filed: May 21, 2015
    Date of Patent: January 29, 2019
    Assignee: Adobe Inc.
    Inventors: Puneet Singhal, Abhishek Shah
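
A minimal sketch of activity-driven frame selection with NumPy, using the mean absolute pixel difference between a frame and the last kept frame as a stand-in for the activity measure; the threshold value is arbitrary:

```python
import numpy as np

def select_timelapse_frames(frames, activity_threshold=0.1):
    """Keep a frame when the activity between it and the previously kept
    frame clears the threshold, compressing low-activity stretches."""
    kept = [0]
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[kept[-1]].astype(float))
        if diff.mean() / 255.0 >= activity_threshold:
            kept.append(i)
    return kept

frames = [np.zeros((4, 4), np.uint8), np.zeros((4, 4), np.uint8),
          np.full((4, 4), 80, np.uint8)]
print(select_timelapse_frames(frames))  # -> [0, 2]
```
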
  • Publication number: 20180336931
    Abstract: Systems and methods disclosed herein provide automatic and intelligent video sorting in the context of creating video compositions. A computing device sorts a media bin of videos in the user's work area based on similarity to the videos included in the video composition being created. When a user selects or includes a particular video on the composition's timeline, the video is compared against the entire video collection to change the display of videos in the media bin. In one example, videos that have similar tags to a selected video are prioritized at the top. Only a subset of frames from each video is used to identify video tags. Intelligently selecting tags using a subset of frames from each video, rather than all frames, enables more efficient and accurate tagging of videos, which facilitates quicker and more accurate comparison of video similarities.
    Type: Application
    Filed: May 22, 2017
    Publication date: November 22, 2018
    Inventors: Sagar Tandon, Andaleeb Fatima, Abhishek Shah
  • Publication number: 20180322670
    Abstract: Automatic frame selection and action shot generation techniques in a digital medium environment are described. A computing device identifies an object in a foreground of video data. A determination is then made by the computing device as to motion of the object exhibited between frames of the video data. A subset of frames is then selected by the computing device based on a determined motion of the identified object depicting an action sequence.
    Type: Application
    Filed: May 3, 2017
    Publication date: November 8, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Vibha Tanda, Sagar Tandon, Abhishek Shah
  • Publication number: 20180308225
    Abstract: Computer-implemented systems and methods herein disclose automatic haze correction in a digital video. In one example, a video dehazing module identifies a scene including a set of video frames. The video dehazing module identifies the dark channel, brightness, and atmospheric light characteristics in the scene. For each video frame in the scene, the video dehazing module determines a unique haze correction amount parameter by taking into account the dark channel, brightness, and atmospheric light characteristics. The video dehazing module applies the unique haze correction amount parameters to each video frame and thereby generates a sequence of dehazed video frames. (A sketch of the per-frame correction amount follows this entry.)
    Type: Application
    Filed: June 28, 2018
    Publication date: October 25, 2018
    Inventors: Abhishek Shah, Gagan Singhal
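
The abstract names three cues: the dark channel, brightness, and atmospheric light. The sketch below combines them into a per-frame correction amount; how the cues are blended here is purely an illustrative assumption, not the patented parameterization:

```python
import numpy as np

def haze_correction_amount(frame, base_strength=0.8):
    """Blend the dark channel, overall brightness, and an atmospheric-light
    estimate into a single per-frame haze-correction amount."""
    dark_channel = frame.min(axis=2)            # per-pixel min over R, G, B
    haze_density = dark_channel.mean() / 255.0  # haze lifts the dark channel
    brightness = frame.mean() / 255.0
    atmospheric_light = np.percentile(dark_channel, 99) / 255.0
    return base_strength * haze_density * atmospheric_light / max(brightness, 1e-6)

frame = np.full((8, 8, 3), 180, np.uint8)       # flat, hazy-looking frame
print(round(haze_correction_amount(frame), 3))  # -> 0.565
```
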
  • Publication number: 20180300036
    Abstract: A digital medium environment is described to improve moving graphical user interface objects using predictive drop zones that are generated based on user input operations. In one example, a user input processing system receives user input, such as selection and movement of a graphical object. The user input processing system monitors the user input to determine velocity, acceleration, location, and direction of the graphical object as moved by the user input. From the monitoring, the user input processing system continuously determines a location for a predicted drop zone in the user interface that represents an ending point for the movement. The predicted drop zone is then rendered on the user interface in real-time until termination of the input, at which point the user input processing system moves the graphical object to the location of the predicted drop zone, rather than to a pointing device location. (A sketch of the prediction follows this entry.)
    Type: Application
    Filed: April 13, 2017
    Publication date: October 18, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Sameer Bhatt, Abhishek Shah
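
A sketch of the drop-zone prediction, assuming constant-acceleration kinematics over a short look-ahead horizon; both the motion model and the horizon value are assumptions:

```python
def predict_drop_zone(x, y, vx, vy, ax, ay, horizon=0.3):
    """Extrapolate a dragged object's position `horizon` seconds ahead from
    its current position, velocity, and acceleration; the result is where
    the predicted drop zone is rendered."""
    px = x + vx * horizon + 0.5 * ax * horizon ** 2
    py = y + vy * horizon + 0.5 * ay * horizon ** 2
    return px, py

print(predict_drop_zone(100, 200, 300, -50, 0, 0))  # -> (190.0, 185.0)
```
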
  • Patent number: 10089534
    Abstract: Various embodiments calculate a score for each frame of a video segment based on various subject-related factors associated with a subject (e.g., face or other object) captured in a frame relative to corresponding factors of the subject in other frames of the video segment. A highest-scoring frame from the video segment can then be extracted based on a comparison of the score of each frame of the video segment with the score of each other frame of the video segment, and the extracted frame can be transcoded as an image for display via a display device. The score calculation, extraction, and transcoding actions are performed automatically and without user intervention, improving on previous approaches that were primarily manual, tedious, and time-consuming. (A sketch of the relative scoring follows this entry.)
    Type: Grant
    Filed: December 16, 2016
    Date of Patent: October 2, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Abhishek Shah, Sagar Tandon, Andaleeb Fatima
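
A sketch of scoring frames relative to one another, assuming numeric subject-related factors are available per frame; normalizing each factor by its maximum across frames is an illustrative choice, not the patented comparison:

```python
def extract_best_frame(frames_factors):
    """Score each frame's factors relative to the per-factor maximum across
    all frames, sum them, and return the index of the best frame, which
    would then be transcoded as a still image."""
    keys = list(frames_factors[0])
    maxima = {k: max(f[k] for f in frames_factors) or 1.0 for k in keys}
    scores = [sum(f[k] / maxima[k] for k in keys) for f in frames_factors]
    return max(range(len(scores)), key=scores.__getitem__)

factors = [{"sharpness": 0.4, "face_size": 0.6, "smile": 0.2},
           {"sharpness": 0.9, "face_size": 0.5, "smile": 0.8}]
print(extract_best_frame(factors))  # -> 1
```
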
  • Patent number: 10026160
    Abstract: Computer-implemented systems and methods herein disclose automatic haze correction in a digital video. A video dehazing module divides a digital video into multiple scenes, each scene including a set of video frames. For each scene, the video dehazing module identifies the dark channel, brightness, and atmospheric light characteristics in the scene. For each video frame in the scene, the video dehazing module determines a unique haze correction amount parameter by taking into account the dark channel, brightness, and atmospheric light characteristics. For each video frame, the video dehazing module also determines a unique haze correction sensitivity parameter by taking into account transmission map values in the scene. The video dehazing module applies the unique haze correction amount parameters and unique haze correction sensitivity parameters to each video frame to generate a sequence of dehazed video frames.
    Type: Grant
    Filed: August 20, 2016
    Date of Patent: July 17, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Abhishek Shah, Gagan Singhal
  • Patent number: 10007847
    Abstract: A computer-implemented method of positioning a video frame within a collage cell includes, for a given one of a plurality of video frames, generating a polygon encompassing a portion of the respective video frame containing at least one visual element. The polygon has a center position corresponding to a first point within the respective video frame. The center position of the polygon of a given frame is then changed to a new center position based at least in part on an average center position of polygons encompassing portions of at least two consecutive video frames containing the visual element(s). The new center position corresponds to a second point within the given video frame. Next, a cropped portion of the given video frame encompassed by the polygon having the new center position is generated and displayed within a collage cell of a graphical user interface. (A sketch of the center smoothing follows this entry.)
    Type: Grant
    Filed: October 6, 2016
    Date of Patent: June 26, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Abhishek Shah, Sameer Bhatt
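
A sketch of the center-smoothing step, assuming the polygon center of each frame is already known; the moving-average window size is an illustrative assumption:

```python
def smooth_centers(centers, window=3):
    """Replace each frame's polygon center with the average of centers from
    a window of consecutive frames, stabilizing the crop shown in the cell."""
    smoothed = []
    for i in range(len(centers)):
        lo = max(0, i - window // 2)
        hi = min(len(centers), i + window // 2 + 1)
        xs = [c[0] for c in centers[lo:hi]]
        ys = [c[1] for c in centers[lo:hi]]
        smoothed.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return smoothed  # crop each frame around its new center

print(smooth_centers([(100, 50), (130, 50), (100, 50)]))
# -> [(115.0, 50.0), (110.0, 50.0), (115.0, 50.0)]
```
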