Patents by Inventor Andaleeb Fatima

Andaleeb Fatima has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
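Three distinct techniques are described across these filings; brief, illustrative code sketches of each appear after the listing.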

  • Patent number: 10991141
    Abstract: Systems and techniques are disclosed for automatically creating a group shot image by intelligently selecting a best frame of a video clip to use as a base frame and then intelligently merging features of other frames into the base frame. In an embodiment, this involves determining emotional alignment scores and eye scores for the individual frames of the video clip. The emotional alignment scores for the frames are determined by assessing the faces in each of the frames with respect to an emotional characteristic (e.g., happy, sad, neutral, etc.). The eye scores for the frames are determined based on assessing the states of the eyes (e.g., fully open, partially open, closed, etc.) of the faces in individual frames. Comprehensive scores for the individual frames are determined based on the emotional alignment scores and the eye scores, and the frame having the best comprehensive score is selected as the base frame.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: April 27, 2021
    Assignee: Adobe Inc.
    Inventors: Abhishek Shah, Andaleeb Fatima
  • Publication number: 20200051300
    Abstract: Systems and techniques are disclosed for automatically creating a group shot image by intelligently selecting a best frame of a video clip to use as a base frame and then intelligently merging features of other frames into the base frame. In an embodiment, this involves determining emotional alignment scores and eye scores for the individual frames of the video clip. The emotional alignment scores for the frames are determined by assessing the faces in each of the frames with respect to an emotional characteristic (e.g., happy, sad, neutral, etc.). The eye scores for the frames are determined based on assessing the states of the eyes (e.g., fully open, partially open, closed, etc.) of the faces in individual frames. Comprehensive scores for the individual frames are determined based on the emotional alignment scores and the eye scores, and the frame having the best comprehensive score is selected as the base frame.
    Type: Application
    Filed: October 17, 2019
    Publication date: February 13, 2020
    Inventors: Abhishek Shah, Andaleeb Fatima
  • Patent number: 10475222
    Abstract: Systems and techniques are disclosed for automatically creating a group shot image by intelligently selecting a best frame of a video clip to use as a base frame and then intelligently merging features of other frames into the base frame. In an embodiment, this involves determining emotional alignment scores and eye scores for the individual frames of the video clip. The emotional alignment scores for the frames are determined by assessing the faces in each of the frames with respect to an emotional characteristic (e.g., happy, sad, neutral, etc.). The eye scores for the frames are determined based on assessing the states of the eyes (e.g., fully open, partially open, closed, etc.) of the faces in individual frames. Comprehensive scores for the individual frames are determined based on the emotional alignment scores and the eye scores, and the frame having the best comprehensive score is selected as the base frame.
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: November 12, 2019
    Assignee: Adobe Inc.
    Inventors: Abhishek Shah, Andaleeb Fatima
  • Patent number: 10276213
    Abstract: Systems and methods disclosed herein provide automatic and intelligent video sorting in the context of creating video compositions. A computing device sorts a media bin of videos in the user's work area based on similarity to the videos included in the video composition being created. When a user selects or includes a particular video on the composition's timeline, the video is compared against the entire video collection to change the display of videos in the media bin. In one example, videos that have similar tags to a selected video are prioritized at the top. Only a subset of the frames of each video is used to identify video tags. Intelligently selecting tags from a subset of frames of each video rather than from all frames enables more efficient and accurate tagging of videos, which facilitates quicker and more accurate comparison of video similarities.
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: April 30, 2019
    Assignee: Adobe Inc.
    Inventors: Sagar Tandon, Andaleeb Fatima, Abhishek Shah
  • Publication number: 20190073811
    Abstract: Systems and techniques are disclosed for automatically creating a group shot image by intelligently selecting a best frame of a video clip to use as a base frame and then intelligently merging features of other frames into the base frame. In an embodiment, this involves determining emotional alignment scores and eye scores for the individual frames of the video clip. The emotional alignment scores for the frames are determined by assessing the faces in each of the frames with respect to an emotional characteristic (e.g., happy, sad, neutral, etc.). The eye scores for the frames are determined based on assessing the states of the eyes (e.g., fully open, partially open, closed, etc.) of the faces in individual frames. Comprehensive scores for the individual frames are determined based on the emotional alignment scores and the eye scores, and the frame having the best comprehensive score is selected as the base frame.
    Type: Application
    Filed: September 5, 2017
    Publication date: March 7, 2019
    Applicant: Adobe Systems Incorporated
    Inventors: Abhishek Shah, Andaleeb Fatima
  • Publication number: 20180336931
    Abstract: Systems and methods disclosed herein provide automatic and intelligent video sorting in the context of creating video compositions. A computing device sorts a media bin of videos in the user's work area based on similarity to the videos included in the video composition being created. When a user selects or includes a particular video on the composition's timeline, the video is compared against the entire video collection to change the display of videos in the media bin. In one example, videos that have similar tags to a selected video are prioritized at the top. Only a subset of the frames of each video is used to identify video tags. Intelligently selecting tags from a subset of frames of each video rather than from all frames enables more efficient and accurate tagging of videos, which facilitates quicker and more accurate comparison of video similarities.
    Type: Application
    Filed: May 22, 2017
    Publication date: November 22, 2018
    Inventors: Sagar Tandon, Andaleeb Fatima, Abhishek Shah
  • Patent number: 10089534
    Abstract: Various embodiments calculate a score for each frame of a video segment based on various subject-related factors associated with a subject (e.g., face or other object) captured in a frame relative to corresponding factors of the subject in other frames of the video segment. A highest-scoring frame from the video segment can then be extracted based on a comparison of the score of each frame of the video segment with the score of each other frame of the video segment, and the extracted frame can be transcoded as an image for display via a display device. The score calculation, extraction, and transcoding actions are performed automatically and without user intervention, which improves upon previous approaches that are primarily manual, tedious, and time-consuming.
    Type: Grant
    Filed: December 16, 2016
    Date of Patent: October 2, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Abhishek Shah, Sagar Tandon, Andaleeb Fatima
  • Publication number: 20180173959
    Abstract: Various embodiments calculate a score for each frame of a video segment based on various subject-related factors associated with a subject (e.g., face or other object) captured in a frame relative to corresponding factors of the subject in other frames of the video segment. A highest-scoring frame from the video segment can then be extracted based on a comparison of the score of each frame of the video segment with the score of each other frame of the video segment, and the extracted frame can be transcoded as an image for display via a display device. The score calculation, extraction, and transcoding actions are performed automatically and without user intervention, which improves upon previous approaches that are primarily manual, tedious, and time-consuming.
    Type: Application
    Filed: December 16, 2016
    Publication date: June 21, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Abhishek Shah, Sagar Tandon, Andaleeb Fatima
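
Illustrative technique sketches

The group shot abstracts above (patents 10991141 and 10475222 and their related publications) describe scoring each frame by the emotional alignment and eye state of its faces and selecting the best-scoring frame as the base frame. The following is a minimal sketch of that selection step; the Face structure, the per-face emotion probabilities and eye-openness values, and the equal weighting are assumptions chosen for illustration, not the patented implementation.

```python
# Minimal sketch of base-frame selection (patents 10991141 / 10475222).
# Face data, the target emotion, and the equal weights are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List, Sequence


@dataclass
class Face:
    emotion_probs: Dict[str, float]  # e.g. {"happy": 0.8, "neutral": 0.15, "sad": 0.05}
    eye_openness: float              # 0.0 = closed, 1.0 = fully open


def emotional_alignment_score(faces: Sequence[Face], target_emotion: str) -> float:
    """Average probability that the faces in a frame show the target emotion."""
    if not faces:
        return 0.0
    return sum(f.emotion_probs.get(target_emotion, 0.0) for f in faces) / len(faces)


def eye_score(faces: Sequence[Face]) -> float:
    """Average eye-openness across the faces in a frame."""
    if not faces:
        return 0.0
    return sum(f.eye_openness for f in faces) / len(faces)


def select_base_frame(frames_faces: List[Sequence[Face]],
                      target_emotion: str = "happy",
                      w_emotion: float = 0.5,
                      w_eyes: float = 0.5) -> int:
    """Return the index of the frame with the best comprehensive score."""
    def comprehensive(faces: Sequence[Face]) -> float:
        return (w_emotion * emotional_alignment_score(faces, target_emotion)
                + w_eyes * eye_score(faces))

    return max(range(len(frames_faces)), key=lambda i: comprehensive(frames_faces[i]))
```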
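
Patent 10276213 and publication 20180336931 describe tagging each video from only a subset of its frames and then sorting the media bin by tag similarity to the video selected on the timeline. The sketch below assumes a fixed sampling stride, a caller-supplied frame tagger, and Jaccard similarity over tag sets; none of these specifics comes from the filing.

```python
# Minimal sketch of tag-based media-bin sorting (patent 10276213).
# The frame tagger is a hypothetical callback; the stride and Jaccard similarity
# are assumptions chosen for clarity.
from typing import Callable, Dict, List, Set


def sample_frames(frame_count: int, stride: int = 30) -> List[int]:
    """Pick a subset of frame indices rather than tagging every frame."""
    return list(range(0, frame_count, stride))


def tag_video(frame_count: int,
              tag_frame: Callable[[int], Set[str]],
              stride: int = 30) -> Set[str]:
    """Union of the tags produced for the sampled frames only."""
    tags: Set[str] = set()
    for idx in sample_frames(frame_count, stride):
        tags |= tag_frame(idx)
    return tags


def jaccard(a: Set[str], b: Set[str]) -> float:
    """Similarity of two tag sets: intersection over union."""
    return len(a & b) / len(a | b) if (a | b) else 0.0


def sort_media_bin(selected_tags: Set[str],
                   bin_tags: Dict[str, Set[str]]) -> List[str]:
    """Order media-bin videos by tag similarity to the video on the timeline."""
    return sorted(bin_tags,
                  key=lambda vid: jaccard(selected_tags, bin_tags[vid]),
                  reverse=True)
```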
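
Patent 10089534 and publication 20180173959 describe scoring every frame of a segment from subject-related factors, extracting the highest-scoring frame, and transcoding it as an image. The sketch below assumes caller-supplied per-frame factor values, a weighted-sum score, and a callback that performs the transcoding step; these details are illustrative only.

```python
# Minimal sketch of per-frame scoring and best-frame extraction (patent 10089534).
# The factor names, weighted-sum scoring, and transcoding callback are assumptions;
# only the overall flow (score every frame, pick the best, transcode it) follows
# the abstract.
from typing import Callable, Dict, Sequence


def frame_score(factors: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted sum of subject-related factors (e.g. sharpness, face size, pose)."""
    return sum(weights.get(name, 0.0) * value for name, value in factors.items())


def best_frame_index(per_frame_factors: Sequence[Dict[str, float]],
                     weights: Dict[str, float]) -> int:
    """Compare each frame's score with every other frame's and return the best index."""
    scores = [frame_score(f, weights) for f in per_frame_factors]
    return max(range(len(scores)), key=scores.__getitem__)


def extract_and_transcode(per_frame_factors: Sequence[Dict[str, float]],
                          weights: Dict[str, float],
                          save_frame_as_image: Callable[[int, str], None],
                          out_path: str = "best_frame.jpg") -> int:
    """Pick the highest-scoring frame and hand it to a transcoding callback."""
    idx = best_frame_index(per_frame_factors, weights)
    save_frame_as_image(idx, out_path)  # e.g. decode frame idx and save it as a JPEG
    return idx
```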