Patents by Inventor Aseem Omprakash Agarwala

Aseem Omprakash Agarwala has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240134909
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for a visual and text search interface used to navigate a video transcript. In an example embodiment, a freeform text query triggers both a visual search for frames of a loaded video that match the query (e.g., frame embeddings that match a corresponding embedding of the query) and a text search for matching words from a corresponding transcript or from tags of features detected in the loaded video. Visual search results are displayed (e.g., in a row of tiles that can be scrolled left and right), and textual search results are displayed (e.g., in a row of tiles that can be scrolled up and down). Selecting (e.g., clicking or tapping) a search result tile navigates a transcript interface to the corresponding portion of the transcript. (A code sketch follows this entry.)
    Type: Application
    Filed: October 17, 2022
    Publication date: April 25, 2024
    Inventors: Lubomira Assenova DONTCHEVA, Dingzeyu LI, Kim Pascal PIMMEL, Hijung SHIN, Hanieh DEILAMSALEHY, Aseem Omprakash AGARWALA, Joy Oakyung KIM, Joel Richard BRANDT, Cristin Ailidh FRASER
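
For a rough sense of the mechanics, the ranking step can be sketched as follows. This is a minimal sketch, assuming frame embeddings and a text-embedding function from some joint image-text model are already available; `embed_text`, `frame_times`, and the literal substring match for the text search are illustrative assumptions, not details from the application.

```python
import numpy as np

def cosine_scores(query_emb: np.ndarray, frame_embs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query embedding and N frame embeddings."""
    q = query_emb / np.linalg.norm(query_emb)
    f = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    return f @ q

def search(query, frame_embs, frame_times, transcript_words, embed_text, k=5):
    """Run the visual and text searches for one freeform query.

    Returns (visual_hits, text_hits): timestamps of the k best-matching
    frames, and (timestamp, word) pairs whose transcript word contains
    the query string. Both lists would back rows of result tiles.
    """
    scores = cosine_scores(embed_text(query), np.asarray(frame_embs))
    visual_hits = [frame_times[i] for i in np.argsort(-scores)[:k]]
    text_hits = [(t, w) for t, w in transcript_words if query.lower() in w.lower()]
    return visual_hits, text_hits
```
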
  • Publication number: 20240134597
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for a question search for meaningful questions that appear in a video. In an example embodiment, an audio track from a video is transcribed, and the transcript is parsed to identify sentences that end with a question mark. Depending on the embodiment, one or more types of questions are filtered out, such as questions shorter than a designated length or duration, logistical questions, and/or rhetorical questions. As such, in response to a command to perform a question search, the questions are identified, and search result tiles representing video segments of the questions are presented. Selecting (e.g., clicking or tapping) a search result tile navigates a transcript interface to the corresponding portion of the transcript. (A code sketch follows this entry.)
    Type: Application
    Filed: October 17, 2022
    Publication date: April 25, 2024
    Inventors: Lubomira Assenova DONTCHEVA, Anh Lan TRUONG, Hanieh DEILAMSALEHY, Kim Pascal PIMMEL, Aseem Omprakash AGARWALA, Dingzeyu LI, Joel Richard BRANDT, Joy Oakyung KIM
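
A minimal sketch of the filtering step, assuming sentence-level timestamps from the transcript. The word-count threshold and the logistical-question patterns are invented placeholders, not the filters claimed in the application.

```python
import re

MIN_WORDS = 4  # hypothetical cutoff for "short" questions
LOGISTICAL = re.compile(r"\b(can everyone hear|are we recording|is this on)\b", re.I)

def find_questions(sentences):
    """Keep transcript sentences that end in '?' and survive the filters.

    `sentences` is a list of (start_time, end_time, text) tuples; each
    surviving tuple would become a search result tile for its segment.
    """
    hits = []
    for start, end, text in sentences:
        if not text.strip().endswith("?"):
            continue                      # not a question
        if len(text.split()) < MIN_WORDS:
            continue                      # too short to be meaningful
        if LOGISTICAL.search(text):
            continue                      # logistical, not content
        hits.append((start, end, text))
    return hits
```
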
  • Publication number: 20240135973
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for identifying candidate boundaries for video segments, video segment selection using those boundaries, and text-based video editing of video segments selected via transcript interactions. In an example implementation, boundaries of detected sentences and words are extracted from a transcript, each boundary is retimed into an adjacent speech gap to the location where voice or audio activity is at a minimum, and the resulting boundaries are stored as candidate boundaries for video segments. As such, a transcript interface presents the transcript, interprets input selecting transcript text as an instruction to select a video segment with corresponding boundaries drawn from the candidate boundaries, and interprets commands traditionally thought of as text-based operations (e.g., cut, copy, paste) as instructions to perform the corresponding video editing operation on the selected video segment. (A code sketch follows this entry.)
    Type: Application
    Filed: October 17, 2022
    Publication date: April 25, 2024
    Inventors: Xue BAI, Justin Jonathan SALAMON, Aseem Omprakash AGARWALA, Hijung SHIN, Haoran CAI, Joel Richard BRANDT, Lubomira Assenova DONTCHEVA, Cristin Ailidh FRASER
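
The retiming step lends itself to a short sketch. This assumes a per-frame voice/audio activity curve (e.g., from a voice activity detector) sampled at a fixed hop; the curve representation and hop size are assumptions, not details from the application.

```python
import numpy as np

def retime_boundary(boundary_t, gap_start, gap_end, activity, hop=0.01):
    """Snap a word/sentence boundary into an adjacent speech gap.

    `activity` is an array of voice/audio activity values, one per
    `hop`-second frame of the track. The boundary moves to the quietest
    instant inside [gap_start, gap_end]; if the gap is empty, the
    original boundary time is kept.
    """
    lo, hi = int(gap_start / hop), int(gap_end / hop)
    if hi <= lo:
        return boundary_t
    quietest = lo + int(np.argmin(activity[lo:hi]))
    return quietest * hop
```
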
  • Publication number: 20240126994
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for segmenting a transcript into paragraphs. In an example embodiment, a transcript is segmented to start a new paragraph whenever there is a change in speaker and/or a long pause in speech. If any remaining paragraphs are longer than a designated length or duration (e.g., 50 or 100 words), each of those paragraphs is segmented using dynamic programming to minimize a cost function that penalizes candidate paragraphs based on divergence from a target paragraph length and/or rewards candidate paragraphs that group semantically similar sentences. As such, the transcript is visualized, segmented into the identified paragraphs. (A code sketch follows this entry.)
    Type: Application
    Filed: October 17, 2022
    Publication date: April 18, 2024
    Inventors: Hanieh DEILAMSALEHY, Aseem Omprakash AGARWALA, Haoran CAI, Hijung SHIN, Joel Richard BRANDT, Lubomira Assenova DONTCHEVA
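
The dynamic program can be written down directly. This is a minimal sketch assuming precomputed per-sentence word counts and pairwise sentence similarities (e.g., cosine similarity of sentence embeddings); the weights and the exact cost form are illustrative stand-ins for whatever the embodiment uses.

```python
import numpy as np

def segment_paragraphs(sent_lens, sim, target=60, alpha=1.0, beta=10.0):
    """Optimally split sentences 0..n-1 into paragraphs.

    Cost of a paragraph covering sentences i..j-1:
        alpha * |word_count - target|      (length penalty)
      - beta  * mean pairwise similarity   (coherence reward)
    Returns the list of paragraph start indices.
    """
    n = len(sent_lens)

    def cost(i, j):
        words = sum(sent_lens[i:j])
        pairs = [sim[a][b] for a in range(i, j) for b in range(a + 1, j)]
        coherence = float(np.mean(pairs)) if pairs else 1.0
        return alpha * abs(words - target) - beta * coherence

    best = [0.0] + [float("inf")] * n  # best[j]: min cost over sentences 0..j-1
    back = [0] * (n + 1)               # back[j]: start of the last paragraph
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + cost(i, j)
            if c < best[j]:
                best[j], back[j] = c, i

    starts, j = [], n
    while j > 0:
        starts.append(back[j])
        j = back[j]
    return starts[::-1]
```

The O(n²) search over split points is the textbook recurrence for optimal one-dimensional segmentation; because each candidate paragraph is scored independently, other cost terms drop in without changing the algorithm.
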
  • Publication number: 20240127855
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for selecting the best image of a particular speaker's face in a video and visualizing it in a diarized transcript. In an example embodiment, candidate images of a detected speaker's face are extracted from frames of a video identified by a detected face track for that face, and a representative image is selected from the candidates based on image quality, facial emotion (e.g., using an emotion classifier that generates a happiness score), a size factor (e.g., favoring larger images), and/or a penalty for images that appear toward the beginning or end of a face track. As such, each segment of the transcript is presented with the representative image of the speaker who spoke that segment, and/or input is accepted changing the representative image associated with each speaker. (A code sketch follows this entry.)
    Type: Application
    Filed: October 17, 2022
    Publication date: April 18, 2024
    Inventors: Lubomira Assenova DONTCHEVA, Xue BAI, Aseem Omprakash AGARWALA, Joel Richard BRANDT
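
The abstract names the scoring factors but not how they are combined, so the sketch below invents a simple weighted combination; the weights, the square-root size term, and the edge-penalty rule are all hypothetical.

```python
def score_candidate(img_quality, happiness, area, t, track_start, track_end,
                    w_q=1.0, w_h=1.0, w_a=0.5, edge_margin=0.5):
    """Score one candidate face image; higher is better.

    img_quality: no-reference quality/sharpness score in [0, 1].
    happiness:   emotion-classifier happiness probability in [0, 1].
    area:        face-crop area in pixels (larger faces favored).
    t:           frame timestamp; frames within `edge_margin` seconds of
                 either end of the face track are penalized, since track
                 edges often catch partial or blurry faces.
    """
    score = w_q * img_quality + w_h * happiness + w_a * (area ** 0.5) / 100.0
    if t - track_start < edge_margin or track_end - t < edge_margin:
        score *= 0.5
    return score

def pick_representative(candidates):
    """candidates: dicts with exactly the positional fields used above."""
    return max(candidates, key=lambda c: score_candidate(**c))
```
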
  • Publication number: 20240127857
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for face-aware speaker diarization. In an example embodiment, an audio-only diarization technique generates an audio-only speaker diarization of a video, an audio-visual technique generates a face-aware speaker diarization of the video, and the audio-only diarization is refined using the face-aware diarization to produce a hybrid speaker diarization that links detected faces to detected voices. In some embodiments, to accommodate videos with small faces that appear pixelated, a cropped image of any given face is extracted from each frame of the video, and the size of the cropped image is used to select a corresponding active speaker detection model to predict an active speaker score for the face in the cropped image. (A code sketch follows this entry.)
    Type: Application
    Filed: October 17, 2022
    Publication date: April 18, 2024
    Inventors: Fabian David CABA HEILBRON, Xue BAI, Aseem Omprakash AGARWALA, Haoran CAI, Lubomira Assenova DONTCHEVA
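
A minimal sketch of the linking step, assuming both diarizations are available as labeled time segments. The majority-overlap vote below is a simplification for illustration, not the claimed refinement procedure.

```python
from collections import Counter

def link_faces_to_voices(audio_segments, av_segments):
    """Attach a face identity to each audio-only voice cluster.

    audio_segments: [(start, end, voice_id)] from audio-only diarization.
    av_segments:    [(start, end, face_id)] from audio-visual diarization.
    Each voice is linked to the face whose segments overlap it for the
    most total time.
    """
    votes = {}
    for a0, a1, voice in audio_segments:
        for b0, b1, face in av_segments:
            overlap = min(a1, b1) - max(a0, b0)
            if overlap > 0:
                votes.setdefault(voice, Counter())[face] += overlap
    return {voice: c.most_common(1)[0][0] for voice, c in votes.items()}
```
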
  • Publication number: 20240127820
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for music-aware speaker diarization. In an example embodiment, one or more audio classifiers detect speech and music independently of each other, which facilitates detecting regions in an audio track that contain music but no speech. These music-only regions are compared to the transcript, and any transcription and speakers that overlap in time with the music-only regions are removed from the transcript. In some embodiments, rather than displaying the text transcribed from this detected music, the transcript includes a visual representation of the audio waveform in the corresponding regions. (A code sketch follows this entry.)
    Type: Application
    Filed: October 17, 2022
    Publication date: April 18, 2024
    Inventors: Justin Jonathan SALAMON, Fabian David CABA HEILBRON, Xue BAI, Aseem Omprakash AGARWALA, Hijung SHIN, Lubomira Assenova DONTCHEVA
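
A minimal sketch of the filtering, assuming the speech and music classifiers have already produced (start, end) regions; the interval-overlap test and the dict-shaped transcript segments are assumptions for illustration.

```python
def remove_music_only_text(transcript, speech_regions, music_regions):
    """Drop transcript segments that fall inside music-only regions.

    A region is "music-only" when music is detected there but no speech
    region overlaps it. Transcript segments are dicts with "start" and
    "end" keys; removed spans would be rendered as a waveform instead.
    """
    def overlaps(a, b):
        return min(a[1], b[1]) > max(a[0], b[0])

    music_only = [m for m in music_regions
                  if not any(overlaps(m, s) for s in speech_regions)]
    return [seg for seg in transcript
            if not any(overlaps((seg["start"], seg["end"]), m)
                       for m in music_only)]
```
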
  • Publication number: 20170061257
    Abstract: Example systems and methods for classifying visual patterns into a plurality of classes are presented. Using reference visual patterns of known classification, at least one image or visual pattern classifier is generated, which is then employed to classify a plurality of candidate visual patterns of unknown classification. The classification scheme employed may be hierarchical or nonhierarchical. The visual patterns may be fonts, human faces, or any other type of visual pattern or image subject to classification. (A code sketch follows this entry.)
    Type: Application
    Filed: November 11, 2016
    Publication date: March 2, 2017
    Inventors: Jianchao YANG, Guang CHEN, Hailin JIN, Jonathan BRANDT, Elya SHECHTMAN, Aseem Omprakash AGARWALA
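
Since the abstract is deliberately general about the classifier, the sketch below uses a nearest-neighbor stand-in for the generate-then-classify flow; any supervised model, hierarchical or flat, could take its place.

```python
import numpy as np

def train_classifier(ref_features, ref_labels):
    """'Generate' a classifier from reference patterns of known class.

    Here the classifier is simply the stored reference feature vectors
    and their labels; a real embodiment could be any supervised model.
    """
    return np.asarray(ref_features, dtype=float), list(ref_labels)

def classify(candidates, classifier):
    """Label each candidate pattern with the class of its nearest reference."""
    refs, labels = classifier
    return [labels[int(np.argmin(np.linalg.norm(refs - x, axis=1)))]
            for x in np.atleast_2d(np.asarray(candidates, dtype=float))]
```
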
  • Patent number: 9524449
    Abstract: Example systems and methods for classifying visual patterns into a plurality of classes are presented. Using reference visual patterns of known classification, at least one image or visual pattern classifier is generated, which is then employed to classify a plurality of candidate visual patterns of unknown classification. The classification scheme employed may be hierarchical or nonhierarchical. The visual patterns may be fonts, human faces, or any other type of visual pattern or image subject to classification.
    Type: Grant
    Filed: December 16, 2013
    Date of Patent: December 20, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Jianchao Yang, Guang Chen, Hailin Jin, Jonathan Brandt, Elya Shechtman, Aseem Omprakash Agarwala
  • Publication number: 20160364633
    Abstract: A convolutional neural network (CNN) is trained for font recognition and font similarity learning. In a training phase, text images with font labels are synthesized by introducing variances to minimize the gap between the training images and real-world text images. Training images are generated and input into the CNN. The output is fed into an N-way softmax function, where N is the number of fonts the CNN is being trained on, producing a distribution of classified text images over N class labels. In a testing phase, each test image is normalized in height and squeezed in aspect ratio, resulting in a plurality of test patches. The CNN averages the probabilities of each test patch belonging to a set of fonts to obtain a classification. Feature representations may be extracted and used to define similarity between fonts, which may be applied in font suggestion, font browsing, or font recognition applications. (A code sketch follows this entry.)
    Type: Application
    Filed: June 9, 2015
    Publication date: December 15, 2016
    Inventors: Jianchao YANG, Zhangyang WANG, Jonathan BRANDT, Hailin JIN, Elya SHECHTMAN, Aseem Omprakash AGARWALA
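
A minimal sketch of the test-time averaging, assuming `cnn_probs` wraps the trained network and returns an N-way softmax distribution per patch; the patch extraction and the cosine similarity measure are illustrative assumptions.

```python
import numpy as np

def classify_text_image(patches, cnn_probs):
    """Classify one text image from its fixed-height patches.

    `patches` are crops taken after height normalization and
    aspect-ratio squeezing; per-patch softmax outputs are averaged and
    the font with the highest mean probability wins.
    """
    mean_probs = np.mean([cnn_probs(p) for p in patches], axis=0)
    return int(np.argmax(mean_probs)), mean_probs

def font_similarity(feat_a, feat_b):
    """Cosine similarity between two fonts' mean CNN feature vectors."""
    a, b = np.ravel(feat_a), np.ravel(feat_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```
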
  • Patent number: 9501724
    Abstract: A convolutional neural network (CNN) is trained for font recognition and font similarity learning. In a training phase, text images with font labels are synthesized by introducing variances to minimize the gap between the training images and real-world text images. Training images are generated and input into the CNN. The output is fed into an N-way softmax function, where N is the number of fonts the CNN is being trained on, producing a distribution of classified text images over N class labels. In a testing phase, each test image is normalized in height and squeezed in aspect ratio, resulting in a plurality of test patches. The CNN averages the probabilities of each test patch belonging to a set of fonts to obtain a classification. Feature representations may be extracted and used to define similarity between fonts, which may be applied in font suggestion, font browsing, or font recognition applications.
    Type: Grant
    Filed: June 9, 2015
    Date of Patent: November 22, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Jianchao Yang, Zhangyang Wang, Jonathan Brandt, Hailin Jin, Elya Shechtman, Aseem Omprakash Agarwala
  • Patent number: 9141885
    Abstract: A system may be configured as an image recognition machine that utilizes an image feature representation called local feature embedding (LFE). LFE enables generation of a feature vector that captures salient visual properties of an image to address both the fine-grained and coarse-grained aspects of recognizing a visual pattern depicted in the image. Configured to utilize image feature vectors with LFE, the system may implement a nearest class mean (NCM) classifier, as well as a scalable recognition algorithm with metric learning and max-margin template selection. Accordingly, the system may be updated to accommodate new classes with very little added computational cost, enabling it to readily handle open-ended image classification problems. (A code sketch follows this entry.)
    Type: Grant
    Filed: July 29, 2013
    Date of Patent: September 22, 2015
    Assignee: Adobe Systems Incorporated
    Inventors: Jianchao Yang, Guang Chen, Jonathan Brandt, Hailin Jin, Elya Shechtman, Aseem Omprakash Agarwala
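
A minimal sketch of NCM classification with an optional learned metric; `x` is assumed to be an LFE feature vector, and the projection matrix `W` stands in for the result of the metric learning step.

```python
import numpy as np

def ncm_classify(x, class_means, W=None):
    """Assign x the label of the nearest class mean.

    class_means: dict mapping label -> mean feature vector. With a
    learned projection W, distances are measured in the projected
    space. Adding a new class only requires adding its mean, which is
    what keeps updates cheap for open-ended classification.
    """
    if W is not None:
        x = W @ x
    best, best_d = None, np.inf
    for label, mu in class_means.items():
        d = np.linalg.norm(x - (W @ mu if W is not None else mu))
        if d < best_d:
            best, best_d = label, d
    return best
```
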
  • Publication number: 20150170000
    Abstract: Example systems and methods for classifying visual patterns into a plurality of classes are presented. Using reference visual patterns of known classification, at least one image or visual pattern classifier is generated, which is then employed to classify a plurality of candidate visual patterns of unknown classification. The classification scheme employed may be hierarchical or nonhierarchical. The visual patterns may be fonts, human faces, or any other type of visual pattern or image subject to classification.
    Type: Application
    Filed: December 16, 2013
    Publication date: June 18, 2015
    Applicant: Adobe Systems Incorporated
    Inventors: Jianchao Yang, Guang Chen, Hailin Jin, Jonathan Brandt, Elya Shechtman, Aseem Omprakash Agarwala
  • Publication number: 20150030238
    Abstract: A system may be configured as an image recognition machine that utilizes an image feature representation called local feature embedding (LFE). LFE enables generation of a feature vector that captures salient visual properties of an image to address both the fine-grained and coarse-grained aspects of recognizing a visual pattern depicted in the image. Configured to utilize image feature vectors with LFE, the system may implement a nearest class mean (NCM) classifier, as well as a scalable recognition algorithm with metric learning and max-margin template selection. Accordingly, the system may be updated to accommodate new classes with very little added computational cost, enabling it to readily handle open-ended image classification problems.
    Type: Application
    Filed: July 29, 2013
    Publication date: January 29, 2015
    Applicant: Adobe Systems Incorporated
    Inventors: Jianchao Yang, Guang Chen, Jonathan Brandt, Hailin Jin, Elya Shechtman, Aseem Omprakash Agarwala