Patents by Inventor Om D. Deshmukh

Om D. Deshmukh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10404806
    Abstract: A method and a system are provided for segmenting a multimedia content. The method estimates a count of a plurality of multimedia segments in the multimedia content, and a duration of each of the plurality of multimedia segments in the multimedia content. The method determines a cost function associated with a multimedia segment from the plurality of multimedia segments, based on the count of the plurality of multimedia segments, and the duration of each of the plurality of multimedia segments. The method further determines an updated count of the plurality of multimedia segments, and an updated duration of each of the plurality of multimedia segments until the cost function satisfies a pre-defined criterion. Based on the updated count of the plurality of multimedia segments, and the updated duration of each of the plurality of multimedia segments, the method segments the multimedia content into the plurality of multimedia segments.
    Type: Grant
    Filed: September 1, 2015
    Date of Patent: September 3, 2019
    Inventors: Arijit Biswas, Ankit Gandhi, Ranjeet Kumar, Om D Deshmukh
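A minimal sketch of the iterative loop this abstract describes: start from an estimated segment count and per-segment durations, evaluate a cost function, and update the durations until the cost satisfies a pre-defined criterion. The quadratic cost and the averaging update are illustrative assumptions, not the patented formulation.

```python
def segment_content(total_duration, durations, threshold=1e-6, max_iters=100):
    """durations: initial per-segment duration estimates (seconds)."""
    count = len(durations)
    for _ in range(max_iters):
        target = total_duration / count
        # Toy cost: squared deviation of each segment from the mean length.
        cost = sum((d - target) ** 2 for d in durations)
        if cost <= threshold:  # pre-defined criterion satisfied
            break
        # Update step: move each duration halfway toward the mean length;
        # note this preserves the total duration at every iteration.
        durations = [(d + target) / 2 for d in durations]
    return count, durations
```

The update halves every deviation per iteration, so the cost shrinks geometrically and the loop terminates well inside the iteration cap.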
  • Patent number: 10296533
    Abstract: The disclosed embodiments illustrate methods of generation of a table of content by processing multimedia content. The method includes identifying a set of key-phrases from the multimedia content based on one or more external data sources. The method further includes determining one or more segments of the multimedia content, based on the identified set of key-phrases, wherein a segment of the determined one or more segments comprises a subset of key-phrases from the set of key-phrases. The method further includes selecting at least a key-phrase from the subset of key-phrases of each of the corresponding one or more segments. The method further includes generating the table of content based on the selected key-phrase from each of the one or more segments, wherein the selected key-phrase from each of the one or more segments in the generated table of content is utilized to navigate through the multimedia content.
    Type: Grant
    Filed: July 7, 2016
    Date of Patent: May 21, 2019
    Inventors: Sanket Sanjay Barhate, Sahil Loomba, Ankit Gandhi, Arijit Biswas, Sumit Negi, Om D Deshmukh
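A hypothetical sketch of the table-of-content flow in this abstract: given segments (each with a start time and its subset of key-phrases), one representative key-phrase per segment becomes a navigable entry. Selecting the key-phrase with the highest global frequency is an assumption for illustration.

```python
def build_toc(segments, phrase_counts):
    """segments: list of (start_time, [key_phrases]) tuples.
    phrase_counts: how often each key-phrase occurs in the whole content."""
    toc = []
    for start, phrases in segments:
        # Pick the segment's most salient key-phrase (here: most frequent).
        best = max(phrases, key=lambda p: phrase_counts[p])
        toc.append((best, start))  # selecting the entry seeks to `start`
    return toc
```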
  • Patent number: 10127824
    Abstract: Features are extracted from visual and audio modalities of a video to infer the location of figures/tables/equations/graphs/flow-charts determined as video anchor points which are highlighted on the video timeline to enable quick navigation and provide a quick summary of the video. A voice-based mechanism navigates to a point-of-interest in the video. In bandwidth-constrained settings, videos are often played at a very low resolution (quality), and users often need to increase the video resolution manually to understand content presented in the figures. Using the automatic identification of these anchor points, the resolution can be changed dynamically while streaming a video, which provides a better viewing experience.
    Type: Grant
    Filed: April 1, 2016
    Date of Patent: November 13, 2018
    Inventors: Kuldeep Yadav, Arijit Biswas, Ankit Gandhi, Sumit Negi, Om D. Deshmukh
  • Patent number: 10056083
    Abstract: The disclosed embodiments illustrate a method and system of processing multimedia content to generate a text transcript. The method includes segmenting each of a set of text frames to determine spatial regions. The method further includes extracting one or more keywords from each of the determined spatial regions. The method further includes determining a first set of keywords from the extracted one or more keywords based on filtering of one or more off-topic keywords from the extracted one or more keywords. The method further includes extracting a second set of keywords based on the determined first set of keywords. The method further includes generating a graph between each of the first set of keywords and one or more of the second set of keywords. The method further includes dynamically generating the text transcript of audio content in the multimedia content based on the generated graph.
    Type: Grant
    Filed: October 18, 2016
    Date of Patent: August 21, 2018
    Inventors: Sumit Negi, Sonal S Patil, Arijit Biswas, Ankit Gandhi, Om D Deshmukh
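A minimal sketch, under stated assumptions, of the keyword graph this abstract describes: keywords extracted from frames (the first set) are linked to related keywords (the second set), and the resulting graph can later bias transcription of the audio. The `related` lookup and the filtering rule are hypothetical.

```python
def build_keyword_graph(first_set, second_set, related):
    """related: dict mapping a first-set keyword to candidate related words.

    Returns an adjacency mapping that keeps only edges whose endpoint is in
    the second set (illustrative stand-in for the patented graph step)."""
    graph = {}
    for kw in first_set:
        graph[kw] = [r for r in related.get(kw, []) if r in second_set]
    return graph
```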
  • Publication number: 20180144424
    Abstract: The disclosed embodiments illustrate methods and systems for querying multiple healthcare-related knowledge graphs. The method includes retrieving a set of healthcare-related response sub-graphs from a plurality of healthcare-related knowledge graphs based on a keyword-based query. The method further includes generating a first set of healthcare-related ranked sub-graphs corresponding to the plurality of healthcare-related knowledge graphs. The method further includes generating a set of healthcare-related connected sub-graphs, based on at least healthcare-related ranked sub-graphs in the plurality of healthcare-related knowledge graphs. The method further includes generating a second set of healthcare-related ranked sub-graphs based on at least a ranking of healthcare-related connected sub-graphs in the set of healthcare-related connected sub-graphs.
    Type: Application
    Filed: November 18, 2016
    Publication date: May 24, 2018
    Inventors: Archana Sahu, Kaushik Baruah, Sumit Negi, Om D. Deshmukh
  • Publication number: 20180108354
    Abstract: The disclosed embodiments illustrate a method and system of processing multimedia content to generate a text transcript. The method includes segmenting each of a set of text frames to determine spatial regions. The method further includes extracting one or more keywords from each of the determined spatial regions. The method further includes determining a first set of keywords from the extracted one or more keywords based on filtering of one or more off-topic keywords from the extracted one or more keywords. The method further includes extracting a second set of keywords based on the determined first set of keywords. The method further includes generating a graph between each of the first set of keywords and one or more of the second set of keywords. The method further includes dynamically generating the text transcript of audio content in the multimedia content based on the generated graph.
    Type: Application
    Filed: October 18, 2016
    Publication date: April 19, 2018
    Inventors: Sumit Negi, Sonal S. Patil, Arijit Biswas, Ankit Gandhi, Om D. Deshmukh
  • Patent number: 9934449
    Abstract: A method for detecting one or more topic transitions in a multimedia content includes identifying one or more frames from a plurality of frames of the multimedia content based on a comparison between one or more content items in a first frame of the plurality of frames, and the one or more content items in a first set of frames of the plurality of frames. The method further includes determining at least a first score, and a second score for each of the one or more frames. Additionally, the method includes determining a likelihood for each of the one or more frames based at least on the first score, and the second score, wherein the likelihood is indicative of a topic transition among the one or more frames.
    Type: Grant
    Filed: February 4, 2016
    Date of Patent: April 3, 2018
    Assignee: VIDEOKEN, INC.
    Inventors: Ankit Gandhi, Arijit Biswas, Om D Deshmukh
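A hypothetical sketch of the scoring step in this abstract: each candidate frame carries a first score and a second score, which are combined into a transition likelihood. The logistic combination and its weights are assumptions, not the patented model.

```python
import math

def transition_likelihood(first_score, second_score, w1=1.0, w2=1.0, bias=-1.0):
    """Combine two per-frame scores into a topic-transition likelihood."""
    z = w1 * first_score + w2 * second_score + bias
    return 1.0 / (1.0 + math.exp(-z))  # higher => more likely a transition
```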
  • Publication number: 20180060984
    Abstract: A method and a system are provided for content processing to determine pre-requisite subject matters for subject matters in multimedia content. The method determines a set of pre-requisite concepts and a set of outcome concepts for each of a set of multimedia content of a course. The method determines a concept coverage score based on at least the determined set of pre-requisite concepts and the determined set of outcome concepts. The method further determines a relevance score of each pre-requisite concept that corresponds to the set of pre-requisite concepts. The method further determines a weighted score for each of the set of multimedia content based on the determined concept coverage score and the determined relevance score of one or more of the set of pre-requisite concepts. Further, the method determines a set of pre-requisite subject matters for the subject matters based on at least the determined weighted score.
    Type: Application
    Filed: August 30, 2016
    Publication date: March 1, 2018
    Inventors: Ankit Gandhi, Arijit Biswas, Om D. Deshmukh, Sahil Loomba
  • Publication number: 20180048943
    Abstract: The disclosed embodiments illustrate a method for rendering time-compressed multimedia content on a user-computing device. The method includes determining metadata for one or more frames in multimedia content based on each of one or more time-compression factors and one or more attributes of the multimedia content. Further, the determined metadata comprises a binary value associated with each of the one or more frames of the multimedia content. The method further includes transmitting the multimedia content associated with the determined metadata to the user-computing device, based at least on a time-compression factor in a user request received from the user-computing device. Further, the transmitted multimedia content is rendered on the user-computing device as the time-compressed multimedia content.
    Type: Application
    Filed: August 11, 2016
    Publication date: February 15, 2018
    Inventors: Vinay Melkote, Om D. Deshmukh, Sumit Negi, Sonal S. Patil, Ankita Patil
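A minimal sketch of the per-frame binary metadata this abstract describes: each frame gets a 1 or 0 indicating whether it survives time compression at a given factor. Uniformly dropping frames to hit the factor is an illustrative assumption; the patent also conditions the metadata on content attributes.

```python
def frame_keep_mask(n_frames, compression_factor):
    """Return a binary value per frame; compression_factor 2.0 keeps ~half."""
    mask = []
    kept = 0
    for i in range(n_frames):
        # Keep frame i only if it advances the compressed timeline.
        target = int((i + 1) / compression_factor)
        mask.append(1 if target > kept else 0)
        kept += mask[-1]
    return mask
```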
  • Publication number: 20180039629
    Abstract: The disclosed embodiments illustrate methods and systems of multimedia content processing for automatic ranking of online multimedia items. The method includes selecting one or more multimedia items based on a user-request. For a multimedia item from the one or more selected multimedia items, the method includes determining a plurality of features from the multimedia item based on audio content, text content and visual content associated with the multimedia item. The method further includes classifying the plurality of features into a plurality of speaking style categories. The method further includes determining an engagement score for the multimedia item based on the classified plurality of features and a weight associated with each of the plurality of speaking style categories. The method further includes ranking the one or more multimedia items based on at least the determined engagement score associated with each of the one or more multimedia items.
    Type: Application
    Filed: August 3, 2016
    Publication date: February 8, 2018
    Inventors: Harish Arsikere, Sonal S. Patil, Om D. Deshmukh
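A hypothetical sketch of the engagement scoring and ranking in this abstract: per-category scores are combined with per-category weights, and items are ranked by the result. The category names and weights are illustrative assumptions.

```python
# Assumed speaking-style categories and weights (not from the patent).
STYLE_WEIGHTS = {"liveliness": 0.4, "clarity": 0.35, "fluency": 0.25}

def engagement_score(category_scores):
    """Weighted sum over speaking-style category scores."""
    return sum(STYLE_WEIGHTS[c] * s for c, s in category_scores.items())

def rank_items(items):
    """items: dict name -> category_scores; returns names, most engaging first."""
    return sorted(items, key=lambda n: engagement_score(items[n]), reverse=True)
```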
  • Publication number: 20180039637
    Abstract: The disclosed embodiments illustrate methods and systems for multimedia processing to identify concepts in multimedia content. The method includes receiving the multimedia content and at least one annotation of the multimedia content at a computing device from another computing device. The received at least one annotation includes a plurality of keywords that is representative of at least a plurality of concepts in the received multimedia content. The method further includes extracting a plurality of features from the received multimedia content by performing a statistical analysis of the multimedia content, based on the plurality of keywords in the at least one annotation. The method further includes identifying the plurality of concepts in a set of frames of the multimedia content by use of one or more classifiers. The one or more classifiers are trained based on at least the extracted plurality of features.
    Type: Application
    Filed: August 2, 2016
    Publication date: February 8, 2018
    Inventors: Ankit Gandhi, Arijit Biswas, Om D. Deshmukh, Sohil Shah, Kuldeep Kulkarni
  • Publication number: 20180025050
    Abstract: A method and a system to detect disengagement of a first user viewing an ongoing multimedia content on a user-computing device are disclosed. In an embodiment, one or more activity-based contextual cues and one or more behavior-based contextual cues of the first user viewing the ongoing multimedia content on the first user-computing device are received. Further, the disengagement of the first user is detected based on the one or more activity-based contextual cues and/or the one or more behavior-based contextual cues. Based on the detected disengagement of the first user, one or more queries are rendered on a user interface displayed on a display screen of the first user-computing device.
    Type: Application
    Filed: July 21, 2016
    Publication date: January 25, 2018
    Inventors: Kuldeep Yadav, Arijit Biswas, Kundan Shrivastava, Om D Deshmukh, Kushal Srivastava, Deepali Jain
  • Publication number: 20180011828
    Abstract: The disclosed embodiments illustrate methods for recommending multimedia segments in multimedia content associated with online educational courses for annotation via a user interface. The method includes extracting one or more features associated with the multimedia content, wherein a feature of the one or more features corresponds to at least a requirement of an exemplary instance. The method further includes selecting a set of multimedia segments from one or more multimedia segments in the multimedia content, based on historical data that corresponds to interaction of one or more users with the multimedia content and the extracted one or more features associated with the multimedia content. Further, the method includes recommending the selected set of multimedia segments in the multimedia content through the user interface displayed on the user-computing device associated with a user, wherein the user annotates the recommended set of multimedia segments in the multimedia content.
    Type: Application
    Filed: July 8, 2016
    Publication date: January 11, 2018
    Inventors: Kuldeep Yadav, Kundan Shrivastava, Sonal S. Patil, Om D. Deshmukh, Nischal Murthy Piratla
  • Publication number: 20180011860
    Abstract: The disclosed embodiments illustrate methods of generation of a table of content by processing multimedia content. The method includes identifying a set of key-phrases from the multimedia content based on one or more external data sources. The method further includes determining one or more segments of the multimedia content, based on the identified set of key-phrases, wherein a segment of the determined one or more segments comprises a subset of key-phrases from the set of key-phrases. The method further includes selecting at least a key-phrase from the subset of key-phrases of each of the corresponding one or more segments. The method further includes generating the table of content based on the selected key-phrase from each of the one or more segments, wherein the selected key-phrase from each of the one or more segments in the generated table of content is utilized to navigate through the multimedia content.
    Type: Application
    Filed: July 7, 2016
    Publication date: January 11, 2018
    Inventors: Sanket Sanjay Barhate, Sahil Loomba, Ankit Gandhi, Arijit Biswas, Sumit Negi, Om D. Deshmukh
  • Publication number: 20170358273
    Abstract: An image display system and method dynamically adjusts a resolution of a streamed image corresponding to determined visual saliency scores of the streamed image. A viewer display, a resolution adaptation engine and a visual saliency score calculation engine are included.
    Type: Application
    Filed: June 10, 2016
    Publication date: December 14, 2017
    Applicant: YEN4KEN, INC.
    Inventors: Sumit Negi, Kuldeep Yadav, Om D. Deshmukh
  • Patent number: 9830516
    Abstract: Embodiments disclose methods, systems and non-transitory computer readable medium for joint temporal segmentation and classification of user activities in an egocentric video.
    Type: Grant
    Filed: July 7, 2016
    Date of Patent: November 28, 2017
    Assignee: VideoKen, Inc.
    Inventors: Sovan Biswas, Ankit Gandhi, Arijit Biswas, Om D Deshmukh
  • Publication number: 20170318013
    Abstract: The disclosed embodiments illustrate methods for voice-based user authentication and content evaluation. The method includes receiving a voice input of a user from a user-computing device, wherein the voice input corresponds to a response to a query. The method further includes authenticating the user based on a comparison of a voiceprint of the voice input and a sample voiceprint of the user. Further, the method includes evaluating content of the response of the user based on the authentication and a comparison between text content and a set of pre-defined answers to the query, wherein the text content is determined based on the received voice input.
    Type: Application
    Filed: April 29, 2016
    Publication date: November 2, 2017
    Inventors: Shourya Roy, Kundan Shrivastava, Om D Deshmukh
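A minimal sketch of the authentication step this abstract describes: a voiceprint extracted from the input is compared against the user's enrolled sample voiceprint. Representing voiceprints as vectors and comparing them by cosine similarity against a threshold is an assumed comparison rule, not the patented one.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length voiceprint vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def authenticate(voiceprint, enrolled, threshold=0.8):
    """Accept the user if the voiceprints are sufficiently similar."""
    return cosine(voiceprint, enrolled) >= threshold
```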
  • Publication number: 20170300752
    Abstract: A method and a system are provided for creating a summarized multimedia content. The method extracts one or more frames from a plurality of frames in a multimedia content based on a measure of area occupied by a text content in a portion of each of the plurality of frames. The method selects one or more sentences from an audio content associated with the multimedia content based on at least a weight associated with a plurality of words present in the audio content. The method extracts one or more audio segments from the audio content associated with the multimedia content based on one or more parameters associated with the audio content. The method creates the summarized multimedia content based on the one or more frames, the one or more sentences, and the one or more audio segments.
    Type: Application
    Filed: April 18, 2016
    Publication date: October 19, 2017
    Inventors: Arijit Biswas, Harish Arsikere, Pramod Sankar Kompalli, Kuldeep Yadav, Jagadeesh Chandra Bose Rantham Prabhakara, Kovendhan Ponnavaikko, Om D. Deshmukh, Mohana Prasad Sathya Moorthy
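A hypothetical sketch of the sentence-selection step in this abstract: sentences from the audio content are scored by the summed weight of their words, and the top-scoring ones contribute to the summary. The word-weighting scheme and top-k cutoff are illustrative assumptions.

```python
def select_sentences(sentences, word_weights, k=2):
    """Keep the k sentences whose words carry the most total weight."""
    def score(sentence):
        return sum(word_weights.get(w.lower(), 0.0) for w in sentence.split())
    return sorted(sentences, key=score, reverse=True)[:k]
```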
  • Patent number: 9785834
    Abstract: According to embodiments illustrated herein, a method and system are provided for indexing a multimedia content. The method includes extracting, by one or more processors, a set of frames from the multimedia content, wherein the set of frames comprises at least one of a human object and an inanimate object. Thereafter, body language information pertaining to the human object is determined from the set of frames by utilizing one or more image processing techniques. Further, interaction information is determined from the set of frames. The interaction information is indicative of an action performed by the human object on the inanimate object. Thereafter, the multimedia content is indexed in a content database based at least on the body language information and the interaction information.
    Type: Grant
    Filed: July 14, 2015
    Date of Patent: October 10, 2017
    Assignee: VIDEOKEN, INC.
    Inventors: Arijit Biswas, Harish Arsikere, Kundan Shrivastava, Om D Deshmukh
  • Publication number: 20170287346
    Abstract: Features are extracted from visual and audio modalities of a video to infer the location of figures/tables/equations/graphs/flow-charts determined as video anchor points which are highlighted on the video timeline to enable quick navigation and provide a quick summary of the video. A voice-based mechanism navigates to a point-of-interest in the video. In bandwidth-constrained settings, videos are often played at a very low resolution (quality), and users often need to increase the video resolution manually to understand content presented in the figures. Using the automatic identification of these anchor points, the resolution can be changed dynamically while streaming a video, which provides a better viewing experience.
    Type: Application
    Filed: April 1, 2016
    Publication date: October 5, 2017
    Applicant: YEN4KEN INC.
    Inventors: Kuldeep Yadav, Arijit Biswas, Ankit Gandhi, Sumit Negi, Om D. Deshmukh