Patents by Inventor Kundan Shrivastava
Kundan Shrivastava has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20180130496
Abstract: The disclosed embodiments illustrate a method and system for auto-generation of a sketch notes-based visual summary of multimedia content. The method includes determining one or more segments based on one or more transitions in the multimedia content. The method further includes generating a transcript based on audio content associated with each determined segment. The method further includes identifying one or more keywords from the transcript and retrieving a set of images from an image repository based on each of the identified one or more keywords. The method further includes generating a sketch image of each of one or more of the retrieved set of images associated with each of the identified one or more keywords. The method further includes rendering the sketch notes-based visual summary of the multimedia content, generated based on at least the generated one or more sketch images, on a user interface displayed on a display screen of the user-computing device.
Type: Application
Filed: November 8, 2016
Publication date: May 10, 2018
Inventors: Jyotirmaya Mahapatra, Fabin Rasheed, Kundan Shrivastava, Nimmi Rangaswamy
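The abstract describes a pipeline: segment the video at transitions, transcribe each segment, pull keywords, fetch matching images, and sketchify them. Purely as an illustrative sketch (the publication discloses no source code), the stages below are passed in as callables, since the patent does not fix particular transcription, keyword-extraction, or sketch-rendering techniques; the `image_repo` dictionary shape is an assumption.

```python
def visual_summary(segments, transcribe, extract_keywords, image_repo, sketchify):
    """Assemble (keyword, sketch) pairs for a sketch-notes summary.

    segments: list of per-transition media segments.
    transcribe, extract_keywords, sketchify: pluggable stage functions
    (hypothetical stand-ins for the components the abstract names).
    image_repo: {keyword: [image, ...]} -- an assumed repository layout.
    """
    notes = []
    for segment in segments:
        transcript = transcribe(segment)              # audio -> text
        for keyword in extract_keywords(transcript):  # text -> keywords
            for image in image_repo.get(keyword, []): # keyword -> images
                notes.append((keyword, sketchify(image)))
    return notes
```

A renderer would then lay these (keyword, sketch) pairs out on the user interface in segment order.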
-
Publication number: 20180060495
Abstract: The disclosed embodiments illustrate methods and systems to recommend medical intervention activities for caregivers and/or human subjects. The method comprises receipt of medical data of a human subject and contextual data of a plurality of caregivers of the human subject. The medical data includes periodic physiological data of the human subject received in real time. A customized ontology is generated, from a pre-determined ontology related to a medical condition, based on the received medical data and the contextual data. Thereafter, a recommendation of medical intervention activities is generated for at least one of the plurality of caregivers and/or the human subject, based on analysis of the generated customized ontology, caregiver profiles, the received contextual data, and the periodic physiological data. The generated recommendation is transmitted to computing devices of the at least one caregiver and/or the human subject.
Type: Application
Filed: August 29, 2016
Publication date: March 1, 2018
Inventors: Jyotirmaya Mahapatra, Kundan Shrivastava, Nimmi Rangaswamy, Saurabh Srivastava
-
Publication number: 20180025050
Abstract: A method and a system to detect disengagement of a first user viewing ongoing multimedia content on a user-computing device are disclosed. In an embodiment, one or more activity-based contextual cues and one or more behavior-based contextual cues of the first user viewing the ongoing multimedia content on the first user-computing device are received. Further, the disengagement of the first user is detected based on the one or more activity-based contextual cues and/or the one or more behavior-based contextual cues. Based on the detected disengagement of the first user, one or more queries are rendered on a user interface displayed on a display screen of the first user-computing device.
Type: Application
Filed: July 21, 2016
Publication date: January 25, 2018
Inventors: Kuldeep Yadav, Arijit Biswas, Kundan Shrivastava, Om D Deshmukh, Kushal Srivastava, Deepali Jain
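The detection step combines two families of cues (activity-based and behavior-based). A minimal sketch of one such combination rule follows; the patent does not specify a scoring function, so the weighted mean, the cue names, and the 0.6 threshold are all illustrative assumptions.

```python
def is_disengaged(activity_cues, behavior_cues, activity_weight=0.5, threshold=0.6):
    """Detect disengagement from two cue families.

    activity_cues / behavior_cues: {cue_name: score in [0, 1]}, e.g.
    tab switches or gaze-off-screen time (hypothetical cue names).
    Returns True when the weighted mean of the two families crosses
    the (assumed) threshold, which would trigger rendering queries.
    """
    def mean(cues):
        return sum(cues.values()) / len(cues) if cues else 0.0

    score = (activity_weight * mean(activity_cues)
             + (1.0 - activity_weight) * mean(behavior_cues))
    return score >= threshold
```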
-
Publication number: 20180011828
Abstract: The disclosed embodiments illustrate methods for recommending multimedia segments in multimedia content associated with online educational courses for annotation via a user interface. The method includes extracting one or more features associated with the multimedia content, wherein a feature of the one or more features corresponds to at least a requirement of an exemplary instance. The method further includes selecting a set of multimedia segments from one or more multimedia segments in the multimedia content, based on historical data that corresponds to interaction of one or more users with the multimedia content and the extracted one or more features associated with the multimedia content. Further, the method includes recommending the selected set of multimedia segments in the multimedia content through the user interface displayed on the user-computing device associated with a user, wherein the user annotates the recommended set of multimedia segments in the multimedia content.
Type: Application
Filed: July 8, 2016
Publication date: January 11, 2018
Inventors: Kuldeep Yadav, Kundan Shrivastava, Sonal S. Patil, Om D. Deshmukh, Nischal Murthy Piratla
-
Publication number: 20170318013
Abstract: The disclosed embodiments illustrate methods for voice-based user authentication and content evaluation. The method includes receiving a voice input of a user from a user-computing device, wherein the voice input corresponds to a response to a query. The method further includes authenticating the user based on a comparison of a voiceprint of the voice input and a sample voiceprint of the user. Further, the method includes evaluating content of the response of the user based on the authentication and a comparison between text content and a set of pre-defined answers to the query, wherein the text content is determined based on the received voice input.
Type: Application
Filed: April 29, 2016
Publication date: November 2, 2017
Inventors: Shourya Roy, Kundan Shrivastava, Om D Deshmukh
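The two comparisons in this abstract (voiceprint vs. enrolled sample, transcript vs. pre-defined answers) can be sketched as follows. The publication does not disclose a voiceprint representation or threshold; fixed-length voiceprint vectors compared by cosine similarity, the 0.8 threshold, and exact-match answer checking are all assumptions made for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def authenticate_and_evaluate(voiceprint, enrolled_print, transcript,
                              accepted_answers, threshold=0.8):
    """Returns (authenticated, answer_correct).

    voiceprint / enrolled_print: assumed fixed-length feature vectors.
    transcript: text recognized from the voice input.
    Evaluation runs only if authentication succeeds, mirroring the
    abstract's "based on the authentication" ordering.
    """
    authenticated = cosine(voiceprint, enrolled_print) >= threshold
    correct = authenticated and transcript.strip().lower() in {
        a.lower() for a in accepted_answers
    }
    return authenticated, correct
```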
-
Patent number: 9785834
Abstract: According to embodiments illustrated herein, a method and system are provided for indexing multimedia content. The method includes extracting, by one or more processors, a set of frames from the multimedia content, wherein the set of frames comprises at least one of a human object and an inanimate object. Thereafter, body language information pertaining to the human object is determined from the set of frames by utilizing one or more image processing techniques. Further, interaction information is determined from the set of frames. The interaction information is indicative of an action performed by the human object on the inanimate object. Thereafter, the multimedia content is indexed in a content database based at least on the body language information and the interaction information.
Type: Grant
Filed: July 14, 2015
Date of Patent: October 10, 2017
Assignee: VIDEOKEN, INC.
Inventors: Arijit Biswas, Harish Arsikere, Kundan Shrivastava, Om D Deshmukh
-
Publication number: 20170017838
Abstract: According to embodiments illustrated herein, a method and system are provided for indexing multimedia content. The method includes extracting, by one or more processors, a set of frames from the multimedia content, wherein the set of frames comprises at least one of a human object and an inanimate object. Thereafter, body language information pertaining to the human object is determined from the set of frames by utilizing one or more image processing techniques. Further, interaction information is determined from the set of frames. The interaction information is indicative of an action performed by the human object on the inanimate object. Thereafter, the multimedia content is indexed in a content database based at least on the body language information and the interaction information.
Type: Application
Filed: July 14, 2015
Publication date: January 19, 2017
Inventors: Arijit Biswas, Harish Arsikere, Kundan Shrivastava, Om D. Deshmukh
-
Publication number: 20170017861
Abstract: The disclosed embodiments relate to a method for content recommendation. The method includes determining, by one or more processors, one or more features of a segment of a first content being accessed during a presentation of the first content on a user-computing device. The segment of the first content is accessed for a predetermined number of times. The method further includes extracting, for a feature from the one or more features, a second content based on the feature, wherein the second content is recommended through the user-computing device.
Type: Application
Filed: July 17, 2015
Publication date: January 19, 2017
Inventors: Sonal S. Patil, Kundan Shrivastava, Om D. Deshmukh
-
Patent number: 9484032
Abstract: The disclosed embodiments illustrate methods and systems for processing multimedia content. The method includes extracting one or more words from an audio stream associated with multimedia content. Each word has associated one or more timestamps indicative of temporal occurrences of said word in said multimedia content. The method further includes creating a word cloud of said one or more words in said multimedia content based on a measure of emphasis laid on each word in said multimedia content and said one or more timestamps associated with said one or more words. The method further includes presenting one or more multimedia snippets, of said multimedia content, associated with a word selected by a user from said word cloud. Each of said one or more multimedia snippets corresponds to said one or more timestamps associated with occurrences of said word in said multimedia content.
Type: Grant
Filed: October 27, 2014
Date of Patent: November 1, 2016
Assignee: Xerox Corporation
Inventors: Kuldeep Yadav, Kundan Shrivastava, Om D Deshmukh
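The core data flow here (timestamped words, an emphasis measure, and snippet lookup for a clicked word) can be sketched as below. The patent does not disclose its emphasis measure or weighting, so the per-word triples, the frequency-times-mean-emphasis weight, and the 5-second snippet window are illustrative assumptions.

```python
from collections import defaultdict

def build_word_cloud(transcript):
    """transcript: list of (word, timestamp_sec, emphasis) triples
    (an assumed representation of the recognizer output).
    Returns {word: {"weight": ..., "timestamps": [...]}} where weight
    drives the word's size in the cloud."""
    occurrences = defaultdict(list)
    for word, ts, emph in transcript:
        occurrences[word.lower()].append((ts, emph))
    cloud = {}
    for word, occs in occurrences.items():
        mean_emph = sum(e for _, e in occs) / len(occs)
        cloud[word] = {
            "weight": len(occs) * mean_emph,     # frequency x emphasis
            "timestamps": [t for t, _ in occs],  # snippet anchor points
        }
    return cloud

def snippets_for(cloud, word, window=5.0):
    """(start, end) snippet intervals around each occurrence of `word`."""
    entry = cloud.get(word.lower())
    if entry is None:
        return []
    return [(max(0.0, t - window), t + window) for t in entry["timestamps"]]
```

Selecting a word in the rendered cloud would then play the intervals `snippets_for` returns.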
-
Publication number: 20160307563
Abstract: The disclosed embodiments illustrate methods and systems for detecting plagiarism in a conversation. The method includes receiving first input corresponding to a query from a first user in said conversation. The first input corresponds to at least a first audio signal received from said first user. The method includes receiving second input corresponding to one or more responses received from a second user in response to said query. The second input corresponds to at least a second audio signal received from said second user. Thereafter, the method includes determining a first score for one or more websites, based on a comparison between said one or more responses and content obtained from said one or more websites in response to said query. The first score is a measure of a similarity between said one or more responses and said content. The method is performed by one or more microprocessors.
Type: Application
Filed: April 15, 2015
Publication date: October 20, 2016
Inventors: Kundan Shrivastava, Om D. Deshmukh, Geetha Manjunath
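The per-website first score is "a measure of a similarity" between the spoken responses and retrieved page content; the publication does not fix the measure, so the sketch below uses Jaccard word-set overlap as a stand-in, and the dictionary of retrieved page text is an assumed input shape.

```python
def similarity_score(response, page_text):
    """Jaccard overlap between word sets -- an illustrative stand-in
    for the patent's unspecified similarity measure."""
    a = set(response.lower().split())
    b = set(page_text.lower().split())
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def rank_sources(responses, site_contents):
    """site_contents: {url: text retrieved for the query}.
    Scores each site by its best match against any response and
    returns (url, score) pairs, highest first."""
    scores = {
        url: max(similarity_score(r, text) for r in responses)
        for url, text in site_contents.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

A high top score would flag the response as likely read from that site.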
-
Publication number: 20160118060
Abstract: The disclosed embodiments illustrate methods and systems for processing multimedia content. The method includes extracting one or more words from an audio stream associated with multimedia content. Each word has associated one or more timestamps indicative of temporal occurrences of said word in said multimedia content. The method further includes creating a word cloud of said one or more words in said multimedia content based on a measure of emphasis laid on each word in said multimedia content and said one or more timestamps associated with said one or more words. The method further includes presenting one or more multimedia snippets, of said multimedia content, associated with a word selected by a user from said word cloud. Each of said one or more multimedia snippets corresponds to said one or more timestamps associated with occurrences of said word in said multimedia content.
Type: Application
Filed: October 27, 2014
Publication date: April 28, 2016
Inventors: Kuldeep Yadav, Kundan Shrivastava, Om D. Deshmukh
-
Patent number: 8892444
Abstract: Methods and arrangements for improving quality of content in voice applications. A specification is provided for acceptable content for a voice application, and user-generated audio content for the voice application is inputted. At least one test is applied to the user-generated audio content, and it is thereupon determined whether the user-generated audio content meets the provided specification.
Type: Grant
Filed: July 27, 2011
Date of Patent: November 18, 2014
Assignee: International Business Machines Corporation
Inventors: Nitendra Rajput, Kundan Shrivastava
-
Patent number: 8892445
Abstract: Methods and arrangements for improving quality of content in voice applications. A specification is provided for acceptable content for a voice application, and user-generated audio content for the voice application is inputted. At least one test is applied to the user-generated audio content, and it is thereupon determined whether the user-generated audio content meets the provided specification.
Type: Grant
Filed: August 29, 2012
Date of Patent: November 18, 2014
Assignee: International Business Machines Corporation
Inventors: Nitendra Rajput, Kundan Shrivastava
-
Patent number: 8831955
Abstract: Methods and arrangements for facilitating tangible interactions in voice applications. At least two tangible objects are provided, along with a measurement interface. The at least two tangible objects are disposed to each be displaceable with respect to one another and with respect to the measurement interface. The measurement interface is communicatively connected with a voice application. At least one of the two tangible objects is displaced with respect to the measurement interface, and the displacement of at least one of the at least two tangible objects is converted to input for the voice application.
Type: Grant
Filed: August 31, 2011
Date of Patent: September 9, 2014
Assignee: International Business Machines Corporation
Inventors: Nitendra Rajput, Shrey Sahay, Saurabh Srivastava, Kundan Shrivastava
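The conversion step (physical displacement to voice-application input) amounts to quantizing a measured position into a value the application can consume. The patent does not specify ranges or resolution, so the 30 cm travel and 10 discrete levels below are illustrative assumptions.

```python
def displacement_to_input(position_cm, min_cm=0.0, max_cm=30.0, levels=10):
    """Quantize a tangible object's displacement along the measurement
    interface into a discrete level (0 .. levels-1) for the voice
    application. Range and level count are assumed, not from the patent.
    """
    clamped = max(min_cm, min(max_cm, position_cm))
    fraction = (clamped - min_cm) / (max_cm - min_cm)
    return min(levels - 1, int(fraction * levels))
```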
-
Patent number: 8819012
Abstract: A method, an apparatus and an article of manufacture for accessing a specific location in voice site audio content. The method includes indexing, in a voice site index, a specific location in the voice site that contains the audio content, mapping the audio content with information regarding the location and adding the mapped content to the index of the voice site, using the index to determine content and location of an input query in the voice site, automatically marking the specific location in the voice site that contains the determined content and location of the input query, and automatically transferring to the marked location in the voice site.
Type: Grant
Filed: August 30, 2011
Date of Patent: August 26, 2014
Assignee: International Business Machines Corporation
Inventors: Nitendra Rajput, Kundan Shrivastava
-
Patent number: 8731930
Abstract: A method for contextual voice query dilation in a Spoken Web search includes determining a context in which a voice query is created, generating a set of multiple voice query terms based on the context and information derived by a speech recognizer component pertaining to the voice query, and processing the set of query terms with at least one dilation operator to produce a dilated set of queries. A method for performing a search on a voice query is also provided, including generating a set of multiple query terms based on information derived by a speech recognizer component processing a voice query, processing the set with multiple dilation operators to produce multiple dilated sub-sets of query terms, selecting at least one query term from each dilated sub-set to compose a query set, and performing a search on the query set.
Type: Grant
Filed: August 8, 2012
Date of Patent: May 20, 2014
Assignee: International Business Machines Corporation
Inventors: Nitendra Rajput, Kundan Shrivastava
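The dilation flow (apply each operator to the recognizer's query terms, then pick one term per dilated sub-set to compose candidate queries) can be sketched as below. The patent does not enumerate its dilation operators; the lowercasing and plural-stripping operators in the usage example are hypothetical stand-ins.

```python
import itertools

def dilate(terms, operators):
    """Apply each dilation operator to every query term, producing one
    dilated (deduplicated, sorted) sub-set of terms per operator."""
    return [sorted({op(t) for t in terms}) for op in operators]

def compose_query_set(dilated_subsets):
    """Select one term from each dilated sub-set, in every combination,
    to compose the set of candidate queries to search."""
    return [" ".join(combo) for combo in itertools.product(*dilated_subsets)]
```

For example, with a lowercasing operator and a plural-stripping operator (both assumptions), `dilate(["Crops", "crop"], [str.lower, lambda t: t.rstrip("s").lower()])` yields two sub-sets whose cross-product forms the query set.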
-
Patent number: 8719025
Abstract: An apparatus and an article of manufacture for contextual voice query dilation in a Spoken Web search include determining a context in which a voice query is created, generating a set of multiple voice query terms based on the context and information derived by a speech recognizer component pertaining to the voice query, and processing the set of query terms with at least one dilation operator to produce a dilated set of queries.
Type: Grant
Filed: May 14, 2012
Date of Patent: May 6, 2014
Assignee: International Business Machines Corporation
Inventors: Nitendra Rajput, Kundan Shrivastava
-
Publication number: 20130304468
Abstract: A method for contextual voice query dilation in a Spoken Web search includes determining a context in which a voice query is created, generating a set of multiple voice query terms based on the context and information derived by a speech recognizer component pertaining to the voice query, and processing the set of query terms with at least one dilation operator to produce a dilated set of queries. A method for performing a search on a voice query is also provided, including generating a set of multiple query terms based on information derived by a speech recognizer component processing a voice query, processing the set with multiple dilation operators to produce multiple dilated sub-sets of query terms, selecting at least one query term from each dilated sub-set to compose a query set, and performing a search on the query set.
Type: Application
Filed: August 8, 2012
Publication date: November 14, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Nitendra Rajput, Kundan Shrivastava
-
Publication number: 20130304471
Abstract: A method, an apparatus and an article of manufacture for contextual voice query dilation in a Spoken Web search. The method includes determining a context in which a voice query is created, generating a set of multiple voice query terms based on the context and information derived by a speech recognizer component pertaining to the voice query, and processing the set of query terms with at least one dilation operator to produce a dilated set of queries. A method for performing a search on a voice query is provided, including generating a set of multiple query terms based on information derived by a speech recognizer component processing a voice query, processing the set with multiple dilation operators to produce multiple dilated sub-sets of query terms, selecting at least one query term from each dilated sub-set to compose a query set, and performing a search on the query set.
Type: Application
Filed: May 14, 2012
Publication date: November 14, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Nitendra Rajput, Kundan Shrivastava
-
Publication number: 20130054247
Abstract: Methods and arrangements for facilitating tangible interactions in voice applications. At least two tangible objects are provided, along with a measurement interface. The at least two tangible objects are disposed to each be displaceable with respect to one another and with respect to the measurement interface. The measurement interface is communicatively connected with a voice application. At least one of the two tangible objects is displaced with respect to the measurement interface, and the displacement of at least one of the at least two tangible objects is converted to input for the voice application.
Type: Application
Filed: August 31, 2011
Publication date: February 28, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Nitendra Rajput, Shrey Sahay, Saurabh Srivastava, Kundan Shrivastava