Patents by Inventor Manjunath Ramachandra

Manjunath Ramachandra has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10140259
    Abstract: Disclosed herein is a method and system for dynamically generating a multimedia content file. The method comprises receiving, by a multimedia content generator, a description of an event from a user. The method comprises identifying one or more keywords from the description of the event. Further, the method comprises mapping the one or more identified keywords with one or more images, related to one or more objects, stored in a content database for generating one or more scenes related to the description. An initial-level multimedia content file is generated by composing the one or more scenes. Furthermore, the method comprises receiving one or more inputs on the initial-level multimedia content file from the user. Finally, a final-level multimedia content file is generated based on the one or more inputs received on the initial-level multimedia content file.
    Type: Grant
    Filed: June 10, 2016
    Date of Patent: November 27, 2018
    Assignee: WIPRO LIMITED
    Inventors: Manjunath Ramachandra Iyer, Sawani Bade, Jijith Nadumuri Ravi
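    A minimal Python sketch of the keyword-to-scene pipeline summarized in the abstract above. The toy content database, the stopword-based keyword extraction, and the edit format are all assumptions for illustration; the patent does not disclose concrete data structures or matching logic.

      # Hypothetical content database: object keyword -> image asset identifier.
      CONTENT_DB = {
          "dog": "img_dog_001.png",
          "park": "img_park_scene.png",
          "ball": "img_ball_red.png",
      }

      STOPWORDS = {"a", "the", "in", "with", "is", "was"}

      def extract_keywords(description: str) -> list[str]:
          """Identify candidate keywords from the event description."""
          return [w for w in description.lower().split() if w not in STOPWORDS]

      def compose_scenes(keywords: list[str]) -> list[str]:
          """Map keywords to stored images to form an initial-level scene list."""
          return [CONTENT_DB[k] for k in keywords if k in CONTENT_DB]

      def apply_user_inputs(scenes: list[str], edits: dict[int, str]) -> list[str]:
          """Refine the initial-level content with user inputs (index -> new image)."""
          return [edits.get(i, img) for i, img in enumerate(scenes)]

      initial = compose_scenes(extract_keywords("A dog in the park with a ball"))
      final = apply_user_inputs(initial, {1: "img_park_night.png"})
      print(initial, final, sep="\n")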
  • Publication number: 20180336417
    Abstract: The disclosed subject matter relates to paraphrasing multimedia content, including a method and system for generating a contextual summary of multimedia content. A contextual summary generator retrieves the multimedia content, comprising one or more scenes, from a multimedia content database and generates, for each scene, a scene descriptor describing that scene. Further, an emotion factor is identified in each scene based on each scene descriptor, each speech descriptor, and each textual descriptor associated with each of the one or more scenes. Upon identifying the emotion factor, a context descriptor indicating the context of each scene is generated based on analysis of each emotion factor and the non-speech descriptors.
    Type: Application
    Filed: June 30, 2017
    Publication date: November 22, 2018
    Inventors: Adrita BARARI, Manjunath RAMACHANDRA, Ghulam MOHIUDDIN KHAN
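    The per-scene flow (descriptors, emotion factor, context descriptor) could look roughly like the sketch below. The emotion lexicon and the first-match scoring are invented stand-ins for whatever models the application actually uses.

      from dataclasses import dataclass

      @dataclass
      class Scene:
          scene_desc: str    # what is visible
          speech_desc: str   # transcript of dialogue
          text_desc: str     # on-screen text, captions

      # Hypothetical emotion lexicon used to score descriptors.
      EMOTION_WORDS = {"crying": "sad", "smiling": "happy", "shouting": "angry"}

      def emotion_factor(scene: Scene) -> str:
          """Pick a dominant emotion from the three descriptors of the scene."""
          for desc in (scene.scene_desc, scene.speech_desc, scene.text_desc):
              for word, emotion in EMOTION_WORDS.items():
                  if word in desc.lower():
                      return emotion
          return "neutral"

      def context_descriptor(scene: Scene) -> str:
          """Combine the emotion factor with non-speech content into a context tag."""
          return f"{emotion_factor(scene)}: {scene.scene_desc}"

      scenes = [Scene("a child smiling at a fair", "let's ride again", "Day 1"),
                Scene("a man crying alone", "", "The End")]
      print(" | ".join(context_descriptor(s) for s in scenes))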
  • Publication number: 20180285062
    Abstract: A method and system are described for controlling an Internet of Things (IoT) device using multi-modal gesture commands. The method includes receiving one or more multi-modal gesture commands comprising at least one of one or more personalized gesture commands and one or more personalized voice commands of a user. The method includes detecting the one or more multi-modal gesture commands using at least one of a gesture grammar database and a voice grammar database. The method includes determining one or more control parameters and IoT device status information associated with a plurality of IoT devices in response to the detection. The method includes identifying the IoT device that the user intends to control from the plurality of IoT devices based on the user requirement, the IoT device status information, and line-of-sight information associated with the user. The method includes controlling the identified IoT device based on the one or more control parameters and the IoT device status information.
    Type: Application
    Filed: March 28, 2017
    Publication date: October 4, 2018
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra
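    A rough sketch of the described control flow, under assumed grammar databases and a device registry that records line-of-sight status; none of these structures are specified in the publication.

      GESTURE_GRAMMAR = {"swipe_up": "increase", "swipe_down": "decrease"}
      VOICE_GRAMMAR = {"brighter": "increase", "dimmer": "decrease"}

      # Hypothetical registry: device id -> (kind, status, in_line_of_sight).
      DEVICES = {
          "lamp_1": ("light", {"brightness": 40}, True),
          "lamp_2": ("light", {"brightness": 70}, False),
          "fan_1": ("fan", {"speed": 2}, True),
      }

      def detect_command(gesture=None, utterance=None):
          """Resolve a multi-modal command against the grammar databases."""
          if gesture in GESTURE_GRAMMAR:
              return GESTURE_GRAMMAR[gesture]
          if utterance in VOICE_GRAMMAR:
              return VOICE_GRAMMAR[utterance]
          raise ValueError("unrecognized command")

      def identify_device(kind: str) -> str:
          """Pick the intended device using requirement + line-of-sight information."""
          candidates = [d for d, (k, _, los) in DEVICES.items() if k == kind and los]
          if not candidates:
              raise LookupError(f"no {kind} in the user's line of sight")
          return candidates[0]

      action = detect_command(gesture="swipe_up")
      device = identify_device("light")
      print(f"{action} {device}")   # e.g. "increase lamp_1"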
  • Publication number: 20180286383
    Abstract: This disclosure relates generally to text-to-speech synthesis and more particularly to a system and method for rendering textual messages using a customized natural voice. In one embodiment, a system for rendering textual messages using a customized natural voice is disclosed, comprising a processor and a memory communicatively coupled to the processor. The memory stores processor instructions which, on execution, cause the processor to receive present textual messages and at least one of previous textual messages, responses to the previous textual messages, or the receiver's context. The processor further predicts a final emotional state of the sender's customized natural voice based on an intermediate emotional state and the receiver's context. The processor further synthesizes the sender's customized natural voice based on the predicted final emotional state of the sender's customized natural voice, voice samples, and voice parameters associated with the sender.
    Type: Application
    Filed: May 24, 2017
    Publication date: October 4, 2018
    Inventors: Adrita Barari, Ghulam Mohiuddin Khan, Manjunath Ramachandra Iyer
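    The emotional-state prediction step might be sketched as below; the keyword scoring and the way the receiver's context reweights the intermediate state are assumptions, since the publication does not disclose a concrete model.

      EMOTIONS = ("happy", "neutral", "sad")

      def intermediate_state(present_msg: str, history: list[str]) -> dict:
          """Score the sender's emotion from the present and previous messages."""
          text = " ".join([present_msg, *history]).lower()
          score = {e: 0.0 for e in EMOTIONS}
          score["happy"] += text.count("great") + text.count(":)")
          score["sad"] += text.count("sorry") + text.count("miss")
          score["neutral"] += 1.0                      # weak prior
          total = sum(score.values())
          return {e: v / total for e, v in score.items()}

      def final_state(inter: dict, receiver_context: str) -> str:
          """Blend the intermediate state with the receiver's context."""
          adjusted = dict(inter)
          if receiver_context == "at_work":            # damp expressive renderings
              adjusted["neutral"] += 0.3
          return max(adjusted, key=adjusted.get)

      state = final_state(intermediate_state("great news :)", ["miss you"]), "at_work")
      print(state)  # emotion tag that would parameterize the synthesized voice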
  • Publication number: 20180284900
    Abstract: This technology relates to virtual assistance, including a method and system for providing gesture-based interaction with a virtual product from a remote location. A virtual assistance system identifies, in real time, direct or personified user actions performed by a user on the virtual product from images and videos. Further, the user actions are mapped with predefined user actions related to the virtual product to determine values associated with predefined qualifiers and predefined consequences of the mapped predefined user actions. Furthermore, pre-stored images and pre-stored videos corresponding to the mapped predefined user actions, the values associated with the predefined qualifiers and the predefined consequences, and detected active objects are extracted.
    Type: Application
    Filed: March 28, 2017
    Publication date: October 4, 2018
    Inventors: Manjunath Ramachandra, Jijith Nadumuri Ravi
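    A compact illustration of the mapping step, with a hypothetical table of predefined actions, qualifiers, consequences, and pre-stored media.

      # Predefined action -> (qualifier values, consequence, pre-stored media clip).
      PREDEFINED_ACTIONS = {
          "rotate": ({"angle_deg": 90}, "shows rear panel", "clip_rotate.mp4"),
          "open_door": ({"extent": "full"}, "reveals interior", "clip_door.mp4"),
          "press_button": ({"force": "light"}, "powers on", "clip_power.mp4"),
      }

      def map_user_action(detected_action: str):
          """Map a direct/personified user action onto the predefined action table."""
          try:
              qualifiers, consequence, media = PREDEFINED_ACTIONS[detected_action]
          except KeyError:
              raise ValueError(f"unsupported action: {detected_action!r}") from None
          return {"qualifiers": qualifiers, "consequence": consequence, "media": media}

      # A remote user "opens the door" of a virtual car shown on their screen.
      print(map_user_action("open_door"))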
  • Publication number: 20180262798
    Abstract: In one embodiment, a method for rendering multimedia content on a user device is disclosed. The method includes receiving, by a content rendering device, multimedia content from a multimedia server. The multimedia content is stored in the multimedia server in the form of chunks. The method includes transforming, by the content rendering device, at least one chunk of the multimedia content based on user preferences. The transforming comprises adjusting the time duration of the multimedia content based on a user-defined time duration. Further, the method includes rendering, by the content rendering device, the transformed multimedia content on the user device.
    Type: Application
    Filed: March 20, 2017
    Publication date: September 13, 2018
    Inventor: Manjunath RAMACHANDRA
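    One plausible reading of the chunk transformation is a uniform rescaling of playback time, as sketched below; the publication says only that the duration is adjusted to a user-defined value, so the policy here is an assumption.

      def transform_chunks(chunk_secs: list[float], target_secs: float) -> list[float]:
          """Scale each chunk's playback time so the total matches the target."""
          total = sum(chunk_secs)
          if target_secs <= 0 or total == 0:
              raise ValueError("durations must be positive")
          factor = target_secs / total
          return [round(c * factor, 2) for c in chunk_secs]

      # Ten-second chunks of a 60 s clip rendered into a 45 s slot.
      chunks = [10.0] * 6
      print(transform_chunks(chunks, 45.0))   # [7.5, 7.5, 7.5, 7.5, 7.5, 7.5]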
  • Publication number: 20180232785
    Abstract: The present disclosure relates to a method and system for obtaining interactive user feedback in real time using a feedback obtaining system. The feedback obtaining system establishes a connection between the user device of a user and the server of a service provider based on a user location received from the user device, receives static data of the user from the server and dynamic data of the user from a capturing device located at the site of the service provider, identifies contextual information associated with the user based on the static and dynamic data, provides one or more feedback queries for the user from a database based on the contextual information, provides one or more sub-feedback queries for the user based on the user's responses to the one or more feedback queries, and obtains user feedback based on the user's responses to the one or more sub-feedback queries and the one or more feedback queries, along with implicit feedback. The use of implicit feedback together with actual feedback yields more effective feedback from users.
    Type: Application
    Filed: March 31, 2017
    Publication date: August 16, 2018
    Inventors: Manjunath Ramachandra Iyer, Sethuraman Ulaganathan
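    A toy version of the query-selection and scoring flow. The context labels, query banks, and the 80/20 blend of explicit and implicit feedback are invented for illustration.

      FEEDBACK_QUERIES = {
          "checkout": ["How easy was payment?"],
          "dining": ["How was the food?", "How was the service?"],
      }
      SUB_QUERIES = {"How was the food?": ["Was it served hot?"]}

      def contextual_queries(static: dict, dynamic: dict) -> list[str]:
          """Pick queries from the database using static + dynamic user data."""
          context = dynamic.get("activity") or static.get("last_service", "checkout")
          return FEEDBACK_QUERIES.get(context, [])

      def combined_feedback(explicit: dict, implicit_score: float) -> float:
          """Blend explicit ratings (1-5) with an implicit signal (0-1, e.g. dwell)."""
          avg = sum(explicit.values()) / len(explicit)
          return round(0.8 * (avg / 5) + 0.2 * implicit_score, 2)

      queries = contextual_queries({"last_service": "dining"}, {"activity": "dining"})
      followups = [s for q in queries for s in SUB_QUERIES.get(q, [])]
      ratings = {q: 4 for q in queries + followups}
      print(queries + followups, combined_feedback(ratings, implicit_score=0.9))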
  • Patent number: 9940932
    Abstract: This disclosure relates generally to speech recognition, and more particularly to a system and method for speech-to-text conversion using audio as well as video input. In one embodiment, a method is provided for performing speech-to-text conversion. The method comprises receiving audio data and video data of a user while the user is speaking, generating a first raw text based on the audio data via one or more audio-to-text conversion algorithms, generating a second raw text based on the video data via one or more video-to-text conversion algorithms, determining one or more errors by comparing the first raw text and the second raw text, and correcting the one or more errors by applying one or more rules. The one or more rules employ at least one of a domain-specific word database, a context of conversation, and a prior communication history.
    Type: Grant
    Filed: March 15, 2016
    Date of Patent: April 10, 2018
    Assignee: WIPRO LIMITED
    Inventors: Manjunath Ramachandra, Priyanshu Sharma
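    The correction step might be sketched as below, merging the two raw texts word by word and letting a domain word database break ties. Real systems would need sequence alignment (e.g. edit distance); equal-length transcripts are assumed to keep the sketch short, and the domain vocabulary is hypothetical.

      DOMAIN_WORDS = {"ablation", "stenosis", "catheter"}   # toy medical domain

      def correct(audio_text: str, video_text: str) -> str:
          """Merge two raw transcripts, preferring in-domain words on conflict."""
          merged = []
          for a, v in zip(audio_text.split(), video_text.split(), strict=True):
              if a == v:
                  merged.append(a)
              elif v in DOMAIN_WORDS:       # rule: trust the in-domain reading
                  merged.append(v)
              else:
                  merged.append(a)          # default: trust the audio channel
          return " ".join(merged)

      print(correct("the station looks narrow", "the stenosis looks narrow"))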
  • Publication number: 20180032884
    Abstract: Disclosed herein is a method and system for dynamically generating adaptive responses to user interactions. A response generating system receives user interactions, including user queries, from the user. Generic characteristics and user-specific features associated with the user are extracted by integrating the user interactions with pre-stored conversation history and data from data sources associated with the user. Further, domain-specific keywords from the user interactions are extracted for identifying the domain associated with the user queries. The personality of the user is detected based on the generic characteristics and the user-specific features. Finally, adaptive responses to the user queries are dynamically generated based on the personality of the user and the domain associated with the user queries. The method aims at enhancing overall user experience and satisfaction in the conversation while minimizing the total time required to address the user queries.
    Type: Application
    Filed: September 20, 2016
    Publication date: February 1, 2018
    Inventors: Meenakshi Sundaram Murugeshan, Manjunath Ramachandra
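    A schematic of the adaptation step, reducing domain identification and personality detection to keyword and message-length heuristics; both tables are hypothetical stand-ins for the models the abstract implies.

      DOMAIN_KEYWORDS = {"refund": "billing", "login": "account", "crash": "technical"}
      RESPONSE_STYLES = {
          ("billing", "impatient"): "Your refund is being processed; ETA 2 days.",
          ("billing", "chatty"): "Happy to help with that refund! It usually takes "
                                 "about two days to land back in your account.",
      }

      def detect_personality(history: list[str]) -> str:
          """Crudely classify the user from interaction history length/tone."""
          avg_len = sum(len(m.split()) for m in history) / max(len(history), 1)
          return "chatty" if avg_len > 8 else "impatient"

      def adaptive_response(query: str, history: list[str]) -> str:
          domain = next((d for k, d in DOMAIN_KEYWORDS.items() if k in query.lower()),
                        "general")
          style = detect_personality(history)
          return RESPONSE_STYLES.get((domain, style), "Let me look into that for you.")

      print(adaptive_response("Where is my refund?", ["status?", "still waiting"]))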
  • Publication number: 20170315966
    Abstract: Disclosed herein is a method and system for dynamically generating a multimedia content file. The method comprises receiving, by a multimedia content generator, a description of an event from a user. The method comprises identifying one or more keywords from the description of the event. Further, the method comprises mapping the one or more identified keywords with one or more images, related to one or more objects, stored in a content database for generating one or more scenes related to the description. An initial-level multimedia content file is generated by composing the one or more scenes. Furthermore, the method comprises receiving one or more inputs on the initial-level multimedia content file from the user. Finally, a final-level multimedia content file is generated based on the one or more inputs received on the initial-level multimedia content file.
    Type: Application
    Filed: June 10, 2016
    Publication date: November 2, 2017
    Inventors: Manjunath Ramachandra IYER, Sawani BADE, Jijith Nadumuri RAVI
  • Publication number: 20170256262
    Abstract: This disclosure relates generally to speech recognition, and more particularly to a system and method for speech-to-text conversion using audio as well as video input. In one embodiment, a method is provided for performing speech-to-text conversion. The method comprises receiving audio data and video data of a user while the user is speaking, generating a first raw text based on the audio data via one or more audio-to-text conversion algorithms, generating a second raw text based on the video data via one or more video-to-text conversion algorithms, determining one or more errors by comparing the first raw text and the second raw text, and correcting the one or more errors by applying one or more rules. The one or more rules employ at least one of a domain-specific word database, a context of conversation, and a prior communication history.
    Type: Application
    Filed: March 15, 2016
    Publication date: September 7, 2017
    Inventors: Manjunath RAMACHANDRA, Priyanshu SHARMA
  • Patent number: 9756570
    Abstract: A method and a system are provided for optimizing battery usage of an electronic device. The method comprises determining, by a battery optimization unit, a degree of criticality of environment in which one or more sensors are operating based on one or more pre-defined conditions. The method further comprises determining, by the battery optimization unit, a plurality of parameters comprising an energy level of the electronic device, an available processing power, and an available communication network bandwidth associated with the electronic device. The method further comprises processing, by the battery optimization unit, a first portion of sensor data locally based on the degree of criticality of environment and a priority based rule engine, wherein the priority based rule engine is configured to optimize battery usage of the electronic device based on the plurality of parameters.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: September 5, 2017
    Assignee: WIPRO LIMITED
    Inventor: Manjunath Ramachandra
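    A minimal rule-engine sketch of the local-versus-offload decision; the thresholds and split policy are assumptions, as the patent describes a priority-based rule engine without fixing numbers.

      def criticality(readings: dict) -> str:
          """Classify the environment from pre-defined sensor conditions."""
          return "high" if readings.get("temperature_c", 0) > 60 else "normal"

      def local_fraction(crit: str, battery_pct: float, cpu_free: float,
                         bandwidth_mbps: float) -> float:
          """Fraction of sensor data to process on-device (rest is offloaded)."""
          if crit == "high":                      # critical: act locally, low latency
              return 1.0
          if battery_pct < 20 and bandwidth_mbps > 5:
              return 0.1                          # conserve battery, push to cloud
          return min(1.0, cpu_free)               # otherwise bounded by spare CPU

      frac = local_fraction(criticality({"temperature_c": 30}),
                            battery_pct=15, cpu_free=0.6, bandwidth_mbps=20)
      print(f"process {frac:.0%} locally")        # process 10% locally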
  • Patent number: 9554085
    Abstract: Embodiments of the present disclosure provide a method and a device for dynamically controlling the quality of a video displayed on a display associated with an electronic device. The method comprises detecting the current eye position of a user and identifying at least one region of interest (ROI) on a display screen of the display device based on the current eye position of the user. Then, the method comprises predicting the next position of the eye based on at least one of the current eye position of the user or the at least one ROI. Also, the method comprises converting the standard definition (SD) video into a high definition (HD) video displayed on the ROI of the display screen associated with the current and next positions of the eye. Further, the method comprises displaying the HD video on the ROI of the display screen.
    Type: Grant
    Filed: July 30, 2015
    Date of Patent: January 24, 2017
    Assignee: Wipro Limited
    Inventors: Manjunath Ramachandra Iyer, Jijith Nadumuri Ravi, Vijay Garg
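    The ROI selection and prediction loop might look like the following, with gaze tracking and SD-to-HD conversion stubbed out; the linear extrapolation of the next eye position is an assumption.

      def roi_from_gaze(x: int, y: int, half: int = 120) -> tuple:
          """Region of interest: a box centred on the current gaze point."""
          return (x - half, y - half, x + half, y + half)

      def predict_next(prev: tuple, curr: tuple) -> tuple:
          """Extrapolate the next gaze point from the last two samples."""
          return (2 * curr[0] - prev[0], 2 * curr[1] - prev[1])

      def upscale_roi(frame_id: int, roi: tuple) -> str:
          """Stand-in for converting the SD pixels inside the ROI to HD."""
          return f"frame {frame_id}: HD inside {roi}, SD elsewhere"

      gaze_samples = [(400, 300), (420, 310)]          # successive eye positions
      nxt = predict_next(*gaze_samples)
      for fid, point in enumerate([gaze_samples[-1], nxt]):
          print(upscale_roi(fid, roi_from_gaze(*point)))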
  • Publication number: 20160366365
    Abstract: Embodiments of the present disclosure provide a method and a device for dynamically controlling the quality of a video displayed on a display associated with an electronic device. The method comprises detecting the current eye position of a user and identifying at least one region of interest (ROI) on a display screen of the display device based on the current eye position of the user. Then, the method comprises predicting the next position of the eye based on at least one of the current eye position of the user or the at least one ROI. Also, the method comprises converting the standard definition (SD) video into a high definition (HD) video displayed on the ROI of the display screen associated with the current and next positions of the eye. Further, the method comprises displaying the HD video on the ROI of the display screen.
    Type: Application
    Filed: July 30, 2015
    Publication date: December 15, 2016
    Inventors: Manjunath Ramachandra Iyer, Jijith Nadumuri Ravi, Vijay Garg