Patents by Inventor Manjunath Ramachandra

Manjunath Ramachandra has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190248485
    Abstract: This disclosure relates generally to drones, and more particularly to a method and system for performing inspection and maintenance tasks on three-dimensional (3D) structures using drones. In one embodiment, a method for performing a task with respect to a 3D structure is disclosed. The method includes receiving a simulated 3D view of the 3D structure. The simulated 3D view comprises a hierarchy of views augmented to different degrees. The method further includes configuring one or more paths for performing a task on the 3D structure based on the hierarchy of augmented views, historical data on substantially similar tasks, and a capability of at least one drone. The method further includes learning maneuverability and operations with respect to the one or more paths and the task based on the historical data on substantially similar tasks, and effecting performance of the task based on the learning through the at least one drone.
    Type: Application
    Filed: March 27, 2018
    Publication date: August 15, 2019
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra
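
The path-configuration step described in this abstract can be pictured with a minimal sketch that scores candidate inspection paths against drone capability and historical similarity. The `Path` and `DroneCapability` classes and the scoring weights below are illustrative assumptions, not the patented method.

```python
from dataclasses import dataclass

@dataclass
class Path:
    waypoints: list               # (x, y, z) points along the 3D structure
    length_m: float               # total path length in metres
    similarity_to_history: float  # 0..1, overlap with previously successful paths

@dataclass
class DroneCapability:
    max_range_m: float
    camera_resolution_mp: float

def score_path(path: Path, drone: DroneCapability) -> float:
    """Return a heuristic score; infeasible paths score -inf."""
    if path.length_m > drone.max_range_m:
        return float("-inf")                      # drone cannot fly this path
    coverage_bonus = drone.camera_resolution_mp / 20.0
    return path.similarity_to_history + coverage_bonus - path.length_m / drone.max_range_m

def configure_paths(candidates: list, drone: DroneCapability, top_k: int = 2) -> list:
    """Keep the top_k feasible candidate paths, best score first."""
    ranked = sorted(candidates, key=lambda p: score_path(p, drone), reverse=True)
    return [p for p in ranked if score_path(p, drone) != float("-inf")][:top_k]

if __name__ == "__main__":
    drone = DroneCapability(max_range_m=500, camera_resolution_mp=12)
    paths = [Path([(0, 0, 10)], 420, 0.8), Path([(0, 0, 30)], 650, 0.9)]
    print(configure_paths(paths, drone))  # only the feasible 420 m path survives
```
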
  • Publication number: 20190250782
    Abstract: A method and system for domain-based rendering of avatars to a user is disclosed. The method includes receiving, by a controller unit of a user device, a user input subsequent to launch of an application in the user device. The method further includes extracting a plurality of keywords and metadata from the user input. The method includes determining an application domain in association with the user input based on the plurality of keywords and the metadata. The method further includes selecting at least one avatar from an avatar database based on the application domain and a plurality of parameters. The method includes rendering the at least one avatar to the user to initiate a conversation.
    Type: Application
    Filed: March 30, 2018
    Publication date: August 15, 2019
    Inventor: Manjunath Ramachandra Iyer
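
As a rough illustration of the keyword-to-domain-to-avatar flow described above: the domain keyword map, avatar database, and selection rule below are hypothetical stand-ins, not taken from the application.

```python
import re

# Hypothetical domain keyword map and avatar database.
DOMAIN_KEYWORDS = {
    "banking": {"account", "balance", "transfer", "loan"},
    "healthcare": {"doctor", "symptom", "appointment", "medicine"},
}
AVATAR_DB = {
    "banking": ["formal_teller_avatar"],
    "healthcare": ["nurse_avatar", "doctor_avatar"],
}

def extract_keywords(user_input: str) -> set:
    return set(re.findall(r"[a-z]+", user_input.lower()))

def determine_domain(keywords: set) -> str:
    # Pick the domain whose keyword set overlaps the input the most.
    return max(DOMAIN_KEYWORDS, key=lambda d: len(keywords & DOMAIN_KEYWORDS[d]))

def select_avatar(user_input: str) -> str:
    domain = determine_domain(extract_keywords(user_input))
    return AVATAR_DB[domain][0]   # simplest rule: first avatar registered for the domain

print(select_avatar("I want to check my account balance"))  # formal_teller_avatar
```
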
  • Patent number: 10318601
    Abstract: The present disclosure relates to a method and system for rendering multimedia content to a user in real time, based on the user's interest level, by a content rendering system. The content rendering system detects the interest of a user watching multimedia content broadcast by a content provider, based on a set of parameters, where the interest relates to a portion of the multimedia content. It determines metadata, an object of interest, an action, and a context from that portion by processing the image containing it, generates search queries based on the object of interest, action, and context, extracts content similar to the portion, broadcast by one or more other content providers, based on the search queries and metadata, and combines the extracted similar content with the multimedia content currently being viewed, based on the metadata, to render multimedia content to the user in real time according to the user's interest level. The present disclosure renders similar content from multiple content providers based on the interest level of users.
    Type: Grant
    Filed: September 22, 2017
    Date of Patent: June 11, 2019
    Assignee: Wipro Limited
    Inventor: Manjunath Ramachandra Iyer
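
A hedged sketch of the query-generation step: building search strings from the detected object of interest, action, context, and broadcast metadata. The field names and query templates are assumptions for illustration only.

```python
def generate_search_queries(object_of_interest: str, action: str, context: str, metadata: dict) -> list:
    """Build a few query strings from what the viewer focused on."""
    base = f"{object_of_interest} {action}".strip()
    return [
        base,
        f"{base} {context}".strip(),
        f"{base} {metadata.get('channel', '')}".strip(),
    ]

queries = generate_search_queries(
    object_of_interest="penalty kick",
    action="goal",
    context="world cup final",
    metadata={"channel": "SportsHD"},
)
print(queries)  # these queries would then be sent to other content providers
```
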
  • Publication number: 20190163838
    Abstract: Disclosed herein is a method and system for processing multimodal user queries. The method comprises determining the availability of one or more responses to each of one or more sub-queries, wherein the one or more sub-queries are formed by splitting the multimodal user queries. The method detects a requirement for an expert to provide the one or more responses upon determining at least one of unavailability of the one or more responses by the response generation system or based on predefined conditions. Thereafter, a summarized content is generated by summarizing the context of the one or more sub-queries and historical conversation data associated with the one or more sub-queries. Based on the summarized content, the one or more sub-queries are reformulated. Finally, the one or more responses received from the expert for the reformulated one or more sub-queries are collated and provided as the one or more responses for the multimodal user queries.
    Type: Application
    Filed: January 18, 2018
    Publication date: May 30, 2019
    Inventor: Manjunath Ramachandra Iyer
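
The splitting and expert-routing logic might look roughly like the following sketch; the naive `and`-based splitter, the in-memory knowledge base, and the summary format are stand-ins, not the disclosed implementation.

```python
def split_query(query: str) -> list:
    # Naive split on "and"; a real system would use a parser over multimodal input.
    return [part.strip() for part in query.split(" and ") if part.strip()]

def answer_or_route(sub_queries, knowledge_base, history):
    """Answer sub-queries locally when possible; summarize the rest for an expert."""
    answered, for_expert = {}, []
    for sq in sub_queries:
        if sq in knowledge_base:
            answered[sq] = knowledge_base[sq]
        else:
            # Summarize context + conversation history so the expert gets a reformulated question.
            summary = f"{sq} (context: {history.get(sq, 'none')})"
            for_expert.append(summary)
    return answered, for_expert

kb = {"what is my data balance": "2.5 GB remaining"}
hist = {"why is my bill high": "user reported roaming last week"}
print(answer_or_route(split_query("what is my data balance and why is my bill high"), kb, hist))
```
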
  • Publication number: 20190163785
    Abstract: Disclosed herein is a method and system for providing a domain-specific response to a user query. The user query is split into one or more sub-queries, and the domain of each of the sub-queries is determined based on domain-specific keywords present in each of the sub-queries. One or more responses to each of the sub-queries are retrieved from the corresponding Domain-specific Query Handlers (DQHs). Finally, each of the one or more responses is collated for providing the domain-specific response to the user. In an embodiment, the DQHs are hierarchically arranged based on their importance and relevance to the user query. Further, resources are allocated to each of the DQHs based on their hierarchy, thereby optimally distributing the resources among the DQHs. In an embodiment, the method of the present disclosure ensures completeness/sufficiency of the response before collating the one or more responses and providing the domain-specific response to the user.
    Type: Application
    Filed: January 18, 2018
    Publication date: May 30, 2019
    Inventor: Manjunath Ramachandra Iyer
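
One plausible reading of the hierarchy-based resource allocation is a proportional split of a fixed worker pool across domain-specific query handlers; the relevance scores and worker counts below are hypothetical.

```python
def allocate_resources(handlers: dict, total_workers: int) -> dict:
    """Split a fixed worker pool across handlers in proportion to their relevance to the query."""
    total_relevance = sum(handlers.values()) or 1.0
    allocation = {name: int(total_workers * rel / total_relevance) for name, rel in handlers.items()}
    # Give any leftover workers to the most relevant handler.
    leftover = total_workers - sum(allocation.values())
    if leftover:
        top = max(handlers, key=handlers.get)
        allocation[top] += leftover
    return allocation

# Relevance of each domain-specific query handler to the current query (illustrative scores).
print(allocate_resources({"billing": 0.6, "technical": 0.3, "sales": 0.1}, total_workers=8))
```
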
  • Publication number: 20190163749
    Abstract: The present disclosure discloses a method and system for providing a response to a user input. The system receives a user input and processes it by finding equivalents of the user input and dividing each of the user input and the equivalents into one or more frames. One or more keywords are generated for each of the one or more frames. Further, each of the one or more frames is classified into one or more domains present in a knowledge graph. Then, one or more objects are determined in each of the corresponding one or more domains based on the corresponding one or more keywords. Further, a processing means is determined for each of the one or more objects based on the metadata of the corresponding one or more objects. The processing means is processed by the system for providing a response to the user input.
    Type: Application
    Filed: January 18, 2018
    Publication date: May 30, 2019
    Inventors: Manjunath Ramachandra Iyer, Suyog Trivedi, Gopichand Agnihotram
  • Patent number: 10304455
    Abstract: Disclosed herein is a method and system for performing a task based on user input. One or more requirements related to the task are extracted from the user input. Based on the requirements, a plurality of resources required for performing the task are retrieved and integrated to generate action sequences. Further, a simulated model is generated based on the action sequences and provided to the user for receiving user feedback. Finally, the action sequences are implemented based on the user feedback for performing the task. In an embodiment, the method of the present disclosure is capable of automatically selecting and integrating resources required for implementing a task, thereby helping to reduce the overall time required to implement the task intended by the user.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: May 28, 2019
    Assignee: Wipro Limited
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra
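
A minimal sketch of the requirement-extraction, action-sequence, simulate-then-execute flow, assuming a toy resource registry keyed by keywords. All names below are illustrative, not from the patent.

```python
# Hypothetical resource registry: requirement keyword -> callable resource.
def fetch_data():   return "data fetched"
def clean_data():   return "data cleaned"
def make_report():  return "report generated"

RESOURCE_REGISTRY = {"fetch": fetch_data, "clean": clean_data, "report": make_report}

def extract_requirements(user_input: str) -> list:
    return [word for word in user_input.lower().split() if word in RESOURCE_REGISTRY]

def build_action_sequence(requirements: list) -> list:
    return [RESOURCE_REGISTRY[req] for req in requirements]

def simulate(sequence: list) -> list:
    # "Simulated model": show the user what would run, without side effects.
    return [step.__name__ for step in sequence]

def execute(sequence: list) -> list:
    return [step() for step in sequence]

seq = build_action_sequence(extract_requirements("please fetch then clean the data and report it"))
print(simulate(seq))   # user reviews this plan and gives feedback
print(execute(seq))    # run only after approval
```
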
  • Patent number: 10298895
    Abstract: Disclosed herein is a method and system for performing context-based transformation of a video. In an embodiment, a scene descriptor and a textual descriptor are generated for each scene corresponding to the video. Further, an audio context descriptor is generated based on semantic analysis of the textual descriptor. Subsequently, the audio context descriptor and the scene descriptor are correlated to generate a scene context descriptor for each scene. Finally, the video is translated using the scene context descriptor, thereby transforming the video based on context. In some embodiments, the method of the present disclosure is capable of automatically changing one or more attributes, such as the color of one or more scenes in the video, in response to a change in the context of the audio/speech signals corresponding to the video. Thus, the present method helps in effective rendering of a video to users.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: May 21, 2019
    Assignee: Wipro Limited
    Inventors: Manjunath Ramachandra, Sethuraman Ulaganathan
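
The correlation of the audio context descriptor with the scene descriptor could be sketched as a simple merge that lets the audio mood drive a colour-grade attribute; the mood-to-grade table and descriptor fields are assumptions for illustration.

```python
def scene_context_descriptor(scene_descriptor: dict, audio_context: dict) -> dict:
    """Merge visual and audio context; the audio mood decides the target colour grade."""
    mood_to_grade = {"tense": "cool_blue", "joyful": "warm", "neutral": "none"}  # illustrative table
    mood = audio_context.get("mood", "neutral")
    return {
        "objects": scene_descriptor["objects"],
        "mood": mood,
        "colour_grade": mood_to_grade.get(mood, "none"),
    }

scene = {"objects": ["car", "street"], "dominant_colour": "grey"}
audio = {"mood": "tense", "keywords": ["chase", "sirens"]}
print(scene_context_descriptor(scene, audio))  # the scene would then be re-graded cool_blue
```
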
  • Patent number: 10283163
    Abstract: The present disclosure discloses a method and video generation system for generating video content based on user data. The video generation system receives user data sequentially from a user, where each sequence of user data is converted into text data. One or more objects, relations, emotions, and actions are identified from the user data by evaluating the text data, and a scene descriptor is generated for each sequence of user data by associating the one or more objects with the one or more relations, emotions, and actions. The method comprises performing a consistency check for the scene descriptor of each sequence of user data based on one or more previously stored scene descriptors, performing one or more modifications to inconsistent scene descriptors identified based on the consistency check, generating video segments for each scene descriptor, and generating the video content by combining the video segments associated with each scene descriptor.
    Type: Grant
    Filed: March 31, 2018
    Date of Patent: May 7, 2019
    Assignee: Wipro Limited
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra
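
The consistency check and repair of scene descriptors might be approximated as follows, assuming descriptors are dictionaries of object attributes; the contradiction rule and repair strategy are simplifications, not the claimed method.

```python
def is_consistent(new_scene: dict, previous_scenes: list) -> bool:
    """A new scene is consistent if it does not contradict stored attributes of known objects."""
    for old in previous_scenes:
        for obj, attrs in new_scene["objects"].items():
            if obj in old["objects"] and old["objects"][obj] != attrs:
                return False
    return True

def repair(new_scene: dict, previous_scenes: list) -> dict:
    """Simplest modification: reuse the previously stored attributes for conflicting objects."""
    for old in previous_scenes:
        for obj in new_scene["objects"]:
            if obj in old["objects"]:
                new_scene["objects"][obj] = old["objects"][obj]
    return new_scene

history = [{"objects": {"car": {"colour": "red"}}}]
incoming = {"objects": {"car": {"colour": "blue"}, "road": {"surface": "wet"}}}
if not is_consistent(incoming, history):
    incoming = repair(incoming, history)
print(incoming)  # the car stays red, keeping the generated video consistent
```
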
  • Patent number: 10282827
    Abstract: Systems and methods for removing rain streak distortion from a distorted video are described. The system receives sample non-distorted images and sample distorted images of a video. The sample non-distorted images are indicative of a non-raining condition and the sample distorted images are indicative of a raining condition in the video. The system further determines first temporal information from the sample distorted images and second temporal information from the sample non-distorted images. The first temporal information is indicative of a change in the rain streak distortion pattern and the second temporal information is indicative of a change in a non-rain streak distortion pattern. Further, the system correlates the first temporal information with the second temporal information to generate a training model comprising one or more trained weights.
    Type: Grant
    Filed: September 22, 2017
    Date of Patent: May 7, 2019
    Assignee: Wipro Limited
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra, Prasanna Hegde
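
A very reduced sketch of the idea of correlating temporal information to obtain trained weights: here a per-pixel weight is fit by least squares so that the frame-to-frame change of the rainy sequence approximates that of the clean sequence. The synthetic arrays stand in for real video, and a real system would train a far richer model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in frame sequences of shape (time, height, width).
clean = rng.random((10, 32, 32))
rainy = clean + 0.3 * (rng.random((10, 32, 32)) > 0.97)   # sparse, streak-like bright noise

# "Temporal information": frame-to-frame differences capture how the distortion changes over time.
rainy_temporal = np.diff(rainy, axis=0)
clean_temporal = np.diff(clean, axis=0)

# One trained weight per pixel: minimize sum_t (w * rainy_diff - clean_diff)^2.
num = np.sum(rainy_temporal * clean_temporal, axis=0)
den = np.sum(rainy_temporal * rainy_temporal, axis=0) + 1e-8
weights = num / den

print("weight range:", float(weights.min()), "to", float(weights.max()))
```
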
  • Patent number: 10255502
    Abstract: A testing device for performing testing across a plurality of smart devices is disclosed. The testing device may be configured to register the plurality of smart devices to be accessed for performing testing. At least one time-window at which each smart device is idle may be determined, by the testing device, for the plurality of smart devices. Upon gathering the testing criteria and time duration for performing a testing operation, the testing device may be configured to dynamically create a test group that includes one or more smart devices from the plurality of smart devices such that the one or more smart devices in the test group satisfy the testing criteria and the at least one time-window of smart devices in the test group is within the time duration.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: April 9, 2019
    Assignee: Wipro Limited
    Inventors: Adrita Barari, Manjunath Ramachandra, Ghulam Mohiuddin Khan
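
The idle-window matching that forms a test group could look like this sketch; the device fields, OS criterion, and hour-based windows are illustrative assumptions rather than the claimed testing criteria.

```python
from dataclasses import dataclass

@dataclass
class SmartDevice:
    name: str
    os: str
    idle_window: tuple        # (start_hour, end_hour) when the device is idle

def create_test_group(devices, required_os, test_start, test_end):
    """Pick devices that match the testing criteria and are idle for the whole test duration."""
    group = []
    for d in devices:
        idle_start, idle_end = d.idle_window
        if d.os == required_os and idle_start <= test_start and test_end <= idle_end:
            group.append(d.name)
    return group

registered = [
    SmartDevice("phone-A", "android", (22, 24)),
    SmartDevice("phone-B", "android", (1, 5)),
    SmartDevice("tv-C", "tizen", (22, 24)),
]
print(create_test_group(registered, required_os="android", test_start=22, test_end=23))  # ['phone-A']
```
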
  • Publication number: 20190102448
    Abstract: Embodiments of the present disclosure disclose a system and method for managing applications in an electronic device. Initially, a relation tree associated with the applications is identified. The relation tree is generated based on a learning technique implemented for the applications, parameters, and enablers associated with the electronic device. Based on the identified relation tree, enablers corresponding to each of the applications are identified from the plurality of enablers. Further, the current status of the parameters is retrieved based on the current status of the enablers. The relation tree is updated based on the learning technique implemented for at least one of the current status of the parameters, new applications, new parameters, or enablers associated with the electronic device. An application is identified from the applications based on the current status of the parameters and the relation tree. The electronic device is instructed to perform operations associated with the identified application.
    Type: Application
    Filed: November 21, 2017
    Publication date: April 4, 2019
    Inventors: Manjunath Ramachandra Iyer, Sudha Subarayan
  • Publication number: 20190096394
    Abstract: A method and system for providing dynamic conversation between an application and a user is discussed. The method includes utilizing a computing device to receive a requirement input from the user for the application. The method further includes determining a goal of the user based on the requirement input. Based on the goal, a plurality of conversation threads is initiated with the user, wherein each of the plurality of conversation threads has a degree of association with the goal. Thereafter, a plurality of slots is dynamically generated based on the goal and the plurality of conversation threads. A slot of the plurality of slots stores a data value corresponding to the requirement input of the user.
    Type: Application
    Filed: November 20, 2017
    Publication date: March 28, 2019
    Inventors: Manjunath Ramachandra Iyer, Meenakshi Sundaram Murugeshan
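
A hedged sketch of goal-driven slot generation and slot-by-slot conversation, assuming a hypothetical goal-to-slot template table rather than the dynamically generated slots the application describes.

```python
# Hypothetical slot templates per goal.
GOAL_SLOTS = {
    "book_flight": ["origin", "destination", "travel_date"],
    "order_food": ["restaurant", "dish", "delivery_address"],
}

def generate_slots(goal: str) -> dict:
    """Create empty slots for the detected goal."""
    return {slot: None for slot in GOAL_SLOTS.get(goal, [])}

def fill_slot(slots: dict, slot_name: str, value: str) -> dict:
    if slot_name in slots:
        slots[slot_name] = value
    return slots

def next_question(slots: dict) -> str:
    """Drive the conversation thread by asking for the first unfilled slot."""
    for name, value in slots.items():
        if value is None:
            return f"Could you tell me the {name.replace('_', ' ')}?"
    return "All details collected."

slots = generate_slots("book_flight")
slots = fill_slot(slots, "origin", "Bengaluru")
print(next_question(slots))   # asks for the destination next
```
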
  • Patent number: 10225603
    Abstract: In one embodiment, a method for rendering multimedia content on a user device is disclosed. The method includes receiving, by a content rendering device, multimedia content from a multimedia server. The multimedia content is stored in the multimedia server in form of chunks. The method includes transforming, by the content rendering device, at least one chunk of the multimedia content based on user preferences. The transforming comprises adjusting time duration of the multimedia content based on a user-defined time duration. Further, the method includes rendering, by the content rendering device, the transformed multimedia content on the user device.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: March 5, 2019
    Assignee: Wipro Limited
    Inventor: Manjunath Ramachandra
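
One way to picture the chunk-level duration adjustment is to keep the highest-preference chunks that fit a user-defined time budget; the importance scores and greedy selection below are assumptions, not the claimed transformation.

```python
def adjust_duration(chunks, target_seconds, importance):
    """Drop the least important chunks until the total duration fits the user-defined limit.

    chunks: list of (chunk_id, seconds) in playback order
    importance: chunk_id -> preference score derived from user preferences
    """
    kept = sorted(chunks, key=lambda c: importance.get(c[0], 0.0), reverse=True)
    selected, total = [], 0.0
    for chunk_id, seconds in kept:
        if total + seconds <= target_seconds:
            selected.append(chunk_id)
            total += seconds
    # Re-order the kept chunks back into playback order.
    order = {chunk_id: i for i, (chunk_id, _) in enumerate(chunks)}
    return sorted(selected, key=order.get)

chunks = [("intro", 30), ("highlights", 60), ("credits", 20)]
print(adjust_duration(chunks, target_seconds=95, importance={"highlights": 0.9, "intro": 0.5}))
```
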
  • Publication number: 20190057190
    Abstract: A method and system for providing context based medical instructions to a patient is described. The method includes receiving patient data including a patient profile and the treatment stage associated with the patient. The method further includes determining a physical state of the patient based on continuous monitoring of the activities of the patient. The physical state indicates the receptive capability of the patient. Further, the method includes generating context based medical instructions based on the physical state and the patient data. The context based medical instructions are delivered to the patient. Further, the method includes monitoring an emotional state of the patient while the patient is performing the context based medical instructions. The method further includes generating dynamically updated context based medical instructions based on the emotional state of the patient. The emotional state indicates the patient's interest in receiving the updated context based medical instructions.
    Type: Application
    Filed: September 28, 2017
    Publication date: February 21, 2019
    Inventors: Manjunath Ramachandra Iyer, Sethuraman Ulaganathan, Ghulam Mohiuddin Khan
  • Publication number: 20190050486
    Abstract: The present disclosure relates to a method and system for rendering multimedia content to a user in real time, based on the user's interest level, by a content rendering system. The content rendering system detects the interest of a user watching multimedia content broadcast by a content provider, based on a set of parameters, where the interest relates to a portion of the multimedia content. It determines metadata, an object of interest, an action, and a context from that portion by processing the image containing it, generates search queries based on the object of interest, action, and context, extracts content similar to the portion, broadcast by one or more other content providers, based on the search queries and metadata, and combines the extracted similar content with the multimedia content currently being viewed, based on the metadata, to render multimedia content to the user in real time according to the user's interest level. The present disclosure renders similar content from multiple content providers based on the interest level of users.
    Type: Application
    Filed: September 22, 2017
    Publication date: February 14, 2019
    Inventor: Manjunath Ramachandra Iyer
  • Publication number: 20190050969
    Abstract: Systems and methods for removing rain streak distortion from a distorted video are described. The system receives sample non-distorted images and sample distorted images of a video. The sample non-distorted images are indicative of a non-raining condition and the sample distorted images are indicative of a raining condition in the video. The system further determines first temporal information from the sample distorted images and second temporal information from the sample non-distorted images. The first temporal information is indicative of a change in the rain streak distortion pattern and the second temporal information is indicative of a change in a non-rain streak distortion pattern. Further, the system correlates the first temporal information with the second temporal information to generate a training model comprising one or more trained weights.
    Type: Application
    Filed: September 22, 2017
    Publication date: February 14, 2019
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra, Prasanna Hegde
  • Publication number: 20190005128
    Abstract: Disclosed subject matter relates to digital media, including a method and system for generating a contextual audio related to an image. An audio generating system may determine a scene-theme and a viewer theme of a scene in the image. Further, audio files matching scene-objects and the contextual data may be retrieved in real time, and relevant audio files may be identified from the audio files based on the relationship between the scene-theme, scene-objects, viewer theme, contextual data, and metadata of the audio files. A contribution weightage may be assigned to the relevant and substitute audio files based on the contextual data, and the files may be correlated based on the contribution weightage, thereby generating the contextual audio related to the image. The present disclosure provides a feature wherein the contextual audio generated for an image may provide a holistic audio effect in accordance with the context of the image, thus recreating the audio that might have been present when the image was captured.
    Type: Application
    Filed: August 17, 2017
    Publication date: January 3, 2019
    Inventors: Adrita Barari, Manjunath Ramachandra, Ghulam Mohiuddin Khan, Sethuraman Ulaganathan
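
The contribution-weightage step could be illustrated as a normalised weighted mix of retrieved audio tracks; the tiny sample arrays and weight values are placeholders for real audio and real contextual matching.

```python
import numpy as np

def mix_contextual_audio(audio_files, weightages, length=4):
    """Weighted sum of candidate tracks; weights reflect how well each file matches the scene."""
    mix = np.zeros(length)
    total = sum(weightages.values()) or 1.0
    for name, samples in audio_files.items():
        mix += (weightages.get(name, 0.0) / total) * np.asarray(samples, dtype=float)
    return mix

# Hypothetical retrieved tracks for a beach photo (tiny arrays stand in for real audio samples).
tracks = {"waves.wav": [0.2, 0.4, 0.4, 0.2], "gulls.wav": [0.0, 0.9, 0.0, 0.9]}
weights = {"waves.wav": 0.7, "gulls.wav": 0.3}   # contribution weightage from contextual match
print(mix_contextual_audio(tracks, weights))
```
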
  • Publication number: 20180376084
    Abstract: A camera for generating distortion free images and a method thereof is disclosed. The camera includes a plurality of lenses, wherein each of the plurality of lenses has a dedicated sensor. The camera further includes a processor communicatively coupled to the plurality of lenses. The camera further includes a memory communicatively coupled to the processor and having instructions stored thereon, causing the processor, on execution to capture a plurality of images through the plurality of lenses and to generate a single distortion free image from the plurality of images based on a deep learning technique trained using a mapping of each of a plurality of sets of low resolution images generated in one or more environments to an associated distortion free image, wherein one or more low resolution images in each of the plurality of sets are distorted.
    Type: Application
    Filed: August 10, 2017
    Publication date: December 27, 2018
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra, Prasanna Hegde, Adrita Barari
  • Publication number: 20180349688
    Abstract: The present disclosure relates to a method and system for determining the intent of a subject using behavioural patterns by an intent determination system. The intent determination system receives video data associated with the subject from one or more information sources, extracts a pre-defined number of frames before a target frame from the video data of the subject, obtains a plurality of behavioural parameters of the subject for each of the extracted frames, determines a score for the plurality of behavioural parameters of the extracted frames using a trained intent model, calculates a weighted average score for the plurality of behavioural parameters of the extracted frames based on the score of each of the behavioural parameters, analyses the emotion of the subject for the pre-defined number of frames based on the weighted average score, and determines the intent of the subject based on the analysed emotion. The present disclosure removes the need for human intervention in determining the intent of a subject.
    Type: Application
    Filed: July 19, 2017
    Publication date: December 6, 2018
    Inventors: Adrita Barari, Ghulam Mohiuddin Khan, Manjunath Ramachandra
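
A minimal sketch of the weighted-average scoring over behavioural parameters across the extracted frames; the parameter names, weights, and intent labels are hypothetical, not from the disclosure.

```python
def weighted_average_score(frame_scores, parameter_weights):
    """frame_scores: per-frame dicts of behavioural parameter -> score in [0, 1]."""
    total = 0.0
    for param, weight in parameter_weights.items():
        values = [frame[param] for frame in frame_scores if param in frame]
        if values:
            total += weight * (sum(values) / len(values))
    return total

def classify_intent(score, threshold=0.5):
    return "suspicious" if score >= threshold else "benign"   # illustrative labels only

frames = [
    {"gaze_aversion": 0.8, "fidgeting": 0.6},
    {"gaze_aversion": 0.7, "fidgeting": 0.4},
]
weights = {"gaze_aversion": 0.6, "fidgeting": 0.4}
score = weighted_average_score(frames, weights)
print(score, classify_intent(score))   # 0.65 suspicious
```
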