Patents by Inventor Sethuraman Ulaganathan

Sethuraman Ulaganathan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10298895
    Abstract: Disclosed herein is a method and system for performing context-based transformation of a video. In an embodiment, a scene descriptor and a textual descriptor are generated for each scene of the video. An audio context descriptor is then generated based on semantic analysis of the textual descriptor. Subsequently, the audio context descriptor and the scene descriptor are correlated to generate a scene context descriptor for each scene. Finally, the video is translated using the scene context descriptor, thereby transforming the video based on context. In some embodiments, the method of the present disclosure can automatically change one or more attributes, such as the color of one or more scenes in the video, in response to a change in the context of the audio/speech signals corresponding to the video. Thus, the present method helps render a video to users effectively.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: May 21, 2019
    Assignee: Wipro Limited
    Inventors: Manjunath Ramachandra, Sethuraman Ulaganathan
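The pipeline this abstract describes — semantic analysis of a scene's transcript, correlation with the scene descriptor, then an attribute change driven by the resulting context — can be illustrated with a minimal sketch. This is not the patented implementation: the keyword-based "semantic analysis", the mood labels, and the color mapping are all assumptions for illustration.

```python
def audio_context(textual_descriptor):
    """Toy semantic analysis: map keywords in a scene's transcript to a mood.
    (Stand-in for the audio context descriptor of the abstract.)"""
    text = textual_descriptor.lower()
    if any(w in text for w in ("storm", "danger", "alarm")):
        return "tense"
    if any(w in text for w in ("laugh", "party", "celebrate")):
        return "joyful"
    return "neutral"

def scene_context_descriptor(scene_descriptor, textual_descriptor):
    """Correlate the audio context with the visual scene descriptor."""
    mood = audio_context(textual_descriptor)
    return {**scene_descriptor, "mood": mood}

# Assumed attribute mapping: the derived mood changes the scene's color grade.
COLOR_BY_MOOD = {"tense": "desaturated", "joyful": "warm", "neutral": "unchanged"}

def transform_scene(scene_descriptor, textual_descriptor):
    ctx = scene_context_descriptor(scene_descriptor, textual_descriptor)
    ctx["color_grade"] = COLOR_BY_MOOD[ctx["mood"]]
    return ctx

scene = transform_scene({"objects": ["ship", "sea"]}, "A storm is coming")
print(scene["mood"], scene["color_grade"])  # prints "tense desaturated"
```

A real system would derive the mood from speech/audio models rather than keyword lookup; the point is only the per-scene correlate-then-transform flow.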
  • Patent number: 10282827
    Abstract: Systems and methods for removing rain streak distortion from a distorted video are described. The system receives sample non-distorted images and sample distorted images of a video. The sample non-distorted images are indicative of a non-raining condition, and the sample distorted images are indicative of a raining condition in the video. The system further determines first temporal information from the sample distorted images and second temporal information from the sample non-distorted images. The first temporal information is indicative of a change in the rain streak distortion pattern, and the second temporal information is indicative of a change in a non-rain streak distortion pattern. Further, the system correlates the first temporal information with the second temporal information to generate a training model comprising one or more trained weights.
    Type: Grant
    Filed: September 22, 2017
    Date of Patent: May 7, 2019
    Assignee: Wipro Limited
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra, Prasanna Hegde
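The core step of the abstract — correlating temporal information from distorted frames with temporal information from clean frames to obtain trained weights — can be sketched minimally. This is not the patented method: frames are flattened lists of pixel intensities, and a single least-squares weight stands in for the claimed training model.

```python
def temporal_diff(frames):
    """Per-pixel change between consecutive frames (the 'temporal information')."""
    return [[b - a for a, b in zip(f0, f1)]
            for f0, f1 in zip(frames, frames[1:])]

def train_weight(distorted, clean):
    """Correlate distorted temporal info with clean temporal info:
    fit w minimising sum((w * d - c)^2) over all pixels and frame pairs."""
    d = [x for diff in temporal_diff(distorted) for x in diff]
    c = [x for diff in temporal_diff(clean) for x in diff]
    num = sum(di * ci for di, ci in zip(d, c))
    den = sum(di * di for di in d)
    return num / den if den else 0.0

# Toy data: rain adds a fast-changing component on top of a slowly changing scene.
clean     = [[10, 10], [12, 12], [14, 14]]
distorted = [[10, 30], [16, 12], [18, 34]]
w = train_weight(distorted, clean)  # small weight: rain changes dominate
```

In the patent the "one or more trained weights" would be the parameters of a learned model; the scalar fit above only shows the correlate-to-train idea.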
  • Patent number: 10283163
    Abstract: The present disclosure discloses a method and video generation system for generating video content based on user data. The video generation system receives user data sequentially from a user, and each sequence of user data is converted into text data. One or more objects, relations, emotions, and actions are identified from the user data by evaluating the text data, and a scene descriptor is generated for each sequence of user data by associating the one or more objects with the one or more relations, emotions, and actions. The method comprises performing a consistency check on the scene descriptor of each sequence of user data based on one or more previously stored scene descriptors, performing one or more modifications to inconsistent scene descriptors identified by the consistency check, generating video segments for each scene descriptor, and generating video content by combining the video segments associated with each scene descriptor.
    Type: Grant
    Filed: March 31, 2018
    Date of Patent: May 7, 2019
    Assignee: Wipro Limited
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra
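The consistency-check-and-modify step in the abstract can be illustrated with a minimal sketch. This is not the claimed system: here a scene descriptor is a dict mapping an object to its attributes, a new descriptor is inconsistent if it contradicts an attribute already stored for the same object, and the repair simply keeps the stored value — all assumptions for illustration.

```python
def check_and_fix(descriptor, history):
    """Consistency check against previously stored scene descriptors.
    Returns (fixed_descriptor, was_consistent)."""
    fixed, consistent = dict(descriptor), True
    for obj, attrs in descriptor.items():
        for prev in history:
            stored = prev.get(obj)
            if stored and stored != attrs:
                fixed[obj] = stored  # modification of the inconsistent entry
                consistent = False
    return fixed, consistent

# The car was red in an earlier scene, so a blue car is inconsistent.
history = [{"car": {"color": "red"}}]
fixed, ok = check_and_fix(
    {"car": {"color": "blue"}, "dog": {"size": "small"}}, history)
```

A production system would resolve conflicts with richer rules (recency, confidence scores); the sketch only shows where the check sits in the pipeline.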
  • Publication number: 20190057190
    Abstract: A method and system for providing context-based medical instructions to a patient are described. The method includes receiving patient data, including a patient profile and the treatment stage associated with the patient. The method further includes determining a physical state of the patient based on continuous monitoring of the patient's activities; the physical state indicates the receptive capability of the patient. Further, the method includes generating context-based medical instructions based on the physical state and the patient data. The context-based medical instructions are delivered to the patient. Further, the method includes monitoring an emotional state of the patient while the patient is performing the context-based medical instructions. The method further includes generating dynamically updated context-based medical instructions based on the emotional state of the patient; the emotional state indicates the patient's interest in receiving the updated context-based medical instructions.
    Type: Application
    Filed: September 28, 2017
    Publication date: February 21, 2019
    Inventors: Manjunath Ramachandra Iyer, Sethuraman Ulaganathan, Ghulam Mohiuddin Khan
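The two-stage adaptation in the abstract — instructions gated by physical state, then updated from emotional state — can be sketched with toy rules. None of the state labels or rules below come from the patent; they are assumptions to show the control flow only.

```python
def generate_instructions(physical_state, treatment_stage):
    """Physical state gates how much the patient can take in right now
    (assumed rule: a fatigued patient gets a shorter plan)."""
    if physical_state == "fatigued":
        return [f"{treatment_stage}: rest, then one short exercise"]
    return [f"{treatment_stage}: full exercise set",
            f"{treatment_stage}: log symptoms"]

def update_for_emotion(instructions, emotional_state):
    """Dynamic update: low engagement shortens the instruction list."""
    if emotional_state == "disengaged":
        return instructions[:1]
    return instructions

plan = generate_instructions("active", "week-2 physiotherapy")
plan = update_for_emotion(plan, "disengaged")
```

The patent would derive both states from continuous monitoring; the sketch hard-codes them to isolate the generate-then-update loop.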
  • Publication number: 20190050969
    Abstract: Systems and methods for removing rain streak distortion from a distorted video are described. The system receives sample non-distorted images and sample distorted images of a video. The sample non-distorted images are indicative of a non-raining condition, and the sample distorted images are indicative of a raining condition in the video. The system further determines first temporal information from the sample distorted images and second temporal information from the sample non-distorted images. The first temporal information is indicative of a change in the rain streak distortion pattern, and the second temporal information is indicative of a change in a non-rain streak distortion pattern. Further, the system correlates the first temporal information with the second temporal information to generate a training model comprising one or more trained weights.
    Type: Application
    Filed: September 22, 2017
    Publication date: February 14, 2019
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra, Prasanna Hegde
  • Publication number: 20190005128
    Abstract: The disclosed subject matter relates to digital media, including a method and system for generating contextual audio related to an image. An audio generating system may determine a scene theme and a viewer theme for a scene in the image. Further, audio files matching the scene objects and the contextual data may be retrieved in real time, and relevant audio files may be identified from among them based on the relationships between the scene theme, scene objects, viewer theme, contextual data, and metadata of the audio files. A contribution weightage may be assigned to the relevant and substitute audio files based on the contextual data, and the files may be correlated based on the contribution weightage, thereby generating the contextual audio related to the image. The present disclosure provides a feature wherein the contextual audio generated for an image may provide a holistic audio effect in accordance with the context of the image, thus recreating the audio that might have been present when the image was captured.
    Type: Application
    Filed: August 17, 2017
    Publication date: January 3, 2019
    Inventors: Adrita Barari, Manjunath Ramachandra, Ghulam Mohiuddin Khan, Sethuraman Ulaganathan
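The contribution-weightage step of the abstract can be illustrated minimally: each candidate audio file gets a weight from how well its metadata matches the scene objects and contextual data, and the weights are normalised before mixing. This is an illustrative sketch, not the patented pipeline; the tag-overlap scoring is an assumption.

```python
def contribution_weights(candidates, scene_objects, context):
    """candidates: {filename: set of metadata tags}.
    Weight = overlap between a file's tags and the scene/context terms,
    normalised so the mixing weights sum to 1."""
    wanted = set(scene_objects) | set(context)
    raw = {name: len(tags & wanted) for name, tags in candidates.items()}
    total = sum(raw.values()) or 1  # avoid division by zero when nothing matches
    return {name: w / total for name, w in raw.items()}

weights = contribution_weights(
    {"waves.wav": {"sea", "wind"}, "traffic.wav": {"car", "horn"}},
    scene_objects=["sea", "beach"],
    context=["wind", "evening"],
)
```

A real system would score semantic relatedness rather than exact tag overlap, and would then mix the audio streams using these weights.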
  • Publication number: 20180376084
    Abstract: A camera for generating distortion-free images, and a method thereof, is disclosed. The camera includes a plurality of lenses, each of which has a dedicated sensor. The camera further includes a processor communicatively coupled to the plurality of lenses, and a memory communicatively coupled to the processor and having instructions stored thereon that, on execution, cause the processor to capture a plurality of images through the plurality of lenses and to generate a single distortion-free image from the plurality of images based on a deep learning technique trained using a mapping of each of a plurality of sets of low-resolution images, generated in one or more environments, to an associated distortion-free image, wherein one or more low-resolution images in each of the plurality of sets are distorted.
    Type: Application
    Filed: August 10, 2017
    Publication date: December 27, 2018
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra, Prasanna Hegde, Adrita Barari
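The abstract maps several per-lens captures (some distorted) to one clean image via a trained deep network. As a toy stand-in for that fusion — explicitly not the patented deep learning technique — a per-pixel median across lenses already suppresses a distortion that affects only a minority of the captures:

```python
from statistics import median

def fuse(captures):
    """captures: one equally sized pixel list per lens.
    Per-pixel median rejects outlier (distorted) values."""
    return [median(px) for px in zip(*captures)]

# Three lenses; the second has a distorted (outlier) middle pixel.
fused = fuse([[10, 20, 30], [10, 90, 30], [11, 21, 29]])  # -> [10, 21, 30]
```

The median only illustrates why multiple dedicated sensors help; the patent's learned mapping can also super-resolve and correct distortions shared by all lenses.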
  • Publication number: 20180285062
    Abstract: A method and system are described for controlling an Internet of Things (IoT) device using multi-modal gesture commands. The method includes receiving one or more multi-modal gesture commands comprising at least one of one or more personalized gesture commands and one or more personalized voice commands of a user. The method includes detecting the one or more multi-modal gesture commands using at least one of a gesture grammar database and a voice grammar database. The method includes determining one or more control parameters and IoT device status information associated with a plurality of IoT devices in response to the detection. The method includes identifying the IoT device that the user intends to control from the plurality of IoT devices based on the user requirement, the IoT device status information, and line-of-sight information associated with the user. The method includes controlling the identified IoT device based on the one or more control parameters and the IoT device status information.
    Type: Application
    Filed: March 28, 2017
    Publication date: October 4, 2018
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra
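The flow in the abstract — detect a command against gesture/voice grammar databases, then pick the target device from status and line-of-sight information — can be sketched with toy lookup tables. All table entries and rules below are assumptions for illustration, not the claimed grammars.

```python
# Toy stand-ins for the gesture and voice grammar databases.
GESTURE_GRAMMAR = {"swipe_up": "increase", "swipe_down": "decrease"}
VOICE_GRAMMAR = {"turn on": "power_on", "turn off": "power_off"}

def detect_command(gesture=None, phrase=None):
    """Multi-modal detection: either modality may yield the control action."""
    if gesture in GESTURE_GRAMMAR:
        return GESTURE_GRAMMAR[gesture]
    if phrase in VOICE_GRAMMAR:
        return VOICE_GRAMMAR[phrase]
    return None

def pick_device(devices, line_of_sight):
    """devices: {name: {"online": bool}}.
    Prefer the first online device the user is looking at."""
    candidates = [d for d in line_of_sight if devices.get(d, {}).get("online")]
    return candidates[0] if candidates else None

devices = {"lamp": {"online": True}, "fan": {"online": False}}
cmd = detect_command(gesture="swipe_up")
target = pick_device(devices, line_of_sight=["fan", "lamp"])
```

Here the offline fan is skipped even though it is first in the line of sight, so the command would be applied to the lamp.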
  • Publication number: 20180232785
    Abstract: The present disclosure relates to a method and system for obtaining interactive user feedback in real time by a feedback obtaining system. The feedback obtaining system establishes a connection between the user's device and the service provider's server based on the user location received from the user device; receives static data of the user from the server and dynamic data of the user from a capturing device located at the service provider's site; identifies contextual information associated with the user based on the static and dynamic data; provides one or more feedback queries for the user from a database based on the contextual information; provides one or more sub-feedback queries for the user based on the user's responses to the one or more feedback queries; and obtains user feedback based on the user's responses to the one or more sub-feedback queries and the one or more feedback queries, together with implicit feedback. The use of implicit feedback together with actual feedback yields effective feedback from users.
    Type: Application
    Filed: March 31, 2017
    Publication date: August 16, 2018
    Inventors: Manjunath Ramachandra Iyer, Sethuraman Ulaganathan
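The query-then-sub-query flow of the abstract can be sketched minimally: contextual information derived from static and dynamic user data selects a feedback query, and the user's response selects a follow-up. The tables and the "last activity wins" rule are assumptions for illustration, not the claimed system.

```python
# Toy stand-ins for the feedback-query database.
QUERIES = {
    "checkout": "How was the checkout experience?",
    "support": "How was the support interaction?",
}
SUB_QUERIES = {
    ("checkout", "bad"): "What went wrong during checkout?",
    ("checkout", "good"): "What did you like most?",
}

def contextual_info(static_data, dynamic_data):
    """Toy context rule: the activity just observed at the site wins
    over the stored profile."""
    return dynamic_data.get("activity") or static_data.get("usual_activity")

def next_prompt(context, response=None):
    """No response yet -> a feedback query; with a response -> a sub-query."""
    if response is None:
        return QUERIES.get(context)
    return SUB_QUERIES.get((context, response))

ctx = contextual_info({"usual_activity": "support"}, {"activity": "checkout"})
q = next_prompt(ctx)                    # the context-based feedback query
sub = next_prompt(ctx, response="bad")  # the follow-up sub-feedback query
```

Implicit feedback (e.g. dwell time from the capturing device) would be combined with these explicit responses in the full system.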