Patents by Inventor Sethuraman Ulaganathan

Sethuraman Ulaganathan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11687825
    Abstract: The present invention discloses a method and a virtual assistance system for determining responses to user queries. The virtual assistance system receives data comprising a plurality of interactions between one or more users and one or more real agents for resolving a query, where the data is classified into user data and real agent data. Entities and intent identified from each of the classified user data and real agent data are classified using a predefined domain model. The entities and intent are combined to identify a plurality of sequences of resolution data. Based on the classification of the entities and intent, the virtual assistance system determines a first set of resolution data and a second set of resolution data. Thereafter, each of the plurality of sequences of resolution data is clustered into one or more categories associated with a type of user, based on the first and second sets of resolution data, parameters associated with the users, and historical resolution data used for responding to the query.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: June 27, 2023
    Assignee: Wipro Limited
    Inventors: Ghulam Mohiuddin Khan, Deepanker Singh, Sethuraman Ulaganathan
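The clustering step described in the abstract can be pictured as grouping (entities, intent) resolution sequences by user type. The sketch below is a minimal illustration, not the patented method; the `user_type`, `entities`, and `intent` keys are hypothetical stand-ins for what the domain model would produce.

```python
from collections import defaultdict

def cluster_resolution_sequences(interactions):
    """Group (entities, intent) resolution sequences by a coarse user type.

    Each interaction dict carries illustrative keys; a real system would
    derive them from the classified user/agent data and the domain model.
    """
    clusters = defaultdict(list)
    for it in interactions:
        sequence = (tuple(sorted(it["entities"])), it["intent"])
        clusters[it["user_type"]].append(sequence)
    return dict(clusters)

interactions = [
    {"user_type": "novice", "entities": ["router", "led"], "intent": "troubleshoot"},
    {"user_type": "expert", "entities": ["firmware"], "intent": "upgrade"},
    {"user_type": "novice", "entities": ["password"], "intent": "reset"},
]
clusters = cluster_resolution_sequences(interactions)
```

Responses to a new query would then be drawn from the cluster matching the querying user's type.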
  • Patent number: 11546403
    Abstract: Disclosed herein is a method and system for providing personalized content to a user. The method comprises categorizing original content to be provided to the user into a plurality of data packets. The data packets include data of a similar domain. The user is categorized into one of a plurality of classes, and a vocabulary of words suitable for the class is identified. The class is associated with a domain. The system identifies relevant content for the class. Thereafter, the system modifies the original content either by inserting a new data packet or by deleting a data packet. A target content is generated for the class based on the vocabulary of words associated with the class and the modified original content. Thereafter, the target content is provided to the class by incorporating one or more features of a presenter for presenting the target content. The present disclosure enhances user experience by personalizing content for the user.
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: January 3, 2023
    Assignee: Wipro Limited
    Inventors: Manjunath Ramachandra Iyer, Sethuraman Ulaganathan
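The insert/delete/rewrite flow in the abstract can be sketched as a simple packet filter plus vocabulary substitution. This is an illustrative toy, assuming packets are dicts with `domain` and `text` fields (names not taken from the patent).

```python
def personalize(packets, class_vocab, relevant, irrelevant):
    """Drop packets from domains irrelevant to the user's class, insert
    class-relevant packets, and rewrite words using the class vocabulary.
    All field names and the word-level substitution are illustrative.
    """
    kept = [p for p in packets if p["domain"] not in irrelevant]
    kept += [p for p in relevant if p not in kept]
    return [
        {**p, "text": " ".join(class_vocab.get(w, w) for w in p["text"].split())}
        for p in kept
    ]

packets = [
    {"domain": "sports", "text": "match recap"},
    {"domain": "finance", "text": "stock moved up"},
]
vocab = {"stock": "share"}
result = personalize(packets, vocab, relevant=[], irrelevant={"sports"})
```

The actual system would select the vocabulary and relevant packets from the user's class and domain rather than receiving them as arguments.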
  • Patent number: 11366988
    Abstract: This disclosure relates to a method and system for dynamically annotating data or validating annotated data. The method may include receiving input data comprising a plurality of input data points. The method may further include one of: a) generating a plurality of annotations for each of the plurality of input data points using at least one of a state-label mapping model and a comparative artificial neural network (ANN) model, or b) receiving the plurality of annotations for each of the plurality of input data points from an external device or from a user, and validating the plurality of annotations using at least one of the state-label mapping model and the comparative ANN model.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: June 21, 2022
    Assignee: Wipro Limited
    Inventors: Ghulam Mohiuddin Khan, Deepanker Singh, Sethuraman Ulaganathan
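The validation branch (b) can be sketched as checking each incoming annotation against the label predicted by a state-label map. Here the state extractor is a trivial threshold bucketing, purely a stand-in for the trained model in the abstract.

```python
def validate_annotations(points, annotations, state_label_map):
    """Return a per-point flag: True if the annotation agrees with the
    label the state-label mapping predicts for that point's state.

    `state_of` is a toy bucketing function, not the patented model.
    """
    def state_of(x):
        return "high" if x >= 0.5 else "low"
    return [ann == state_label_map[state_of(x)]
            for x, ann in zip(points, annotations)]
```

Branch (a), generation, would run the same mapping forward to emit labels instead of checking them.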
  • Patent number: 11087183
    Abstract: A method and system of multi-modality classification using augmented data is disclosed. The method includes generating a pattern for each of a plurality of augmented data associated with each of a plurality of object classes, based on at least one modality associated with each of the plurality of object classes, using a Long Short-Term Memory (LSTM) classifier and Layer-wise Relevance Propagation (LRP). The method further includes classifying an input image into a first object class of the plurality of object classes based on one or more objects within the input image using a Convolutional Neural Network (CNN). The method further includes re-classifying the input image into one of the first object class or a second object class of the plurality of object classes when the accuracy of classification by the CNN into the first object class is below a matching threshold.
    Type: Grant
    Filed: August 14, 2019
    Date of Patent: August 10, 2021
    Assignee: Wipro Limited
    Inventors: Manjunath Ramachandra Iyer, Sethuraman Ulaganathan
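The fallback logic in the abstract, accept the CNN's class when confident, otherwise re-classify using the modality patterns, can be sketched with plain score dictionaries. The softmax-style scores and the 0.8 threshold are assumptions for illustration.

```python
def classify_with_fallback(primary_scores, pattern_scores, threshold=0.8):
    """Pick the primary (CNN-style) top class; if its confidence is below
    the threshold, re-classify among the top-2 classes using the
    secondary (pattern-based) scores. All scores are hypothetical.
    """
    best = max(primary_scores, key=primary_scores.get)
    if primary_scores[best] >= threshold:
        return best
    # Low confidence: let the pattern classifier arbitrate the top two.
    top2 = sorted(primary_scores, key=primary_scores.get, reverse=True)[:2]
    return max(top2, key=lambda c: pattern_scores.get(c, 0.0))
```

A confident prediction is returned directly; an ambiguous one may flip to the second class if the augmented-data patterns favor it.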
  • Patent number: 11030815
    Abstract: The invention relates generally to Virtual Reality (VR) and more particularly to a method and system for rendering VR content. The method includes identifying user interaction with at least one object within a VR environment. The method further includes training a deep learning feature extraction model to identify predetermined and undetermined interactions in the VR environment. The deep learning feature extraction model is trained based on a plurality of scene images and associated applied templates that are provided to the deep learning feature extraction model. Each of the applied templates identifies at least one spurious object and at least one object of interest in an associated scene image. The method includes classifying the user interaction as one of a predetermined interaction and an undetermined interaction based on the deep learning feature extraction model. The method further includes rendering VR content in response to the user interaction being classified.
    Type: Grant
    Filed: October 22, 2019
    Date of Patent: June 8, 2021
    Assignee: Wipro Limited
    Inventors: Manjunath Ramachandra Iyer, Sethuraman Ulaganathan
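The predetermined-vs-undetermined decision can be sketched as a nearest-template check on feature vectors. Cosine similarity and the 0.75 cutoff below are illustrative stand-ins for the deep feature extractor the abstract describes.

```python
import math

def classify_interaction(features, known_interactions, min_similarity=0.75):
    """Label an interaction 'predetermined' if its feature vector is close
    enough to any known interaction template, else 'undetermined'.

    Cosine similarity stands in for the learned feature comparison.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    best = max((cosine(features, t) for t in known_interactions), default=0.0)
    return "predetermined" if best >= min_similarity else "undetermined"
```

An undetermined label could then trigger a generic VR response while predetermined interactions map to authored content.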
  • Publication number: 20210074064
    Abstract: A method and system for rendering Virtual Reality (VR) content is disclosed. The method may include identifying user interaction with at least one object within a VR environment. The method may further include training a deep learning feature extraction model to identify predetermined and undetermined interactions in the VR environment. The deep learning feature extraction model is trained based on a plurality of scene images and associated applied templates that are provided to the deep learning feature extraction model. Each of the applied templates identifies at least one spurious object and at least one object of interest in an associated scene image. The method may include classifying the user interaction as one of a predetermined interaction and an undetermined interaction based on the deep learning feature extraction model. The method may further include rendering VR content in response to the user interaction being classified.
    Type: Application
    Filed: October 22, 2019
    Publication date: March 11, 2021
    Inventors: Manjunath Ramachandra Iyer, Sethuraman Ulaganathan
  • Publication number: 20200410283
    Abstract: A method and system of multi-modality classification using augmented data is disclosed. The method includes generating a pattern for each of a plurality of augmented data associated with each of a plurality of object classes, based on at least one modality associated with each of the plurality of object classes, using a Long Short-Term Memory (LSTM) classifier and Layer-wise Relevance Propagation (LRP). The method further includes classifying an input image into a first object class of the plurality of object classes based on one or more objects within the input image using a Convolutional Neural Network (CNN). The method further includes re-classifying the input image into one of the first object class or a second object class of the plurality of object classes when the accuracy of classification by the CNN into the first object class is below a matching threshold.
    Type: Application
    Filed: August 14, 2019
    Publication date: December 31, 2020
    Inventors: Manjunath Ramachandra Iyer, Sethuraman Ulaganathan
  • Publication number: 20200387825
    Abstract: The present invention discloses a method and a virtual assistance system for determining responses to user queries. The virtual assistance system receives data comprising a plurality of interactions between one or more users and one or more real agents for resolving a query, where the data is classified into user data and real agent data. Entities and intent identified from each of the classified user data and real agent data are classified using a predefined domain model. The entities and intent are combined to identify a plurality of sequences of resolution data. Based on the classification of the entities and intent, the virtual assistance system determines a first set of resolution data and a second set of resolution data. Thereafter, each of the plurality of sequences of resolution data is clustered into one or more categories associated with a type of user, based on the first and second sets of resolution data, parameters associated with the users, and historical resolution data used for responding to the query.
    Type: Application
    Filed: August 23, 2019
    Publication date: December 10, 2020
    Inventors: Ghulam Mohiuddin Khan, Deepanker Singh, Sethuraman Ulaganathan
  • Publication number: 20200380312
    Abstract: This disclosure relates to a method and system for dynamically annotating data or validating annotated data. The method may include receiving input data comprising a plurality of input data points. The method may further include one of: a) generating a plurality of annotations for each of the plurality of input data points using at least one of a state-label mapping model and a comparative artificial neural network (ANN) model, or b) receiving the plurality of annotations for each of the plurality of input data points from an external device or from a user, and validating the plurality of annotations using at least one of the state-label mapping model and the comparative ANN model.
    Type: Application
    Filed: July 11, 2019
    Publication date: December 3, 2020
    Inventors: Ghulam Mohiuddin Khan, Deepanker Singh, Sethuraman Ulaganathan
  • Publication number: 20200310608
    Abstract: Systems and methods of providing real-time assistance to a presenter during rendering of content are disclosed. In one embodiment, the method may include receiving a multi-modal input from the presenter with respect to the rendering of the content by the presenter, and performing a real-time analysis of at least one of the multi-modal input or a historical rendering of the content by the presenter. The method may further include dynamically determining a need for providing assistance to the presenter based on the real-time analysis. The method may further include dynamically generating, in response to the need, supporting visual content based on the real-time analysis and a plurality of contents in a content database, and dynamically rendering the supporting visual content on a rendering device in possession of the presenter based on the real-time analysis.
    Type: Application
    Filed: June 17, 2019
    Publication date: October 1, 2020
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra
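The "dynamically determining a need for assistance" step can be sketched as a majority vote over a few live delivery signals. The three signals (pause length, filler-word rate, historical difficulty) and their thresholds are assumptions for illustration, not taken from the patent.

```python
def needs_assistance(pause_seconds, filler_rate, past_difficulty,
                     pause_limit=4.0, filler_limit=0.2):
    """Decide whether to push supporting visuals to the presenter.

    Fires when at least two of three illustrative distress signals
    exceed their (hypothetical) thresholds.
    """
    score = ((pause_seconds > pause_limit)
             + (filler_rate > filler_limit)
             + (past_difficulty > 0.5))
    return score >= 2
```

In the described system this decision would be fed by real-time multi-modal analysis rather than precomputed scalars.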
  • Patent number: 10771807
    Abstract: A method and system for compressing videos using deep learning is disclosed. The method includes segmenting each of a plurality of frames associated with a video into a plurality of super blocks. The method further includes determining a block size for partition of each of the plurality of super blocks into a plurality of sub blocks, based on a feature of each of the plurality of super blocks, using a Convolutional Neural Network (CNN). The method further includes generating prediction data for each of the plurality of sub blocks based on a motion vector predicted and learned by the CNN. The method further includes determining residual data for each of the plurality of sub blocks by subtracting the prediction data from the associated original data. The method includes generating transformed, quantized residual data using a transformation algorithm and a quantization algorithm.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: September 8, 2020
    Assignee: Wipro Limited
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra
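The residual/quantization stage at the end of the pipeline can be shown on a single sub-block of samples. The step size of 4 is arbitrary, and `prediction` is assumed to come from the CNN-predicted motion vector, which is out of scope here; the transform step is omitted for brevity.

```python
def encode_block(original, prediction, q_step=4):
    """Residual coding for one sub-block: subtract the prediction from
    the original samples, then quantize by a uniform step size."""
    residual = [o - p for o, p in zip(original, prediction)]
    return [round(r / q_step) for r in residual]

def decode_block(quantized, prediction, q_step=4):
    """Inverse: dequantize the residual and add back the prediction."""
    return [q * q_step + p for q, p in zip(quantized, prediction)]

original = [10, 20, 30]
prediction = [8, 18, 33]          # assumed CNN motion-compensated values
quantized = encode_block(original, prediction)
decoded = decode_block(quantized, prediction)
```

The better the CNN's prediction, the smaller the residuals and thus the fewer bits the quantized data needs, which is the compression lever the abstract describes.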
  • Publication number: 20200213379
    Abstract: Disclosed herein is a method and system for providing personalized content to a user. The method comprises categorizing original content to be provided to user into a plurality of data packets. The data packets include data of similar domain. The user is categorized into one of plurality of classes and a vocabulary of words suitable for the class is identified. The class is associated with a domain. The system identifies relevant content for the class. Thereafter, the system modifies the original content by either by inserting a new data packet or deleting a data packet. A target content is generated for the class based on vocabulary of words associated with class and modified original content. Thereafter, the target content is provided to the class by incorporating one or more features of a presenter for presenting the target content. The present disclosure enhances user experience by personalizing content for the user.
    Type: Application
    Filed: February 22, 2019
    Publication date: July 2, 2020
    Inventors: Manjunath Ramachandra Iyer, Sethuraman Ulaganathan
  • Patent number: 10689110
    Abstract: This disclosure relates generally to drones, and more particularly to a method and system for performing inspection and maintenance tasks on three-dimensional (3D) structures using drones. In one embodiment, a method for performing a task with respect to a 3D structure is disclosed. The method includes receiving a simulated 3D view of the 3D structure. The simulated 3D view comprises a hierarchy of views augmented to different degrees. The method further includes configuring one or more paths for performing a task on the 3D structure based on the hierarchy of augmented views, historical data on substantially similar tasks, and a capability of at least one drone. The method further includes learning maneuverability and operations with respect to the one or more paths and the task based on the historical data on substantially similar tasks, and effecting performance of the task based on the learning through the at least one drone.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: June 23, 2020
    Assignee: Wipro Limited
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra
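Path configuration constrained by drone capability and guided by history can be sketched as a filter-then-rank over candidate paths. The path records, the range field, and the success table are all hypothetical.

```python
def choose_path(candidate_paths, drone_range, history_success):
    """Pick the candidate inspection path with the best historical
    success rate among those the drone can actually fly.

    `length` vs. `drone_range` is an illustrative capability check; a
    real planner would also weigh payload, battery, and obstacles.
    """
    feasible = [p for p in candidate_paths if p["length"] <= drone_range]
    if not feasible:
        return None
    return max(feasible, key=lambda p: history_success.get(p["name"], 0.0))

paths = [{"name": "A", "length": 120}, {"name": "B", "length": 80}]
best = choose_path(paths, drone_range=100,
                   history_success={"A": 0.9, "B": 0.6})
```

Path A scores better historically but exceeds the drone's range, so B is selected.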
  • Publication number: 20200174999
    Abstract: Disclosed herein is a method and an information updating system for dynamically updating a product manual. In an embodiment, information related to a product is received and analyzed to identify issues in handling the product. Thereafter, resolution information required for resolving the identified issues is extracted and compared with existing resolution information in the product manual to identify a missing portion of the product manual. Subsequently, the product manual is updated with the missing portion based on a logical resolution graph of the product manual. In an embodiment, the present disclosure helps in building comprehensive and reliable product manuals, thereby enhancing usability of the products.
    Type: Application
    Filed: January 30, 2019
    Publication date: June 4, 2020
    Inventors: Manjunath Ramachandra Iyer, Sethuraman Ulaganathan
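The gap-detection step, comparing extracted resolution information against what the manual already covers, reduces to a set difference that preserves the order in which steps were seen. This is a minimal sketch; walking the manual's logical resolution graph is simplified to flat membership.

```python
def missing_resolutions(extracted_steps, manual_steps):
    """Return resolution steps seen in field issue reports but absent
    from the manual, in the order they were extracted.

    Flat membership stands in for the abstract's resolution-graph walk.
    """
    present = set(manual_steps)
    return [s for s in extracted_steps if s not in present]
```

The returned steps would then be inserted into the manual at the position the resolution graph dictates.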
  • Patent number: 10621630
    Abstract: The present disclosure relates to a method and system for obtaining interactive user feedback in real time using a feedback obtaining system. The feedback obtaining system establishes a connection between the user's device and a server of the service provider based on the user location received from the user device, receives static data of the user from the server and dynamic data of the user from a capturing device located at the site of the service provider, identifies contextual information associated with the user based on the static and dynamic data, provides one or more feedback queries for the user from a database based on the contextual information, provides one or more sub-feedback queries for the user based on the user's responses to the one or more feedback queries, and obtains user feedback based on the user's responses to the one or more sub-feedback queries and the one or more feedback queries, together with implicit feedback. The use of implicit feedback together with explicit feedback gives a more effective picture of user sentiment.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: April 14, 2020
    Assignee: Wipro Limited
    Inventors: Manjunath Ramachandra Iyer, Sethuraman Ulaganathan
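The contextual query-selection step can be sketched as matching a user's context tags against a tagged query bank. The tag sets and the overlap rule are illustrative assumptions, not the patented matching logic.

```python
def next_queries(context, query_bank):
    """Select feedback queries whose tags overlap the user's contextual
    information. `context` is a set of illustrative context tags derived
    from the static/dynamic data; each bank entry carries a tag set.
    """
    return [q["text"] for q in query_bank if q["tags"] & context]

bank = [
    {"text": "Rate the wait time", "tags": {"queue"}},
    {"text": "Rate the food", "tags": {"dining"}},
]
queries = next_queries({"queue", "parking"}, bank)
```

Sub-feedback queries would be selected the same way, with the user's answers folded back into the context.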
  • Patent number: 10587828
    Abstract: A camera for generating distortion-free images, and a method thereof, is disclosed. The camera includes a plurality of lenses, wherein each of the plurality of lenses has a dedicated sensor. The camera further includes a processor communicatively coupled to the plurality of lenses. The camera further includes a memory communicatively coupled to the processor and having instructions stored thereon, causing the processor, on execution, to capture a plurality of images through the plurality of lenses and to generate a single distortion-free image from the plurality of images based on a deep learning technique trained using a mapping of each of a plurality of sets of low-resolution images generated in one or more environments to an associated distortion-free image, wherein one or more low-resolution images in each of the plurality of sets are distorted.
    Type: Grant
    Filed: August 10, 2017
    Date of Patent: March 10, 2020
    Assignee: Wipro Limited
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra, Prasanna Hegde, Adrita Barari
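The many-lenses-to-one-image idea can be illustrated with a crude pixel-wise fusion: combining per-lens captures so that a distortion present in only some lenses is voted out. A median is only a stand-in here; the patent describes a learned mapping, not this rule.

```python
import statistics

def fuse_images(images):
    """Fuse same-sized per-lens grayscale images (lists of rows) into one
    by taking the pixel-wise median, so an outlier value caused by
    distortion in one lens is suppressed by the other lenses."""
    rows, cols = len(images[0]), len(images[0][0])
    return [
        [statistics.median(img[r][c] for img in images) for c in range(cols)]
        for r in range(rows)
    ]

# Three 1x2 captures; the third lens has a distorted second pixel (50).
fused = fuse_images([[[10, 10]], [[12, 10]], [[11, 50]]])
```

The trained network in the abstract would learn a far richer mapping from sets of distorted low-resolution captures to a clean image.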
  • Patent number: 10459687
    Abstract: A method and system are described for controlling an Internet of Things (IoT) device using multi-modal gesture commands. The method includes receiving one or more multi-modal gesture commands comprising at least one of one or more personalized gesture commands and one or more personalized voice commands of a user. The method includes detecting the one or more multi-modal gesture commands using at least one of a gesture grammar database and a voice grammar database. The method includes determining one or more control parameters and IoT device status information associated with a plurality of IoT devices in response to the detection. The method includes identifying the IoT device that the user intends to control from the plurality of IoT devices based on the user requirement, the IoT device status information, and line-of-sight information associated with the user. The method includes controlling the identified IoT device based on the one or more control parameters and the IoT device status information.
    Type: Grant
    Filed: March 28, 2017
    Date of Patent: October 29, 2019
    Assignee: Wipro Limited
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra
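The device-identification step, picking which IoT device the user means from the requirement plus line of sight, can be sketched as filter-by-capability then rank-by-gaze. Device fields and the bearing-angle metric are hypothetical.

```python
def pick_target_device(devices, gaze_direction, requirement):
    """Choose the device the user likely intends: it must support the
    requested capability and lie nearest the user's line of sight.

    `bearing` (degrees) and `capabilities` are illustrative fields; a
    real system would also consult device status.
    """
    capable = [d for d in devices if requirement in d["capabilities"]]
    if not capable:
        return None
    return min(capable, key=lambda d: abs(d["bearing"] - gaze_direction))

devices = [
    {"name": "lamp", "capabilities": {"dim"}, "bearing": 30},
    {"name": "tv", "capabilities": {"dim", "mute"}, "bearing": 90},
]
target = pick_target_device(devices, gaze_direction=80, requirement="dim")
```

Both devices can dim, but the TV sits closest to where the user is looking, so it is selected.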
  • Patent number: 10423659
    Abstract: The disclosed subject matter relates to digital media, including a method and system for generating contextual audio related to an image. An audio generating system may determine the scene-theme and viewer theme of a scene in the image. Further, audio files matching the scene-objects and the contextual data may be retrieved in real time, and relevant audio files may be identified from among them based on the relationships between the scene-theme, scene-objects, viewer theme, contextual data, and metadata of the audio files. A contribution weightage may be assigned to the relevant and substitute audio files based on the contextual data, and the files may be correlated based on the contribution weightage, thereby generating the contextual audio related to the image. The present disclosure provides a feature wherein the contextual audio generated for an image gives a holistic audio effect in accordance with the context of the image, thus recreating the audio that might have been present when the image was captured.
    Type: Grant
    Filed: August 17, 2017
    Date of Patent: September 24, 2019
    Assignee: Wipro Limited
    Inventors: Adrita Barari, Manjunath Ramachandra, Ghulam Mohiuddin Khan, Sethuraman Ulaganathan
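Combining audio files by contribution weightage can be shown as a normalized weighted mix of sample streams. Normalizing the weightages to sum to 1 is an assumption for this sketch, not something the abstract specifies.

```python
def mix_audio(tracks, weights):
    """Mix equally long sample lists into one track, scaling each by its
    contribution weightage (normalized here so the weights sum to 1).
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    return [sum(w * t[i] for w, t in zip(norm, tracks))
            for i in range(len(tracks[0]))]
```

A track with three times the weightage of another contributes three quarters of the mixed signal.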
  • Publication number: 20190248485
    Abstract: This disclosure relates generally to drones, and more particularly to method and system for performing inspection and maintenance tasks of three-dimensional structures (3D) using drones. In one embodiment, a method for performing a task with respect to a 3D structure is disclosed. The method includes receiving a simulated 3D view of the 3D structure. The simulated 3D view comprises a hierarchy of augmented views to different degrees. The method further includes configuring one or more paths for performing a task on the 3D structure based on the hierarchy of augmented views, historical data on substantially similar tasks, and a capability of the at least one drone. The method further includes learning maneuverability and operations with respect to the one or more paths and the task based on the historical data on substantially similar tasks, and effecting performance of the task based on the learning through the at least one drone.
    Type: Application
    Filed: March 27, 2018
    Publication date: August 15, 2019
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra
  • Patent number: 10304455
    Abstract: Disclosed herein is a method and system for performing a task based on user input. One or more requirements related to the task are extracted from the user input. Based on the requirements, a plurality of resources required for performing the task is retrieved and integrated to generate action sequences. Further, a simulated model is generated based on the action sequences and provided to the user for feedback. Finally, the action sequences are implemented based on the user feedback to perform the task. In an embodiment, the method of the present disclosure is capable of automatically selecting and integrating the resources required for implementing a task, thereby reducing the overall time required for implementing the task intended by the user.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: May 28, 2019
    Assignee: Wipro Limited
    Inventors: Sethuraman Ulaganathan, Manjunath Ramachandra
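The requirements-to-action-sequence step can be sketched as mapping each extracted requirement to a registered resource, keeping the order and surfacing anything that cannot be satisfied. The catalog contents are illustrative placeholders.

```python
def build_action_sequence(requirements, resource_catalog):
    """Chain each extracted requirement to a resource from the catalog
    into an ordered action sequence; unmet requirements are reported
    rather than silently dropped. Catalog keys are hypothetical.
    """
    actions, unmet = [], []
    for req in requirements:
        if req in resource_catalog:
            actions.append((req, resource_catalog[req]))
        else:
            unmet.append(req)
    return actions, unmet

catalog = {"fetch_data": "http_client", "parse": "json_parser"}
actions, unmet = build_action_sequence(["fetch_data", "render", "parse"], catalog)
```

The resulting sequence would feed the simulated model shown to the user, with `unmet` prompting a request for clarification.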