Patents by Inventor Gopichand Agnihotram

Gopichand Agnihotram has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11978165
    Abstract: The disclosure relates to a system and method for generating recommendations for capturing images of a real-life object with essential features. The method includes detecting an Augmented Reality (AR) plane for a target object. The method further includes capturing a set of poses corresponding to the target object and a set of coordinate points in the AR plane. The set of poses includes a tracking marker, and the set of coordinate points indicates a location of the target object. The method further includes determining an instant distance between an AR imaging device and the target object, and an instant angle of the AR imaging device with respect to the target object. The method further includes dynamically generating the recommendations for adjusting a position and an orientation of the AR imaging device with respect to the target object.
    Type: Grant
    Filed: June 1, 2022
    Date of Patent: May 7, 2024
    Assignee: Wipro Limited
    Inventors: Shrivardhan Satish Suryawanshi, Gopichand Agnihotram, Vivek Kumar Varma Nadimpalli
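To illustrate the geometry behind this kind of recommendation, here is a minimal Python sketch: it computes the instant distance and viewing angle between an imaging device and a target, then maps them to a repositioning hint. The function names, thresholds, and coordinate conventions are illustrative assumptions, not the patented implementation.

```python
import math

def distance_and_angle(device_pos, target_pos, device_forward):
    """Return the distance from the device to the target and the angle
    (degrees) between the device's forward axis and the target direction."""
    # Vector pointing from the device toward the target object
    dx = [t - d for d, t in zip(device_pos, target_pos)]
    distance = math.sqrt(sum(c * c for c in dx))
    # Angle via the dot product between forward axis and target direction
    dot = sum(f * c for f, c in zip(device_forward, dx))
    norm_f = math.sqrt(sum(f * f for f in device_forward))
    angle = math.degrees(math.acos(dot / (norm_f * distance)))
    return distance, angle

def recommend(distance, angle, ideal_range=(0.5, 1.5), max_angle=15.0):
    """Map distance (meters) and angle (degrees) to a repositioning hint."""
    if distance < ideal_range[0]:
        return "move back"
    if distance > ideal_range[1]:
        return "move closer"
    if angle > max_angle:
        return "reorient toward object"
    return "capture"
```

For example, a device 2 m away and pointing straight at the object would be told to move closer under the assumed 0.5–1.5 m ideal range.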
  • Publication number: 20230316664
    Abstract: The disclosure relates to a system and method for generating recommendations for capturing images of a real-life object with essential features. The method includes detecting an Augmented Reality (AR) plane for a target object. The method further includes capturing a set of poses corresponding to the target object and a set of coordinate points in the AR plane. The set of poses includes a tracking marker, and the set of coordinate points indicates a location of the target object. The method further includes determining an instant distance between an AR imaging device and the target object, and an instant angle of the AR imaging device with respect to the target object. The method further includes dynamically generating the recommendations for adjusting a position and an orientation of the AR imaging device with respect to the target object.
    Type: Application
    Filed: June 1, 2022
    Publication date: October 5, 2023
    Inventors: Shrivardhan Satish SURYAWANSHI, Gopichand AGNIHOTRAM, Vivek Kumar Varma NADIMPALLI
  • Patent number: 11756297
    Abstract: The disclosure relates to a system and method for providing assistance to a user using augmented reality. The method includes acquiring a video stream and a set of data associated with a task being performed by a user, in real time, using a camera and/or a sensor device. The video stream includes sequential frames. The method further includes determining a present state associated with the task based on the sequential frames using an Artificial Neural Network (ANN) based action prediction model; determining scenarios and events corresponding to the scenarios based on the video stream and the set of data using an ANN based augmented intelligence model; and dynamically determining the sequential instructions required for assisting the user to accomplish the task, based on the present state and the events associated with the task, using at least one of a rule-based engine and an ANN based instruction prediction model.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: September 12, 2023
    Assignee: Wipro Limited
    Inventors: Vivek Kumar Varma Nadimpalli, Gopichand Agnihotram
  • Publication number: 20230252768
    Abstract: A method and content annotating system for dynamically generating annotated content for training a model for AR-VR applications. The content annotating system receives a plurality of images of an object associated with the AR-VR applications. The content annotating system obtains pre-annotated datasets related to the plurality of images from a user. The content annotating system generates a plurality of augmented image datasets and extracts a set of features from the pre-annotated datasets and the plurality of augmented image datasets. The content annotating system compares the sets of features to identify regions of interest (ROIs) on the plurality of augmented image datasets. Further, the content annotating system generates annotated content for the plurality of augmented image datasets based on the comparison. The annotated content and the pre-annotated datasets are used to train the model associated with the AR-VR applications.
    Type: Application
    Filed: March 31, 2022
    Publication date: August 10, 2023
    Inventors: Gopichand AGNIHOTRAM, Shrivardhan Satish SURYAWANSHI
  • Patent number: 11593973
    Abstract: A method and a system for Augmented Reality (AR) content creation is disclosed. The method includes creating a feature vector corresponding to each of a sequence of frames extracted from a video, based on a plurality of captured features. The method further includes determining a vector distance between each pair of two consecutive frames from the sequence of frames, based on the feature vector associated with each of the two consecutive frames. The method further includes dividing the video into a plurality of frames based on the determined vector distance. The method further includes creating a storyline based on an object and an action associated with the object in each of the plurality of frames, and generating a set of instructions for a user based on the storyline created for each of the plurality of frames and a real-time video stream capturing a current state of the user environment.
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: February 28, 2023
    Assignee: Wipro Limited
    Inventors: Vivek Kumar Varma Nadimpalli, Gopichand Agnihotram
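The core step of this entry, splitting a video wherever the feature distance between consecutive frames jumps, can be sketched in a few lines of Python. This is a hedged illustration: the feature extraction, the distance metric (Euclidean here), and the threshold are assumptions, not details disclosed in the abstract.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def split_on_distance(frame_features, threshold):
    """Group frame indices into segments, starting a new segment whenever
    the distance between consecutive frames' feature vectors exceeds
    the threshold."""
    segments, current = [], [0]
    for i in range(1, len(frame_features)):
        if euclidean(frame_features[i - 1], frame_features[i]) > threshold:
            segments.append(current)
            current = []
        current.append(i)
    segments.append(current)
    return segments
```

Each resulting segment would then feed the storyline-creation step, with one object/action pair recognized per segment.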
  • Patent number: 11562206
    Abstract: This disclosure relates to a method and system for providing personalized driving or navigation assistance. The method may include receiving sensory data with respect to a vehicle from a plurality of sensors and multi-channel input data with respect to one or more passengers inside the vehicle from a plurality of onboard monitoring devices; performing fusion of the sensory data and the multi-channel input data to generate multimodal fusion data; determining one or more contextual events based on the multimodal fusion data using a machine learning model, wherein the machine learning model is trained using an incremental learning process and comprises a supervised machine learning model and an unsupervised machine learning model; analysing the one or more contextual events to generate a personalized driving recommendation; and providing the personalized driving recommendation to a driver passenger or a navigation device.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: January 24, 2023
    Assignee: Wipro Limited
    Inventor: Gopichand Agnihotram
  • Patent number: 11442975
    Abstract: The present invention relates to a method for generating an abstractive summary. The method comprises receiving a query for generating an abstractive summary from a document and splitting the query into one or more lexical units. Further, a semantic graph and a graph index are generated based on a role assigned to the one or more lexical units. One or more sentences having a semantic graph analogous to the generated semantic graph of the query are retrieved from the document, and a measure of information is determined for the retrieved one or more sentences. Finally, at least one of re-ordering and re-phrasing is performed on at least one of the retrieved one or more sentences, based on the computed measure of information and the one or more lexical units in the retrieved one or more sentences, to generate the abstractive summary.
    Type: Grant
    Filed: July 18, 2019
    Date of Patent: September 13, 2022
    Assignee: Wipro Limited
    Inventors: Gopichand Agnihotram, Meenakshi Sundaram Murugeshan
  • Patent number: 11416532
    Abstract: A method of identifying relevant keywords from a document is disclosed. The method includes splitting the text of the document into a plurality of keyword samples, such that each of the plurality of keyword samples comprises a predefined number of keywords extracted in sequence, and each pair of adjacent keyword samples includes a plurality of common words. The method further includes determining a relevancy score for each of the plurality of keyword samples based on at least one of a trained Convolutional Neural Network (CNN) model and a keyword repository. The method further includes classifying keywords from each of the plurality of keyword samples as relevant keywords or non-relevant keywords based on the relevancy score determined for each of the plurality of keyword samples.
    Type: Grant
    Filed: July 17, 2018
    Date of Patent: August 16, 2022
    Assignee: Wipro Limited
    Inventors: Gopichand Agnihotram, Suyog Trivedi, Rajesh Kumar
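The sampling step this entry describes, fixed-size keyword windows where adjacent windows share words, amounts to a sliding window with overlap. A minimal sketch, with window size and overlap chosen purely for illustration (the scoring CNN is out of scope here):

```python
def keyword_samples(words, sample_size, overlap):
    """Split a word list into fixed-size samples where each pair of
    adjacent samples shares `overlap` words."""
    if overlap >= sample_size:
        raise ValueError("overlap must be smaller than sample_size")
    step = sample_size - overlap
    samples = []
    for start in range(0, len(words), step):
        samples.append(words[start:start + sample_size])
        if start + sample_size >= len(words):
            break  # last window reached the end of the document
    return samples
```

Each sample would then be scored for relevancy, and its keywords classified according to that score.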
  • Patent number: 11386712
    Abstract: The present invention discloses a method and system for multimodal-analysis-based emotion recognition. The method comprises segmenting video data of a user into a plurality of video segments. A plurality of visual features, voice features, and text features is extracted from the plurality of video segments. Autocorrelation values among each of the plurality of visual features, the voice features, and the text features are determined. Each of the plurality of visual features, the voice features, and the text features is aligned based on a video segment identifier and the autocorrelation values to obtain a plurality of aligned multimodal features. One of two classes of emotions is determined for each of the plurality of aligned multimodal features. The determined emotion for each of the plurality of aligned multimodal features is compared with historic multimodal features from a database, and the emotion of the user is determined in real time based on the comparison.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: July 12, 2022
    Assignee: Wipro Limited
    Inventors: Rahul Yadav, Gopichand Agnihotram
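The autocorrelation values used above to align feature streams follow the standard lag-k autocorrelation formula. A self-contained sketch (how the patent normalizes or windows the series is not stated in the abstract, so this uses the common variance-normalized form):

```python
def autocorrelation(series, lag):
    """Lag-k autocorrelation of a numeric series, normalized so that
    lag 0 gives 1.0. Assumes the series is not constant (variance > 0)."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag))
    return cov / var
```

Per-modality feature series with strongly correlated lags could then be shifted into alignment before the per-segment emotion classification.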
  • Patent number: 11361491
    Abstract: The present invention relates to a method of generating a facial expression of a user for a virtual environment. The method comprises obtaining a video and an associated speech of the user. Further, at least one of one or more voice features and one or more text features is extracted in real time based on the speech. Furthermore, one or more phonemes in the speech are identified. Thereafter, one or more facial features relating to the speech of the user are determined using a pre-trained second learning model, based on the one or more voice features, the one or more phonemes, the video, and one or more previously generated facial features of the user. Finally, the facial expression of the user corresponding to the speech is generated for an avatar representing the user in the virtual environment.
    Type: Grant
    Filed: September 3, 2020
    Date of Patent: June 14, 2022
    Assignee: Wipro Limited
    Inventors: Vivek Kumar Varma Nadimpalli, Gopichand Agnihotram
  • Patent number: 11315040
    Abstract: The disclosure relates to a system and method for detecting an instance of lie using a Machine Learning (ML) model. In one example, the method may include extracting a set of features from input data received from a plurality of data sources at predefined time intervals and combining the set of features from each of the plurality of data sources to obtain multimodal data. The method may further include processing the multimodal data through an ML model to generate a label for the multimodal data. The label is generated based on a confidence score of the ML model, and is one of a true value that corresponds to an instance of truth or a false value that corresponds to an instance of lie.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: April 26, 2022
    Assignee: Wipro Limited
    Inventors: Vivek Kumar Varma Nadimpalli, Gopichand Agnihotram
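The two mechanical steps here, combining per-source features into one multimodal vector and turning a model confidence score into a truth/lie label, can be sketched as follows. The concatenation order, threshold, and label strings are assumptions for illustration; the ML model itself is out of scope.

```python
def fuse_features(sources):
    """Concatenate per-source feature vectors (keyed by source name)
    into a single multimodal vector, in a stable sorted-key order."""
    fused = []
    for name in sorted(sources):
        fused.extend(sources[name])
    return fused

def label_instance(confidence, threshold=0.5):
    """Map a model confidence score to a truth/lie label."""
    return "truth" if confidence >= threshold else "lie"
```

In practice the fused vector would be the model input at each predefined time interval, and the label would come from the model's own confidence on that input.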
  • Patent number: 11232637
    Abstract: The present invention discloses a method and a system for rendering content in a low light condition for field assistance. The method comprises receiving real-time input data from a user device in a low light condition; identifying at least one object from the real-time input data and a corresponding operational state of the at least one object, based on a correlation of the at least one object and its operational state with pre-stored objects and their corresponding operational states; predicting at least one action to be performed on the identified at least one object; extracting an Augmented Reality (AR) object associated with the identified at least one object on which the selected action is to be performed; and rendering the location, the selected at least one action to be performed, and the AR object on the user device.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: January 25, 2022
    Assignee: Wipro Limited
    Inventors: Vivek Kumar Varma Nadimpalli, Gopichand Agnihotram
  • Patent number: 11182560
    Abstract: A method and system for a language-independent iterative learning mechanism for Natural Language Processing (NLP) tasks is disclosed. The method includes identifying at least one NLP feature associated with a set of words within a sentence for an NLP task. The method includes creating a pattern associated with the sentence for the NLP task, based on the at least one NLP feature associated with the set of words and the linkage relationship between each subset of two adjacent words. The method further includes computing a confidence score corresponding to the pattern, based on a comparison within a trained dataset. The method further includes assigning a pattern category to the pattern, based on the confidence score and a predefined threshold score. The method further includes executing the NLP task based on the assigned pattern category.
    Type: Grant
    Filed: March 30, 2019
    Date of Patent: November 23, 2021
    Assignee: Wipro Limited
    Inventors: Balaji Jagan, Gopichand Agnihotram, Meenakshi Sundaram Murugeshan
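The confidence-scoring and thresholded category assignment described above can be sketched with a simple overlap measure. The abstract does not state how patterns are compared, so Jaccard similarity over pattern tokens, the threshold value, and the category names are all stand-in assumptions.

```python
def pattern_confidence(pattern, trained_patterns):
    """Score a candidate pattern by its best token-overlap (Jaccard
    similarity) against a set of trained patterns."""
    best = 0.0
    a = set(pattern)
    for trained in trained_patterns:
        b = set(trained)
        if a | b:
            best = max(best, len(a & b) / len(a | b))
    return best

def assign_category(pattern, trained_patterns, threshold=0.6):
    """Assign a pattern category by comparing the confidence score
    against a predefined threshold."""
    conf = pattern_confidence(pattern, trained_patterns)
    return "accepted" if conf >= threshold else "deferred"
```

Accepted patterns would drive the NLP task directly, while deferred ones could be fed back into the iterative learning loop.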
  • Patent number: 11110940
    Abstract: A method and driver assistance system for generating touch-based alerts to a driver in a vehicle is disclosed. The driver assistance system receives frames including the driver in the vehicle and detects the position of the driver's face in a plurality of frames. One or more facial attributes of the driver are identified based on the position of the face, and one or more eye attributes of the driver are determined based on the identified one or more facial attributes. Based on the facial attributes and the one or more eye attributes, sensory information is estimated from a plurality of sensory data. Further, information regarding a haptic sensation is computed for the driver based on the estimated sensory information and a position of the driver's hands received from a hand tracking device. Thereafter, an alert is generated using signals relating to the haptic sensation to prompt the driver to take one or more corrective measures.
    Type: Grant
    Filed: February 5, 2020
    Date of Patent: September 7, 2021
    Assignee: Wipro Limited
    Inventors: Vivek Kumar Varma Nadimpalli, Gopichand Agnihotram
  • Publication number: 20210256764
    Abstract: The present invention discloses a method and a system for rendering content in a low light condition for field assistance. The method comprises receiving real-time input data from a user device in a low light condition; identifying at least one object from the real-time input data and a corresponding operational state of the at least one object, based on a correlation of the at least one object and its operational state with pre-stored objects and their corresponding operational states; predicting at least one action to be performed on the identified at least one object; extracting an Augmented Reality (AR) object associated with the identified at least one object on which the selected action is to be performed; and rendering the location, the selected at least one action to be performed, and the AR object on the user device.
    Type: Application
    Filed: March 30, 2020
    Publication date: August 19, 2021
    Inventors: Vivek Kumar Varma NADIMPALLI, Gopichand AGNIHOTRAM
  • Publication number: 20210248511
    Abstract: The disclosure relates to a system and method for detecting an instance of lie using a Machine Learning (ML) model. In one example, the method may include extracting a set of features from input data received from a plurality of data sources at predefined time intervals and combining the set of features from each of the plurality of data sources to obtain multimodal data. The method may further include processing the multimodal data through an ML model to generate a label for the multimodal data. The label is generated based on a confidence score of the ML model, and is one of a true value that corresponds to an instance of truth or a false value that corresponds to an instance of lie.
    Type: Application
    Filed: March 26, 2020
    Publication date: August 12, 2021
    Inventors: Vivek Kumar Varma NADIMPALLI, Gopichand AGNIHOTRAM
  • Patent number: 11087091
    Abstract: Disclosed herein is a method and response generation system for providing contextual responses to user interaction. In an embodiment, input data related to the user interaction, which may be received from a plurality of input channels in real time, may be processed using processing models corresponding to each of the input channels for extracting interaction parameters. Thereafter, the interaction parameters may be combined for computing a contextual variable, which in turn may be analyzed to determine a context of the user interaction. Finally, responses corresponding to the context of the user interaction may be generated and provided to the user for completing the user interaction. In some embodiments, the method of the present disclosure accurately detects the context of the user interaction and provides meaningful contextual responses to the user interaction.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: August 10, 2021
    Assignee: Wipro Limited
    Inventors: Gopichand Agnihotram, Rajesh Kumar, Pandurang Naik
  • Publication number: 20210201004
    Abstract: The present invention discloses a method and system for multimodal-analysis-based emotion recognition. The method comprises segmenting video data of a user into a plurality of video segments. A plurality of visual features, voice features, and text features is extracted from the plurality of video segments. Autocorrelation values among each of the plurality of visual features, the voice features, and the text features are determined. Each of the plurality of visual features, the voice features, and the text features is aligned based on a video segment identifier and the autocorrelation values to obtain a plurality of aligned multimodal features. One of two classes of emotions is determined for each of the plurality of aligned multimodal features. The determined emotion for each of the plurality of aligned multimodal features is compared with historic multimodal features from a database, and the emotion of the user is determined in real time based on the comparison.
    Type: Application
    Filed: February 20, 2020
    Publication date: July 1, 2021
    Inventors: Rahul Yadav, Gopichand Agnihotram
  • Publication number: 20210171056
    Abstract: A method and driver assistance system for generating touch-based alerts to a driver in a vehicle is disclosed. The driver assistance system receives frames including the driver in the vehicle and detects the position of the driver's face in a plurality of frames. One or more facial attributes of the driver are identified based on the position of the face, and one or more eye attributes of the driver are determined based on the identified one or more facial attributes. Based on the facial attributes and the one or more eye attributes, sensory information is estimated from a plurality of sensory data. Further, information regarding a haptic sensation is computed for the driver based on the estimated sensory information and a position of the driver's hands received from a hand tracking device. Thereafter, an alert is generated using signals relating to the haptic sensation to prompt the driver to take one or more corrective measures.
    Type: Application
    Filed: February 5, 2020
    Publication date: June 10, 2021
    Inventors: Vivek Kumar Varma NADIMPALLI, Gopichand AGNIHOTRAM
  • Patent number: 10990579
    Abstract: The present disclosure discloses a method and system for providing a response to a user input. The system receives a user input and processes it by finding equivalents of the user input and dividing each of the user input and the equivalents into one or more frames. One or more keywords are generated for each of the one or more frames. Further, each of the one or more frames is classified into one or more domains present in a knowledge graph. Then, one or more objects are determined in each of the corresponding one or more domains based on the corresponding one or more keywords. Further, a processing means is determined for each of the one or more objects based on the metadata of the corresponding one or more objects. The processing means is executed by the system to provide the response to the user input.
    Type: Grant
    Filed: January 18, 2018
    Date of Patent: April 27, 2021
    Assignee: Wipro Limited
    Inventors: Manjunath Ramachandra Iyer, Suyog Trivedi, Gopichand Agnihotram