Patents by Inventor Parameswaranath Vadackupurath MANI

Parameswaranath Vadackupurath MANI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11914787
    Abstract: Disclosed is a method for a social interaction by a robot device. The method includes receiving an input from a user, determining an emotional state of the user by mapping the received input with a set of emotions and dynamically interacting with the user based on the determined emotional state in response to the input. Dynamically interacting with the user includes generating contextual parameters based on the determined emotional state. The method includes determining an action in response to the received input based on the generated contextual parameters and performing the determined action. The method further includes receiving another input from the user in response to the performed action and dynamically updating the mapping of the received input with the set of emotions based on the other input for interacting with the user.
    Type: Grant
    Filed: December 27, 2021
    Date of Patent: February 27, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kachana Raghunatha Reddy, Vanraj Vala, Barath Raj Kandur Raja, Mohamed Akram Ulla Shariff, Parameswaranath Vadackupurath Mani, Beda Prakash Meher, Mahender Rampelli, Namitha Poojary, Sujay Srinivasa Murthy, Amit Arvind Mankikar, Balabhaskar Veerannagari, Sreevatsa Dwaraka Bhamidipati, Sanjay Ghosh
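    Illustrative sketch: the interaction loop described in the abstract above (receive input, map it to an emotional state, act, observe the follow-up input, update the mapping) can be pictured roughly as below. All names (EmotionMapper, interact, and the callbacks) are hypothetical placeholders, and the reward-style update is only one plausible reading of "dynamically updating the mapping", not the patented implementation.

      # Hypothetical Python sketch of the receive -> map-to-emotion -> act ->
      # observe-reaction -> update-mapping loop; not the patented implementation.
      class EmotionMapper:
          def __init__(self):
              self.weights = {}  # (input feature, emotion) -> association strength

          def infer(self, user_input):
              # Score candidate emotions by the accumulated weights of the input's features.
              scores = {}
              for feature in user_input.split():
                  for (feat, emotion), w in self.weights.items():
                      if feat == feature:
                          scores[emotion] = scores.get(emotion, 0.0) + w
              return max(scores, key=scores.get) if scores else "neutral"

          def update(self, user_input, emotion, reward):
              # Strengthen or weaken the input-to-emotion mapping using the
              # user's follow-up input as feedback.
              for feature in user_input.split():
                  key = (feature, emotion)
                  self.weights[key] = self.weights.get(key, 0.0) + reward

      def interact(mapper, user_input, select_action, perform, read_reaction):
          emotion = mapper.infer(user_input)                   # determine emotional state
          context = {"emotion": emotion, "input": user_input}  # contextual parameters
          perform(select_action(context))                      # determine and perform the action
          reaction = read_reaction()                           # receive the other input
          mapper.update(user_input, emotion, 1.0 if reaction.get("positive") else -1.0)
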
  • Publication number: 20230251950
    Abstract: Embodiments herein disclose methods and systems for identifying behavioural trends across users. The system includes electronic devices. The electronic devices include a behavioural recommendation controller. The behavioural recommendation controller is configured to: detect a first plurality of physical activities performed by a first user in relation with a plurality of contexts; recognize the first plurality of physical activities in relation with the plurality of contexts for the first user; recognize multiple activities performed using smart devices by the first user during each first physical activity in each context; and recognize a second plurality of physical activities performed by multiple concurrent second users during each context to infer current behavior or new behavior of the users.
    Type: Application
    Filed: March 28, 2023
    Publication date: August 10, 2023
    Inventors: Mohamed Akram Ulla SHARIFF, Parameswaranath Vadackupurath Mani, Barath Raj Kandur Raja, Mrinaal Dogra
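    Illustrative sketch: one simple way to picture the grouping of activities by context across users, as in the abstract above. The tuple-based data model and the min_support threshold are assumptions made for the example, not the published claims.

      # Hypothetical sketch: surface activities shared by several users per context.
      from collections import defaultdict

      def behavioural_trends(activity_log, min_support=2):
          """activity_log: iterable of (user_id, context, activity) tuples.
          Returns, per context, the activities performed by at least min_support
          distinct users, i.e. candidate shared behaviours."""
          users = defaultdict(set)
          for user_id, context, activity in activity_log:
              users[(context, activity)].add(user_id)
          trends = defaultdict(list)
          for (context, activity), ids in users.items():
              if len(ids) >= min_support:
                  trends[context].append((activity, len(ids)))
          return {ctx: sorted(acts, key=lambda a: -a[1]) for ctx, acts in trends.items()}

      log = [("u1", "commute", "podcast"), ("u2", "commute", "podcast"),
             ("u1", "workout", "music"), ("u3", "commute", "podcast")]
      print(behavioural_trends(log))  # {'commute': [('podcast', 3)]}
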
  • Publication number: 20230088381
    Abstract: A method and system for learning universal vector representation of concepts in a distributed environment comprising a plurality of edge devices are provided. The method includes obtaining data from one or more sources available at a candidate edge device, determining a plurality of concepts from the obtained data, and training an on-device artificial intelligence (AI) model locally available at the candidate edge device using the plurality of concepts. The method also includes transmitting the trained on-device AI model to a server and receiving a global AI model for deployment from the server. The method further includes deploying the global AI model for universal vector representation of concepts in the candidate edge device.
    Type: Application
    Filed: September 16, 2022
    Publication date: March 23, 2023
    Inventors: Mrinaal DOGRA, Parameswaranath Vadackupurath MANI, Beda Prakash MEHER
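    Illustrative sketch: a simplified federated-averaging view of the train-locally, send-to-server, receive-global-model flow described above. The averaging rule and the dict-of-embeddings model are assumptions made for the example, not the method claimed in the publication.

      # Hypothetical sketch of on-device training plus server-side aggregation.
      import numpy as np

      def train_on_device(local_model, local_concepts, lr=0.01):
          # Nudge each locally observed concept embedding toward the device's own data.
          for concept, target in local_concepts.items():
              current = local_model.setdefault(concept, np.zeros_like(target))
              local_model[concept] = current + lr * (target - current)
          return local_model

      def aggregate_on_server(device_models):
          # Average each concept's embedding across every device that trained it;
          # the result is the global model deployed back to the edge devices.
          global_model = {}
          for concept in {c for m in device_models for c in m}:
              vectors = [m[concept] for m in device_models if concept in m]
              global_model[concept] = np.mean(vectors, axis=0)
          return global_model
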
  • Patent number: 11449199
    Abstract: The present disclosure relates to a method and a layout generation system for generating dynamic User Interface (UI) layout for an electronic device. The method includes identifying one or more operations related to at least one UI element based on a current state of a display screen of the electronic device, calculating a saliency score and an aesthetic score for each of a plurality of grids determined on the display screen, based on the calculated saliency score and the calculated aesthetic score, identifying a plurality of candidate regions, identifying an optimal region from the plurality of candidate regions based on a user interaction score and generating the dynamic UI layout by performing the one or more operations related to the at least one UI element in the optimal region.
    Type: Grant
    Filed: September 21, 2020
    Date of Patent: September 20, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Parameswaranath Vadackupurath Mani, Mahender Rampelli, Mohamed Akram Ulla Shariff, Shankar Natarajan, Sreevatsa Dwaraka Bhamidipati, Sujay Srinivasa Murthy
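    Illustrative sketch: the scoring pipeline in the abstract above (per-grid saliency and aesthetic scores, candidate regions, then an optimal region chosen by user interaction score) could be wired together as below. The way the scores are combined is an assumption made for the example, not the formula used in the patent.

      # Hypothetical sketch: pick a screen region in which to place a UI element.
      def candidate_regions(grids, saliency, aesthetic, threshold=0.5):
          # A grid qualifies when it is unlikely to cover salient content
          # (low saliency) and is visually balanced (high aesthetic score).
          return [g for g in grids if (1.0 - saliency[g]) * aesthetic[g] >= threshold]

      def optimal_region(candidates, interaction):
          # Among the candidates, choose the one users interact with most.
          return max(candidates, key=lambda g: interaction[g])

      grids = ["top-left", "top-right", "bottom-left", "bottom-right"]
      saliency = {"top-left": 0.9, "top-right": 0.2, "bottom-left": 0.3, "bottom-right": 0.1}
      aesthetic = {"top-left": 0.8, "top-right": 0.7, "bottom-left": 0.6, "bottom-right": 0.9}
      interaction = {"top-left": 0.4, "top-right": 0.5, "bottom-left": 0.2, "bottom-right": 0.7}
      print(optimal_region(candidate_regions(grids, saliency, aesthetic), interaction))  # bottom-right
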
  • Publication number: 20220147153
    Abstract: Disclosed is a method for a social interaction by a robot device. The method includes receiving an input from a user, determining an emotional state of the user by mapping the received input with a set of emotions and dynamically interacting with the user based on the determined emotional state in response to the input. Dynamically interacting with the user includes generating contextual parameters based on the determined emotional state. The method includes determining an action in response to the received input based on the generated contextual parameters and performing the determined action. The method further includes receiving another input from the user in response to the performed action and dynamically updating the mapping of the received input with the set of emotions based on the other input for interacting with the user.
    Type: Application
    Filed: December 27, 2021
    Publication date: May 12, 2022
    Inventors: Kachana Raghunatha REDDY, Vanraj VALA, Barath Raj KANDUR RAJA, Mohamed Akram Ulla SHARIFF, Parameswaranath Vadackupurath MANI, Beda Prakash MEHER, Mahender RAMPELLI, Namitha POOJARY, Sujay Srinivasa MURTHY, Amit Arvind MANKIKAR, Balabhaskar VEERANNAGARI, Sreevatsa Dwaraka BHAMIDIPATI, Sanjay GHOSH
  • Patent number: 11209907
    Abstract: Disclosed is a method for a social interaction by a robot device. The method includes receiving an input from a user, determining an emotional state of the user by mapping the received input with a set of emotions and dynamically interacting with the user based on the determined emotional state in response to the input. Dynamically interacting with the user includes generating contextual parameters based on the determined emotional state. The method includes determining an action in response to the received input based on the generated contextual parameters and performing the determined action. The method further includes receiving another input from the user in response to the performed action and dynamically updating the mapping of the received input with the set of emotions based on the other input for interacting with the user.
    Type: Grant
    Filed: September 18, 2018
    Date of Patent: December 28, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kachana Raghunatha Reddy, Vanraj Vala, Barath Raj Kandur Raja, Mohamed Akram Ulla Shariff, Parameswaranath Vadackupurath Mani, Beda Prakash Meher, Mahender Rampelli, Namitha Poojary, Sujay Srinivasa Murthy, Amit Arvind Mankikar, Balabhaskar Veerannagari, Sreevatsa Dwaraka Bhamidipati, Sanjay Ghosh
  • Publication number: 20210089175
    Abstract: The present disclosure relates to a method and a layout generation system for generating dynamic User Interface (UI) layout for an electronic device. The method includes identifying one or more operations related to at least one UI element based on a current state of a display screen of the electronic device, calculating a saliency score and an aesthetic score for each of a plurality of grids determined on the display screen, based on the calculated saliency score and the calculated aesthetic score, identifying a plurality of candidate regions, identifying an optimal region from the plurality of candidate regions based on a user interaction score and generating the dynamic UI layout by performing the one or more operations related to the at least one UI element in the optimal region.
    Type: Application
    Filed: September 21, 2020
    Publication date: March 25, 2021
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Parameswaranath Vadackupurath MANI, Mahender RAMPELLI, Mohamed Akram Ulla SHARIFF, Shankar NATARAJAN, Sreevatsa Dwaraka BHAMIDIPATI, Sujay Srinivasa MURTHY
  • Publication number: 20200005784
    Abstract: A method of outputting a response to a user input in an electronic device is provided. The method includes receiving a user input from a user and, in response to receiving the user input, generating a first response comprising first content based on the user input, obtaining contextual information of the user, generating a second response comprising second content based on the contextual information, the second content being different from the first content, generating a combined response based on the first response and the second response, and outputting the combined response.
    Type: Application
    Filed: June 17, 2019
    Publication date: January 2, 2020
    Inventors: Parameswaranath VADACKUPURATH MANI, Sreevatsa Dwaraka BHAMIDIPATI, Vanraj VALA, Mohamed Akram Ulla SHARIFF, Kachana Raghunatha REDDY, NAMITHA, Mahender RAMPELLI, Beda Prakash MEHER, Sujay Srinivasa MURTHY, Shubham VATSAL
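    Illustrative sketch: the two-part response flow in the abstract above (a first response generated from the input, a second from the user's contextual information, then a combined output) might look roughly like this. The generator callbacks and the simple concatenation step are hypothetical placeholders, not the claimed implementation.

      # Hypothetical sketch of generating and merging the two responses.
      def respond(user_input, context, generate_direct, generate_contextual):
          first = generate_direct(user_input)      # first content: answers the input
          second = generate_contextual(context)    # second content: context-aware addition
          if not second or second == first:
              return first
          return f"{first} {second}"               # combined response that is output

      reply = respond(
          "What's the weather today?",
          {"calendar": "outdoor meeting at 4 pm"},
          generate_direct=lambda q: "It will be sunny with a high of 31 degrees.",
          generate_contextual=lambda c: f"Since you have an {c['calendar']}, carrying water is a good idea.",
      )
      print(reply)
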
  • Publication number: 20190094980
    Abstract: Disclosed is a method for a social interaction by a robot device. The method includes receiving an input from a user, determining an emotional state of the user by mapping the received input with a set of emotions and dynamically interacting with the user based on the determined emotional state in response to the input. Dynamically interacting with the user includes generating contextual parameters based on the determined emotional state. The method includes determining an action in response to the received input based on the generated contextual parameters and performing the determined action. The method further includes receiving another input from the user in response to the performed action and dynamically updating the mapping of the received input with the set of emotions based on the other input for interacting with the user.
    Type: Application
    Filed: September 18, 2018
    Publication date: March 28, 2019
    Inventors: Kachana Raghunatha REDDY, Vanraj VALA, Barath Raj KANDUR RAJA, Mohamed Akram Ulla SHARIFF, Parameswaranath Vadackupurath MANI, Beda Prakash MEHER, Mahender RAMPELLI, Namitha POOJARY, Sujay Srinivasa MURTHY, Amit Arvind MANKIKAR, Balabhaskar VEERANNAGARI, Sreevatsa Dwaraka BHAMIDIPATI, Sanjay GHOSH