Patents by Inventor Prashant Iyengar

Prashant Iyengar has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12007777
    Abstract: A system for behaviour mapping and categorization of objects and users in a 3D environment for creating and learning a user behaviour map is provided. The system includes a robot 102, a network 104 and a central AI system 106. The robot 102 is embedded with an array of acoustic sensors 108 and visual sensors 110 for behaviour mapping and categorization of the objects and users in the 3D environment, and generates an auditory behaviour map and a visual behaviour map based on sensory inputs from the acoustic sensors 108 and visual sensors 110. The robot 102 transmits the acoustic source sensory input and the visual source sensory input to the central AI system 106 over the network 104 for generating a global behaviour map. The central AI system 106 tunes the global behaviour map to a specific user by tuning the detection and classification model to data obtained from a specific 3D environment that corresponds to the specific user.
    Type: Grant
    Filed: February 21, 2021
    Date of Patent: June 11, 2024
    Assignee: RN CHIDAKASHI TECHNOLOGIES PVT. LTD.
    Inventors: Sneh Vaswani, Prashant Iyengar, Chintan Raikar, Ravi Vaidyanathan
  • Publication number: 20240185323
    Abstract: A retail assistance system (102) and method for assisting customers while shopping in a retail store. The retail assistance system (102) is configured to detect one or more customers entering the retail store using an input unit, determine a personality profile of the one or more customers by analyzing a facial expression and one or more personal attributes of the one or more customers, determine one or more personalized recommendations for the one or more customers by analyzing the personality profile, past purchase history, and visit history of the one or more customers in the retail store using a machine learning model, and enable at least one customer to choose the one or more personalized recommendations.
    Type: Application
    Filed: March 23, 2022
    Publication date: June 6, 2024
    Inventors: Prashant Iyengar, Hardik Godara
  • Publication number: 20240181629
    Abstract: The embodiments herein relate to an artificially intelligent perceptive entertainment companion system (100), including a memory and a processor, that provides companionship to a user (102) during at least one of an entertainment event or an entertainment activity. The processor is configured to capture a performance performed by the user (102) using a camera (108), determine one or more reactions of the user (102) in the captured performance using a reaction analyser, transmit the captured performance and the determined one or more reactions to a server (106) using a wireless transmitter, receive and process a perception of the user (102) from the server (106) to determine one or more activities, and initiate the one or more activities for the user (102) using one or more robot actuators based on the perception of the user (102), providing companionship to the user (102) even when human companions are not available.
    Type: Application
    Filed: March 24, 2022
    Publication date: June 6, 2024
    Inventors: Prashant Iyengar, Hardik Godara
  • Publication number: 20240177524
    Abstract: A system and method for automatically classifying an activity of a user 102 during a proposal by an agent 104, based on micro-expressions and emotions of the user, and providing a succeeding response to the agent 104 such that the proposal becomes successful, using an artificial intelligence model, is provided. The system includes a facial micro-expression unit 106, an expression analyser 110, and the artificial intelligence model 112. The facial micro-expression unit 106 captures an interactive sequence of audio-visual information. The expression analyser 110 processes the interactive sequence of audio-visual information using the artificial intelligence model to determine an emotion and intensity of emotion of the user. The expression analyser 110 creates a record of a set of questions and responses, and provides the succeeding response to the agent based on the created record using a wearable device 114.
    Type: Application
    Filed: March 23, 2022
    Publication date: May 30, 2024
    Inventors: Prashant Iyengar, Hardik Godara
  • Publication number: 20240169984
    Abstract: An embodiment herein provides a system and a method for generating a closed domain conversation with a user (102) in real-time based on user's interest. The system includes a robot (104) including a memory (110) that includes one or more instructions and a processor (112) that executes the one or more instructions. The processor (112) is configured to initiate a conversation with the user (102) which is a machine-initiated conversation, determine a flow of the conversation with the user (102) by analyzing replies of the user (102), generate the closed domain conversation by providing one or more contents related to at least one topic or category which is personalized based on a combination of one or more properties of the user's interest, and enable the user (102) to interact with the robot (104) using the closed domain conversation in real-time based on the user's interest.
    Type: Application
    Filed: March 23, 2022
    Publication date: May 23, 2024
    Inventors: Prashant Iyengar, Hardik Godara
  • Publication number: 20240165809
    Abstract: A system for automatic self-evaluation and testing of one or more sensors and one or more peripherals in a robot 100. The AI system controls an end-to-end factory environment without human intervention and includes one or more smart rooms to test the one or more sensors and one or more peripherals in the robot. Any peripheral that is damaged in the robot 100 is removed, a new peripheral is installed, and the new peripheral is tested by the AI system. The one or more smart rooms evaluate the one or more peripherals individually to identify faults in the individual peripherals.
    Type: Application
    Filed: March 25, 2022
    Publication date: May 23, 2024
    Inventors: Prashant Iyengar, Hardik Godara
  • Publication number: 20240169985
    Abstract: There is provided a method that includes (i) detecting, using sensor units (111) of a robot (108), at least one of environmental parameters, an environment event, a period of the environment event, an environment alert, a time of an event related to the user (102), personal events related to the user (102), the user's relatives, friends or acquaintances, an outdoor environment location, apparel of the user (102), audio events or visual events, or news, based on the surroundings and proclivities of the user (102); and (ii) conversing or interacting with the user (102) based on a conversation topic related to the detected environmental parameters, environment event, period of the environment event, environment alert, time of the event related to the user (102), personal events related to the user (102), the user's relatives, friends or acquaintances, outdoor environment location, apparel of the user (102), audio events or visual events, or news.
    Type: Application
    Filed: March 24, 2022
    Publication date: May 23, 2024
    Inventors: Prashant Iyengar, Hardik Godara
  • Publication number: 20240144684
    Abstract: The embodiments herein relate to an artificially intelligent sports companion device 100 for accompanying a user 102 during events and activities. The artificially intelligent sports companion device 100 includes an event capture module 104, a processor 106, a knowledge database 108, and an output module 110. The processor 106 is configured to capture at least one of the activity or the event along with the user 102 using the event capture module 104, acquire semantic information from the user 102 and form opinions for conversation, and store audio/video feed of the activity or the event and the opinions in the knowledge database 108 to understand user preferences. The processor 106 of the artificially intelligent sports companion device 100 is configured to interact with the user 102, informed by the user preferences, through the output module 110 with audio/video or one or more expressions.
    Type: Application
    Filed: March 19, 2022
    Publication date: May 2, 2024
    Inventors: Prashant Iyengar, Hardik Godara
  • Publication number: 20240046064
    Abstract: A multi-echelon self-learning system and method for automatically generating a response to an input query from data storage systems based on ranking of keywords using a machine learning (ML) model is provided. The method includes obtaining the input query from an input robot through an input peripheral associated with a user. The method includes validating the input query against child safety constraints to determine whether the input query is child safe. The method includes processing the child-safe input query to determine top-ranked keywords. The method includes determining the appropriate keyword from the top-ranked keywords based on relevance to the intention of the input query. The method includes enabling interactive conversation between the user and the input robot by determining the response to the input query based on the appropriate keyword from the data storage systems and transmitting the response to an output peripheral of the input robot.
    Type: Application
    Filed: March 18, 2022
    Publication date: February 8, 2024
    Inventors: Prashant Iyengar, Hardik Godara
  • Patent number: 11837227
    Abstract: A system for user-initiated generic conversation with an artificially intelligent machine is provided. The system includes a conversational server (CS) that executes a conversational architecture across multiple devices, a communication network and a remote device. The conversational architecture includes one or more conversational nodes connected by edges, which encapsulate flow and logic and transport data between the one or more conversational nodes. The conversational server (CS) receives input, at an input node, from a user through an input modality and performs computation logic that generates output data to pass to an output node.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: December 5, 2023
    Assignee: RN CHIDAKASHI TECHNOLOGIES PVT LTD
    Inventors: Sneh Vaswani, Prashant Iyengar, Chintan Raikar
  • Publication number: 20230147768
    Abstract: A system for behaviour mapping and categorization of objects and users in a 3D environment for creating and learning a user behaviour map is provided. The system includes a robot 102, a network 104 and a central AI system 106. The robot 102 is embedded with an array of acoustic sensors 108 and visual sensors 110 for behaviour mapping and categorization of the objects and users in the 3D environment, and generates an auditory behaviour map and a visual behaviour map based on sensory inputs from the acoustic sensors 108 and visual sensors 110. The robot 102 transmits the acoustic source sensory input and the visual source sensory input to the central AI system 106 over the network 104 for generating a global behaviour map. The central AI system 106 tunes the global behaviour map to a specific user by tuning the detection and classification model to data obtained from a specific 3D environment that corresponds to the specific user.
    Type: Application
    Filed: February 21, 2021
    Publication date: May 11, 2023
    Inventors: Sneh Vaswani, Prashant Iyengar, Chintan Raikar, Ravi Vaidyanathan
  • Publication number: 20230121824
    Abstract: A system for user-initiated generic conversation with an artificially intelligent machine is provided. The system includes a conversational server (CS) that executes a conversational architecture across multiple devices, a communication network and a remote device. The conversational architecture includes one or more conversational nodes connected by edges, which encapsulate flow and logic and transport data between the one or more conversational nodes. The conversational server (CS) receives input, at an input node, from a user through an input modality and performs computation logic that generates output data to pass to an output node.
    Type: Application
    Filed: January 22, 2021
    Publication date: April 20, 2023
    Inventors: Sneh Vaswani, Prashant Iyengar, Chintan Raikar
  • Patent number: 11074491
    Abstract: A robotic companion device (10) configured for capturing and analysing affective information and semantic information and eliciting responses accordingly is disclosed herein. It comprises a processor (20) for managing emotional processing and responses, configured for capturing and analysing semantic and affective information from sensory devices and communicating with users as well as the external world using a multitude of actuators and communication devices; a facial arrangement (11) configured for capturing visual information and displaying emotions; a locomotor arrangement (13) enabling movement of the robotic companion device; and a microphone/speaker arrangement (15) configured for receiving auditory signals and emitting vocal responses. The facial arrangement (11), the locomotor arrangement (13) and the microphone/speaker arrangement (15) are all in communication with the processor (20).
    Type: Grant
    Filed: May 30, 2017
    Date of Patent: July 27, 2021
    Inventors: Prashant Iyengar, Sneh Rajkumar Vaswani, Chintan Raikar
  • Publication number: 20190283257
    Abstract: A robotic companion device (10) configured for capturing and analysing affective information and semantic information and eliciting responses accordingly is disclosed herein. It comprises a processor (20) for managing emotional processing and responses, configured for capturing and analysing semantic and affective information from sensory devices and communicating with users as well as the external world using a multitude of actuators and communication devices; a facial arrangement (11) configured for capturing visual information and displaying emotions; a locomotor arrangement (13) enabling movement of the robotic companion device; and a microphone/speaker arrangement (15) configured for receiving auditory signals and emitting vocal responses. The facial arrangement (11), the locomotor arrangement (13) and the microphone/speaker arrangement (15) are all in communication with the processor (20).
    Type: Application
    Filed: May 30, 2017
    Publication date: September 19, 2019
    Inventors: Prashant Iyengar, Sneh Rajkumar Vaswani, Chintan Raikar
  • Patent number: D930726
    Type: Grant
    Filed: February 12, 2020
    Date of Patent: September 14, 2021
    Assignee: RN Chidakashi Technologies Private Limited
    Inventors: Sneh Vaswani, Prashant Iyengar, Chintan Raikar
  • Patent number: D1016115
    Type: Grant
    Filed: January 12, 2022
    Date of Patent: February 27, 2024
    Assignee: RN CHIDAKASHI TECHNOLOGIES PRIVATE LIMITED
    Inventors: Sneh Vaswani, Prashant Iyengar, Chintan Raikar