Patents by Inventor Soumya BATRA

Soumya BATRA is named as an inventor on the following patent filings. This listing includes patent applications that are still pending as well as patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240038220
    Abstract: A computer-implemented technique is described herein for expediting a user's interaction with a digital assistant. In one implementation, the technique involves receiving a system prompt generated by a digital assistant in response to an input command provided by a user via an input device. The technique then generates a predicted response based on linguistic content of the system prompt, together with contextual features pertaining to a circumstance in which the system prompt was issued. The predicted response corresponds to a prediction of how the user will respond to the system prompt. The technique then selects one or more dialogue actions from a plurality of dialogue actions, based on a confidence value associated with the predicted response. The technique expedites the user's interaction with the digital assistant by reducing the number of system prompts that the user is asked to respond to.
    Type: Application
    Filed: October 9, 2023
    Publication date: February 1, 2024
    Inventors: Vipul AGARWAL, Rahul Kumar JHA, Soumya BATRA, Karthik TANGIRALA, Mohammad MAKARECHIAN, Imed ZITOUNI
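The abstract above describes a confidence-gated flow: a model predicts how the user will answer a system prompt, and the assistant only asks the prompt when the prediction is uncertain. A minimal sketch of that selection logic follows; every name, feature, and threshold here is hypothetical and not taken from the patent itself.

```python
# Hypothetical sketch: score a likely user response to a system prompt from
# its text plus contextual features, then gate the dialogue action on the
# prediction's confidence value.
from dataclasses import dataclass


@dataclass
class Prediction:
    response: str
    confidence: float


def predict_response(prompt: str, context: dict) -> Prediction:
    # Stand-in for a trained predictor over the prompt's linguistic content
    # and contextual features (dialogue history, device, time of day, ...).
    if context.get("last_choice") == "work" and "which calendar" in prompt.lower():
        return Prediction(response="work", confidence=0.92)
    return Prediction(response="", confidence=0.0)


def select_dialogue_action(prompt: str, context: dict,
                           threshold: float = 0.85) -> tuple[str, str]:
    """Return (action, payload): auto-fill the predicted answer when the
    confidence clears the threshold, otherwise fall back to asking the user."""
    pred = predict_response(prompt, context)
    if pred.confidence >= threshold:
        return ("auto_respond", pred.response)  # system prompt suppressed
    return ("ask_user", prompt)                 # user is asked as usual
```

In this sketch a high-confidence prediction suppresses the prompt entirely, which is how the described technique reduces the number of prompts the user must answer; a real system would likely also log auto-filled answers for correction.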
  • Patent number: 11823661
    Abstract: A computer-implemented technique is described herein for expediting a user's interaction with a digital assistant. In one implementation, the technique involves receiving a system prompt generated by a digital assistant in response to an input command provided by a user via an input device. The technique then generates a predicted response based on linguistic content of the system prompt, together with contextual features pertaining to a circumstance in which the system prompt was issued. The predicted response corresponds to a prediction of how the user will respond to the system prompt. The technique then selects one or more dialogue actions from a plurality of dialogue actions, based on a confidence value associated with the predicted response. The technique expedites the user's interaction with the digital assistant by reducing the number of system prompts that the user is asked to respond to.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: November 21, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vipul Agarwal, Rahul Kumar Jha, Soumya Batra, Karthik Tangirala, Mohammad Makarechian, Imed Zitouni
  • Patent number: 11748071
    Abstract: Developer and runtime environments supporting multi-modal input for computing systems are disclosed. The developer environment includes a gesture library of human body gestures (e.g., hand gestures) that a previously-trained, system-level gesture recognition machine is configured to recognize. The developer environment further includes a user interface for linking a gesture of the gesture library with a semantic descriptor that is assigned to a function of the application program. The application program is executable to implement the function responsive to receiving an indication of the gesture recognized by the gesture recognition machine within image data captured by a camera. The semantic descriptor may be additionally linked to a different input modality than the gesture, such as a natural language input.
    Type: Grant
    Filed: December 14, 2022
    Date of Patent: September 5, 2023
    Inventors: Soumya Batra, Hany Mohamed SalahEldeen Mohamed Khalil, Imed Zitouni
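The abstract above describes linking a gesture from a gesture library to a semantic descriptor assigned to an application function, with other modalities (such as a natural-language phrase) linkable to the same descriptor. A minimal sketch of that indirection is below; all identifiers are illustrative assumptions, not API names from the patent.

```python
# Hypothetical sketch: a gesture and a spoken phrase both resolve to one
# semantic descriptor, and the descriptor resolves to the application function.
from typing import Callable

functions: dict[str, Callable[[], str]] = {}        # descriptor -> function
gesture_to_descriptor: dict[str, str] = {}          # gesture name -> descriptor
phrase_to_descriptor: dict[str, str] = {}           # NL phrase -> descriptor


def register(descriptor: str, func: Callable[[], str],
             gesture: str, phrase: str) -> None:
    """Link one semantic descriptor to a function and to two input modalities."""
    functions[descriptor] = func
    gesture_to_descriptor[gesture] = descriptor
    phrase_to_descriptor[phrase.lower()] = descriptor


def on_gesture(gesture: str) -> str:
    # A system-level recognizer would emit this gesture name from camera data.
    return functions[gesture_to_descriptor[gesture]]()


def on_utterance(phrase: str) -> str:
    # A natural-language input reaches the same function via the descriptor.
    return functions[phrase_to_descriptor[phrase.lower()]]()


register("volume.up", lambda: "volume raised",
         gesture="thumb_up", phrase="turn it up")
```

Routing both modalities through the descriptor means the application registers its function once and gains gesture and language triggers together, which is the developer-environment convenience the abstract emphasizes.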
  • Publication number: 20230110655
    Abstract: Developer and runtime environments supporting multi-modal input for computing systems are disclosed. The developer environment includes a gesture library of human body gestures (e.g., hand gestures) that a previously-trained, system-level gesture recognition machine is configured to recognize. The developer environment further includes a user interface for linking a gesture of the gesture library with a semantic descriptor that is assigned to a function of the application program. The application program is executable to implement the function responsive to receiving an indication of the gesture recognized by the gesture recognition machine within image data captured by a camera. The semantic descriptor may be additionally linked to a different input modality than the gesture, such as a natural language input.
    Type: Application
    Filed: December 14, 2022
    Publication date: April 13, 2023
    Inventors: Hany Mohamed SalahEldeen Mohamed KHALIL, Imed ZITOUNI, Soumya BATRA
  • Patent number: 11537365
    Abstract: Developer and runtime environments supporting multi-modal input for computing systems are disclosed. The developer environment includes a gesture library of human body gestures (e.g., hand gestures) that a previously-trained, system-level gesture recognition machine is configured to recognize. The developer environment further includes a user interface for linking a gesture of the gesture library with a semantic descriptor that is assigned to a function of the application program. The application program is executable to implement the function responsive to receiving an indication of the gesture recognized by the gesture recognition machine within image data captured by a camera. The semantic descriptor may be additionally linked to a different input modality than the gesture, such as a natural language input.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: December 27, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Soumya Batra, Hany Mohamed SalahEldeen Mohamed Khalil, Imed Zitouni
  • Publication number: 20210082403
    Abstract: A computer-implemented technique is described herein for expediting a user's interaction with a digital assistant. In one implementation, the technique involves receiving a system prompt generated by a digital assistant in response to an input command provided by a user via an input device. The technique then generates a predicted response based on linguistic content of the system prompt, together with contextual features pertaining to a circumstance in which the system prompt was issued. The predicted response corresponds to a prediction of how the user will respond to the system prompt. The technique then selects one or more dialogue actions from a plurality of dialogue actions, based on a confidence value associated with the predicted response. The technique expedites the user's interaction with the digital assistant by reducing the number of system prompts that the user is asked to respond to.
    Type: Application
    Filed: November 24, 2020
    Publication date: March 18, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Vipul AGARWAL, Rahul Kumar JHA, Soumya BATRA, Karthik TANGIRALA, Mohammad MAKARECHIAN, Imed ZITOUNI
  • Patent number: 10878805
    Abstract: A computer-implemented technique is described herein for expediting a user's interaction with a digital assistant. In one implementation, the technique involves receiving a system prompt generated by a digital assistant in response to an input command provided by a user via an input device. The technique then generates a predicted response based on linguistic content of the system prompt, together with contextual features pertaining to a circumstance in which the system prompt was issued. The predicted response corresponds to a prediction of how the user will respond to the system prompt. The technique then selects one or more dialogue actions from a plurality of dialogue actions, based on a confidence value associated with the predicted response. The technique expedites the user's interaction with the digital assistant by reducing the number of system prompts that the user is asked to respond to.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: December 29, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Vipul Agarwal, Rahul Kumar Jha, Soumya Batra, Karthik Tangirala, Mohammad Makarechian, Imed Zitouni
  • Publication number: 20200310765
    Abstract: Developer and runtime environments supporting multi-modal input for computing systems are disclosed. The developer environment includes a gesture library of human body gestures (e.g., hand gestures) that a previously-trained, system-level gesture recognition machine is configured to recognize. The developer environment further includes a user interface for linking a gesture of the gesture library with a semantic descriptor that is assigned to a function of the application program. The application program is executable to implement the function responsive to receiving an indication of the gesture recognized by the gesture recognition machine within image data captured by a camera. The semantic descriptor may be additionally linked to a different input modality than the gesture, such as a natural language input.
    Type: Application
    Filed: June 16, 2020
    Publication date: October 1, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Soumya BATRA, Hany Mohamed SalahEldeen Mohamed KHALIL, Imed ZITOUNI
  • Patent number: 10713019
    Abstract: Developer and runtime environments supporting multi-modal input for computing systems are disclosed. The developer environment includes a gesture library of human body gestures (e.g., hand gestures) that a previously-trained, system-level gesture recognition machine is configured to recognize. The developer environment further includes a user interface for linking a gesture of the gesture library with a semantic descriptor that is assigned to a function of the application program. The application program is executable to implement the function responsive to receiving an indication of the gesture recognized by the gesture recognition machine within image data captured by a camera. The semantic descriptor may be additionally linked to a different input modality than the gesture, such as a natural language input.
    Type: Grant
    Filed: April 26, 2018
    Date of Patent: July 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Soumya Batra, Hany Mohamed SalahEldeen Mohamed Khalil, Imed Zitouni
  • Publication number: 20200184956
    Abstract: A computer-implemented technique is described herein for expediting a user's interaction with a digital assistant. In one implementation, the technique involves receiving a system prompt generated by a digital assistant in response to an input command provided by a user via an input device. The technique then generates a predicted response based on linguistic content of the system prompt, together with contextual features pertaining to a circumstance in which the system prompt was issued. The predicted response corresponds to a prediction of how the user will respond to the system prompt. The technique then selects one or more dialogue actions from a plurality of dialogue actions, based on a confidence value associated with the predicted response. The technique expedites the user's interaction with the digital assistant by reducing the number of system prompts that the user is asked to respond to.
    Type: Application
    Filed: December 6, 2018
    Publication date: June 11, 2020
    Inventors: Vipul AGARWAL, Rahul Kumar JHA, Soumya BATRA, Karthik TANGIRALA, Mohammad MAKARECHIAN, Imed ZITOUNI
  • Publication number: 20190332361
    Abstract: Developer and runtime environments supporting multi-modal input for computing systems are disclosed. The developer environment includes a gesture library of human body gestures (e.g., hand gestures) that a previously-trained, system-level gesture recognition machine is configured to recognize. The developer environment further includes a user interface for linking a gesture of the gesture library with a semantic descriptor that is assigned to a function of the application program. The application program is executable to implement the function responsive to receiving an indication of the gesture recognized by the gesture recognition machine within image data captured by a camera. The semantic descriptor may be additionally linked to a different input modality than the gesture, such as a natural language input.
    Type: Application
    Filed: April 26, 2018
    Publication date: October 31, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Soumya BATRA, Hany Mohamed SalahEldeen Mohamed KHALIL, Imed ZITOUNI