Patents by Inventor Pranav K. Mistry

Pranav K. Mistry has filed patent applications for the following inventions. This listing includes applications that are still pending as well as patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11989810
    Abstract: A digital human interactive platform can determine a contextual response to a user input and generate a digital human. The digital human can convey the contextual response to the user in real time. The digital human can be configured to convey the contextual response with a predetermined behavior corresponding to the contextual response.
    Type: Grant
    Filed: January 7, 2022
    Date of Patent: May 21, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Abhijit Z. Bendale, Pranav K. Mistry, Bola Yoo, Kijeong Kwon, Simon Gibbs, Anil Unnikrishnan, Link Huang
  • Patent number: 11893669
    Abstract: A digital human development platform can enable a user to generate a digital human. The digital human development platform can receive user input specifying a dialogue for the digital human and one or more behaviors for the digital human, the one or more specified behaviors corresponding with one or more portions of the dialogue on a common timeline. Scene data can be generated with the digital human development platform by merging the one or more behaviors with one or more portions of the dialogue based on times of the one or more behaviors and the one or more portions of the dialogue on the common timeline.
    Type: Grant
    Filed: January 7, 2022
    Date of Patent: February 6, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Abhijit Z. Bendale, Pranav K. Mistry, Bola Yoo, Kijeong Kwon, Simon Gibbs, Anil Unnikrishnan, Link Huang
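The timeline merge this abstract describes can be sketched as follows. The data shapes (`Segment` dialogue windows and `Behavior` cues with times on a shared timeline) are illustrative assumptions, not the patented implementation:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float      # seconds on the shared timeline
    end: float
    text: str         # dialogue portion spoken in this window

@dataclass
class Behavior:
    time: float       # when the behavior cue fires on the timeline
    name: str         # e.g. "smile", "nod"

def merge_scene(dialogue: list, behaviors: list) -> list:
    """Attach each behavior to the dialogue segment whose window contains it."""
    scene = [{"start": s.start, "end": s.end, "text": s.text, "behaviors": []}
             for s in dialogue]
    for b in sorted(behaviors, key=lambda b: b.time):
        for entry in scene:
            if entry["start"] <= b.time < entry["end"]:
                entry["behaviors"].append(b.name)
                break
    return scene
```

For example, `merge_scene([Segment(0, 2, "Hello!"), Segment(2, 5, "How can I help?")], [Behavior(0.5, "smile"), Behavior(3.0, "head_tilt")])` attaches "smile" to the first segment and "head_tilt" to the second.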
  • Patent number: 11544886
    Abstract: In one embodiment, a method includes, by one or more computing systems: receiving one or more non-video inputs, where the one or more non-video inputs include at least one of a text input, an audio input, or an expression input; accessing a K-NN graph including several sets of nodes, where each set of nodes corresponds to a particular semantic context out of several semantic contexts; identifying one or more of the semantic contexts based on the one or more non-video inputs and the K-NN graph; determining one or more actions to be performed by a digital avatar based on the one or more identified semantic contexts; generating, in real time in response to receiving the one or more non-video inputs and based on the determined one or more actions, a video output of the digital avatar including one or more human characteristics corresponding to the one or more identified semantic contexts; and sending, to a client device, instructions to present the video output of the digital avatar.
    Type: Grant
    Filed: December 16, 2020
    Date of Patent: January 3, 2023
    Inventors: Abhijit Z. Bendale, Pranav K. Mistry
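The context-identification step in this abstract can be sketched as a nearest-neighbor majority vote. The node embeddings, context labels, and action mapping below are toy assumptions for illustration, not the patented system:

```python
import math

# Toy graph: each node is (embedding, semantic_context). In the abstract, sets
# of nodes correspond to semantic contexts; here each node carries its label.
NODES = [
    ((0.9, 0.1), "greeting"),
    ((0.8, 0.2), "greeting"),
    ((0.1, 0.9), "farewell"),
    ((0.2, 0.8), "farewell"),
]

# Hypothetical mapping from an identified context to avatar actions.
ACTIONS = {"greeting": ["wave", "smile"], "farewell": ["wave", "bow"]}

def identify_context(embedding, k=3):
    """Majority vote over the k nearest nodes by Euclidean distance."""
    nearest = sorted(NODES, key=lambda n: math.dist(n[0], embedding))[:k]
    labels = [ctx for _, ctx in nearest]
    return max(set(labels), key=labels.count)

def avatar_actions(embedding):
    """Actions the digital avatar performs for the identified context."""
    return ACTIONS[identify_context(embedding)]
```

An input embedded near the "greeting" nodes, such as `(0.85, 0.15)`, resolves to the greeting actions; one near the "farewell" nodes resolves to the farewell actions.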
  • Publication number: 20220244826
    Abstract: In one embodiment, a wearable device includes a device body including a touch-sensitive display and a circular mechanical ring rotatable around the touch-sensitive display, a band coupled to the device body, processors, and a memory coupled to the processors including instructions executable by the processors. When executing the instructions, the processors are operable to present on the touch-sensitive display a first screen corresponding to a device currently paired with the wearable device, the first screen including an indication of an operational status of the currently paired device. Based on determining a type of the operational status and in response to a first transition event at the first screen, the processors present a second screen with its content determined by the type of the operational status and by whether an input associated with the first transition event is on the circular mechanical ring or at the touch-sensitive display.
    Type: Application
    Filed: April 14, 2022
    Publication date: August 4, 2022
    Inventors: Pranav K. Mistry, Lining Yao, John Snavely, Eva-Maria Offenberg, Link Chun Huang, Cathy Kim
  • Publication number: 20220222883
    Abstract: A digital human development platform can enable a user to generate a digital human. The digital human development platform can receive user input specifying a dialogue for the digital human and one or more behaviors for the digital human, the one or more specified behaviors corresponding with one or more portions of the dialogue on a common timeline. Scene data can be generated with the digital human development platform by merging the one or more behaviors with one or more portions of the dialogue based on times of the one or more behaviors and the one or more portions of the dialogue on the common timeline.
    Type: Application
    Filed: January 7, 2022
    Publication date: July 14, 2022
    Inventors: Abhijit Z. Bendale, Pranav K. Mistry, Bola Yoo, Kijeong Kwon, Simon Gibbs, Anil Unnikrishnan, Link Huang
  • Publication number: 20220222880
    Abstract: A digital human interactive platform can determine a contextual response to a user input and generate a digital human. The digital human can convey the contextual response to the user in real time. The digital human can be configured to convey the contextual response with a predetermined behavior corresponding to the contextual response.
    Type: Application
    Filed: January 7, 2022
    Publication date: July 14, 2022
    Inventors: Abhijit Z. Bendale, Pranav K. Mistry, Bola Yoo, Kijeong Kwon, Simon Gibbs, Anil Unnikrishnan, Link Huang
  • Publication number: 20220223140
    Abstract: A digital human display assembly can include a rectangular display panel enclosed within a frame, the rectangular display panel capable of visually rendering a digital human during an interactive dialogue between the digital human and a user. The digital human display assembly also can include a glass covering enclosed within the frame and extending over a front side of the rectangular display panel. A base can support the frame in an upright position. A light emitting diode (LED) arrangement can be positioned on one or more outer portions of the frame.
    Type: Application
    Filed: January 7, 2022
    Publication date: July 14, 2022
    Inventors: Pranav K. Mistry, Jiawei Zhang, Seungsu Hong, Abhijit Z. Bendale
  • Publication number: 20220225486
    Abstract: Communications pertaining to a digital human can include communicating via a lighting system, based on determining an attribute of a user from one or more sensor-generated signals. A communicative lighting sequence can be determined based on the user attribute. The lighting sequence can correspond to a condition of the digital human and can be configured to communicate that condition to the user. The lighting sequence can be generated with an LED array mounted on a digital human display assembly.
    Type: Application
    Filed: January 7, 2022
    Publication date: July 14, 2022
    Inventors: Pranav K. Mistry, Jiawei Zhang, Seungsu Hong, Abhijit Z. Bendale
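The condition-to-lighting mapping this abstract describes can be sketched as a small lookup plus a user-attribute adjustment. The condition names, RGB sequences, and dimming policy are illustrative assumptions; the patent does not specify concrete sequences:

```python
# Hypothetical per-frame RGB sequences for each digital-human condition.
CONDITION_SEQUENCES = {
    "listening": [(0, 0, 255)] * 4,             # steady blue frames
    "thinking":  [(0, 0, 255), (0, 0, 0)] * 2,  # blinking blue frames
    "speaking":  [(0, 255, 0)] * 4,             # steady green frames
}

def lighting_sequence(user_attribute, condition):
    """Return a per-frame RGB sequence for the LED array that communicates
    the digital human's condition, adjusted for the sensed user attribute."""
    frames = CONDITION_SEQUENCES[condition]
    if user_attribute == "near":
        # Illustrative policy: halve brightness when the user is close.
        frames = [(r // 2, g // 2, b // 2) for r, g, b in frames]
    return frames
```

For example, a "speaking" condition with a nearby user yields the green sequence at half brightness.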
  • Publication number: 20210201549
    Abstract: In one embodiment, a method includes, by one or more computing systems: receiving one or more non-video inputs, where the one or more non-video inputs include at least one of a text input, an audio input, or an expression input; accessing a K-NN graph including several sets of nodes, where each set of nodes corresponds to a particular semantic context out of several semantic contexts; identifying one or more of the semantic contexts based on the one or more non-video inputs and the K-NN graph; determining one or more actions to be performed by a digital avatar based on the one or more identified semantic contexts; generating, in real time in response to receiving the one or more non-video inputs and based on the determined one or more actions, a video output of the digital avatar including one or more human characteristics corresponding to the one or more identified semantic contexts; and sending, to a client device, instructions to present the video output of the digital avatar.
    Type: Application
    Filed: December 16, 2020
    Publication date: July 1, 2021
    Inventors: Abhijit Z. Bendale, Pranav K. Mistry
  • Patent number: 10909371
    Abstract: A method includes retrieving, by a device, contextual information based on at least one of an image, the device, user context, or a combination thereof. At least one model is identified from multiple models based on the contextual information, and at least one object is recognized in an image based on the at least one model. At least one icon is displayed at the device, the at least one icon being associated with at least one of an application, a service, or a combination thereof that provides additional information.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: February 2, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Stanislaw Antol, Abhijit Bendale, Simon J. Gibbs, Won J. Jeon, Hyun Jae Kang, Jihee Kim, Bo Li, Anthony S. Liot, Lu Luo, Pranav K. Mistry, Zhihan Ying
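The pipeline in this abstract (contextual model selection, object recognition, icon display) can be sketched as follows. The registries and the injected recognizer are hypothetical names for illustration, not the patented implementation:

```python
# Illustrative registries; the patent does not name concrete models or apps.
MODELS_BY_CONTEXT = {
    "outdoors": "landmark_model",
    "kitchen":  "food_model",
}
ICONS_BY_OBJECT = {
    "landmark": ["maps_app", "travel_service"],
    "dish":     ["recipe_app", "nutrition_service"],
}

def icons_for_image(context, recognize):
    """Select a model from the contextual information, recognize an object
    with it, and return the icons (apps/services) to display for that object."""
    model = MODELS_BY_CONTEXT[context]
    obj = recognize(model)      # recognizer is injected for this sketch
    return ICONS_BY_OBJECT.get(obj, [])
```

With a recognizer that reports a "dish" in a "kitchen" context, the device would display the recipe and nutrition icons.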
  • Patent number: 10902262
    Abstract: One embodiment provides a method comprising classifying one or more objects present in an input comprising visual data by executing a first set of models associated with a domain on the input. Each model corresponds to an object category. Each model is trained to generate a visual classifier result relating to a corresponding object category in the input with an associated confidence value indicative of accuracy of the visual classifier result. The method further comprises aggregating a first set of visual classifier results based on the confidence value associated with each visual classifier result of each model of the first set of models. At least one other model is selectable for execution on the input based on the aggregated first set of visual classifier results for additional classification of the objects. One or more visual classifier results are returned to an application running on an electronic device for display.
    Type: Grant
    Filed: January 19, 2018
    Date of Patent: January 26, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Stanislaw Antol, Abhijit Bendale, Simon J. Gibbs, Won J. Jeon, Hyun Jae Kang, Jihee Kim, Bo Li, Anthony S. Liot, Lu Luo, Pranav K. Mistry, Zhihan Ying
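The confidence-based aggregation and fallback selection in this abstract can be sketched as follows. The threshold value and the decision to run all fallback models at once are illustrative assumptions, not the patented policy:

```python
def run_domain(models, image, more_models, threshold=0.6):
    """Run a first set of per-category models, keep confident results, and
    execute additional models for a second pass when confidence is low.
    Each model is a callable returning a (label, confidence) pair."""
    results = [model(image) for model in models]
    confident = [(label, c) for label, c in results if c >= threshold]
    if not confident:
        # Aggregated confidence too low: execute the additional models too.
        results += [model(image) for model in more_models]
        confident = [(label, c) for label, c in results if c >= threshold]
    # Return results sorted by confidence for display in the client app.
    return sorted(confident, key=lambda r: r[1], reverse=True)
```

If the first set of models returns only low-confidence results, the additional models run and their confident results are returned instead.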
  • Publication number: 20180204061
    Abstract: One embodiment provides a method comprising classifying one or more objects present in an input comprising visual data by executing a first set of models associated with a domain on the input. Each model corresponds to an object category. Each model is trained to generate a visual classifier result relating to a corresponding object category in the input with an associated confidence value indicative of accuracy of the visual classifier result. The method further comprises aggregating a first set of visual classifier results based on the confidence value associated with each visual classifier result of each model of the first set of models. At least one other model is selectable for execution on the input based on the aggregated first set of visual classifier results for additional classification of the objects. One or more visual classifier results are returned to an application running on an electronic device for display.
    Type: Application
    Filed: January 19, 2018
    Publication date: July 19, 2018
    Inventors: Stanislaw Antol, Abhijit Bendale, Simon J. Gibbs, Won J. Jeon, Hyun Jae Kang, Jihee Kim, Bo Li, Anthony S. Liot, Lu Luo, Pranav K. Mistry, Zhihan Ying
  • Publication number: 20180204059
    Abstract: A method includes retrieving, by a device, contextual information based on at least one of an image, the device, user context, or a combination thereof. At least one model is identified from multiple models based on the contextual information, and at least one object is recognized in an image based on the at least one model. At least one icon is displayed at the device, the at least one icon being associated with at least one of an application, a service, or a combination thereof that provides additional information.
    Type: Application
    Filed: December 13, 2017
    Publication date: July 19, 2018
    Inventors: Stanislaw Antol, Abhijit Bendale, Simon J. Gibbs, Won J. Jeon, Hyun Jae Kang, Jihee Kim, Bo Li, Anthony S. Liot, Lu Luo, Pranav K. Mistry, Zhihan Ying
  • Patent number: D965004
    Type: Grant
    Filed: January 11, 2021
    Date of Patent: September 27, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Abhijit Z. Bendale, Pranav K. Mistry, Bola Yoo, Kijeong Kwon, Anil Unnikrishnan
  • Patent number: D969842
    Type: Grant
    Filed: January 11, 2021
    Date of Patent: November 15, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Abhijit Z. Bendale, Pranav K. Mistry, Bola Yoo, Kijeong Kwon
  • Patent number: D989800
    Type: Grant
    Filed: August 23, 2022
    Date of Patent: June 20, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Abhijit Z. Bendale, Pranav K. Mistry, Bola Yoo, Kijeong Kwon, Anil Unnikrishnan
  • Patent number: D1002642
    Type: Grant
    Filed: January 11, 2021
    Date of Patent: October 24, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Pranav K. Mistry, Jiawei Zhang, Seungsu Hong, Abhijit Z. Bendale
  • Patent number: D1012114
    Type: Grant
    Filed: May 10, 2023
    Date of Patent: January 23, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Abhijit Z. Bendale, Pranav K. Mistry, Bola Yoo, Kijeong Kwon, Anil Unnikrishnan
  • Patent number: D1017624
    Type: Grant
    Filed: May 10, 2023
    Date of Patent: March 12, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Abhijit Z. Bendale, Pranav K. Mistry, Bola Yoo, Kijeong Kwon, Anil Unnikrishnan
  • Patent number: D1020788
    Type: Grant
    Filed: January 11, 2021
    Date of Patent: April 2, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Pranav K. Mistry, Jiawei Zhang, Seungsu Hong, Abhijit Z. Bendale, Bola Yoo, Kijeong Kwon