Patents by Inventor Payod PANDA

Payod PANDA has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Brief, illustrative code sketches of the core techniques described in these abstracts appear after the listing.

  • Publication number: 20240005579
    Abstract: Systems and methods for representing two-dimensional representations as three-dimensional avatars are provided herein. In some examples, one or more input video streams are received. A first subject, within the one or more input video streams, is identified. Based on the one or more input video streams, a first view of the first subject is identified. Based on the one or more input video streams, a second view of the first subject is identified. The first subject is segmented into a plurality of planar objects. The plurality of planar objects are transformed with respect to each other. The plurality of planar objects are based on the first and second views of the first subject. The plurality of planar objects are output in an output video stream. The plurality of planar objects provide perspective of the first subject to one or more viewers.
    Type: Application
    Filed: June 30, 2022
    Publication date: January 4, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Mar GONZALEZ FRANCO, Payod PANDA, Andrew D. WILSON, Kori M. INKPEN, Eyal OFEK, William Arthur Stewart BUXTON
  • Publication number: 20230421722
    Abstract: Aspects of the present disclosure relate to headset virtual presence techniques. For example, a participant of a communication session may not have an associated video feed, for example as a result of a user preference to disable video communication or a lack of appropriate hardware. Accordingly, a virtual presence may be generated for such a non-video participant, such that the non-video participant may be represented within the communication session similar to video participants. The virtual presence may be controllable using a headset device, for example such that movements identified by the headset device cause the virtual presence to move. In some instances, user input may be received to control emotions conveyed by the virtual presence, for example specifying an emotion type and/or intensity.
    Type: Application
    Filed: September 7, 2023
    Publication date: December 28, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Kenneth P. HINCKLEY, Michel PAHUD, Mar Gonzalez FRANCO, Edward Sean Lloyd RINTEL, Eyal OFEK, Jaron Zepel LANIER, Molly Jane NICHOLAS, Payod PANDA
  • Patent number: 11792364
    Abstract: Aspects of the present disclosure relate to headset virtual presence techniques. For example, a participant of a communication session may not have an associated video feed, for example as a result of a user preference to disable video communication or a lack of appropriate hardware. Accordingly, a virtual presence may be generated for such a non-video participant, such that the non-video participant may be represented within the communication session similar to video participants. The virtual presence may be controllable using a headset device, for example such that movements identified by the headset device cause the virtual presence to move. In some instances, user input may be received to control emotions conveyed by the virtual presence, for example specifying an emotion type and/or intensity.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: October 17, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kenneth P. Hinckley, Michel Pahud, Mar Gonzalez Franco, Edward Sean Lloyd Rintel, Eyal Ofek, Jaron Zepel Lanier, Molly Jane Nicholas, Payod Panda
  • Publication number: 20230266830
    Abstract: Aspects of the present disclosure relate to semantic user input for a computing device. In examples, user input is identified and processed to identify and automatically perform an associated semantic action. The semantic action may be determined based at least in part on an environmental context associated with the user input. Thus, an action determined for a given user input may change according to the environmental context in which the input was received. For example, an association between user input, an environmental context, and an action may be used to affect the behavior of a computing device as a result of identifying the user input in a scenario that has the environmental context. Such associations may be dynamically determined as a result of user interactions associated with manually provided input, for example to create, update, and/or remove semantic actions associated with a variety of user inputs.
    Type: Application
    Filed: February 22, 2022
    Publication date: August 24, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Eyal OFEK, Michel PAHUD, Edward Sean Lloyd RINTEL, Mar Gonzalez FRANCO, Payod PANDA
  • Publication number: 20230259322
    Abstract: Aspects of the present disclosure relate to computing device headset input. In examples, sensor data from one or more sensors of a headset device are processed to identify implicit and/or explicit user input. A context may be determined for the user input, which may be used to process the identified input and generate an action that affects the behavior of a computing device accordingly. As a result, the headset device is usable to control one or more computing devices. As compared to other wearable devices, headset devices may be more prevalent and may therefore enable more convenient and more intuitive user input beyond merely providing audio output.
    Type: Application
    Filed: April 26, 2023
    Publication date: August 17, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Kenneth P. HINCKLEY, Michel PAHUD, Mar Gonzalez FRANCO, Edward Sean Lloyd RINTEL, Eyal OFEK, Jaron Zepel LANIER, Molly Jane NICHOLAS, Payod PANDA
  • Patent number: 11669294
    Abstract: Aspects of the present disclosure relate to computing device headset input. In examples, sensor data from one or more sensors of a headset device are processed to identify implicit and/or explicit user input. A context may be determined for the user input, which may be used to process the identified input and generate an action that affects the behavior of a computing device accordingly. As a result, the headset device is usable to control one or more computing devices. As compared to other wearable devices, headset devices may be more prevalent and may therefore enable more convenient and more intuitive user input beyond merely providing audio output.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: June 6, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kenneth P. Hinckley, Michel Pahud, Mar Gonzalez Franco, Edward Sean Lloyd Rintel, Eyal Ofek, Jaron Zepel Lanier, Molly Jane Nicholas, Payod Panda
  • Publication number: 20220385855
    Abstract: Aspects of the present disclosure relate to headset virtual presence techniques. For example, a participant of a communication session may not have an associated video feed, for example as a result of a user preference to disable video communication or a lack of appropriate hardware. Accordingly, a virtual presence may be generated for such a non-video participant, such that the non-video participant may be represented within the communication session similar to video participants. The virtual presence may be controllable using a headset device, for example such that movements identified by the headset device cause the virtual presence to move. In some instances, user input may be received to control emotions conveyed by the virtual presence, for example specifying an emotion type and/or intensity.
    Type: Application
    Filed: May 28, 2021
    Publication date: December 1, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Kenneth P. HINCKLEY, Michel PAHUD, Mar Gonzalez FRANCO, Edward Sean Lloyd RINTEL, Eyal OFEK, Jaron Zepel LANIER, Molly Jane NICHOLAS, Payod PANDA
  • Publication number: 20220382506
    Abstract: Aspects of the present disclosure relate to computing device headset input. In examples, sensor data from one or more sensors of a headset device are processed to identify implicit and/or explicit user input. A context may be determined for the user input, which may be used to process the identified input and generate an action that affects the behavior of a computing device accordingly. As a result, the headset device is usable to control one or more computing devices. As compared to other wearable devices, headset devices may be more prevalent and may therefore enable more convenient and more intuitive user input beyond merely providing audio output.
    Type: Application
    Filed: May 28, 2021
    Publication date: December 1, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Kenneth P. HINCKLEY, Michel PAHUD, Mar Gonzalez FRANCO, Edward Sean Lloyd RINTEL, Eyal OFEK, Jaron Zepel LANIER, Molly Jane NICHOLAS, Payod PANDA
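
Publication 20240005579 describes segmenting a subject seen in two video views into planar objects that are transformed with respect to one another so remote viewers perceive depth. The sketch below is a minimal, hypothetical illustration of that idea: the PlanarObject class, the fixed head/torso split, and the yaw-dependent transform are assumptions for illustration, not the claimed method.

```python
# Hypothetical sketch of a billboard-style avatar: a subject cropped from an
# input video frame is split into planar objects that are rotated relative to
# each other so a viewer perceives perspective. The head/torso split and the
# parallax factor are illustrative assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class PlanarObject:
    texture: np.ndarray      # RGBA pixels cropped from an input video frame
    transform: np.ndarray    # 4x4 pose of the plane in the shared scene

def rotation_y(angle_rad: float) -> np.ndarray:
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    m = np.eye(4)
    m[0, 0], m[0, 2], m[2, 0], m[2, 2] = c, s, -s, c
    return m

def segment_subject(frame: np.ndarray) -> list[PlanarObject]:
    """Split one view of the subject into planar objects (here: head / torso)."""
    h = frame.shape[0]
    head = PlanarObject(frame[: h // 3], np.eye(4))
    torso = PlanarObject(frame[h // 3:], np.eye(4))
    torso.transform[1, 3] = -1.0          # place the torso plane below the head
    return [head, torso]

def orient_toward_viewer(planes: list[PlanarObject], viewer_yaw: float) -> None:
    """Transform the planes with respect to each other for the viewer's angle."""
    for depth, plane in enumerate(planes):
        parallax = rotation_y(viewer_yaw * (1.0 - 0.2 * depth))
        plane.transform = parallax @ plane.transform

# Usage: segment a frame, then re-orient the planes as the viewer moves.
frame = np.zeros((480, 270, 4), dtype=np.uint8)   # stand-in for a video frame
avatar = segment_subject(frame)
orient_toward_viewer(avatar, viewer_yaw=np.radians(20))
```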
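
The virtual-presence filings (publications 20230421722 and 20220385855, and patent 11792364) describe representing a non-video participant with a virtual presence that follows headset movement and conveys emotions chosen by the user (type and intensity). The sketch below is an assumption-laden illustration of that flow; the VirtualPresence class and its fields are invented for this example.

```python
# Hypothetical illustration: a participant without a video feed is represented
# by a virtual presence whose pose follows the headset's orientation and whose
# expression follows explicit user input (emotion type + intensity).
from dataclasses import dataclass

@dataclass
class VirtualPresence:
    participant_id: str
    yaw: float = 0.0
    pitch: float = 0.0
    emotion: str = "neutral"
    intensity: float = 0.0      # 0.0 .. 1.0

    def apply_headset_motion(self, yaw: float, pitch: float) -> None:
        """Movements identified by the headset device move the virtual presence."""
        self.yaw, self.pitch = yaw, pitch

    def set_emotion(self, emotion: str, intensity: float) -> None:
        """User input controls the emotion conveyed by the virtual presence."""
        self.emotion = emotion
        self.intensity = max(0.0, min(1.0, intensity))

# A non-video participant joins and is rendered like video participants.
presence = VirtualPresence(participant_id="guest-42")
presence.apply_headset_motion(yaw=0.3, pitch=-0.1)   # turning/nodding from headset data
presence.set_emotion("happy", intensity=0.7)          # explicit emotion control
```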
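
Publication 20230266830 describes mapping the same user input to different semantic actions depending on the environmental context, with associations that can be created, updated, or removed dynamically. The following sketch, with an invented SemanticInputMap class and string-valued inputs and contexts, illustrates that mapping under stated assumptions only.

```python
# Hypothetical sketch of semantic user input: the same input maps to different
# actions depending on the environmental context in which it is received, and
# the associations can be created, updated, or removed at runtime.
from typing import Callable

Action = Callable[[], None]

class SemanticInputMap:
    def __init__(self) -> None:
        self._associations: dict[tuple[str, str], Action] = {}

    def associate(self, user_input: str, context: str, action: Action) -> None:
        """Create or update an (input, environmental context) -> action association."""
        self._associations[(user_input, context)] = action

    def remove(self, user_input: str, context: str) -> None:
        """Remove an association, e.g. as a result of user interaction."""
        self._associations.pop((user_input, context), None)

    def handle(self, user_input: str, context: str) -> None:
        """Automatically perform the associated semantic action, if any."""
        action = self._associations.get((user_input, context))
        if action is not None:
            action()

# The same gesture behaves differently in a meeting than in a document review.
semantics = SemanticInputMap()
semantics.associate("thumbs_up", "video_meeting", lambda: print("send reaction"))
semantics.associate("thumbs_up", "document_review", lambda: print("approve change"))
semantics.handle("thumbs_up", "video_meeting")
```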
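
The headset-input filings (publications 20230259322 and 20220382506, and patent 11669294) describe processing headset sensor data into implicit or explicit user input, combining it with a context, and generating an action for a controlled computing device. The pipeline below is a rough, hypothetical stand-in: the sensor fields, thresholds, and rule table are assumptions made for illustration.

```python
# Hypothetical pipeline for headset-as-input-device: raw sensor data is
# classified as implicit or explicit user input, combined with a context,
# and turned into an action that affects a computing device's behavior.
from dataclasses import dataclass

@dataclass
class SensorSample:
    accel_magnitude: float   # from the headset's motion sensors
    wearing: bool            # on-head detection

def classify_input(sample: SensorSample) -> str:
    """Very rough stand-in for implicit vs. explicit input detection."""
    if not sample.wearing:
        return "implicit:removed"
    return "explicit:nod" if sample.accel_magnitude > 1.5 else "implicit:idle"

def action_for(user_input: str, context: str) -> str:
    """Map (input, context) to an action on the controlled computing device."""
    rules = {
        ("implicit:removed", "in_meeting"): "mute microphone",
        ("explicit:nod", "incoming_call"): "answer call",
    }
    return rules.get((user_input, context), "no-op")

sample = SensorSample(accel_magnitude=2.0, wearing=True)
print(action_for(classify_input(sample), context="incoming_call"))  # -> answer call
```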