Patents by Inventor Payod PANDA
Payod PANDA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250078379
Abstract: Systems and methods for representing two-dimensional representations as three-dimensional avatars are provided herein. In some examples, one or more input video streams are received. A first subject, within the one or more input video streams, is identified. Based on the one or more input video streams, a first view of the first subject is identified. Based on the one or more input video streams, a second view of the first subject is identified. The first subject is segmented into a plurality of planar objects. The plurality of planar objects are transformed with respect to each other. The plurality of planar objects are based on the first and second views of the first subject. The plurality of planar objects are output in an output video stream. The plurality of planar objects provide perspective of the first subject to one or more viewers.
Type: Application
Filed: November 20, 2024
Publication date: March 6, 2025
Applicant: Microsoft Technology Licensing, LLC
Inventors: Mar GONZALEZ FRANCO, Payod PANDA, Andrew D. WILSON, Kori M. INKPEN, Eyal OFEK, William Arthur Stewart BUXTON
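The patent does not publish code; the following is an illustrative sketch only, with toy data structures and region names (`PlanarObject`, `segment_subject`, the `head`/`torso`/`arms` regions) invented here to convey the idea of segmenting a subject into planar objects sourced from two views and transforming them relative to each other for the viewer:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PlanarObject:
    """One segmented plane (e.g. head, torso) with its own orientation."""
    label: str
    source_view: str      # which input view supplied this plane's pixels
    yaw_degrees: float    # rotation applied relative to sibling planes

def segment_subject(view_a: str, view_b: str) -> List[PlanarObject]:
    """Split a subject into planar objects, one per body region,
    taking each region's pixels from one of the two input views."""
    regions = ["head", "torso", "arms"]
    planes = []
    for i, region in enumerate(regions):
        source = view_a if i % 2 == 0 else view_b  # toy view selection
        planes.append(PlanarObject(region, source, yaw_degrees=0.0))
    return planes

def transform_planes(planes: List[PlanarObject], viewer_yaw: float) -> List[PlanarObject]:
    """Rotate each plane toward the viewer so the flat segments
    together read as a 3D avatar from the viewer's perspective."""
    for p in planes:
        p.yaw_degrees = viewer_yaw
    return planes

planes = transform_planes(segment_subject("view_a", "view_b"), viewer_yaw=15.0)
```

In a real system the segmentation would come from per-frame subject detection and the transform from the viewer's tracked pose; the sketch only mirrors the pipeline shape described in the abstract.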
-
Patent number: 12175581
Abstract: Systems and methods for representing two-dimensional representations as three-dimensional avatars are provided herein. In some examples, one or more input video streams are received. A first subject, within the one or more input video streams, is identified. Based on the one or more input video streams, a first view of the first subject is identified. Based on the one or more input video streams, a second view of the first subject is identified. The first subject is segmented into a plurality of planar objects. The plurality of planar objects are transformed with respect to each other. The plurality of planar objects are based on the first and second views of the first subject. The plurality of planar objects are output in an output video stream. The plurality of planar objects provide perspective of the first subject to one or more viewers.
Type: Grant
Filed: June 30, 2022
Date of Patent: December 24, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Mar Gonzalez Franco, Payod Panda, Andrew D. Wilson, Kori M. Inkpen, Eyal Ofek, William Arthur Stewart Buxton
-
Publication number: 20240203075
Abstract: A computer-implemented method is described which comprises generating a representation of a digital space and a representation of the physical space using an audiovisual feed received from a camera proximate to a display located in the physical space. The representation of the digital space is generated using user information identifying a remote user associated with the display and presence information relating to the remote user, and the digital representation comprises an avatar of the remote user. The representation of the digital space is output to the display located in the physical space and the representation of the physical space is output to a computing device associated with the remote user. The method further comprises dynamically updating the representation of the digital space and/or physical space in response to changes in the user information and presence information.
Type: Application
Filed: December 14, 2022
Publication date: June 20, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Edward Sean Lloyd RINTEL, Payod PANDA, Lev TANKELEVITCH, Abigail Jane SELLEN, Kori Marie INKPEN, John C. TANG, Sasa JUNUZOVIC, Andrew D. WILSON, Bo KANG, Andriana BOUDOURAKI, William Arthur Stewart BUXTON, Ozumcan DEMIR CALISKAN, Kunal GUPTA
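The dynamic-update behavior described in the abstract — the digital-space representation changing as presence information changes — can be sketched as follows. This is a hypothetical illustration only; the class and method names (`DigitalSpace`, `update_presence`) are invented here, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalSpace:
    """Representation of the digital space shown on the physical display."""
    avatars: dict = field(default_factory=dict)  # remote user id -> presence state

    def update_presence(self, user_id: str, present: bool) -> None:
        """Add or remove a remote user's avatar as presence info changes,
        so the rendered representation stays current."""
        if present:
            self.avatars[user_id] = "active"
        else:
            self.avatars.pop(user_id, None)

space = DigitalSpace()
space.update_presence("remote-user-1", True)   # avatar appears on the display
space.update_presence("remote-user-1", False)  # avatar is removed again
```

The symmetric physical-space representation (built from the camera's audiovisual feed) would be updated the same way on the remote user's device.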
-
Publication number: 20240185534
Abstract: A computer-implemented method of generating a mixed reality workflow is described. The method comprises identifying a series of tasks and generating an input task-to-object-mapping by analyzing data that defines a process performed by a first user interacting with objects in a first location. The input task-to-object-mapping maps each task from the series of tasks to an object used in the respective task. A task-specific non-spatial characteristic of each object in the input task-to-object-mapping is determined and used to map each object in the input task-to-object-mapping to a candidate object identified at a second location to generate an output task-to-object-mapping. The series of tasks, location data defining a position of each candidate object in the second location and the output task-to-object-mapping are used to generate a mapped workflow which is then output to a device in the second location.
Type: Application
Filed: December 2, 2022
Publication date: June 6, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Edward Sean Lloyd RINTEL, Prashant VAIDYANATHAN, Paul Kerr GRANT, Neil DALCHAU, Eyal OFEK, Payod PANDA
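The two mappings the abstract describes can be sketched as plain dictionaries. This is a minimal illustration under invented assumptions — the function names, example tasks, and the characteristic strings ("flat surface", "heat source") are hypothetical, not from the patent:

```python
def build_input_mapping(tasks, objects_used):
    """Input task-to-object-mapping: each task in the series is paired
    with the object the first user interacted with to perform it."""
    return dict(zip(tasks, objects_used))

def remap_to_second_location(input_mapping, characteristics, candidates):
    """Use each object's task-specific non-spatial characteristic to pick
    a candidate object at the second location, yielding the output mapping."""
    output = {}
    for task, obj in input_mapping.items():
        needed = characteristics[obj]
        # first candidate at the new location that shares the characteristic
        output[task] = next(c for c, traits in candidates.items() if needed in traits)
    return output

tasks = ["mix reagents", "incubate sample"]
input_map = build_input_mapping(tasks, ["lab bench", "oven"])
output_map = remap_to_second_location(
    input_map,
    {"lab bench": "flat surface", "oven": "heat source"},
    {"kitchen counter": {"flat surface"}, "stove": {"heat source"}},
)
```

Combined with location data for each candidate object, the output mapping would then drive the mixed reality workflow rendered at the second location.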
-
Publication number: 20240005579
Abstract: Systems and methods for representing two-dimensional representations as three-dimensional avatars are provided herein. In some examples, one or more input video streams are received. A first subject, within the one or more input video streams, is identified. Based on the one or more input video streams, a first view of the first subject is identified. Based on the one or more input video streams, a second view of the first subject is identified. The first subject is segmented into a plurality of planar objects. The plurality of planar objects are transformed with respect to each other. The plurality of planar objects are based on the first and second views of the first subject. The plurality of planar objects are output in an output video stream. The plurality of planar objects provide perspective of the first subject to one or more viewers.
Type: Application
Filed: June 30, 2022
Publication date: January 4, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Mar GONZALEZ FRANCO, Payod PANDA, Andrew D. WILSON, Kori M. INKPEN, Eyal OFEK, William Arthur Stewart BUXTON
-
Publication number: 20230421722
Abstract: Aspects of the present disclosure relate to headset virtual presence techniques. For example, a participant of a communication session may not have an associated video feed, for example as a result of a user preference to disable video communication or a lack of appropriate hardware. Accordingly, a virtual presence may be generated for such a non-video participant, such that the non-video participant may be represented within the communication session similar to video participants. The virtual presence may be controllable using a headset device, for example such that movements identified by the headset device cause the virtual presence to move. In some instances, user input may be received to control emotions conveyed by the virtual presence, for example specifying an emotion type and/or intensity.
Type: Application
Filed: September 7, 2023
Publication date: December 28, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Kenneth P. HINCKLEY, Michel PAHUD, Mar Gonzalez FRANCO, Edward Sean Lloyd RINTEL, Eyal OFEK, Jaron Zepel LANIER, Molly Jane NICHOLAS, Payod PANDA
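The two controls the abstract names — headset movements driving the virtual presence, and user input specifying an emotion type and intensity — can be sketched as a small state object. The class, fields, and clamping behavior below are illustrative assumptions, not details from the patent:

```python
from dataclasses import dataclass

@dataclass
class VirtualPresence:
    """Avatar standing in for a non-video participant in a session."""
    yaw: float = 0.0
    emotion: str = "neutral"
    intensity: float = 0.0

    def apply_headset_motion(self, delta_yaw: float) -> None:
        """Movements identified by the headset device move the presence."""
        self.yaw += delta_yaw

    def set_emotion(self, emotion: str, intensity: float) -> None:
        """User input controls the conveyed emotion: a type plus an
        intensity, clamped here to [0, 1] as a toy convention."""
        self.emotion = emotion
        self.intensity = max(0.0, min(1.0, intensity))

presence = VirtualPresence()
presence.apply_headset_motion(10.0)      # head turn mirrored by the avatar
presence.set_emotion("happy", 0.8)       # explicit emotion input
```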
-
Patent number: 11792364
Abstract: Aspects of the present disclosure relate to headset virtual presence techniques. For example, a participant of a communication session may not have an associated video feed, for example as a result of a user preference to disable video communication or a lack of appropriate hardware. Accordingly, a virtual presence may be generated for such a non-video participant, such that the non-video participant may be represented within the communication session similar to video participants. The virtual presence may be controllable using a headset device, for example such that movements identified by the headset device cause the virtual presence to move. In some instances, user input may be received to control emotions conveyed by the virtual presence, for example specifying an emotion type and/or intensity.
Type: Grant
Filed: May 28, 2021
Date of Patent: October 17, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kenneth P. Hinckley, Michel Pahud, Mar Gonzalez Franco, Edward Sean Lloyd Rintel, Eyal Ofek, Jaron Zepel Lanier, Molly Jane Nicholas, Payod Panda
-
Publication number: 20230266830
Abstract: Aspects of the present disclosure relate to semantic user input for a computing device. In examples, user input is identified and processed to identify and automatically perform an associated semantic action. The semantic action may be determined based at least in part on an environmental context associated with the user input. Thus, an action determined for a given user input may change according to the environmental context in which the input was received. For example, an association between user input, an environmental context, and an action may be used to affect the behavior of a computing device as a result of identifying the user input in a scenario that has the environmental context. Such associations may be dynamically determined as a result of user interactions associated with manually provided input, for example to create, update, and/or remove semantic actions associated with a variety of user inputs.
Type: Application
Filed: February 22, 2022
Publication date: August 24, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Eyal OFEK, Michel PAHUD, Edward Sean Lloyd RINTEL, Mar Gonzalez FRANCO, Payod PANDA
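The core association the abstract describes — (user input, environmental context) → action, with the same input resolving to different actions in different contexts, and associations updated dynamically — can be sketched as a lookup table. The specific inputs, contexts, and action names below are invented for illustration:

```python
# (user_input, environmental_context) -> semantic action
associations = {
    ("nod", "in_meeting"): "send_agreement_reaction",
    ("nod", "listening_to_music"): "skip_track",
}

def resolve_action(user_input: str, context: str):
    """The same input maps to different actions depending on the
    environmental context in which it was received."""
    return associations.get((user_input, context))

def learn_association(user_input: str, context: str, action: str) -> None:
    """Dynamically create or update an association, e.g. after the user
    manually performs the action following the input."""
    associations[(user_input, context)] = action
```

A production system would generalize beyond exact-match keys, but the table captures the context-dependent dispatch the abstract describes.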
-
Publication number: 20230259322
Abstract: Aspects of the present disclosure relate to computing device headset input. In examples, sensor data from one or more sensors of a headset device are processed to identify implicit and/or explicit user input. A context may be determined for the user input, which may be used to process the identified input and generate an action that affects the behavior of a computing device accordingly. As a result, the headset device is usable to control one or more computing devices. As compared to other wearable devices, headset devices may be more prevalent and may therefore enable more convenient and more intuitive user input beyond merely providing audio output.
Type: Application
Filed: April 26, 2023
Publication date: August 17, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Kenneth P. HINCKLEY, Michel PAHUD, Mar Gonzalez FRANCO, Edward Sean Lloyd RINTEL, Eyal OFEK, Jaron Zepel LANIER, Molly Jane NICHOLAS, Payod PANDA
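The pipeline in this abstract — sensor data classified as implicit or explicit input, then combined with a context to generate a device action — can be sketched in two small functions. The sensor fields, the 30 deg/s threshold, and the action names are toy assumptions, not values from the patent:

```python
def classify_input(sensor_sample: dict):
    """Classify headset sensor data as explicit input (a deliberate
    control, e.g. a button press) or implicit input (e.g. head motion)."""
    if sensor_sample.get("button_pressed"):
        return "explicit"
    if abs(sensor_sample.get("head_yaw_rate", 0.0)) > 30.0:  # toy threshold
        return "implicit"
    return None

def generate_action(kind, context: str) -> str:
    """Combine the identified input with its context to pick an action
    that affects the behavior of the controlled computing device."""
    if kind == "explicit":
        return "toggle_mute"
    if kind == "implicit" and context == "notification_pending":
        return "dismiss_notification"
    return "no_op"

action = generate_action(classify_input({"head_yaw_rate": 45.0}), "notification_pending")
```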
-
Patent number: 11669294
Abstract: Aspects of the present disclosure relate to computing device headset input. In examples, sensor data from one or more sensors of a headset device are processed to identify implicit and/or explicit user input. A context may be determined for the user input, which may be used to process the identified input and generate an action that affects the behavior of a computing device accordingly. As a result, the headset device is usable to control one or more computing devices. As compared to other wearable devices, headset devices may be more prevalent and may therefore enable more convenient and more intuitive user input beyond merely providing audio output.
Type: Grant
Filed: May 28, 2021
Date of Patent: June 6, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kenneth P. Hinckley, Michel Pahud, Mar Gonzalez Franco, Edward Sean Lloyd Rintel, Eyal Ofek, Jaron Zepel Lanier, Molly Jane Nicholas, Payod Panda
-
Publication number: 20220385855
Abstract: Aspects of the present disclosure relate to headset virtual presence techniques. For example, a participant of a communication session may not have an associated video feed, for example as a result of a user preference to disable video communication or a lack of appropriate hardware. Accordingly, a virtual presence may be generated for such a non-video participant, such that the non-video participant may be represented within the communication session similar to video participants. The virtual presence may be controllable using a headset device, for example such that movements identified by the headset device cause the virtual presence to move. In some instances, user input may be received to control emotions conveyed by the virtual presence, for example specifying an emotion type and/or intensity.
Type: Application
Filed: May 28, 2021
Publication date: December 1, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Kenneth P. HINCKLEY, Michel PAHUD, Mar Gonzalez FRANCO, Edward Sean Lloyd RINTEL, Eyal OFEK, Jaron Zepel LANIER, Molly Jane NICHOLAS, Payod PANDA
-
Publication number: 20220382506
Abstract: Aspects of the present disclosure relate to computing device headset input. In examples, sensor data from one or more sensors of a headset device are processed to identify implicit and/or explicit user input. A context may be determined for the user input, which may be used to process the identified input and generate an action that affects the behavior of a computing device accordingly. As a result, the headset device is usable to control one or more computing devices. As compared to other wearable devices, headset devices may be more prevalent and may therefore enable more convenient and more intuitive user input beyond merely providing audio output.
Type: Application
Filed: May 28, 2021
Publication date: December 1, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Kenneth P. HINCKLEY, Michel PAHUD, Mar Gonzalez FRANCO, Edward Sean Lloyd RINTEL, Eyal OFEK, Jaron Zepel LANIER, Molly Jane NICHOLAS, Payod PANDA