Patents by Inventor Nicholas Ferianc Kamuda

Nicholas Ferianc Kamuda has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240037879
    Abstract: In some implementations, the disclosed systems and methods can handle areas of a real-world environment that are blocked from a user's view by an obstacle or object, such as walls or doors. In some implementations, the disclosed systems and methods can interface with smart lights (and/or other smart devices controlling ambient lighting) to determine where lights are located and how much light they emit. In some implementations, the disclosed systems and methods can be animated via user tracking. Some user systems may lack video tracking and/or provide inconsistent video tracking. In some implementations, the disclosed systems and methods can display content for a screen in a pass-through visualization.
    Type: Application
    Filed: September 6, 2023
    Publication date: February 1, 2024
    Inventors: Joseph GARDNER, Alexander FAABORG, Nicholas Ferianc KAMUDA
  • Patent number: 11755122
    Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides, on a display device, a method comprising receiving hand tracking data representing a pose of a hand in a coordinate system, based on the hand tracking data, recognizing a hand gesture, and identifying an emoji corresponding to the hand gesture. The method further comprises presenting the emoji on the display device, and sending an instruction to one or more other display devices to present the emoji.
    Type: Grant
    Filed: May 23, 2022
    Date of Patent: September 12, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Michael Harley Notter, Jenny Kam, Sheng Kai Tang, Kenneth Mitchell Jakubzak, Adam Edwin Behringer, Amy Mun Hong, Joshua Kyle Neff, Sophie Stellmach, Mathew J. Lamb, Nicholas Ferianc Kamuda
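The abstract above describes a pipeline of recognizing a hand gesture, mapping it to an emoji, presenting it locally, and instructing other display devices to present it. The patent does not publish code; the following is a minimal illustrative sketch of that flow, in which the names (`EMOJI_MAP`, `Display`, `handle_gesture`) and the gesture labels are all assumptions, not details from the filing.

```python
from dataclasses import dataclass, field

# Hypothetical table "identifying an emoji corresponding to the hand
# gesture" -- the gesture labels and emoji choices are illustrative.
EMOJI_MAP = {
    "thumbs_up": "👍",
    "wave": "👋",
    "heart": "❤️",
}

@dataclass
class Display:
    name: str
    shown: list = field(default_factory=list)

    def present(self, emoji):
        self.shown.append(emoji)

def handle_gesture(gesture, local, others):
    """Recognize a gesture label, present its emoji on the local
    display, and instruct the other display devices to present it."""
    emoji = EMOJI_MAP.get(gesture)
    if emoji is None:
        return None            # unrecognized gesture: nothing to show
    local.present(emoji)
    for remote in others:
        remote.present(emoji)  # stands in for sending an instruction
    return emoji
```

In a real system the `gesture` label would come from a classifier over the hand-tracking pose data; here it is taken as given so the presentation/forwarding step stands alone.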
  • Patent number: 11703994
    Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: July 18, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Joshua Kyle Neff, Alton Kwok
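The far/near interaction logic described above can be sketched as a small decision: compare each control point's distance to the user against a threshold, and let a trigger input in far mode bring up a proxy interaction object within reach. This is an illustrative reading only; the threshold value and all names are invented, not taken from the patent.

```python
import math

THRESHOLD = 0.65  # hypothetical arm's-reach threshold, in meters

def interaction_mode(user_pos, control_points, threshold=THRESHOLD):
    """Return 'far' if any control point lies beyond the threshold
    distance from the user, else 'near'."""
    if any(math.dist(user_pos, p) > threshold for p in control_points):
        return "far"
    return "near"

def on_trigger(mode):
    """In far mode, a trigger input invokes near interaction by
    displaying a virtual interaction object within reach."""
    if mode == "far":
        return {"mode": "near", "proxy_object": True}
    return {"mode": mode, "proxy_object": False}
```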
  • Publication number: 20220345537
    Abstract: In one embodiment, an AR/VR system includes a social-networking application installed on the AR/VR system, which allows a user to access an online social network, including communicating with the user's social connections and interacting with content objects on the online social network. The AR/VR system also includes an AR/VR application, which allows the user to interact with an AR/VR platform by providing user input to the AR/VR application via various modalities. Based on the user input, the AR/VR platform generates responses and sends the generated responses to the AR/VR application, which then presents the responses to the user at the AR/VR system via various modalities.
    Type: Application
    Filed: March 3, 2022
    Publication date: October 27, 2022
    Inventors: Safiya Samms, Ioana Adriana Vlad, Marcus Tanner, Pieter De Baets, Nicole Lundblad, Gregory Francis Mazurek, Nicholas Ferianc Kamuda, Kimberly Arnette, Azam Jiva, Anshul Raizada, Chandra Shekar Chetty, Bhabani Panda
  • Patent number: 11461955
    Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
    Type: Grant
    Filed: August 23, 2021
    Date of Patent: October 4, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai Tang, Julia Schwarz, Jason Michael Ray, Sophie Stellmach, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Kevin John Appel, Jamie Bryant Kirschenbaum
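The hand-ray targeting in the abstract above can be approximated geometrically: orient a ray from the hand using the arm-joint-to-hand direction, then test whether it passes near a virtual object's control points. The patent gives no implementation; this sketch uses an invented proximity radius and a shoulder joint as the inferred arm joint.

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def point_ray_distance(point, origin, direction):
    """Distance from a point to the ray origin + t*direction, t >= 0."""
    d = _normalize(direction)
    to_p = tuple(p - o for p, o in zip(point, origin))
    t = max(0.0, sum(a * b for a, b in zip(to_p, d)))
    closest = tuple(o + t * c for o, c in zip(origin, d))
    return math.dist(point, closest)

def is_targeted(control_points, arm_joint, hand, radius=0.05):
    """Cast a ray from the hand along the arm-joint-to-hand direction
    and report whether any control point lies within radius of it."""
    direction = tuple(h - s for h, s in zip(hand, arm_joint))
    return any(point_ray_distance(p, hand, direction) <= radius
               for p in control_points)
```

When `is_targeted` returns true, the system would show the user the targeting indication the abstract describes.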
  • Publication number: 20220283646
    Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides, on a display device, a method comprising receiving hand tracking data representing a pose of a hand in a coordinate system, based on the hand tracking data, recognizing a hand gesture, and identifying an emoji corresponding to the hand gesture. The method further comprises presenting the emoji on the display device, and sending an instruction to one or more other display devices to present the emoji.
    Type: Application
    Filed: May 23, 2022
    Publication date: September 8, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Julia SCHWARZ, Michael Harley NOTTER, Jenny KAM, Sheng Kai TANG, Kenneth Mitchell JAKUBZAK, Adam Edwin BEHRINGER, Amy Mun HONG, Joshua Kyle NEFF, Sophie STELLMACH, Mathew J. LAMB, Nicholas Ferianc KAMUDA
  • Publication number: 20220253199
    Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
    Type: Application
    Filed: April 28, 2022
    Publication date: August 11, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Joshua Kyle NEFF, Alton KWOK
  • Patent number: 11340707
    Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides, on a display device, a method comprising receiving hand tracking data representing a pose of a hand in a coordinate system, based on the hand tracking data, recognizing a hand gesture, and identifying an emoji corresponding to the hand gesture. The method further comprises presenting the emoji on the display device, and sending an instruction to one or more other display devices to present the emoji.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: May 24, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Michael Harley Notter, Jenny Kam, Sheng Kai Tang, Kenneth Mitchell Jakubzak, Adam Edwin Behringer, Amy Mun Hong, Joshua Kyle Neff, Sophie Stellmach, Mathew J. Lamb, Nicholas Ferianc Kamuda
  • Patent number: 11320957
    Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: May 3, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Joshua Kyle Neff, Alton Kwok
  • Patent number: 11294472
    Abstract: A method for augmenting a two-stage hand gesture input comprises receiving hand tracking data for a hand of a user. A gesture recognition machine recognizes that the user has performed a first-stage gesture based on one or more parameters derived from the received hand tracking data satisfying first-stage gesture criteria. An affordance cueing a second-stage gesture is provided to the user responsive to recognizing the first-stage gesture. The gesture recognition machine recognizes that the user has performed the second-stage gesture based on one or more parameters derived from the received hand tracking data satisfying second-stage gesture criteria. A graphical user interface element is displayed responsive to recognizing the second-stage gesture.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: April 5, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Chuan Qin, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Joshua Kyle Neff, Jamie Bryant Kirschenbaum, Neil Richard Kronlage
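The two-stage gesture flow above maps naturally onto a small state machine: a first-stage gesture moves from idle to a cueing state (showing the affordance), and the second-stage gesture then displays the GUI element. The stage criteria below (palm-up, pinch) are invented placeholders for whatever predicates the gesture recognition machine would apply to the hand-tracking parameters.

```python
class TwoStageGestureMachine:
    """Sketch of the two-stage gesture recognition described in the
    abstract; criteria are caller-supplied predicates over per-frame
    hand-tracking parameters."""

    def __init__(self, first_criteria, second_criteria):
        self.first_criteria = first_criteria
        self.second_criteria = second_criteria
        self.state = "idle"
        self.affordance_shown = False
        self.ui_displayed = False

    def update(self, hand_params):
        """Feed one frame of parameters derived from hand tracking."""
        if self.state == "idle" and self.first_criteria(hand_params):
            self.state = "cueing"
            self.affordance_shown = True   # affordance cues stage two
        elif self.state == "cueing" and self.second_criteria(hand_params):
            self.state = "done"
            self.ui_displayed = True       # display the GUI element
        return self.state
```

Usage might look like `TwoStageGestureMachine(lambda p: p.get("palm_up"), lambda p: p.get("pinch"))`, fed one frame of parameters at a time.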
  • Patent number: 11277652
    Abstract: A system finds and aggregates the most relevant and current information about the people and things that a user cares about. The information gathering is based on current context (e.g., where the user is, what the user is doing, what the user is saying/typing, etc.). The result of the context based information gathering is presented ubiquitously on user interfaces of any of the various physical devices operated by the user.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: March 15, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Cesare John Saretto, Peter Tobias Kinnebrew, Nicholas Ferianc Kamuda, Henry Hooper Somuah, Matthew John McCloskey, Douglas C. Hebenthal, Kathleen P. Mulcahy
  • Publication number: 20210383594
    Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
    Type: Application
    Filed: August 23, 2021
    Publication date: December 9, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Jason Michael RAY, Sophie STELLMACH, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Ramiro S. TORRES, Kevin John APPEL, Jamie Bryant KIRSCHENBAUM
  • Publication number: 20210373672
    Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides, on a display device, a method comprising receiving hand tracking data representing a pose of a hand in a coordinate system, based on the hand tracking data, recognizing a hand gesture, and identifying an emoji corresponding to the hand gesture. The method further comprises presenting the emoji on the display device, and sending an instruction to one or more other display devices to present the emoji.
    Type: Application
    Filed: May 29, 2020
    Publication date: December 2, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Julia SCHWARZ, Michael Harley NOTTER, Jenny KAM, Sheng Kai TANG, Kenneth Mitchell JAKUBZAK, Adam Edwin BEHRINGER, Amy Mun HONG, Joshua Kyle NEFF, Sophie STELLMACH, Mathew J. LAMB, Nicholas Ferianc KAMUDA
  • Patent number: 11107265
    Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: August 31, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai Tang, Julia Schwarz, Jason Michael Ray, Sophie Stellmach, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Kevin John Appel, Jamie Bryant Kirschenbaum
  • Patent number: 10955665
    Abstract: A see-through head-mounted display apparatus includes code performing a method of choosing an optimal viewing location and perspective for shared-view virtual objects rendered for multiple users in a common environment. Multiple objects and multiple users are taken into account in determining the optimal, common viewing location. The technology allows each user to have a common view of the relative position of the object in the environment.
    Type: Grant
    Filed: June 18, 2013
    Date of Patent: March 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Tom G. Salter, Ben J. Sugden, Daniel Deptford, Robert L. Crocco, Jr., Brian E. Keane, Laura K. Massey, Alex Aben-Athar Kipman, Peter Tobias Kinnebrew, Nicholas Ferianc Kamuda
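One simple way to illustrate choosing a common viewing location for multiple users, as the abstract above describes, is to place the shared object at the centroid of the per-user preferred anchor points, which minimizes the summed squared distance to all of them. This is purely an assumed stand-in for the patent's (unpublished) placement method.

```python
def optimal_shared_position(user_anchor_points):
    """Centroid of each user's preferred anchor point for the object;
    every user then sees the object at the same spot in the shared
    real-world environment."""
    n = len(user_anchor_points)
    return tuple(sum(p[i] for p in user_anchor_points) / n
                 for i in range(3))
```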
  • Publication number: 20200336778
    Abstract: A system finds and aggregates the most relevant and current information about the people and things that a user cares about. The information gathering is based on current context (e.g., where the user is, what the user is doing, what the user is saying/typing, etc.). The result of the context based information gathering is presented ubiquitously on user interfaces of any of the various physical devices operated by the user.
    Type: Application
    Filed: July 6, 2020
    Publication date: October 22, 2020
    Inventors: Cesare John SARETTO, Peter Tobias KINNEBREW, Nicholas Ferianc KAMUDA, Henry Hooper SOMUAH, Matthew John McCLOSKEY, Douglas C. HEBENTHAL, Kathleen P. MULCAHY
  • Patent number: 10735796
    Abstract: A system finds and aggregates the most relevant and current information about the people and things that a user cares about. The information gathering is based on current context (e.g., where the user is, what the user is doing, what the user is saying/typing, etc.). The result of the context based information gathering is presented ubiquitously on user interfaces of any of the various physical devices operated by the user.
    Type: Grant
    Filed: May 10, 2018
    Date of Patent: August 4, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Cesare John Saretto, Peter Tobias Kinnebrew, Nicholas Ferianc Kamuda, Henry Hooper Somuah, Matthew John McCloskey, Douglas C. Hebenthal, Kathleen P. Mulcahy
  • Publication number: 20200226814
    Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
    Type: Application
    Filed: March 11, 2019
    Publication date: July 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Jason Michael RAY, Sophie STELLMACH, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Ramiro S. TORRES, Kevin John APPEL, Jamie Bryant KIRSCHENBAUM
  • Publication number: 20200225830
    Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
    Type: Application
    Filed: March 25, 2019
    Publication date: July 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Joshua Kyle NEFF, Alton KWOK
  • Publication number: 20200225758
    Abstract: A method for augmenting a two-stage hand gesture input comprises receiving hand tracking data for a hand of a user. A gesture recognition machine recognizes that the user has performed a first-stage gesture based on one or more parameters derived from the received hand tracking data satisfying first-stage gesture criteria. An affordance cueing a second-stage gesture is provided to the user responsive to recognizing the first-stage gesture. The gesture recognition machine recognizes that the user has performed the second-stage gesture based on one or more parameters derived from the received hand tracking data satisfying second-stage gesture criteria. A graphical user interface element is displayed responsive to recognizing the second-stage gesture.
    Type: Application
    Filed: March 26, 2019
    Publication date: July 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Chuan QIN, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Ramiro S. TORRES, Joshua Kyle NEFF, Jamie Bryant KIRSCHENBAUM, Neil Richard KRONLAGE