Patents by Inventor Julia Schwarz

Julia Schwarz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230418390
    Abstract: A method for evaluating gesture input comprises receiving input data for sequential data frames, including hand tracking data for hands of a user. A first neural network is trained to recognize features indicative of subsequent gesture interactions and configured to evaluate input data for a sequence of data frames and to output an indication of a likelihood of the user performing gesture interactions during a predetermined window of data frames. A second neural network is trained to recognize features indicative of whether the user is currently performing one or more gesture interactions and configured to adjust parameters for gesture interaction recognition during the predetermined window based on the indicated likelihood. The second neural network evaluates the predetermined window for performed gesture interactions based on the adjusted parameters, and outputs a signal as to whether the user is performing one or more gesture interactions during the predetermined window.
    Type: Application
    Filed: September 8, 2023
    Publication date: December 28, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Julia SCHWARZ, Bugra TEKIN, Sophie STELLMACH, Erian VAZQUEZ, Casey Leon MEEKHOF, Fabian GOBEL
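
For orientation, here is a minimal Python sketch of the two-stage pipeline this abstract describes, assuming a simple sliding window and placeholder scoring in place of the trained networks; all class names, features, and thresholds are illustrative, not details of the filing.

```python
# A minimal sketch of the two-network gesture pipeline described in the
# abstract above. The class names, feature layout, thresholds, and the
# 10-frame window are illustrative assumptions, not details of the patent.
from collections import deque
from dataclasses import dataclass


@dataclass
class Frame:
    """One sequential data frame carrying hand tracking features."""
    hand_features: list


class GestureIntentModel:
    """Stand-in for the first network: outputs a likelihood that the
    user will perform a gesture interaction in the upcoming window."""
    def predict_likelihood(self, frames):
        # Placeholder heuristic in lieu of a trained model.
        return min(1.0, sum(f.hand_features[0] for f in frames) / len(frames))


class GestureDetector:
    """Stand-in for the second network: detects gesture interactions in
    a window, with parameters adjusted by the indicated likelihood."""
    BASE_THRESHOLD = 0.8

    def detect(self, frames, likelihood):
        # Lower the decision threshold when a gesture is anticipated.
        threshold = self.BASE_THRESHOLD - 0.3 * likelihood
        score = sum(f.hand_features[0] for f in frames) / len(frames)
        return score >= threshold


def evaluate_stream(frames, window=10):
    """Slide a fixed window over the frame stream and yield a per-window
    signal indicating whether a gesture interaction is being performed."""
    intent, detector = GestureIntentModel(), GestureDetector()
    buffer = deque(maxlen=window)
    for frame in frames:
        buffer.append(frame)
        if len(buffer) == window:
            batch = list(buffer)
            yield detector.detect(batch, intent.predict_likelihood(batch))
```
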
  • Patent number: 11768544
    Abstract: A method for evaluating gesture input comprises receiving input data for sequential data frames, including hand tracking data for hands of a user. A first neural network is trained to recognize features indicative of subsequent gesture interactions and configured to evaluate input data for a sequence of data frames and to output an indication of a likelihood of the user performing gesture interactions during a predetermined window of data frames. A second neural network is trained to recognize features indicative of whether the user is currently performing one or more gesture interactions and configured to adjust parameters for gesture interaction recognition during the predetermined window based on the indicated likelihood. The second neural network evaluates the predetermined window for performed gesture interactions based on the adjusted parameters, and outputs a signal as to whether the user is performing one or more gesture interactions during the predetermined window.
    Type: Grant
    Filed: February 1, 2022
    Date of Patent: September 26, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Bugra Tekin, Sophie Stellmach, Erian Vazquez, Casey Leon Meekhof, Fabian Gobel
  • Patent number: 11755122
    Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides a method, enacted on a display device, comprising receiving hand tracking data representing a pose of a hand in a coordinate system, recognizing a hand gesture based on the hand tracking data, and identifying an emoji corresponding to the hand gesture. The method further comprises presenting the emoji on the display device and sending an instruction to one or more other display devices to present the emoji.
    Type: Grant
    Filed: May 23, 2022
    Date of Patent: September 12, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Michael Harley Notter, Jenny Kam, Sheng Kai Tang, Kenneth Mitchell Jakubzak, Adam Edwin Behringer, Amy Mun Hong, Joshua Kyle Neff, Sophie Stellmach, Mathew J. Lamb, Nicholas Ferianc Kamuda
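
A minimal sketch of the gesture-to-emoji flow described above; the gesture labels, the emoji mapping, and the send_instruction transport between displays are hypothetical stand-ins, not part of the patent.

```python
# A minimal sketch of the gesture-to-emoji flow in the abstract above.
# The gesture labels, emoji mapping, and peer "send_instruction" transport
# are hypothetical; the patent does not specify them.
GESTURE_TO_EMOJI = {
    "thumbs_up": "👍",
    "heart_hands": "❤",
    "wave": "👋",
}


def recognize_gesture(hand_tracking_data):
    """Placeholder recognizer: a real system would classify the hand
    pose from joint positions in the tracking coordinate system."""
    return hand_tracking_data.get("gesture_label")


def handle_hand_tracking(hand_tracking_data, local_display, peer_displays):
    gesture = recognize_gesture(hand_tracking_data)
    emoji = GESTURE_TO_EMOJI.get(gesture)
    if emoji is None:
        return
    local_display.present(emoji)  # present the emoji on this display device
    for peer in peer_displays:    # and instruct other displays to present it
        peer.send_instruction({"type": "present_emoji", "emoji": emoji})
```
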
  • Publication number: 20230244316
    Abstract: A method for evaluating gesture input comprises receiving input data for sequential data frames, including hand tracking data for hands of a user. A first neural network is trained to recognize features indicative of subsequent gesture interactions and configured to evaluate input data for a sequence of data frames and to output an indication of a likelihood of the user performing gesture interactions during a predetermined window of data frames. A second neural network is trained to recognize features indicative of whether the user is currently performing one or more gesture interactions and configured to adjust parameters for gesture interaction recognition during the predetermined window based on the indicated likelihood. The second neural network evaluates the predetermined window for performed gesture interactions based on the adjusted parameters, and outputs a signal as to whether the user is performing one or more gesture interactions during the predetermined window.
    Type: Application
    Filed: February 1, 2022
    Publication date: August 3, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Julia SCHWARZ, Bugra TEKIN, Sophie STELLMACH, Erian VAZQUEZ, Casey Leon MEEKHOF, Fabian GOBEL
  • Patent number: 11703994
    Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: July 18, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Joshua Kyle Neff, Alton Kwok
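
A minimal sketch of the far/near mode switch described above, assuming a 3 m threshold and a simple controller object; neither specific comes from the patent.

```python
# A minimal sketch of the far/near interaction-mode switch described in
# the abstract above; the 3 m threshold and method names are assumptions.
import math

THRESHOLD_M = 3.0  # stands in for the "predetermined threshold distance"


class InteractionModeController:
    def __init__(self, user_position):
        self.user_position = user_position
        self.mode = "near"

    def update_mode(self, control_points):
        """Invoke far interaction mode when any control point of the
        virtual object lies beyond the threshold distance from the user."""
        if any(math.dist(p, self.user_position) > THRESHOLD_M
               for p in control_points):
            self.mode = "far"

    def on_trigger_input(self):
        """In far mode, a trigger input invokes near mode and requests a
        virtual interaction object placed within the user's reach."""
        if self.mode == "far":
            self.mode = "near"
            return {"place_interaction_object_within_m": THRESHOLD_M}
        return None
```
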
  • Patent number: 11656689
    Abstract: A method for single-handed microgesture input comprises receiving hand tracking data for a hand of a user. A set of microgesture targets that include software functions is assigned to positions along a length of a first finger. The received hand tracking data is analyzed by a gesture recognition machine. A location of a thumbtip of the hand of the user is determined relative to the positions along the first finger. Responsive to determining that the thumbtip is within a threshold distance of the first finger at a first position along the length of the first finger, a corresponding first microgesture target is designated for selection. Selection of the first microgesture target is enabled based on a duration the thumbtip is at the first position. Responsive to detecting a confirmation action, the corresponding microgesture target executes.
    Type: Grant
    Filed: January 13, 2022
    Date of Patent: May 23, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Noe Moreno Barragan, Michael Harley Notter, Sheng Kai Tang, Joshua Kyle Neff
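
A minimal sketch of the thumbtip targeting described above, assuming fixed per-segment positions, a 1.5 cm touch distance, and a 0.4 s dwell; all values are illustrative.

```python
# A minimal sketch of the thumbtip-to-finger targeting in the abstract
# above. The distances, dwell time, and per-segment layout are assumed.
import math

TOUCH_DISTANCE_M = 0.015  # stands in for the abstract's threshold distance
DWELL_SECONDS = 0.4       # assumed dwell required to enable selection


class MicrogestureRecognizer:
    def __init__(self, segment_positions, targets):
        # One microgesture target (a software function) per position
        # along the length of the first finger.
        self.segments = list(zip(segment_positions, targets))
        self.designated = None
        self.dwell = 0.0

    def update(self, thumbtip_pos, dt, confirmation=False):
        """Designate the target whose finger position the thumbtip is
        near, accumulate dwell time to enable selection, and execute
        the target when a confirmation action is detected."""
        for segment_pos, target in self.segments:
            if math.dist(thumbtip_pos, segment_pos) < TOUCH_DISTANCE_M:
                if self.designated is target:
                    self.dwell += dt
                else:
                    self.designated, self.dwell = target, 0.0
                if confirmation and self.dwell >= DWELL_SECONDS:
                    target()  # execute the selected software function
                return
        self.designated, self.dwell = None, 0.0
```
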
  • Patent number: 11656762
    Abstract: The discussion relates to virtual keyboard engagement. One example can define key volumes relating to keys of a virtual keyboard and detect finger movement of a user through individual key volumes. The example can detect parameter changes associated with finger movement through individual key volumes and build potential key sequences from those detected changes.
    Type: Grant
    Filed: May 5, 2021
    Date of Patent: May 23, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Christopher M. Becker, Nazeeh A. Eldirghami, Kevin W. Barnes, Julia Schwarz, Eric Carter
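
A minimal sketch of the key-volume tracking described above, assuming box-shaped volumes and using a fingertip slowdown as the detected parameter change; both are assumptions, not details of the filing.

```python
# A minimal sketch of volume-based key detection per the abstract above.
# The box-shaped key volumes and the slowdown heuristic are assumptions.
SLOWDOWN_SPEED = 0.3  # assumed fingertip speed (m/s) suggesting a keystroke


class KeyVolume:
    """An axis-aligned box around one key of the virtual keyboard."""
    def __init__(self, key, lo, hi):
        self.key, self.lo, self.hi = key, lo, hi

    def contains(self, p):
        return all(l <= c <= h for l, c, h in zip(self.lo, p, self.hi))


def build_key_sequence(samples, volumes):
    """Track fingertip movement through key volumes and build a
    potential key sequence, treating a speed drop inside a volume as
    the parameter change that marks an intended keystroke."""
    sequence, prev_key = [], None
    for position, speed in samples:  # (fingertip position, speed) per frame
        for vol in volumes:
            if vol.contains(position) and vol.key != prev_key:
                if speed < SLOWDOWN_SPEED:
                    sequence.append(vol.key)
                prev_key = vol.key
    return sequence
```
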
  • Patent number: 11619983
    Abstract: A method and apparatus that resolve near-touch ambiguities in a touch screen involve detecting a touch screen touch event and detecting a vibro-acoustic event. These events generate signals received respectively by two different sensors and/or processes. When the two events occur within a pre-defined window of time, they may be considered part of the same touch event and may signify a true touch.
    Type: Grant
    Filed: September 15, 2014
    Date of Patent: April 4, 2023
    Assignee: QEEXO, CO.
    Inventors: Chris Harrison, Julia Schwarz, Robert Bo Xiao
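
A minimal sketch of the coincidence test described above, assuming a 50 ms window; the patent leaves the window pre-defined but unspecified.

```python
# A minimal sketch of the coincidence test in the abstract above; the
# 50 ms window is an assumed value for the "pre-defined window of time".
COINCIDENCE_WINDOW_S = 0.05


def is_true_touch(touch_time, vibro_event_times):
    """Treat a touchscreen touch as a true touch only when the other
    sensor reports a vibro-acoustic event within the coincidence
    window; otherwise it may be a near-touch ambiguity (e.g. a hover)."""
    return any(abs(touch_time - t) <= COINCIDENCE_WINDOW_S
               for t in vibro_event_times)


# Example: a tap whose impact sound arrives 12 ms after the touch event.
print(is_true_touch(10.000, [10.012]))  # -> True
```
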
  • Patent number: 11620000
    Abstract: The techniques disclosed herein provide systems that can control the invocation of a precision input mode. A system can initially utilize a first input device, such as a head-mounted display device monitoring the eye gaze direction of a user, to control the location of an input target. When one or more predetermined input gestures are detected, the system can then invoke a precision mode that transitions control of the input target from the first input device to a second input device. The second input device can utilize a different input modality, such as a sensor detecting one or more hand gestures of the user. The predetermined input gestures can include a fixation input gesture, voice commands, or other gestures involving the user's hands or head. By controlling the invocation of precision input mode with specific gestures, a system can mitigate device coordination issues.
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: April 4, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sophie Stellmach, Julia Schwarz, Erian Vazquez, Kristian Jose Davila, Thomas Matthew Gable, Adam Behringer
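
A minimal sketch of the gaze-to-hand handoff described above, assuming a dispersion-based fixation test as the invoking gesture; the sample window and dispersion threshold are illustrative.

```python
# A minimal sketch of the gaze-to-hand handoff in the abstract above,
# using a dispersion-based fixation test as the assumed trigger gesture.
from collections import deque

FIXATION_WINDOW = 30        # assumed number of recent gaze samples
FIXATION_DISPERSION = 0.01  # assumed max spread, in normalized units


class InputTargetController:
    def __init__(self):
        self.gaze_history = deque(maxlen=FIXATION_WINDOW)
        self.precision_mode = False
        self.target = (0.0, 0.0)

    def on_gaze_sample(self, x, y):
        """Coarse phase: eye gaze from the head-mounted device moves
        the input target; a fixation gesture invokes precision mode."""
        self.gaze_history.append((x, y))
        if not self.precision_mode:
            self.target = (x, y)
            if self._is_fixating():
                self.precision_mode = True  # hand control takes over

    def on_hand_delta(self, dx, dy):
        """Precision phase: small hand movements refine the target."""
        if self.precision_mode:
            self.target = (self.target[0] + dx, self.target[1] + dy)

    def _is_fixating(self):
        if len(self.gaze_history) < FIXATION_WINDOW:
            return False
        xs, ys = zip(*self.gaze_history)
        return (max(xs) - min(xs) < FIXATION_DISPERSION and
                max(ys) - min(ys) < FIXATION_DISPERSION)
```
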
  • Patent number: 11567633
    Abstract: A computer-implemented method for determining focus of a user is provided. User input is received. An intention image of a scene including a plurality of interactive objects is generated. The intention image includes pixels encoded with intention values determined based on the user input. An intention value indicates a likelihood that the user intends to focus on the pixel. An intention score is determined for each interactive object based on the intention values of pixels that correspond to the interactive object. An interactive object of the plurality of interactive objects is determined to be a focused object that has the user's focus based on the intention scores of the plurality of interactive objects.
    Type: Grant
    Filed: February 8, 2021
    Date of Patent: January 31, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Andrew D. Wilson, Sophie Stellmach, Erian Vazquez, Kristian Jose Davila, Adam Edwin Behringer, Jonathan Palmer, Jason Michael Ray, Mathew Julian Lamb
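
A minimal sketch of the intention-image scoring described above, assuming a Gaussian falloff around a gaze point to fill the intention values; the patent derives them from user input more generally.

```python
# A minimal sketch of intention-image scoring per the abstract above.
# Plain nested lists stand in for an image buffer, and the Gaussian
# falloff around a gaze point is an assumed way to fill intention values.
import math


def build_intention_image(width, height, gaze, sigma=20.0):
    """Encode each pixel with a likelihood that the user intends to
    focus on it (here: a distance falloff from the gaze point)."""
    gx, gy = gaze
    return [[math.exp(-((x - gx) ** 2 + (y - gy) ** 2) / (2 * sigma ** 2))
             for x in range(width)] for y in range(height)]


def focused_object(intention, object_masks):
    """Score each interactive object by summing the intention values of
    its pixels; the highest-scoring object is the focused object."""
    def score(name):
        return sum(intention[y][x] for (x, y) in object_masks[name])
    return max(object_masks, key=score)


# Example: two rectangular objects competing for the user's focus.
image = build_intention_image(100, 100, gaze=(30, 40))
masks = {
    "button_a": [(x, y) for x in range(20, 40) for y in range(30, 50)],
    "button_b": [(x, y) for x in range(70, 90) for y in range(30, 50)],
}
print(focused_object(image, masks))  # -> "button_a"
```
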
  • Patent number: 11461955
    Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
    Type: Grant
    Filed: August 23, 2021
    Date of Patent: October 4, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai Tang, Julia Schwarz, Jason Michael Ray, Sophie Stellmach, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Kevin John Appel, Jamie Bryant Kirschenbaum
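
A minimal sketch of the hand-ray targeting described above, assuming a fixed shoulder offset from the headset pose and a small hit radius; the patent infers the arm joint from the display's position in the environment.

```python
# A minimal sketch of the hand-ray targeting in the abstract above. The
# fixed shoulder offset from the headset pose and the hit radius are
# assumptions; a real system would infer the joint from body tracking.
import math

SHOULDER_OFFSET_M = (0.18, -0.25, 0.0)  # assumed offset from the HMD
HIT_RADIUS_M = 0.05                     # assumed control-point hit radius


def targeted_control_point(hmd_pos, hand_pos, control_points):
    """Cast a ray from the hand along the inferred shoulder-to-hand
    direction and return the first control point the ray passes near,
    or None; a hit would cue the 'targeted' indication for the user."""
    shoulder = tuple(h + o for h, o in zip(hmd_pos, SHOULDER_OFFSET_M))
    direction = tuple(h - s for h, s in zip(hand_pos, shoulder))
    norm = math.sqrt(sum(d * d for d in direction))
    direction = tuple(d / norm for d in direction)
    for point in control_points:
        to_point = tuple(p - h for p, h in zip(point, hand_pos))
        t = sum(a * b for a, b in zip(to_point, direction))
        if t < 0:
            continue  # the control point lies behind the hand
        closest = tuple(h + t * d for h, d in zip(hand_pos, direction))
        if math.dist(closest, point) <= HIT_RADIUS_M:
            return point
    return None
```
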
  • Publication number: 20220300071
    Abstract: Systems and methods are provided for selectively enabling or disabling control rays in mixed-reality environments. A system presents a mixed-reality environment to a user with a mixed-reality display device, obtains a control ray activation variable associated with a user control, and displays a control ray as a hologram of a line extending away from the user control within the mixed-reality environment. The control ray activation variable includes a velocity or acceleration of the user control. After displaying the control ray within the mixed-reality environment, and in response to determining that the control ray activation variable exceeds a predetermined threshold, the system selectively disables display of the control ray within the mixed-reality environment.
    Type: Application
    Filed: June 8, 2022
    Publication date: September 22, 2022
    Inventors: Julia SCHWARZ, Sheng Kai TANG, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Sophie STELLMACH
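
A minimal sketch of the velocity gate described above, assuming a 1.5 m/s threshold and a finite-difference speed estimate for the control ray activation variable; both are illustrative.

```python
# A minimal sketch of the velocity-gated ray display in the abstract
# above; the threshold and finite-difference speed estimate are assumed.
import math

MAX_RAY_SPEED_M_S = 1.5  # stands in for the "predetermined threshold"


class ControlRay:
    def __init__(self):
        self.visible = True
        self._last_pos = None

    def update(self, control_pos, dt):
        """Disable display of the ray while the user control moves
        faster than the threshold, re-enabling it once motion settles."""
        if self._last_pos is not None and dt > 0:
            speed = math.dist(control_pos, self._last_pos) / dt
            self.visible = speed <= MAX_RAY_SPEED_M_S
        self._last_pos = control_pos
```
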
  • Publication number: 20220283646
    Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides a method, enacted on a display device, comprising receiving hand tracking data representing a pose of a hand in a coordinate system, recognizing a hand gesture based on the hand tracking data, and identifying an emoji corresponding to the hand gesture. The method further comprises presenting the emoji on the display device and sending an instruction to one or more other display devices to present the emoji.
    Type: Application
    Filed: May 23, 2022
    Publication date: September 8, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Julia SCHWARZ, Michael Harley NOTTER, Jenny KAM, Sheng Kai TANG, Kenneth Mitchell JAKUBZAK, Adam Edwin BEHRINGER, Amy Mun HONG, Joshua Kyle NEFF, Sophie STELLMACH, Mathew J. LAMB, Nicholas Ferianc KAMUDA
  • Patent number: 11423585
    Abstract: Examples that relate to virtual controls in a mixed reality experience are described. One example provides a method comprising, via a mixed reality display device, displaying mixed reality content including a representation of a virtual control, and receiving sensor data indicating motion of a user digit. The method further comprises, based at least in part on the sensor data, determining a velocity of the user digit, and responsive to determining that the velocity of the user digit relative to a surface corresponding to the virtual control satisfies a velocity-based selection condition, triggering the virtual control.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: August 23, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Casey Leon Meekhof, Kyle Nicholas San, Julia Schwarz
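
A minimal sketch of the velocity-based selection condition described above, assuming a minimum approach speed along the control surface's outward normal; the patent states the condition more generally.

```python
# A minimal sketch of the velocity-based selection condition in the
# abstract above; the threshold and outward-normal convention are assumed.
MIN_PRESS_SPEED_M_S = 0.10


def should_trigger(finger_velocity, surface_normal, touching):
    """Trigger the virtual control only when the digit is on the
    control's surface AND moving into it fast enough, filtering out
    slow drifts and accidental grazes."""
    # Speed of the digit into the surface (normal points outward).
    approach = -sum(v * n for v, n in zip(finger_velocity, surface_normal))
    return touching and approach >= MIN_PRESS_SPEED_M_S


# Example: a digit moving 0.2 m/s into a surface whose normal is +z.
print(should_trigger((0.0, 0.0, -0.2), (0.0, 0.0, 1.0), touching=True))  # True
```
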
  • Publication number: 20220253182
    Abstract: A computer-implemented method for determining focus of a user is provided. User input is received. An intention image of a scene including a plurality of interactive objects is generated. The intention image includes pixels encoded with intention values determined based on the user input. An intention value indicates a likelihood that the user intends to focus on the pixel. An intention score is determined for each interactive object based on the intention values of pixels that correspond to the interactive object. An interactive object of the plurality of interactive objects is determined to be a focused object that has the user's focus based on the intention scores of the plurality of interactive objects.
    Type: Application
    Filed: February 8, 2021
    Publication date: August 11, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Julia SCHWARZ, Andrew D. WILSON, Sophie STELLMACH, Erian VAZQUEZ, Kristian Jose DAVILA, Adam Edwin BEHRINGER, Jonathan PALMER, Jason Michael RAY, Mathew Julian LAMB
  • Publication number: 20220253199
    Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
    Type: Application
    Filed: April 28, 2022
    Publication date: August 11, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Joshua Kyle NEFF, Alton KWOK
  • Patent number: 11397463
    Abstract: Systems and methods are provided for selectively enabling or disabling control rays in mixed-reality environments. In some instances, a mixed-reality display device presents a mixed-reality environment to a user which includes one or more holograms. The display device then detects a user gesture input associated with a user control (which may include a part of the user's body) during presentation of the mixed-reality environment. In response to detecting the user gesture, the display device selectively generates and displays a corresponding control ray as a hologram rendered by the display device extending away from the user control within the mixed-reality environment. Gestures may also be detected for selectively disabling control rays so that they are no longer rendered.
    Type: Grant
    Filed: March 8, 2019
    Date of Patent: July 26, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Sheng Kai Tang, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Sophie Stellmach
  • Publication number: 20220171469
    Abstract: A method for single-handed microgesture input comprises receiving hand tracking data for a hand of a user. A set of microgesture targets that include software functions is assigned to positions along a length of a first finger. The received hand tracking data is analyzed by a gesture recognition machine. A location of a thumbtip of the hand of the user is determined relative to the positions along the first finger. Responsive to determining that the thumbtip is within a threshold distance of the first finger at a first position along the length of the first finger, a corresponding first microgesture target is designated for selection. Selection of the first microgesture target is enabled based on a duration the thumbtip is at the first position. Responsive to detecting a confirmation action, the corresponding microgesture target executes.
    Type: Application
    Filed: January 13, 2022
    Publication date: June 2, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Julia SCHWARZ, Noe Moreno BARRAGAN, Michael Harley NOTTER, Sheng Kai TANG, Joshua Kyle NEFF
  • Publication number: 20220172405
    Abstract: Examples that relate to virtual controls in a mixed reality experience are described. One example provides a method comprising, via a mixed reality display device, displaying mixed reality content including a representation of a virtual control, and receiving sensor data indicating motion of a user digit. The method further comprises, based at least in part on the sensor data, determining a velocity of the user digit, and responsive to determining that the velocity of the user digit relative to a surface corresponding to the virtual control satisfies a velocity-based selection condition, triggering the virtual control.
    Type: Application
    Filed: November 30, 2020
    Publication date: June 2, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Casey Leon MEEKHOF, Kyle Nicholas SAN, Julia SCHWARZ
  • Patent number: 11340707
    Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides a method, enacted on a display device, comprising receiving hand tracking data representing a pose of a hand in a coordinate system, recognizing a hand gesture based on the hand tracking data, and identifying an emoji corresponding to the hand gesture. The method further comprises presenting the emoji on the display device and sending an instruction to one or more other display devices to present the emoji.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: May 24, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Michael Harley Notter, Jenny Kam, Sheng Kai Tang, Kenneth Mitchell Jakubzak, Adam Edwin Behringer, Amy Mun Hong, Joshua Kyle Neff, Sophie Stellmach, Mathew J. Lamb, Nicholas Ferianc Kamuda