Patents by Inventor Julia Schwarz
Julia Schwarz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220171469

Abstract: A method for single-handed microgesture input comprises receiving hand tracking data for a hand of a user. A set of microgesture targets that include software functions is assigned to positions along a length of a first finger. The received hand tracking data is analyzed by a gesture recognition machine. A location of a thumbtip of the hand of the user is determined relative to the positions along the first finger. Responsive to determining that the thumbtip is within a threshold distance of the first finger at a first position along the length of the first finger, a corresponding first microgesture target is designated for selection. Selection of the first microgesture target is enabled based on a duration the thumbtip is at the first position. Responsive to detecting a confirmation action, the corresponding microgesture target executes.

Type: Application
Filed: January 13, 2022
Publication date: June 2, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Julia Schwarz, Noe Moreno Barragan, Michael Harley Notter, Sheng Kai Tang, Joshua Kyle Neff
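The abstract amounts to a proximity-plus-dwell state machine. A minimal Python sketch of that loop follows; the 15 mm threshold, 0.4 s dwell, and every name here (MicrogestureRecognizer, targets as plain callables) are illustrative assumptions, not the patented implementation:

```python
import math
import time

THRESHOLD = 0.015   # assumed 15 mm thumbtip-to-finger proximity threshold
DWELL_TIME = 0.4    # assumed dwell (seconds) before selection is enabled

class MicrogestureRecognizer:
    def __init__(self, finger_positions, targets):
        # Each microgesture target (a plain callable here) is assigned to
        # one position along the length of the first finger.
        self.slots = list(zip(finger_positions, targets))
        self.designated = None
        self.dwell_start = None

    def update(self, thumbtip, confirmation=False):
        """Feed one frame of hand tracking data (thumbtip as a 3D point)."""
        for position, target in self.slots:
            if math.dist(thumbtip, position) < THRESHOLD:
                if self.designated is not target:   # newly designated target
                    self.designated = target
                    self.dwell_start = time.monotonic()
                enabled = time.monotonic() - self.dwell_start >= DWELL_TIME
                if enabled and confirmation:        # confirmation action seen
                    return target()                 # execute the target
                return None
        self.designated = self.dwell_start = None   # thumbtip moved away
        return None
```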
-
Patent number: 11340707

Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides a method, enacted on a display device, comprising: receiving hand tracking data representing a pose of a hand in a coordinate system; recognizing a hand gesture based on the hand tracking data; and identifying an emoji corresponding to the hand gesture. The method further comprises presenting the emoji on the display device and sending an instruction to one or more other display devices to present the emoji.

Type: Grant
Filed: May 29, 2020
Date of Patent: May 24, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Julia Schwarz, Michael Harley Notter, Jenny Kam, Sheng Kai Tang, Kenneth Mitchell Jakubzak, Adam Edwin Behringer, Amy Mun Hong, Joshua Kyle Neff, Sophie Stellmach, Mathew J. Lamb, Nicholas Ferianc Kamuda
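At its core this is a lookup from a recognized gesture label to an emoji, followed by local presentation and a broadcast to other displays. A toy, self-contained sketch; the Display class, gesture labels, and message format are stand-ins for real rendering and networking, not the patented method:

```python
class Display:
    """Stand-in for a display device with a local and a remote interface."""
    def __init__(self, name):
        self.name = name

    def present(self, emoji):
        print(f"[{self.name}] showing {emoji}")

    def send(self, message):
        # Stand-in for a network instruction to a remote display device.
        print(f"[{self.name}] received {message}")

GESTURE_TO_EMOJI = {"thumbs_up": "👍", "heart_hands": "❤️", "wave": "👋"}

def on_gesture(gesture_label, local_display, other_displays):
    """Map a recognized hand gesture to an emoji and propagate it."""
    emoji = GESTURE_TO_EMOJI.get(gesture_label)
    if emoji is None:
        return
    local_display.present(emoji)
    for display in other_displays:
        display.send({"op": "present_emoji", "emoji": emoji})

on_gesture("wave", Display("local"), [Display("peer-1"), Display("peer-2")])
```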
-
Patent number: 11320911

Abstract: Systems and methods are provided for detecting user-object interaction in mixed-reality environments. A mixed-reality system detects a controller gesture with an associated controller orientation in the mixed-reality environment. The mixed-reality system then determines an interaction region for the controller gesture and identifies one or more virtual objects within the interaction region. The virtual objects each have an associated orientation affinity. Subsequently, the mixed-reality system determines an orientation similarity score between the controller orientation and the orientation affinity for each virtual object within the interaction region. In response to determining that at least one orientation similarity score exceeds a predetermined threshold, the mixed-reality system executes an interaction between the controller and the virtual object that has the greatest orientation similarity score.

Type: Grant
Filed: March 8, 2019
Date of Patent: May 3, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Julia Schwarz, Jason Michael Ray, Casey Leon Meekhof
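The scoring step can be read as a similarity comparison between direction vectors. A hedged sketch using cosine similarity, assuming orientations and affinities are 3D vectors and picking an arbitrary 0.8 threshold (the patent does not specify either):

```python
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.hypot(*u) * math.hypot(*v)
    return dot / norm if norm else 0.0

def pick_interaction(controller_orientation, objects_in_region, threshold=0.8):
    """Choose the virtual object whose orientation affinity best matches the
    controller orientation; interact only if the best score clears the bar."""
    scored = [(cosine_similarity(controller_orientation, affinity), obj)
              for obj, affinity in objects_in_region]
    if not scored:
        return None
    best_score, best_obj = max(scored, key=lambda s: s[0])
    return best_obj if best_score > threshold else None

# Example: a forward-pointing controller picks the wall panel, not the floor tile.
print(pick_interaction((0, 0, 1),
                       [("wall-panel", (0, 0.1, 0.99)), ("floor-tile", (0, 1, 0))]))
```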
-
Patent number: 11320957

Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.

Type: Grant
Filed: March 25, 2019
Date of Patent: May 3, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Joshua Kyle Neff, Alton Kwok
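A sketch of the mode decision, assuming the user and control points are 3D coordinates and an arbitrary 0.6 m reach threshold; `on_trigger_input` stands in for the trigger that swaps far mode for a near-interaction proxy object:

```python
import math

NEAR_THRESHOLD = 0.6  # assumed arm's-reach distance in meters

def interaction_mode(user_position, control_points):
    """Far mode if any control point of the virtual object is out of reach."""
    if any(math.dist(user_position, p) > NEAR_THRESHOLD for p in control_points):
        return "far"
    return "near"

def on_trigger_input(mode):
    """In far mode, a trigger input invokes near mode by displaying a
    virtual interaction object within reach (placement logic omitted)."""
    return "near" if mode == "far" else mode

mode = interaction_mode((0, 0, 0), [(0, 0, 2.0), (0.1, 0, 2.1)])
print(mode, "->", on_trigger_input(mode))   # far -> near
```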
-
Patent number: 11294472

Abstract: A method for augmenting a two-stage hand gesture input comprises receiving hand tracking data for a hand of a user. A gesture recognition machine recognizes that the user has performed a first-stage gesture based on one or more parameters derived from the received hand tracking data satisfying first-stage gesture criteria. An affordance cueing a second-stage gesture is provided to the user responsive to recognizing the first-stage gesture. The gesture recognition machine recognizes that the user has performed the second-stage gesture based on one or more parameters derived from the received hand tracking data satisfying second-stage gesture criteria. A graphical user interface element is displayed responsive to recognizing the second-stage gesture.

Type: Grant
Filed: March 26, 2019
Date of Patent: April 5, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Chuan Qin, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Joshua Kyle Neff, Jamie Bryant Kirschenbaum, Neil Richard Kronlage
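The flow is a two-state machine: satisfy the first-stage criteria, show an affordance, then watch for the second stage. A sketch with the criteria reduced to hypothetical boolean parameter checks (the real criteria operate on hand tracking parameters not shown here):

```python
class TwoStageGestureRecognizer:
    """First-stage gesture -> affordance cue -> second-stage gesture -> GUI.
    The criteria callables stand in for checks on hand tracking parameters."""

    def __init__(self, first_stage_criteria, second_stage_criteria):
        self.first = first_stage_criteria
        self.second = second_stage_criteria
        self.cued = False

    def update(self, params, show_affordance, show_gui_element):
        if not self.cued and self.first(params):
            self.cued = True
            show_affordance()          # cue the user toward the second stage
        elif self.cued and self.second(params):
            self.cued = False
            show_gui_element()         # display the GUI element

rec = TwoStageGestureRecognizer(
    first_stage_criteria=lambda p: p["palm_up"],          # invented criteria
    second_stage_criteria=lambda p: p["fingers_spread"],
)
rec.update({"palm_up": True, "fingers_spread": False},
           lambda: print("affordance"), lambda: print("menu"))
rec.update({"palm_up": True, "fingers_spread": True},
           lambda: print("affordance"), lambda: print("menu"))
```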
-
Patent number: 11262864

Abstract: A system for classifying touch events includes: a touch screen configured to display an interactive element; one or more acoustic sensors coupled to the touch screen; a touch event detector configured to monitor the acoustic sensors and to save the acoustic signals they sense, and further configured to detect touch events in which the interactive element is touched by a first or a second finger part of a user, the touch events generating the acoustic signals; and an acoustic classifier configured to classify the acoustic signals.

Type: Grant
Filed: December 5, 2017
Date of Patent: March 1, 2022
Assignee: QEEXO, CO.
Inventors: Christopher Harrison, Julia Schwarz, Robert Xiao
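A stand-in for such a classifier, reducing each acoustic signal to two toy features (mean energy and zero-crossing rate) and classifying by nearest centroid; the patent does not specify the features or model, so everything below is an assumption for illustration:

```python
import math

def features(signal):
    """Toy features: mean energy and zero-crossing rate of one acoustic window."""
    energy = sum(s * s for s in signal) / len(signal)
    zcr = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0) / len(signal)
    return (energy, zcr)

class AcousticClassifier:
    """Nearest-centroid stand-in for the patent's acoustic classifier."""
    def __init__(self):
        self.centroids = {}

    def train(self, label, signals):
        feats = [features(s) for s in signals]
        self.centroids[label] = tuple(sum(f[i] for f in feats) / len(feats)
                                      for i in range(2))

    def classify(self, signal):
        f = features(signal)
        return min(self.centroids, key=lambda lbl: math.dist(f, self.centroids[lbl]))

clf = AcousticClassifier()
clf.train("pad",     [[0.1, -0.1, 0.1, -0.1], [0.2, -0.1, 0.1, -0.2]])
clf.train("knuckle", [[0.9, -0.8, 0.7, -0.6], [1.0, -0.9, 0.8, -0.7]])
print(clf.classify([0.8, -0.7, 0.7, -0.6]))   # -> knuckle
```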
-
Patent number: 11249556

Abstract: A method for single-handed microgesture input comprises receiving hand tracking data for a hand of a user. A set of microgesture targets that include software functions is assigned to positions along a length of a first finger. A visual affordance including indicators for two or more assigned microgesture targets is provided to the user. The received hand tracking data is analyzed by a gesture recognition machine. A location of a thumbtip of the hand of the user is determined relative to the positions along the first finger. Responsive to determining that the thumbtip is within a threshold distance of the first finger at a first position along the length of the first finger, an indicator for a corresponding first microgesture target is augmented, and then further augmented based on a duration the thumbtip is at the first position. Responsive to detecting a confirmation action, the corresponding microgesture target executes.

Type: Grant
Filed: November 30, 2020
Date of Patent: February 15, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Julia Schwarz, Noe Moreno Barragan, Michael Harley Notter, Sheng Kai Tang, Joshua Kyle Neff
-
Publication number: 20210383594

Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.

Type: Application
Filed: August 23, 2021
Publication date: December 9, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai Tang, Julia Schwarz, Jason Michael Ray, Sophie Stellmach, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Kevin John Appel, Jamie Bryant Kirschenbaum
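The geometry is a ray anchored at the hand whose direction runs from an inferred arm joint (e.g., the shoulder) through the hand, tested against a virtual object's control points. A sketch with an assumed 5 cm hit radius; the radius and all names are illustrative:

```python
import math

def normalize(v):
    n = math.hypot(*v)
    return tuple(c / n for c in v)

def cast_hand_ray(arm_joint, hand):
    """Origin at the hand; direction runs from the inferred arm joint
    through the hand, so the ray extends the line of the arm."""
    direction = normalize(tuple(h - j for h, j in zip(hand, arm_joint)))
    return hand, direction

def targeted_points(origin, direction, control_points, radius=0.05):
    """Control points within `radius` of the ray (radius is an assumption)."""
    hits = []
    for p in control_points:
        to_p = tuple(c - o for c, o in zip(p, origin))
        t = max(0.0, sum(a * b for a, b in zip(to_p, direction)))
        closest = tuple(o + t * d for o, d in zip(origin, direction))
        if math.dist(p, closest) < radius:
            hits.append(p)   # the owning object gets a "targeted" indication
    return hits

origin, direction = cast_hand_ray(arm_joint=(0, 1.4, 0), hand=(0.2, 1.3, 0.4))
print(targeted_points(origin, direction, [(0.6, 1.1, 1.2), (0, 2, 0)]))
```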
-
Publication number: 20210373672

Abstract: Examples are disclosed that relate to hand gesture-based emojis. One example provides a method, enacted on a display device, comprising: receiving hand tracking data representing a pose of a hand in a coordinate system; recognizing a hand gesture based on the hand tracking data; and identifying an emoji corresponding to the hand gesture. The method further comprises presenting the emoji on the display device and sending an instruction to one or more other display devices to present the emoji.

Type: Application
Filed: May 29, 2020
Publication date: December 2, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Julia Schwarz, Michael Harley Notter, Jenny Kam, Sheng Kai Tang, Kenneth Mitchell Jakubzak, Adam Edwin Behringer, Amy Mun Hong, Joshua Kyle Neff, Sophie Stellmach, Mathew J. Lamb, Nicholas Ferianc Kamuda
-
Patent number: 11107265

Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.

Type: Grant
Filed: March 11, 2019
Date of Patent: August 31, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai Tang, Julia Schwarz, Jason Michael Ray, Sophie Stellmach, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Kevin John Appel, Jamie Bryant Kirschenbaum
-
Publication number: 20210255767

Abstract: The discussion relates to virtual keyboard engagement. One example can define key volumes relating to keys of a virtual keyboard and detect finger movement of a user through individual key volumes. The example can detect parameter changes associated with detected finger movement through individual key volumes and build potential key sequences from detected parameter changes.

Type: Application
Filed: May 5, 2021
Publication date: August 19, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Christopher M. Becker, Nazeeh A. Eldirghami, Kevin W. Barnes, Julia Schwarz, Eric Carter
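One plausible reading in code: each key owns a 3D volume, and a fingertip path becomes a key sequence as it enters successive volumes. The axis-aligned boxes and helper names are assumptions; a fuller version would also track the parameter changes (e.g., fingertip speed or direction inside a volume) the abstract mentions:

```python
class KeyVolume:
    """Axis-aligned box around one virtual key (an illustrative simplification)."""
    def __init__(self, key, lo, hi):
        self.key, self.lo, self.hi = key, lo, hi

    def contains(self, p):
        return all(l <= c <= h for c, l, h in zip(p, self.lo, self.hi))

def build_key_sequence(fingertip_path, volumes):
    """Record a key each time the fingertip enters a new key volume."""
    sequence, current = [], None
    for point in fingertip_path:
        hit = next((v.key for v in volumes if v.contains(point)), None)
        if hit is not None and hit != current:
            sequence.append(hit)
        current = hit
    return sequence

keys = [KeyVolume("h", (0, 0, 0), (1, 1, 1)), KeyVolume("i", (2, 0, 0), (3, 1, 1))]
print(build_key_sequence([(0.5, 0.5, 0.5), (1.5, 0.5, 0.5), (2.5, 0.5, 0.5)], keys))
# -> ['h', 'i']
```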
-
Patent number: 11048355

Abstract: A method and apparatus for determining pitch and yaw of an elongated interface object as it interacts with a touchscreen surface. A touch image is received, and this touch image has at least a first area that corresponds to an area of the touchscreen that has an elongated interface object positioned at least proximate to it. The elongated interface object has a pitch and a yaw with respect to the touchscreen surface. A first transformation is performed to obtain a first transformation image of the touch image, and a second transformation is performed to obtain a second transformation image of the touch image. The first transformation differs from the second transformation. The yaw is determined for the elongated interface object based on both the first and second transformation images. The pitch is determined based on at least one of the first and second transformation images.

Type: Grant
Filed: July 26, 2017
Date of Patent: June 29, 2021
Assignee: QEEXO, CO.
Inventors: Christopher Harrison, Julia Schwarz, Robert Bo Xiao
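Very loosely, two thresholdings of the touch image can stand in for the two transformations: an angled stylus leaves a smear whose faint tail (low threshold) trails the firm contact point (high threshold), giving yaw from the smear's direction and pitch from its length. This sketch illustrates that idea only; it is not the patent's actual transformations:

```python
import math

def centroid(image, threshold):
    """Intensity-weighted centroid of pixels above `threshold`."""
    pts = [(x, y, v) for y, row in enumerate(image)
                     for x, v in enumerate(row) if v > threshold]
    total = sum(v for _, _, v in pts)
    return (sum(x * v for x, _, v in pts) / total,
            sum(y * v for _, y, v in pts) / total)

def pitch_and_yaw(touch_image, low=0.2, high=0.7):
    cx_lo, cy_lo = centroid(touch_image, low)    # includes the faint tail
    cx_hi, cy_hi = centroid(touch_image, high)   # firm contact point only
    dx, dy = cx_lo - cx_hi, cy_lo - cy_hi
    yaw = math.atan2(dy, dx)                     # direction of the smear
    pitch = math.atan2(1.0, math.hypot(dx, dy))  # longer smear -> lower pitch
    return pitch, yaw

image = [[0.0, 0.3, 0.9],
         [0.0, 0.3, 0.8],
         [0.0, 0.0, 0.0]]
print(pitch_and_yaw(image))
```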
-
Method for improving accuracy of touch screen event analysis by use of spatiotemporal touch patterns

Patent number: 11029785

Abstract: A method of classifying touch screen events uses known non-random patterns of touch events over short periods of time to increase the accuracy of analyzing such events. The method takes advantage of the fact that after one touch event, certain actions are more likely to follow than others. Thus, if a touch event is classified as a knock, and then within 500 ms a new event occurs in a similar location, but the classification confidence is low (e.g., 60% nail, 40% knuckle), the classifier may add weight to the knuckle classification, since this touch sequence is far more likely. Knowledge about the probabilities of follow-on touch events can be used to bias subsequent classification, adding weight to particular events.

Type: Grant
Filed: February 21, 2020
Date of Patent: June 8, 2021
Assignee: QEEXO, CO.
Inventors: Julia Schwarz, Christopher Harrison
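The abstract's own example can be reproduced with a small re-weighting table. All weights and windows below are illustrative assumptions; running it shows "knuckle" overtaking the 60% "nail" score:

```python
# Illustrative transition weights: after a knock, a follow-up knuckle touch in
# the same spot is far more likely than a nail touch (numbers are assumptions).
TRANSITION_WEIGHT = {("knock", "knuckle"): 1.5, ("knock", "nail"): 0.7}
WINDOW_MS = 500      # the abstract's 500 ms follow-on window
NEARBY_PX = 40       # assumed "similar location" radius

def biased_scores(scores, prev_event, now_ms, loc):
    """Re-weight raw classifier scores using the preceding touch event."""
    if prev_event is not None:
        prev_label, prev_ms, (px, py) = prev_event
        if (now_ms - prev_ms <= WINDOW_MS
                and abs(loc[0] - px) <= NEARBY_PX and abs(loc[1] - py) <= NEARBY_PX):
            scores = {lbl: p * TRANSITION_WEIGHT.get((prev_label, lbl), 1.0)
                      for lbl, p in scores.items()}
    total = sum(scores.values())
    return {lbl: p / total for lbl, p in scores.items()}

# The abstract's example: 60% nail / 40% knuckle, 300 ms after a knock nearby.
print(biased_scores({"nail": 0.6, "knuckle": 0.4}, ("knock", 100, (10, 10)),
                    now_ms=400, loc=(15, 12)))   # knuckle now outweighs nail
```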
-

Patent number: 11029845

Abstract: The discussion relates to virtual keyboard engagement. One example can define key volumes relating to keys of a virtual keyboard and detect finger movement of a user through individual key volumes. The example can detect parameter changes associated with detected finger movement through individual key volumes and build potential key sequences from detected parameter changes.

Type: Grant
Filed: July 11, 2019
Date of Patent: June 8, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Christopher M. Becker, Nazeeh A. Eldirghami, Kevin W. Barnes, Julia Schwarz, Eric Carter
-
Patent number: 10969957

Abstract: An electronic device includes a touch-sensitive surface, for example a touch pad or touch screen. The user interacts with the touch-sensitive surface, producing touch interactions. The resulting actions taken depend at least in part on the touch type. For example, the same touch interaction performed with three different touch types (a finger pad, a fingernail, and a knuckle) may result in the execution of different actions.

Type: Grant
Filed: February 12, 2020
Date of Patent: April 6, 2021
Assignee: QEEXO, CO.
Inventors: Christopher Harrison, Julia Schwarz, Robert Bo Xiao
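In code, this is a dispatch table keyed on touch type; the upstream classifier that labels a touch as pad, nail, or knuckle is assumed to exist, and the specific actions are invented for illustration:

```python
# Map each touch type to an action; these pairings are illustrative only.
ACTIONS_BY_TOUCH_TYPE = {
    "pad": lambda: print("open item"),
    "nail": lambda: print("select text"),
    "knuckle": lambda: print("show context menu"),
}

def on_touch(touch_type):
    """Same touch location, different action depending on the touch type."""
    action = ACTIONS_BY_TOUCH_TYPE.get(touch_type)
    if action is not None:
        action()

on_touch("knuckle")   # -> show context menu
```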
-
Patent number: 10969937

Abstract: Systems and methods are provided for controlling the position of an interactive movable menu in a mixed-reality environment. In some instances, a mixed-reality display device presents a mixed-reality environment to a user. The mixed-reality device then detects a first gesture associated with a user controller while presenting the mixed-reality environment and, in response to the first gesture, triggers a display of an interactive movable menu within the mixed-reality environment as a tethered hologram that is dynamically moved within the mixed-reality environment relative to and corresponding with movement of the user controller within the mixed-reality environment. Then, in response to a second detected gesture, the mixed-reality device selectively locks a display of the interactive movable menu at a fixed position that is not tethered to the user controller.

Type: Grant
Filed: March 11, 2019
Date of Patent: April 6, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Julia Schwarz, Casey Leon Meekhof, Alon Farchy, Sheng Kai Tang, Nicholas F. Kamuda
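A sketch of the tether-then-lock behavior; the gesture names and the hand-relative offset are invented for illustration:

```python
class MovableMenu:
    """Follows the user's controller while tethered; stays put once locked."""
    def __init__(self):
        self.visible = False
        self.locked = False
        self.position = (0.0, 0.0, 0.0)

    def on_gesture(self, gesture):
        if gesture == "summon":       # first gesture: show menu, tethered
            self.visible, self.locked = True, False
        elif gesture == "pin":        # second gesture: lock in world space
            self.locked = True

    def on_frame(self, controller_position, offset=(0.0, 0.1, 0.0)):
        # Each frame, a tethered menu tracks the controller; a locked one stays.
        if self.visible and not self.locked:
            self.position = tuple(c + o for c, o in zip(controller_position, offset))

menu = MovableMenu()
menu.on_gesture("summon")
menu.on_frame((0.3, 1.2, 0.5))       # menu tracks the hand
menu.on_gesture("pin")
menu.on_frame((0.9, 1.0, 0.1))       # ignored: menu is locked
print(menu.position)                 # (0.3, 1.3, 0.5)
```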
-
Patent number: 10949029

Abstract: A system for classifying touch events of different interaction layers includes: a touch screen configured to display an interactive element; one or more vibro-acoustic sensors coupled to the touch screen; a touch event detector configured to monitor the vibro-acoustic sensors and to save the vibro-acoustic signals they sense, and further configured to detect touch events in which the interactive element is touched by a first or a second finger part of a user, the touch events generating the vibro-acoustic signals; and a vibro-acoustic classifier configured to classify the vibro-acoustic signals and activate corresponding functions in the different layers depending upon which finger part is used.

Type: Grant
Filed: January 15, 2017
Date of Patent: March 16, 2021
Assignee: QEEXO, CO.
Inventors: Chris Harrison, Julia Schwarz, Leandro Damian Zungri
-
Publication number: 20210041971

Abstract: A system for classifying touch events of different interaction layers includes: a touch screen configured to display an interactive element; one or more vibro-acoustic sensors coupled to the touch screen; a touch event detector configured to monitor the vibro-acoustic sensors and to save the vibro-acoustic signals they sense, and further configured to detect touch events in which the interactive element is touched by a first or a second finger part of a user, the touch events generating the vibro-acoustic signals; and a vibro-acoustic classifier configured to classify the vibro-acoustic signals and activate corresponding functions in the different layers depending upon which finger part is used.

Type: Application
Filed: January 15, 2017
Publication date: February 11, 2021
Inventors: Chris Harrison, Julia Schwarz, Leandro Damian Zungri
-
Patent number: 10901495

Abstract: Systems and methods are provided for detecting user input in a mixed-reality environment being rendered with one or more holograms. An input receiver that includes a plurality of input elements is presented. An input controller that includes individual actuators is identified, wherein each corresponding actuator is configured to, when interacting with one or more input elements of the input receiver that are mapped to the corresponding actuator and when the input state of the corresponding actuator is an active state, provide user input within the mixed-reality environment. Subsequently, the presence of a triggering attribute of the input controller is detected, and in response to detecting the triggering attribute, the input state of a corresponding actuator is changed from an inactive state to an active state for providing user input.

Type: Grant
Filed: March 2, 2019
Date of Patent: January 26, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventor: Julia Schwarz
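A sketch of the gating the abstract describes: actuators start inactive and produce no input until a triggering attribute is detected and their states flip to active. The actuator names, the trigger, and the message format are all illustrative assumptions:

```python
class Actuator:
    """One actuator of the input controller; inactive actuators are ignored."""
    def __init__(self, name):
        self.name = name
        self.active = False

class InputController:
    def __init__(self, actuators):
        self.actuators = {a.name: a for a in actuators}

    def on_triggering_attribute(self, names):
        """Flip the mapped actuators from the inactive to the active state."""
        for name in names:
            self.actuators[name].active = True

    def handle(self, name, value):
        a = self.actuators.get(name)
        if a is not None and a.active:
            return ("input", name, value)   # user input reaches the environment
        return None

ctrl = InputController([Actuator("index-press"), Actuator("thumb-slide")])
print(ctrl.handle("index-press", 1.0))      # None: still inactive
ctrl.on_triggering_attribute(["index-press"])
print(ctrl.handle("index-press", 1.0))      # ('input', 'index-press', 1.0)
```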
-
Publication number: 20210011621

Abstract: The discussion relates to virtual keyboard engagement. One example can define key volumes relating to keys of a virtual keyboard and detect finger movement of a user through individual key volumes. The example can detect parameter changes associated with detected finger movement through individual key volumes and build potential key sequences from detected parameter changes.

Type: Application
Filed: July 11, 2019
Publication date: January 14, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Christopher M. Becker, Nazeeh A. Eldirghami, Kevin W. Barnes, Julia Schwarz, Eric Carter