Patents by Inventor Julia Schwarz

Julia Schwarz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200225736
    Abstract: Systems and methods are provided for selectively enabling or disabling control rays in mixed-reality environments. In some instances, a mixed-reality display device presents to a user a mixed-reality environment that includes one or more holograms. The display device then detects a user gesture input associated with a user control (which may include a part of the user's body) during presentation of the mixed-reality environment. In response to detecting the user gesture, the display device selectively generates and displays a corresponding control ray as a hologram, rendered by the display device, extending away from the user control within the mixed-reality environment. Gestures may also be detected for selectively disabling control rays so that they are no longer rendered.
    Type: Application
    Filed: March 8, 2019
    Publication date: July 16, 2020
    Inventors: Julia Schwarz, Sheng Kai Tang, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Sophie Stellmach
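
The toggle this abstract describes reduces to a small piece of per-frame state: one gesture turns a hand's control ray on, another turns it off. Below is a minimal Python sketch of that idea, not Microsoft's implementation; the gesture labels, hand keys, and visibility map are invented for illustration.

    def update_control_rays(ray_visible, gesture, hand):
        """Return the updated visibility map for control rays.

        ray_visible -- dict mapping a hand ("left"/"right") to a bool
        gesture     -- a recognized gesture label, or None
        hand        -- which hand produced the gesture
        """
        if gesture == "enable_ray":
            ray_visible[hand] = True    # start rendering the ray hologram
        elif gesture == "disable_ray":
            ray_visible[hand] = False   # stop rendering the ray
        return ray_visible

    rays = {"left": False, "right": False}
    rays = update_control_rays(rays, "enable_ray", "right")
    assert rays["right"] and not rays["left"]
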
  • Publication number: 20200226814
    Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
    Type: Application
    Filed: March 11, 2019
    Publication date: July 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Jason Michael RAY, Sophie STELLMACH, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Ramiro S. TORRES, Kevin John APPEL, Jamie Bryant KIRSCHENBAUM
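
The core mechanism here is a ray whose direction runs from an inferred arm joint (e.g., the shoulder) through the hand, plus an intersection test against a virtual object's control points. A hedged sketch, treating each control point as a small sphere and using invented coordinates:

    import math

    def normalize(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    def cast_hand_ray(shoulder, hand):
        # Origin at the hand; direction from the inferred arm joint
        # through the hand, as the abstract describes.
        direction = normalize(tuple(h - s for h, s in zip(hand, shoulder)))
        return hand, direction

    def ray_hits_point(origin, direction, point, radius):
        # True if the ray passes within `radius` of `point`.
        to_point = tuple(p - o for p, o in zip(point, origin))
        t = sum(a * b for a, b in zip(to_point, direction))  # projection length
        if t < 0:
            return False  # control point is behind the ray origin
        closest = tuple(o + t * d for o, d in zip(origin, direction))
        dist2 = sum((p - c) ** 2 for p, c in zip(point, closest))
        return dist2 <= radius * radius

    origin, direction = cast_hand_ray(shoulder=(0.0, 1.4, 0.0), hand=(0.3, 1.2, 0.5))
    if ray_hits_point(origin, direction, point=(0.9, 0.8, 1.5), radius=0.2):
        print("virtual object targeted")  # indicate targeting to the user
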
  • Publication number: 20200225757
    Abstract: Systems and methods are provided for detecting user-object interaction in mixed-reality environments. A mixed-reality system detects a controller gesture with an associated controller orientation in the mixed-reality environment. The mixed-reality system then determines an interaction region for the controller gesture and identifies one or more virtual objects within the interaction region. The virtual objects each have an associated orientation affinity. Subsequently, the mixed-reality system determines an orientation similarity score between the controller orientation and the orientation affinity for each virtual object within the interaction region. In response to determining that at least one orientation similarity score exceeds a predetermined threshold, the mixed-reality system executes an interaction between the controller and the virtual object that has the greatest orientation similarity score.
    Type: Application
    Filed: March 8, 2019
    Publication date: July 16, 2020
    Inventors: Julia Schwarz, Jason Michael Ray, Casey Leon Meekhof
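
The orientation-matching step can be read as a cosine-similarity test between the controller's orientation and each candidate object's orientation affinity. A minimal sketch, assuming unit direction vectors and an invented threshold and object layout:

    def orientation_similarity(controller_dir, affinity_dir):
        # Cosine similarity between two unit-length 3-vectors.
        return sum(a * b for a, b in zip(controller_dir, affinity_dir))

    def pick_target(controller_dir, objects, threshold=0.8):
        """Among objects already inside the interaction region, return the
        one whose orientation affinity best matches the controller, or
        None if no similarity score exceeds the threshold."""
        best = max(objects, default=None,
                   key=lambda o: orientation_similarity(controller_dir, o["affinity"]))
        if best and orientation_similarity(controller_dir, best["affinity"]) > threshold:
            return best
        return None

    objects = [
        {"name": "button", "affinity": (0.0, 0.0, 1.0)},
        {"name": "slider", "affinity": (0.0, 1.0, 0.0)},
    ]
    target = pick_target((0.0, 0.1, 0.995), objects)
    print(target["name"] if target else "no interaction")  # -> button
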
  • Publication number: 20200225830
    Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
    Type: Application
    Filed: March 25, 2019
    Publication date: July 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Joshua Kyle NEFF, Alton KWOK
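
Mode selection here reduces to a distance test between the user and the object's control points. A minimal sketch under that reading, with an invented threshold (math.dist requires Python 3.8+):

    import math

    def choose_interaction_mode(user_pos, control_points, threshold=0.6):
        # Invoke the far interaction mode when any control point of the
        # virtual object lies beyond the threshold distance from the user.
        if any(math.dist(user_pos, p) > threshold for p in control_points):
            return "far"
        return "near"

    mode = choose_interaction_mode((0, 0, 0), [(0.2, 0, 0.3), (1.5, 0, 2.0)])
    print(mode)  # -> "far"; a trigger input would then summon a nearby
                 # virtual interaction object and switch back to "near"
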
  • Publication number: 20200225813
    Abstract: Systems and methods are provided for controlling the position of an interactive movable menu in a mixed-reality environment. In some instances, a mixed-reality display device presents a mixed-reality environment to a user. The mixed-reality device then detects a first gesture associated with a user controller while presenting the mixed-reality environment and, in response to the first gesture, triggers a display of an interactive movable menu within the mixed-reality environment as a tethered hologram that is dynamically moved within the mixed-reality environment relative to and corresponding with movement of the user controller within the mixed-reality environment. Then, in response to a second detected gesture, the mixed-reality device selectively locks a display of the interactive movable menu at a fixed position that is not tethered to the user controller.
    Type: Application
    Filed: March 11, 2019
    Publication date: July 16, 2020
    Inventors: Julia Schwarz, Casey Leon Meekhof, Alon Farchy, Sheng Kai Tang, Nicholas F. Kamuda
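
The tether-then-lock behavior can be sketched as per-frame menu placement driven by two gestures: while tethered, the menu follows the controller at a fixed offset; a second gesture world-locks it. The gesture labels, offset, and dictionary layout below are assumptions:

    def update_menu(menu, controller_pos, gesture):
        """One frame of menu placement for the tethered-menu behavior."""
        if gesture == "summon_menu":
            menu["visible"], menu["tethered"] = True, True
        elif gesture == "lock_menu":
            menu["tethered"] = False       # leave it world-locked where it is
        if menu["visible"] and menu["tethered"]:
            offset = (0.0, 0.15, 0.0)      # hover slightly above the hand
            menu["position"] = tuple(c + o for c, o in zip(controller_pos, offset))
        return menu

    menu = {"visible": False, "tethered": False, "position": (0, 0, 0)}
    menu = update_menu(menu, (0.4, 1.1, 0.6), "summon_menu")   # follows hand
    menu = update_menu(menu, (0.5, 1.0, 0.7), None)            # keeps following
    menu = update_menu(menu, (0.9, 1.3, 0.2), "lock_menu")     # stays put
    print(menu["position"])  # -> (0.5, 1.15, 0.7)
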
  • Publication number: 20200225758
    Abstract: A method for augmenting a two-stage hand gesture input comprises receiving hand tracking data for a hand of a user. A gesture recognition machine recognizes that the user has performed a first-stage gesture based on one or more parameters derived from the received hand tracking data satisfying first-stage gesture criteria. An affordance cueing a second-stage gesture is provided to the user responsive to recognizing the first-stage gesture. The gesture recognition machine recognizes that the user has performed the second-stage gesture based on one or more parameters derived from the received hand tracking data satisfying second-stage gesture criteria. A graphical user interface element is displayed responsive to recognizing the second-stage gesture.
    Type: Application
    Filed: March 26, 2019
    Publication date: July 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Chuan QIN, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Ramiro S. TORRES, Joshua Kyle NEFF, Jamie Bryant KIRSCHENBAUM, Neil Richard KRONLAGE
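
The two-stage flow is naturally a small state machine: the first-stage gesture moves the recognizer into a cueing state that shows the affordance, and the second-stage gesture then opens the UI element. The concrete criteria below (palm orientation, pinch strength) are invented stand-ins for the patent's gesture criteria:

    def step(state, params):
        """Advance the recognizer one frame given hand-tracking parameters."""
        if state == "idle" and params.get("palm_facing_up", False):
            return "cueing", "show affordance"          # first-stage recognized
        if state == "cueing":
            if params.get("pinch_strength", 0.0) > 0.9:
                return "open", "show UI element"        # second-stage recognized
            if not params.get("palm_facing_up", False):
                return "idle", "hide affordance"        # user abandoned gesture
        return state, None

    state = "idle"
    for frame in [{"palm_facing_up": True},
                  {"palm_facing_up": True, "pinch_strength": 0.95}]:
        state, action = step(state, frame)
        if action:
            print(action)   # -> "show affordance", then "show UI element"
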
  • Publication number: 20200225735
    Abstract: Systems and methods are provided for detecting user input in a mixed-reality environment being rendered with one or more holograms. An input receiver that includes a plurality of input elements is presented. An input controller that includes individual actuators is identified; each actuator is configured to provide user input within the mixed-reality environment when it interacts with one or more input elements of the input receiver that are mapped to it and when its input state is an active state. Subsequently, the presence of a triggering attribute of the input controller is detected, and in response, the input state of a corresponding actuator is changed from an inactive state to an active state for providing user input.
    Type: Application
    Filed: March 2, 2019
    Publication date: July 16, 2020
    Inventor: Julia Schwarz
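
The actuator/input-element relationship can be sketched as a mapping plus a per-actuator state flag that a triggering attribute flips from inactive to active. The class, the fingertip actuator, and the triggering condition below are all invented for illustration:

    class InputController:
        def __init__(self, mapping):
            self.mapping = mapping                      # actuator -> input elements
            self.state = {a: "inactive" for a in mapping}

        def detect_trigger(self, actuator, triggering_attribute_present):
            # e.g., the hand coming within range of the input receiver
            if triggering_attribute_present:
                self.state[actuator] = "active"         # enable input for it

        def provide_input(self, actuator, element):
            # Input counts only for mapped elements while the actuator is active.
            return (self.state[actuator] == "active"
                    and element in self.mapping[actuator])

    controller = InputController({"index_tip": {"key_A", "key_B"}})
    controller.detect_trigger("index_tip", triggering_attribute_present=True)
    print(controller.provide_input("index_tip", "key_A"))  # -> True
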
  • Patent number: 10712858
    Abstract: Touch sensitive devices, methods, and computer readable recording media are provided that allow for improved classification of objects against a touch sensitive surface of a touch sensitive device, based upon analysis of subdivisions of data representing contact with the touch sensitive surface during a period of time.
    Type: Grant
    Filed: April 12, 2015
    Date of Patent: July 14, 2020
    Assignee: Qeexo, Co.
    Inventors: Julia Schwarz, Robert Bo Xiao, Chris Harrison
  • Publication number: 20200209996
    Abstract: A method of classifying touch screen events uses known non-random patterns of touch events over short periods of time to increase the accuracy of analyzing such events. The method takes advantage of the fact that after one touch event, certain actions are more likely to follow than others. Thus, if a touch event is classified as a knock, and then within 500 ms a new event occurs in a similar location but with low classification confidence (e.g., 60% nail, 40% knuckle), the classifier may add weight to the knuckle classification, since this touch sequence is far more likely. Knowledge about the probabilities of follow-on touch events can be used to bias subsequent classification, adding weight to particular events.
    Type: Application
    Filed: February 21, 2020
    Publication date: July 2, 2020
    Applicant: QEEXO, CO.
    Inventors: Julia Schwarz, Christopher HARRISON
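
The knock-then-knuckle example amounts to multiplying the classifier's raw scores by a follow-on prior whenever the new touch arrives soon after, and near, the previous one. A minimal sketch with an invented prior table and invented time/distance cutoffs:

    def biased_classification(scores, previous_event, elapsed_ms, distance_px):
        """scores: raw classifier output, e.g. {"nail": 0.6, "knuckle": 0.4}."""
        # Likelihood of each touch type following the previous event type.
        followon_prior = {"knock": {"knuckle": 0.8, "nail": 0.2}}
        if previous_event in followon_prior and elapsed_ms < 500 and distance_px < 50:
            prior = followon_prior[previous_event]
            scores = {label: p * prior.get(label, 0.1) for label, p in scores.items()}
            total = sum(scores.values())
            scores = {label: p / total for label, p in scores.items()}  # renormalize
        return max(scores, key=scores.get), scores

    label, scores = biased_classification({"nail": 0.6, "knuckle": 0.4},
                                          previous_event="knock",
                                          elapsed_ms=300, distance_px=20)
    print(label, scores)  # knuckle wins once the follow-on prior is applied
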
  • Publication number: 20200183527
    Abstract: An electronic device includes a touch-sensitive surface, for example a touch pad or touch screen. The user interacts with the touch-sensitive surface, producing touch interactions. The resulting actions taken depend at least in part on the touch type. For example, the same touch interactions performed by three different touch types of a finger pad, a finger nail and a knuckle, may result in the execution of different actions.
    Type: Application
    Filed: February 12, 2020
    Publication date: June 11, 2020
    Applicant: QEEXO, CO.
    Inventors: Christopher Harrison, Julia Schwarz, Robert Bo Xiao
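
The behavior described is essentially a dispatch table keyed on the classified touch type. A minimal illustration; the action names are invented:

    ACTIONS = {
        "pad":     lambda: print("select item"),
        "nail":    lambda: print("open context menu"),
        "knuckle": lambda: print("start lasso selection"),
    }

    def handle_touch(touch_type):
        # Same tap, different action, depending on the classified touch type.
        ACTIONS.get(touch_type, lambda: None)()

    handle_touch("knuckle")  # -> start lasso selection
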
  • Patent number: 10642419
    Abstract: The disclosed subject matter is a palm rejection technique utilizing temporal features, iterative classification, and probabilistic voting. Touch events are classified based on features periodically extracted from time windows of increasing size, always centered at the birth of the event. The classification process uses a series of decision trees acting on said features.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: May 5, 2020
    Assignee: Carnegie Mellon University
    Inventors: Julia Schwarz, Chris Harrison
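
The iterative classification and probabilistic voting can be sketched as re-classifying the touch over successively larger windows centered on its birth and accumulating confidence-weighted votes. Here classify_window is a trivial stand-in for the patent's decision-tree ensemble:

    def classify_window(features):
        """Placeholder for one decision tree acting on window features.
        Returns (label, confidence); a trivial stand-in rule."""
        return ("palm", 0.9) if features["contact_area"] > 100 else ("stylus", 0.7)

    def palm_rejection_vote(event_features_by_window):
        votes = {"palm": 0.0, "stylus": 0.0}
        for features in event_features_by_window:   # e.g., 50 ms, 100 ms, 200 ms...
            label, confidence = classify_window(features)
            votes[label] += confidence              # probabilistic voting
        return max(votes, key=votes.get)

    windows = [{"contact_area": 80}, {"contact_area": 120}, {"contact_area": 140}]
    print(palm_rejection_vote(windows))  # -> "palm"
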
  • Patent number: 10642407
    Abstract: An apparatus classifies touch events. The apparatus includes a touch sensitive surface configured to generate a touch event when an object touches the touch sensitive surface. The touch event entails a mechanical vibration upon contact with the surface. The apparatus includes a touch event detector configured to detect the onset of a touch, and a touch event classifier configured to classify the touch event to identify the object used for the touch event. The mechanical vibration is created via any one of the finger parts including a tip, a pad, a fingernail, and a knuckle, each of which has a unique feature that distinguishes it from the others.
    Type: Grant
    Filed: December 8, 2017
    Date of Patent: May 5, 2020
    Assignee: Carnegie Mellon University
    Inventors: Christopher Harrison, Julia Schwarz, Scott E. Hudson
  • Publication number: 20200125207
    Abstract: A method and apparatus for determining pitch and yaw of an elongated interface object as it interacts with a touchscreen surface. A touch image is received, and this touch image has at least a first area that corresponds to an area of the touchscreen that has an elongated interface object positioned at least proximate to it. The elongated interface object has a pitch and a yaw with respect to the touchscreen surface. A first transformation is performed to obtain a first transformation image of the touch image, and a second transformation is performed to obtain a second transformation image of the touch image. The first transformation differs from the second transformation. The yaw is determined for the elongated interface object based on both the first and second transformation images. The pitch is determined based on at least one of the first and second transformation images.
    Type: Application
    Filed: July 26, 2017
    Publication date: April 23, 2020
    Inventors: CHRISTOPHER HARRISON, JULIA SCHWARZ, ROBERT BO XIAO
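
One plausible reading, sketched below (these are not the patented transforms): threshold the touch image at two levels, so a low threshold captures the finger's "shadow" and a high threshold the fingertip core; yaw then follows from the vector between the two blob centroids, and pitch from their separation. The thresholds and the pitch scaling are invented:

    import math

    def centroid(image, threshold):
        pts = [(x, y) for y, row in enumerate(image)
                      for x, v in enumerate(row) if v >= threshold]
        n = len(pts)
        return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

    def estimate_yaw_pitch(image, low=30, high=80):
        cx_lo, cy_lo = centroid(image, low)    # first transformation image
        cx_hi, cy_hi = centroid(image, high)   # second transformation image
        dx, dy = cx_hi - cx_lo, cy_hi - cy_lo
        yaw = math.degrees(math.atan2(dy, dx))              # uses both images
        pitch = max(0.0, 90.0 - 20.0 * math.hypot(dx, dy))  # invented scaling
        return yaw, pitch

    # A finger lying toward +x: weak "shadow" on the left, strong tip on the right.
    image = [[40, 40, 40, 90, 90],
             [40, 40, 40, 90, 90]]
    print(estimate_yaw_pitch(image))  # -> (0.0, 60.0)
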
  • Patent number: 10606417
    Abstract: A method of classifying touch screen events uses known non-random patterns of touch events over short periods of time to increase the accuracy of analyzing such events. The method takes advantage of the fact that after one touch event, certain actions are more likely to follow than others. Thus, if a touch event is classified as a knock, and then within 500 ms a new event occurs in a similar location but with low classification confidence (e.g., 60% nail, 40% knuckle), the classifier may add weight to the knuckle classification, since this touch sequence is far more likely. Knowledge about the probabilities of follow-on touch events can be used to bias subsequent classification, adding weight to particular events.
    Type: Grant
    Filed: September 24, 2014
    Date of Patent: March 31, 2020
    Assignee: QEEXO, CO.
    Inventors: Julia Schwarz, Chris Harrison
  • Patent number: 10599251
    Abstract: Some embodiments of the present invention include a method of differentiating touch screen users based on characterization of features derived from touch event acoustics and mechanical impact. The method includes detecting a touch event on a touch sensitive surface, generating a vibro-acoustic waveform signal using at least one sensor detecting such touch event, converting the waveform signal into at least a domain signal, extracting distinguishing features from said domain signal, and classifying said features to associate the features of the domain signal with a particular user.
    Type: Grant
    Filed: March 21, 2016
    Date of Patent: March 24, 2020
    Assignee: QEEXO, CO.
    Inventors: Julia Schwarz, Chris Harrison
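
The pipeline in this abstract (waveform, domain conversion, feature extraction, classification) can be sketched end to end. The two features and the nearest-centroid classifier below are assumptions, not the patented feature set:

    import cmath, math

    def dft_magnitudes(signal):
        """Naive DFT magnitude spectrum (time domain -> frequency domain)."""
        n = len(signal)
        return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))) for k in range(n // 2)]

    def extract_features(signal):
        spectrum = dft_magnitudes(signal)
        total = sum(spectrum) or 1.0
        spectral_centroid = sum(k * m for k, m in enumerate(spectrum)) / total
        energy = sum(s * s for s in signal)
        return (spectral_centroid, energy)

    def classify_user(features, user_centroids):
        # Nearest-centroid classification of the feature vector.
        return min(user_centroids,
                   key=lambda u: math.dist(features, user_centroids[u]))

    users = {"alice": (2.0, 5.0), "bob": (6.0, 1.0)}
    waveform = [0.0, 0.9, -0.8, 0.7, -0.5, 0.3, -0.2, 0.1]
    print(classify_user(extract_features(waveform), users))  # -> alice
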
  • Patent number: 10599250
    Abstract: An electronic device includes a touch-sensitive surface, for example a touch pad or touch screen. The user interacts with the touch-sensitive surface, producing touch interactions. The resulting actions taken depend at least in part on the touch type. For example, the same touch interactions performed by three different touch types of a finger pad, a finger nail and a knuckle, may result in the execution of different actions.
    Type: Grant
    Filed: May 6, 2013
    Date of Patent: March 24, 2020
    Assignee: QEEXO, CO.
    Inventors: Christopher Harrison, Julia Schwarz, Robert Bo Xiao
  • Patent number: 10564761
    Abstract: Methods and apparatuses are provided for determining a pitch and yaw of an elongated interface object relative to a proximity sensitive surface. In one aspect, a proximity image is received having proximity image data from which it can be determined which areas of the proximity sensitive surface sensed the elongated interface object during a period of time. A proximity blob is identified in the proximity image and the proximity image is transformed using a plurality of different transformations to obtain a plurality of differently transformed proximity images. A plurality of features is determined for the identified blob in the transformed proximity images, and the pitch of the elongated interface object relative to the proximity sensitive surface is determined based upon the determined features and a multi-dimensional heuristic regression model of the proximity sensitive surface; a yaw is then determined based upon the pitch.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: February 18, 2020
    Assignee: Qeexo, Co.
    Inventors: Christopher Harrison, Julia Schwarz, Robert Bo Xiao
  • Publication number: 20200012341
    Abstract: A method for improving user interaction with a virtual environment includes presenting the virtual environment to a user on a display, measuring a gaze location of a user's gaze relative to the virtual environment, casting an input ray from an input device, measuring an input ray location at a distal point of the input ray, and snapping a presented ray location to the gaze location when the input ray location is within a snap threshold distance of the gaze location.
    Type: Application
    Filed: July 9, 2018
    Publication date: January 9, 2020
    Inventors: Sophie STELLMACH, Sheng Kai TANG, Casey Leon MEEKHOF, Julia SCHWARZ, Nahil Tawfik SHARKASI, Thomas Matthew GABLE
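
The snapping rule is a proximity test between the input ray's distal point and the measured gaze location. A minimal sketch with an invented snap threshold:

    import math

    def presented_ray_location(input_ray_end, gaze_location, snap_threshold=0.1):
        # Snap the displayed ray endpoint to the gaze location when the
        # input ray's distal point falls within the snap threshold of it.
        if math.dist(input_ray_end, gaze_location) <= snap_threshold:
            return gaze_location          # snapped: ray appears gaze-aligned
        return input_ray_end              # otherwise show the raw ray endpoint

    print(presented_ray_location((1.00, 1.52, 2.0), (1.0, 1.5, 2.0)))  # snaps
    print(presented_ray_location((1.5, 1.5, 2.0), (1.0, 1.5, 2.0)))    # does not
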
  • Patent number: 10496155
    Abstract: A virtual reality experience is provided to one or more users by a computing system through the use of a special-purpose virtual reality mat. The computing system receives image data from an optical sensor imaging a physical environment. The mat includes one or more fiducial markers that are recognizable by the computing system. A presence of these fiducial markers is detected based on the image data. An activity region within the physical environment is defined based, at least in part, on the detected fiducial markers. A positioning of a physical subject is identified within the physical environment relative to the activity region. The virtual reality experience is selectively augmented based on the positioning of the physical subject identified relative to the activity region.
    Type: Grant
    Filed: April 2, 2018
    Date of Patent: December 3, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Julia Schwarz, Jason Michael Ray
  • Publication number: 20190324529
    Abstract: A method for improving user interaction with a virtual environment includes measuring a first position of the user's gaze relative to a virtual element, selecting the virtual element in the virtual environment at an origin when the user's gaze overlaps the virtual element, measuring a second position of the user's gaze relative to the virtual element, presenting a visual placeholder at the second position of the user's gaze when the second position is beyond a threshold distance from the origin, and moving the visual placeholder relative to a destination using a secondary input device.
    Type: Application
    Filed: April 20, 2018
    Publication date: October 24, 2019
    Inventors: Sophie STELLMACH, Casey Leon MEEKHOF, Julia SCHWARZ
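
The placeholder logic can be sketched as a distance check against the selection origin; the threshold and the record layout are invented, and the secondary-device refinement is left as a comment:

    import math

    def update_placeholder(gaze_pos, origin, selected, threshold=0.3):
        # Show a visual placeholder once the selected element's gaze point
        # has moved beyond the threshold distance from the selection origin.
        if selected and math.dist(gaze_pos, origin) > threshold:
            return {"visible": True, "position": gaze_pos}
        return {"visible": False, "position": origin}

    placeholder = update_placeholder((1.2, 1.5, 2.0), origin=(0.4, 1.5, 2.0),
                                     selected=True)
    print(placeholder)   # visible at the second gaze position; a secondary
                         # input device would then nudge it onto the destination
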