Patents by Inventor Andrew Frederick Muehlhausen

Andrew Frederick Muehlhausen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220256304
    Abstract: A wearable spatial audio device is provided. The wearable spatial audio device includes one or more audio speakers, one or more processors, and a storage machine holding instructions executable by the one or more processors. Map data is obtained for a real-world environment that includes one or more dynamic audio objects. A device-specific subset of audio tracks is obtained, and a device-specific spatialized audio mix of the device-specific subset of audio tracks that is based on the map data is obtained. An indication of a change in an environmental condition relative to one or more of the dynamic audio objects is received. The device-specific spatialized audio mix is adjusted based on the change in the environmental condition.
    Type: Application
    Filed: April 26, 2022
    Publication date: August 11, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Arthur Charles TOMLIN, Kendall Clark YORK, Jeffrey SIPKO, Adolfo HERNANDEZ SANTISTEBAN, Aaron Daniel KRAUSS, Andrew Frederick MUEHLHAUSEN
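The adjustment loop this abstract describes can be sketched in miniature. Everything below (the class names, the gain model, the attenuation factor) is hypothetical illustration, not the patented implementation:

```python
from dataclasses import dataclass, field

@dataclass
class DynamicAudioObject:
    # A mapped sound source in the real-world environment.
    name: str
    position: tuple  # (x, y, z) taken from the map data

@dataclass
class SpatialAudioDevice:
    tracks: dict                       # device-specific subset: track -> base gain
    mix: dict = field(default_factory=dict)

    def build_mix(self):
        # Initial device-specific spatialized mix: one gain per track.
        self.mix = dict(self.tracks)
        return self.mix

    def on_environment_change(self, obj, attenuation):
        # An environmental condition changed relative to `obj` (e.g. it is
        # now occluded), so adjust the mix for its track.
        if obj.name in self.mix:
            self.mix[obj.name] *= attenuation
        return self.mix

fountain = DynamicAudioObject("fountain", (0.0, 0.0, 2.0))
device = SpatialAudioDevice(tracks={"fountain": 1.0, "radio": 0.8})
device.build_mix()
mix = device.on_environment_change(fountain, attenuation=0.5)
```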
  • Patent number: 11343633
    Abstract: A wearable spatial audio device is provided. The wearable spatial audio device includes one or more audio speakers, one or more processors, and a storage machine holding instructions executable by the one or more processors. Map data is obtained for a real-world environment that includes one or more dynamic audio objects. A device-specific subset of audio tracks is obtained, and a device-specific spatialized audio mix of the device-specific subset of audio tracks that is based on the map data is obtained. An indication of a change in an environmental condition relative to one or more of the dynamic audio objects is received. The device-specific spatialized audio mix is adjusted based on the change in the environmental condition.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: May 24, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Arthur Charles Tomlin, Kendall Clark York, Jeffrey Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
  • Publication number: 20200288263
    Abstract: A wearable spatial audio device is provided. The wearable spatial audio device includes one or more audio speakers, one or more processors, and a storage machine holding instructions executable by the one or more processors. Map data is obtained for a real-world environment that includes one or more dynamic audio objects. A device-specific subset of audio tracks is obtained, and a device-specific spatialized audio mix of the device-specific subset of audio tracks that is based on the map data is obtained. An indication of a change in an environmental condition relative to one or more of the dynamic audio objects is received. The device-specific spatialized audio mix is adjusted based on the change in the environmental condition.
    Type: Application
    Filed: May 27, 2020
    Publication date: September 10, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Arthur Charles TOMLIN, Kendall Clark YORK, Jeffrey SIPKO, Adolfo HERNANDEZ SANTISTEBAN, Aaron Daniel KRAUSS, Andrew Frederick MUEHLHAUSEN
  • Patent number: 10740389
    Abstract: Methods and devices for creating a sound log of activities may include receiving a detected sound from at least one sensor on a computer device. The methods and devices may include comparing the detected sound to a plurality of audio patterns stored in a sound database. The methods and devices may include identifying a sound event for the detected sound based at least upon the comparison of the detected sound to the plurality of audio patterns. The methods and devices may include identifying context information that provides a context for the sound event. The methods and devices may include updating a sound log with the sound event and the context information.
    Type: Grant
    Filed: April 12, 2018
    Date of Patent: August 11, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Priya Ganadas, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
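A toy version of the logging pipeline in this abstract might look like the following; the similarity measure and pattern database are invented for illustration (a real system would use audio fingerprints or a trained classifier):

```python
def best_match(detected, patterns):
    # Compare the detected sound's feature vector to each stored audio
    # pattern; the toy similarity is the elementwise-minimum overlap.
    def score(p):
        return sum(min(a, b) for a, b in zip(detected, p["features"]))
    return max(patterns, key=score)

def log_sound(detected, patterns, context, sound_log):
    # Identify the sound event, attach its context, and update the log.
    event = best_match(detected, patterns)["label"]
    sound_log.append({"event": event, "context": context})
    return sound_log

patterns = [
    {"label": "doorbell", "features": [0.9, 0.1, 0.0]},
    {"label": "dog bark", "features": [0.1, 0.8, 0.3]},
]
log = log_sound([0.85, 0.2, 0.1], patterns,
                {"time": "09:00", "room": "hallway"}, [])
```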
  • Patent number: 10694311
    Abstract: Examples are disclosed relating to providing spatialized audio to multiple users. In one example, a computing device presents spatialized audio to multiple users within an environment via communicative connection to one or more wearable spatial audio output devices. For each communicatively connected wearable spatial audio output device, a user-specific subset of audio tracks is generated from a set of audio tracks for a dynamic audio object positioned within the environment based on one or more user-specific parameters. A location of the wearable spatial audio output device is determined relative to the dynamic audio object, and based upon this location, a device-specific spatialized audio mix is generated that includes the user-specific subset of audio tracks. The device-specific spatialized audio mixes are sent to the wearable spatial audio output devices, and playback of the device-specific spatialized audio mixes is synchronously initiated at each wearable spatial audio output device.
    Type: Grant
    Filed: March 15, 2018
    Date of Patent: June 23, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Arthur Charles Tomlin, Kendall Clark York, Jeffrey Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
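The per-device pipeline this abstract walks through (a user-specific track subset, then a location-dependent mix) can be sketched as follows; the language filter and the 1/(1+d) attenuation are illustrative assumptions, not the claimed method:

```python
import math

def user_subset(all_tracks, user_params):
    # User-specific subset of the dynamic audio object's tracks, selected
    # here by a single user-specific parameter: language preference.
    return [t for t in all_tracks if t["lang"] == user_params["lang"]]

def device_mix(tracks, device_pos, object_pos):
    # Device-specific spatialized mix: gain falls off with the device's
    # distance from the dynamic audio object.
    gain = 1.0 / (1.0 + math.dist(device_pos, object_pos))
    return {t["name"]: gain for t in tracks}

tracks = [{"name": "narration-en", "lang": "en"},
          {"name": "narration-fr", "lang": "fr"}]
subset = user_subset(tracks, {"lang": "en"})
mix = device_mix(subset, device_pos=(0, 0, 0), object_pos=(3, 0, 4))  # distance 5
```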
  • Patent number: 10672103
    Abstract: A method for moving a virtual object includes displaying a virtual object and moving the virtual object based on a user input. When the user input attempts to move the virtual object in violation of an obstacle, a collision indicator and an input indicator are displayed. The collision indicator is moved based on user input and movement constraints imposed by the obstacle. The input indicator is moved based on user input without movement constraints imposed by the obstacle.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: June 2, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Semih Energin, Sergio Paolantonio, David Evans, Eric Scott Rehmeyer, Robert Thomas Held, Maxime Ouellet, Anatolie Gavriliuc, Riccardo Giraldi, Andrew Frederick Muehlhausen
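The two indicators in this abstract can be shown in a one-dimensional sketch; the clamping rule and the positions are hypothetical:

```python
def move_with_obstacle(user_target, obstacle_at):
    # The input indicator follows the raw user input, unconstrained.
    input_indicator = user_target
    # The collision indicator obeys the obstacle's movement constraint.
    collision_indicator = min(user_target, obstacle_at)
    violated = user_target > obstacle_at
    return collision_indicator, input_indicator, violated

# The user tries to drag the object to x=5.0, but a wall stands at x=3.0:
collision, raw, hit = move_with_obstacle(user_target=5.0, obstacle_at=3.0)
```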
  • Patent number: 10630965
    Abstract: Examples are disclosed herein that relate to calibrating a user's eye for a stereoscopic display. One example provides, on a head-mounted display device including a see-through display, a method of calibrating a stereoscopic display for a user's eyes, the method including for a first eye, receiving an indication of alignment of a user-controlled object with a first eye reference object viewable via the head-mounted display device from a perspective of the first eye, determining a first ray intersecting the user-controlled object and the first eye reference object from the perspective of the first eye, and determining a position of the first eye based on the first ray. The method further includes repeating such steps for a second eye, determining a position of the second eye based on a second ray, and calibrating the stereoscopic display based on the position of the first eye and the position of the second eye.
    Type: Grant
    Filed: October 2, 2015
    Date of Patent: April 21, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Robert Thomas Held, Anatolie Gavriliuc, Riccardo Giraldi, Szymon P. Stachniak, Andrew Frederick Muehlhausen, Maxime Ouellet
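The geometry behind this calibration is that each alignment puts the eye, the user-controlled object, and the reference object on one ray, so two such alignments locate the eye where the rays intersect. A minimal 2-D sketch, with all coordinates hypothetical:

```python
def ray(user_obj, ref_obj):
    # An aligned user-controlled object and reference object define a ray:
    # a point plus a direction (2-D here for brevity).
    d = (ref_obj[0] - user_obj[0], ref_obj[1] - user_obj[1])
    return user_obj, d

def intersect(r1, r2):
    # Solve p1 + t*d1 = p2 + s*d2 for t by Cramer's rule; the eye sits at
    # the intersection of the two alignment rays.
    (p1, d1), (p2, d2) = r1, r2
    det = d1[0] * -d2[1] + d2[0] * d1[1]
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * -d2[1] + d2[0] * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two alignments whose rays both trace back to an eye at the origin:
eye = intersect(ray((1, 1), (2, 2)), ray((1, -1), (2, -2)))
```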
  • Patent number: 10606609
    Abstract: A technique is described herein for presenting notifications associated with applications in a context-based manner. In one implementation, the technique maintains a data store that provides application annotation information that describes a plurality of anchors. For instance, the application annotation information for an illustrative anchor identifies: a location at which the anchor is virtually placed in an interactive world; an application associated with the anchor; and triggering information that describes a set of one or more triggering conditions to be satisfied to enable presentation of a notification pertaining to the application. In use, the technique presents the notification pertaining to the application in prescribed proximity to the anchor when it is determined that the user's engagement with the interactive world satisfies the anchor's set of triggering conditions. The triggering conditions can specify any combination of spatial factors, temporal factors, user co-presence factors, etc.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: March 31, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Semih Energin, Anatolie Gavriliuc, Robert Thomas Held, Maxime Ouellet, Riccardo Giraldi, Andrew Frederick Muehlhausen, Sergio Paolantonio
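The anchor store this abstract describes can be sketched with one spatial and one temporal triggering condition; the record layout, application name, and thresholds are all invented for illustration:

```python
import math

# A hypothetical anchor: virtual location, associated application, and a
# set of triggering conditions to be satisfied before notifying.
anchors = [{
    "location": (2.0, 0.0, 1.0),
    "app": "recipes",
    "triggers": {"max_distance": 1.5, "hour_range": (17, 21)},
}]

def due_notifications(user_pos, hour, anchors):
    due = []
    for a in anchors:
        near = math.dist(user_pos, a["location"]) <= a["triggers"]["max_distance"]
        lo, hi = a["triggers"]["hour_range"]
        if near and lo <= hour <= hi:              # every trigger holds
            due.append((a["app"], a["location"]))  # present near the anchor
    return due

notes = due_notifications(user_pos=(2.5, 0.0, 1.0), hour=18, anchors=anchors)
```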
  • Publication number: 20200019242
    Abstract: Examples are disclosed that relate to evoking an emotion and/or other expression of an avatar via a gesture and/or posture sensed by a wearable device. One example provides a computing device including a logic subsystem and memory storing instructions executable by the logic subsystem to receive, from a wearable device configured to be worn on a hand of a user, an input of data indicative of one or more of a gesture and a posture. The instructions are further executable to, based on the input of data received, determine a digital personal expression corresponding to the one or more of the gesture and the posture, and output the digital personal expression.
    Type: Application
    Filed: July 12, 2018
    Publication date: January 16, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Charlene Mary ATLAS, Sean Kenneth MCBETH, Andrew Frederick MUEHLHAUSEN, Kenneth Mitchell JAKUBZAK
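The mapping this abstract describes, from sensed gesture and/or posture to a digital personal expression, can be sketched as a lookup; the gesture names and expressions below are hypothetical:

```python
# Hypothetical table from sensed gesture/posture to an expression.
EXPRESSIONS = {
    ("thumbs_up",): "approval",
    ("open_palm", "raised"): "greeting",
    ("fist", "clenched"): "determination",
}

def expression_for(gesture=None, posture=None):
    # Determine the digital personal expression for one or more of a
    # gesture and a posture; fall back to a neutral expression.
    key = tuple(k for k in (gesture, posture) if k is not None)
    return EXPRESSIONS.get(key, "neutral")

out = expression_for(gesture="open_palm", posture="raised")
```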
  • Patent number: 10481856
    Abstract: A mobile computing device including a housing having a first part and a second part, the first part including a first display and a first forward facing camera, and the second part including a second display and a second forward facing camera, at least one speaker mounted in the housing, and a processor mounted in the housing and configured to display a first graphical user interface element having an associated first audio stream on the first display and to display a second graphical user interface element having an associated second audio stream on the second display, wherein the processor is configured to perform face detection on a first and a second image, adjust an audio setting based on a result of the face detection, and play the first and second audio streams out of the at least one speaker based on the adjusted audio setting.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: November 19, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Christian Michael Sadak, Adolfo Hernandez Santisteban, Andrew Frederick Muehlhausen
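The audio adjustment this abstract describes can be sketched as a simple policy: face detection on each forward-facing camera's image decides which display the user is watching, and the two streams' gains are set accordingly. The ducking factor is an assumption for illustration:

```python
def adjust_audio(face_in_first, face_in_second):
    # Return (gain for the first display's stream, gain for the second's),
    # ducking the stream of the display the user is not facing.
    if face_in_first and not face_in_second:
        return 1.0, 0.2
    if face_in_second and not face_in_first:
        return 0.2, 1.0
    return 1.0, 1.0  # both faces or neither: play both streams fully

gains = adjust_audio(face_in_first=True, face_in_second=False)
```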
  • Publication number: 20190318033
    Abstract: Methods and devices for creating a sound log of activities may include receiving a detected sound from at least one sensor on a computer device. The methods and devices may include comparing the detected sound to a plurality of audio patterns stored in a sound database. The methods and devices may include identifying a sound event for the detected sound based at least upon the comparison of the detected sound to the plurality of audio patterns. The methods and devices may include identifying context information that provides a context for the sound event. The methods and devices may include updating a sound log with the sound event and the context information.
    Type: Application
    Filed: April 12, 2018
    Publication date: October 17, 2019
    Inventors: Priya Ganadas, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
  • Publication number: 20190289417
    Abstract: Examples are disclosed relating to providing spatialized audio to multiple users. In one example, a computing device presents spatialized audio to multiple users within an environment via communicative connection to one or more wearable spatial audio output devices. For each communicatively connected wearable spatial audio output device, a user-specific subset of audio tracks is generated from a set of audio tracks for a dynamic audio object positioned within the environment based on one or more user-specific parameters. A location of the wearable spatial audio output device is determined relative to the dynamic audio object, and based upon this location, a device-specific spatialized audio mix is generated that includes the user-specific subset of audio tracks. The device-specific spatialized audio mixes are sent to the wearable spatial audio output devices, and playback of the device-specific spatialized audio mixes is synchronously initiated at each wearable spatial audio output device.
    Type: Application
    Filed: March 15, 2018
    Publication date: September 19, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Arthur Charles Tomlin, Kendall Clark York, Jeffrey Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
  • Publication number: 20190279335
    Abstract: A method for moving a virtual object includes displaying a virtual object and moving the virtual object based on a user input. When the user input attempts to move the virtual object in violation of an obstacle, a collision indicator and an input indicator are displayed. The collision indicator is moved based on user input and movement constraints imposed by the obstacle. The input indicator is moved based on user input without movement constraints imposed by the obstacle.
    Type: Application
    Filed: May 30, 2019
    Publication date: September 12, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Semih Energin, Sergio Paolantonio, David Evans, Eric Scott Rehmeyer, Robert Thomas Held, Maxime Ouellet, Anatolie Gavriliuc, Riccardo Giraldi, Andrew Frederick Muehlhausen
  • Patent number: 10317505
    Abstract: A computer system is provided that includes one or more processors configured to receive a stream of data from a plurality of network connected devices configured to measure physical parameters, and store a user profile including user settings for a plurality of notification subscriptions associated with physical parameters measured by the plurality of network connected devices. Each notification subscription includes programming logic for a trigger condition for a candidate notification based on measured physical parameters and an associated component sound for the candidate notification. The one or more processors are further configured to determine that trigger conditions for a plurality of candidate notifications are met based on the received stream of data, and generate a composite sound output including a plurality of component sounds associated with the plurality of notifications rendered.
    Type: Grant
    Filed: March 29, 2018
    Date of Patent: June 11, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jeffrey Sipko, Adolfo Hernandez Santisteban, Priya Ganadas, Ishac Bertran, Andrew Frederick Muehlhausen, Aaron Daniel Krauss
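The subscription model this abstract describes (a trigger condition over measured physical parameters paired with a component sound, with triggered components rendered into one composite output) can be sketched like this; the parameters and sounds are hypothetical:

```python
# Hypothetical notification subscriptions: each pairs a trigger condition
# over measured physical parameters with a component sound.
subscriptions = [
    {"sound": "chime", "trigger": lambda d: d.get("temperature", 0) > 30},
    {"sound": "drip",  "trigger": lambda d: d.get("humidity", 0) > 80},
    {"sound": "knock", "trigger": lambda d: d.get("door_open", False)},
]

def composite_sound(data, subscriptions):
    # Evaluate every subscription against the received stream of data and
    # collect the component sounds whose trigger conditions are met.
    return [s["sound"] for s in subscriptions if s["trigger"](data)]

components = composite_sound({"temperature": 32, "humidity": 85}, subscriptions)
```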
  • Publication number: 20190171463
    Abstract: A technique is described herein for presenting notifications associated with applications in a context-based manner. In one implementation, the technique maintains a data store that provides application annotation information that describes a plurality of anchors. For instance, the application annotation information for an illustrative anchor identifies: a location at which the anchor is virtually placed in an interactive world; an application associated with the anchor; and triggering information that describes a set of one or more triggering conditions to be satisfied to enable presentation of a notification pertaining to the application. In use, the technique presents the notification pertaining to the application in prescribed proximity to the anchor when it is determined that the user's engagement with the interactive world satisfies the anchor's set of triggering conditions. The triggering conditions can specify any combination of spatial factors, temporal factors, user co-presence factors, etc.
    Type: Application
    Filed: February 11, 2019
    Publication date: June 6, 2019
    Inventors: Semih Energin, Anatolie Gavriliuc, Robert Thomas Held, Maxime Ouellet, Riccardo Giraldi, Andrew Frederick Muehlhausen, Sergio Paolantonio
  • Patent number: 10311543
    Abstract: A method for moving a virtual object includes displaying a virtual object and moving the virtual object based on a user input. When the user input attempts to move the virtual object in violation of an obstacle, a collision indicator and an input indicator are displayed. The collision indicator is moved based on user input and movement constraints imposed by the obstacle. The input indicator is moved based on user input without movement constraints imposed by the obstacle.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: June 4, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Semih Energin, Sergio Paolantonio, David Evans, Eric Scott Rehmeyer, Robert Thomas Held, Maxime Ouellet, Anatolie Gavriliuc, Riccardo Giraldi, Andrew Frederick Muehlhausen
  • Patent number: 10249095
    Abstract: A technique is described herein for presenting notifications associated with applications in a context-based manner. In one implementation, the technique maintains a data store that provides application annotation information that describes a plurality of anchors. For instance, the application annotation information for an illustrative anchor identifies: a location at which the anchor is virtually placed in an interactive world; an application associated with the anchor; and triggering information that describes a set of one or more triggering conditions to be satisfied to enable presentation of a notification pertaining to the application. In use, the technique presents the notification pertaining to the application in prescribed proximity to the anchor when it is determined that the user's engagement with the interactive world satisfies the anchor's set of triggering conditions. The triggering conditions can specify any combination of spatial factors, temporal factors, user co-presence factors, etc.
    Type: Grant
    Filed: April 7, 2017
    Date of Patent: April 2, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Semih Energin, Anatolie Gavriliuc, Robert Thomas Held, Maxime Ouellet, Riccardo Giraldi, Andrew Frederick Muehlhausen, Sergio Paolantonio
  • Patent number: 10186086
    Abstract: An augmented reality head-mounted device includes a gaze detector, a camera, and a communication interface. The gaze detector determines a gaze vector of an eye of a wearer of the augmented reality head-mounted device. The camera images a physical space including a display of a computing device. The communication interface sends a control signal to the computing device in response to a wearer input. The control signal indicates a location at which the gaze vector intersects the display and useable by the computing device to adjust operation of the computing device.
    Type: Grant
    Filed: September 2, 2015
    Date of Patent: January 22, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Riccardo Giraldi, Anatolie Gavriliuc, Michelle Chua, Andrew Frederick Muehlhausen, Robert Thomas Held, Joseph van den Heuvel
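The core computation this abstract describes, finding where the gaze vector intersects the display, can be sketched with the display modeled as the plane z = 0; the coordinates are hypothetical:

```python
def gaze_hit(eye, gaze_dir):
    # Intersect the gaze ray with the display plane, modeled here as z = 0;
    # the (x, y) hit point is what the control signal would report.
    ex, ey, ez = eye
    dx, dy, dz = gaze_dir
    if dz == 0:
        return None          # gaze parallel to the display plane
    t = -ez / dz
    if t <= 0:
        return None          # display is behind the wearer
    return (ex + t * dx, ey + t * dy)

# An eye 1 m in front of the display, looking straight at it:
hit = gaze_hit(eye=(0.1, 0.2, 1.0), gaze_dir=(0.0, 0.0, -1.0))
```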
  • Patent number: 10176820
    Abstract: A visualization system with audio capability includes one or more display devices, one or more microphones, one or more speakers, and audio processing circuitry. While a display device displays an image to a user, a microphone inputs an utterance of the user, or a sound from the user's environment, and provides it to the audio processing circuitry. The audio processing circuitry processes the utterance (or other sound) in real-time to add an audio effect associated with the image to increase realism, and outputs the processed utterance (or other sound) to the user via the speaker in real-time, with very low latency.
    Type: Grant
    Filed: January 6, 2017
    Date of Patent: January 8, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Andrew Frederick Muehlhausen, Matthew Johnston, Kasson Crooker
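The real-time path this abstract describes (input an utterance or sound, add an image-linked effect, output with low latency) can be sketched with block-based processing; a one-tap echo stands in for whatever effect matches the displayed image, and all names are illustrative:

```python
def echo_block(samples, history, gain=0.5):
    # Process one block of microphone samples, mixing in a delayed copy
    # (a one-tap echo). `history` carries the delay line's tail across
    # blocks so streaming processing stays gapless; the delay, in samples,
    # equals len(history) and must be at least 1.
    buf = history + samples
    out = [s + gain * buf[i] for i, s in enumerate(samples)]
    return out, buf[-len(history):]

history = [0.0] * 4                                          # 4-sample delay
out1, history = echo_block([1.0, 0.0, 0.0, 0.0], history)    # an impulse...
out2, history = echo_block([0.0, 0.0, 0.0, 0.0], history)    # ...echoes later
```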
  • Publication number: 20180329672
    Abstract: A mobile computing device including a housing having a first part and a second part, the first part including a first display and a first forward facing camera, and the second part including a second display and a second forward facing camera, at least one speaker mounted in the housing, and a processor mounted in the housing and configured to display a first graphical user interface element having an associated first audio stream on the first display and to display a second graphical user interface element having an associated second audio stream on the second display, wherein the processor is configured to perform face detection on a first and a second image, adjust an audio setting based on a result of the face detection, and play the first and second audio streams out of the at least one speaker based on the adjusted audio setting.
    Type: Application
    Filed: June 30, 2017
    Publication date: November 15, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Christian Michael SADAK, Adolfo HERNANDEZ SANTISTEBAN, Andrew Frederick MUEHLHAUSEN