Patents by Inventor Andrew Frederick Muehlhausen
Andrew Frederick Muehlhausen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220256304
Abstract: A wearable spatial audio device is provided. The wearable spatial audio device includes one or more audio speakers, one or more processors, and a storage machine holding instructions executable by the one or more processors. Map data is obtained for a real-world environment that includes one or more dynamic audio objects. A device-specific subset of audio tracks is obtained, and a device-specific spatialized audio mix of the device-specific subset of audio tracks that is based on the map data is obtained. An indication of a change in an environmental condition relative to one or more of the dynamic audio objects is received. The device-specific spatialized audio mix is adjusted based on the change in the environmental condition.
Type: Application
Filed: April 26, 2022
Publication date: August 11, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Arthur Charles TOMLIN, Kendall Clark YORK, Jeffrey SIPKO, Adolfo HERNANDEZ SANTISTEBAN, Aaron Daniel KRAUSS, Andrew Frederick MUEHLHAUSEN
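The adjustment step described in this abstract can be pictured with a minimal sketch. All names here (`SpatialAudioDevice`, `on_environment_change`, the inverse-distance gain rule) are illustrative assumptions, not the patented method: the patent does not specify how the mix is re-weighted, only that it reacts to environmental changes relative to dynamic audio objects.

```python
import math

class SpatialAudioDevice:
    """Hypothetical sketch: a per-device spatial mix whose track gains
    react to changes in listener position relative to dynamic audio objects."""

    def __init__(self, tracks):
        # tracks: {track_name: audio object position (x, y, z)}
        self.tracks = dict(tracks)
        self.gains = {name: 1.0 for name in tracks}

    def on_environment_change(self, listener_pos):
        # Re-weight each track by inverse distance to its audio object
        # (one simple stand-in for "adjusting the mix").
        for name, obj_pos in self.tracks.items():
            d = math.dist(listener_pos, obj_pos)
            self.gains[name] = 1.0 / max(d, 1.0)

device = SpatialAudioDevice({"fountain": (3.0, 0.0, 4.0)})
device.on_environment_change((0.0, 0.0, 0.0))  # distance 5 -> gain 0.2
```

A real renderer would also update HRTF filtering and occlusion, not just gain.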
-
Patent number: 11343633
Abstract: A wearable spatial audio device is provided. The wearable spatial audio device includes one or more audio speakers, one or more processors, and a storage machine holding instructions executable by the one or more processors. Map data is obtained for a real-world environment that includes one or more dynamic audio objects. A device-specific subset of audio tracks is obtained, and a device-specific spatialized audio mix of the device-specific subset of audio tracks that is based on the map data is obtained. An indication of a change in an environmental condition relative to one or more of the dynamic audio objects is received. The device-specific spatialized audio mix is adjusted based on the change in the environmental condition.
Type: Grant
Filed: May 27, 2020
Date of Patent: May 24, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Arthur Charles Tomlin, Kendall Clark York, Jeffrey Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
-
Publication number: 20200288263
Abstract: A wearable spatial audio device is provided. The wearable spatial audio device includes one or more audio speakers, one or more processors, and a storage machine holding instructions executable by the one or more processors. Map data is obtained for a real-world environment that includes one or more dynamic audio objects. A device-specific subset of audio tracks is obtained, and a device-specific spatialized audio mix of the device-specific subset of audio tracks that is based on the map data is obtained. An indication of a change in an environmental condition relative to one or more of the dynamic audio objects is received. The device-specific spatialized audio mix is adjusted based on the change in the environmental condition.
Type: Application
Filed: May 27, 2020
Publication date: September 10, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Arthur Charles TOMLIN, Kendall Clark YORK, Jeffrey SIPKO, Adolfo HERNANDEZ SANTISTEBAN, Aaron Daniel KRAUSS, Andrew Frederick MUEHLHAUSEN
-
Patent number: 10740389
Abstract: Methods and devices for creating a sound log of activities may include receiving a detected sound from at least one sensor on a computer device. The methods and devices may include comparing the detected sound to a plurality of audio patterns stored in a sound database. The methods and devices may include identifying a sound event for the detected sound based at least upon the comparison of the detected sound to the plurality of audio patterns. The methods and devices may include identifying context information that provides a context for the sound event. The methods and devices may include updating a sound log with the sound event and the context information.
Type: Grant
Filed: April 12, 2018
Date of Patent: August 11, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Priya Ganadas, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
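The pipeline this abstract walks through (detect, match against stored patterns, attach context, append to a log) can be sketched as follows. The class name, the raw-feature-vector matching, and the nearest-pattern rule are all assumptions for illustration; a production system would match learned audio embeddings, not hand-built vectors.

```python
class SoundLogger:
    """Hypothetical sketch of the described pipeline: match a detected
    sound against stored audio patterns, then log the event with context."""

    def __init__(self, patterns):
        # patterns: {event_name: feature_vector}
        self.patterns = patterns
        self.log = []

    def identify(self, features):
        # Nearest stored pattern by squared error stands in for the
        # "comparison to a plurality of audio patterns" step.
        def dist(pattern):
            return sum((a - b) ** 2 for a, b in zip(features, pattern))
        return min(self.patterns, key=lambda name: dist(self.patterns[name]))

    def record(self, features, context):
        event = self.identify(features)
        self.log.append({"event": event, "context": context})
        return event

logger = SoundLogger({"door_knock": [0.9, 0.1], "dog_bark": [0.1, 0.9]})
event = logger.record([0.8, 0.2], {"time": "09:00", "room": "kitchen"})
```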
-
Patent number: 10694311
Abstract: Examples are disclosed relating to providing spatialized audio to multiple users. In one example, a computing device presents spatialized audio to multiple users within an environment via communicative connection to one or more wearable spatial audio output devices. For each communicatively connected wearable spatial audio output device, a user-specific subset of audio tracks is generated from a set of audio tracks for a dynamic audio object positioned within the environment based on one or more user-specific parameters. A location of the wearable spatial audio output device is determined relative to the dynamic audio object, and based upon this location, a device-specific spatialized audio mix is generated that includes the user-specific subset of audio tracks. The device-specific spatialized audio mixes are sent to the wearable spatial audio output devices, and playback of the device-specific spatialized audio mixes is synchronously initiated at each wearable spatial audio output device.
Type: Grant
Filed: March 15, 2018
Date of Patent: June 23, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Arthur Charles Tomlin, Kendall Clark York, Jeffrey Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
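The per-device mix generation described here has two parts: select a user-specific subset of tracks, then spatialize it relative to the device's position. A minimal sketch, assuming language preference as the illustrative user-specific parameter and a simple azimuth as the device-specific spatial cue (both assumptions, not specifics from the patent):

```python
import math

def build_device_mix(all_tracks, user_langs, device_pos, object_pos):
    """Hypothetical sketch: filter the dynamic audio object's tracks down
    to a user-specific subset, and attach the device-relative azimuth of
    the object so the mix can be spatialized per device."""
    subset = [t["name"] for t in all_tracks if t["lang"] in user_langs]
    dx = object_pos[0] - device_pos[0]
    dz = object_pos[1] - device_pos[1]
    azimuth = math.degrees(math.atan2(dx, dz))  # 0 deg = straight ahead
    return {"tracks": subset, "azimuth": azimuth}

tracks = [
    {"name": "narration_en", "lang": "en"},
    {"name": "narration_fr", "lang": "fr"},
    {"name": "music", "lang": "en"},
]
mix = build_device_mix(tracks, {"en"}, (0.0, 0.0), (1.0, 1.0))
```

Synchronous playback start across devices (the final step in the abstract) would additionally require a shared clock, which this sketch omits.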
-
Patent number: 10672103
Abstract: A method for moving a virtual object includes displaying a virtual object and moving the virtual object based on a user input. When the user input attempts to move the virtual object in violation of an obstacle, a collision indicator and an input indicator are displayed. The collision indicator is moved based on user input and movement constraints imposed by the obstacle. The input indicator is moved based on user input without movement constraints imposed by the obstacle.
Type: Grant
Filed: May 30, 2019
Date of Patent: June 2, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Semih Energin, Sergio Paolantonio, David Evans, Eric Scott Rehmeyer, Robert Thomas Held, Maxime Ouellet, Anatolie Gavriliuc, Riccardo Giraldi, Andrew Frederick Muehlhausen
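The two-indicator idea can be illustrated on a single axis: one indicator follows the raw request, the other is constrained by the obstacle. The function name, the interval representation of the obstacle, and the snap-to-nearest-face rule are assumptions for this sketch only.

```python
def move_with_obstacle(requested, obstacle_min, obstacle_max):
    """One-axis sketch of the two-indicator scheme: the input indicator
    tracks the user's request freely, while the collision indicator is
    clamped so it never enters the obstacle interval."""
    input_indicator = requested
    collision_indicator = requested
    if obstacle_min < requested < obstacle_max:
        # Snap the collision indicator to the nearer obstacle face.
        if requested - obstacle_min <= obstacle_max - requested:
            collision_indicator = obstacle_min
        else:
            collision_indicator = obstacle_max
    return collision_indicator, input_indicator
```

Showing both indicators at once gives the user feedback on where the object actually is versus where their input is pointing.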
-
Patent number: 10630965
Abstract: Examples are disclosed herein that relate to calibrating a user's eye for a stereoscopic display. One example provides, on a head-mounted display device including a see-through display, a method of calibrating a stereoscopic display for a user's eyes, the method including for a first eye, receiving an indication of alignment of a user-controlled object with a first eye reference object viewable via the head-mounted display device from a perspective of the first eye, determining a first ray intersecting the user-controlled object and the first eye reference object from the perspective of the first eye, and determining a position of the first eye based on the first ray. The method further includes repeating such steps for a second eye, determining a position of the second eye based on a second ray, and calibrating the stereoscopic display based on the position of the first eye and the position of the second eye.
Type: Grant
Filed: October 2, 2015
Date of Patent: April 21, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Robert Thomas Held, Anatolie Gavriliuc, Riccardo Giraldi, Szymon P. Stachniak, Andrew Frederick Muehlhausen, Maxime Ouellet
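The geometric core of this calibration is that each alignment of the user-controlled object with a reference object defines a ray, and the eye lies on that ray; intersecting rays from multiple alignment trials localizes the eye. A 2D sketch under that assumption (the patent's geometry is 3D, and the function name and two-trial setup are illustrative):

```python
def eye_position_from_alignments(a1, b1, a2, b2):
    """2D sketch: each alignment trial gives a line through the
    user-controlled object (a) and the reference object (b); the eye
    lies near the intersection of the lines from two trials."""
    # Line 1 through a1, b1; line 2 through a2, b2. Standard two-line
    # intersection via determinants (assumes the lines are not parallel).
    (x1, y1), (x2, y2) = a1, b1
    (x3, y3), (x4, y4) = a2, b2
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    px = ((x1*y2 - y1*x2) * (x3 - x4) - (x1 - x2) * (x3*y4 - y3*x4)) / denom
    py = ((x1*y2 - y1*x2) * (y3 - y4) - (y1 - y2) * (x3*y4 - y3*x4)) / denom
    return (px, py)

# Two trials whose alignment lines (y = x and y = -x) cross at the eye.
eye = eye_position_from_alignments((1.0, 1.0), (2.0, 2.0),
                                   (1.0, -1.0), (2.0, -2.0))
```

In 3D, noisy rays rarely intersect exactly, so a least-squares closest point would replace the exact intersection.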
-
Patent number: 10606609
Abstract: A technique is described herein for presenting notifications associated with applications in a context-based manner. In one implementation, the technique maintains a data store that provides application annotation information that describes a plurality of anchors. For instance, the application annotation information for an illustrative anchor identifies: a location at which the anchor is virtually placed in an interactive world; an application associated with the anchor; and triggering information that describes a set of one or more triggering conditions to be satisfied to enable presentation of a notification pertaining to the application. In use, the technique presents the notification pertaining to the application in prescribed proximity to the anchor when it is determined that the user's engagement with the interactive world satisfies the anchor's set of triggering conditions. The triggering conditions can specify any combination of spatial factors, temporal factors, user co-presence factors, etc.
Type: Grant
Filed: February 11, 2019
Date of Patent: March 31, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Semih Energin, Anatolie Gavriliuc, Robert Thomas Held, Maxime Ouellet, Riccardo Giraldi, Andrew Frederick Muehlhausen, Sergio Paolantonio
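The anchor record and its conjunction of triggering conditions can be sketched directly from the abstract's three example factor types (spatial, temporal, co-presence). The dictionary schema and function name are assumptions for illustration; a real data store would be richer.

```python
import math

def should_notify(anchor, user_pos, hour_of_day, users_present):
    """Hypothetical sketch: an anchor's notification fires only when
    every triggering condition holds (spatial proximity, time window,
    and user co-presence, per the abstract's example factors)."""
    near = math.dist(user_pos, anchor["position"]) <= anchor["radius"]
    in_window = anchor["hours"][0] <= hour_of_day <= anchor["hours"][1]
    copresent = users_present >= anchor["min_users"]
    return near and in_window and copresent

anchor = {
    "position": (0.0, 0.0),  # where the anchor sits in the world
    "radius": 2.0,           # spatial trigger: within 2 m
    "hours": (8, 18),        # temporal trigger: daytime only
    "min_users": 1,          # co-presence trigger
}
```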
-
Publication number: 20200019242
Abstract: Examples are disclosed that relate to evoking an emotion and/or other expression of an avatar via a gesture and/or posture sensed by a wearable device. One example provides a computing device including a logic subsystem and memory storing instructions executable by the logic subsystem to receive, from a wearable device configured to be worn on a hand of a user, an input of data indicative of one or more of a gesture and a posture. The instructions are further executable to, based on the input of data received, determine a digital personal expression corresponding to the one or more of the gesture and the posture, and output the digital personal expression.
Type: Application
Filed: July 12, 2018
Publication date: January 16, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Charlene Mary ATLAS, Sean Kenneth MCBETH, Andrew Frederick MUEHLHAUSEN, Kenneth Mitchell JAKUBZAK
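The gesture/posture-to-expression mapping can be sketched as a lookup that accepts either signal alone or both together. The table entries and fallback order are invented for illustration; the application does not enumerate specific mappings.

```python
def expression_for(gesture=None, posture=None):
    """Hypothetical sketch: map a sensed hand gesture and/or posture to
    a digital personal expression for an avatar. A combined match wins
    over a gesture-only or posture-only match."""
    table = {
        ("wave", "upright"): "enthusiastic_greeting",
        ("wave", None): "greeting",
        ("thumbs_up", None): "approval",
        (None, "slumped"): "tired",
    }
    return (table.get((gesture, posture))
            or table.get((gesture, None))
            or table.get((None, posture))
            or "neutral")
```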
-
Patent number: 10481856
Abstract: A mobile computing device including a housing having a first part and a second part, the first part including a first display and a first forward facing camera, and the second part including a second display and a second forward facing camera, at least one speaker mounted in the housing, and a processor mounted in the housing and configured to display a first graphical user interface element having an associated first audio stream on the first display and to display a second graphical user interface element having an associated second audio stream on the second display, wherein the processor is configured to perform face detection on a first and a second image, adjust an audio setting based on a result of the face detection, and play the first and second audio streams out of the at least one speaker based on the adjusted audio setting.
Type: Grant
Filed: June 30, 2017
Date of Patent: November 19, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Christian Michael Sadak, Adolfo Hernandez Santisteban, Andrew Frederick Muehlhausen
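The audio-setting adjustment can be sketched as a policy over the two cameras' face-detection results: emphasize the audio stream of the display the viewer is facing. The function name and the specific weights are assumptions; the patent claims only that an audio setting is adjusted based on the face-detection result.

```python
def choose_audio_mix(face_on_first, face_on_second):
    """Hypothetical sketch: weight the two displays' audio streams by
    which forward-facing camera sees a face; both faces (or none)
    yields an even mix."""
    if face_on_first and not face_on_second:
        return {"first": 1.0, "second": 0.0}
    if face_on_second and not face_on_first:
        return {"first": 0.0, "second": 1.0}
    return {"first": 0.5, "second": 0.5}
```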
-
Publication number: 20190318033
Abstract: Methods and devices for creating a sound log of activities may include receiving a detected sound from at least one sensor on a computer device. The methods and devices may include comparing the detected sound to a plurality of audio patterns stored in a sound database. The methods and devices may include identifying a sound event for the detected sound based at least upon the comparison of the detected sound to the plurality of audio patterns. The methods and devices may include identifying context information that provides a context for the sound event. The methods and devices may include updating a sound log with the sound event and the context information.
Type: Application
Filed: April 12, 2018
Publication date: October 17, 2019
Inventors: Priya Ganadas, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
-
Publication number: 20190289417
Abstract: Examples are disclosed relating to providing spatialized audio to multiple users. In one example, a computing device presents spatialized audio to multiple users within an environment via communicative connection to one or more wearable spatial audio output devices. For each communicatively connected wearable spatial audio output device, a user-specific subset of audio tracks is generated from a set of audio tracks for a dynamic audio object positioned within the environment based on one or more user-specific parameters. A location of the wearable spatial audio output device is determined relative to the dynamic audio object, and based upon this location, a device-specific spatialized audio mix is generated that includes the user-specific subset of audio tracks. The device-specific spatialized audio mixes are sent to the wearable spatial audio output devices, and playback of the device-specific spatialized audio mixes is synchronously initiated at each wearable spatial audio output device.
Type: Application
Filed: March 15, 2018
Publication date: September 19, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Arthur Charles Tomlin, Kendall Clark York, Jeffrey Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
-
Publication number: 20190279335
Abstract: A method for moving a virtual object includes displaying a virtual object and moving the virtual object based on a user input. When the user input attempts to move the virtual object in violation of an obstacle, a collision indicator and an input indicator are displayed. The collision indicator is moved based on user input and movement constraints imposed by the obstacle. The input indicator is moved based on user input without movement constraints imposed by the obstacle.
Type: Application
Filed: May 30, 2019
Publication date: September 12, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Semih Energin, Sergio Paolantonio, David Evans, Eric Scott Rehmeyer, Robert Thomas Held, Maxime Ouellet, Anatolie Gavriliuc, Riccardo Giraldi, Andrew Frederick Muehlhausen
-
Patent number: 10317505
Abstract: A computer system is provided that includes one or more processors configured to receive a stream of data from a plurality of network connected devices configured to measure physical parameters, and store a user profile including user settings for a plurality of notification subscriptions associated with physical parameters measured by the plurality of network connected devices. Each notification subscription includes programming logic for a trigger condition for a candidate notification based on measured physical parameters and an associated component sound for the candidate notification. The one or more processors are further configured to determine that trigger conditions for a plurality of candidate notifications are met based on the received stream of data, and generate a composite sound output including a plurality of component sounds associated with the plurality of notifications rendered.
Type: Grant
Filed: March 29, 2018
Date of Patent: June 11, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Jeffrey Sipko, Adolfo Hernandez Santisteban, Priya Ganadas, Ishac Bertran, Andrew Frederick Muehlhausen, Aaron Daniel Krauss
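The subscription structure described here (per-parameter trigger logic plus an associated component sound) maps naturally onto a small evaluation loop. The field names and the list-of-components output are assumptions for this sketch; the patent's "composite sound output" would mix the components into one signal.

```python
def composite_sound(subscriptions, readings):
    """Hypothetical sketch: evaluate each subscription's trigger against
    the incoming sensor readings and collect the component sounds whose
    trigger conditions are met."""
    components = []
    for sub in subscriptions:
        value = readings.get(sub["parameter"])
        if value is not None and sub["trigger"](value):
            components.append(sub["sound"])
    return components  # a real renderer would mix these into one output

subs = [
    {"parameter": "temperature", "trigger": lambda v: v > 30, "sound": "chime"},
    {"parameter": "door", "trigger": lambda v: v == "open", "sound": "knock"},
]
sounds = composite_sound(subs, {"temperature": 35, "door": "closed"})
```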
-
Publication number: 20190171463
Abstract: A technique is described herein for presenting notifications associated with applications in a context-based manner. In one implementation, the technique maintains a data store that provides application annotation information that describes a plurality of anchors. For instance, the application annotation information for an illustrative anchor identifies: a location at which the anchor is virtually placed in an interactive world; an application associated with the anchor; and triggering information that describes a set of one or more triggering conditions to be satisfied to enable presentation of a notification pertaining to the application. In use, the technique presents the notification pertaining to the application in prescribed proximity to the anchor when it is determined that the user's engagement with the interactive world satisfies the anchor's set of triggering conditions. The triggering conditions can specify any combination of spatial factors, temporal factors, user co-presence factors, etc.
Type: Application
Filed: February 11, 2019
Publication date: June 6, 2019
Inventors: Semih Energin, Anatolie Gavriliuc, Robert Thomas Held, Maxime Ouellet, Riccardo Giraldi, Andrew Frederick Muehlhausen, Sergio Paolantonio
-
Patent number: 10311543
Abstract: A method for moving a virtual object includes displaying a virtual object and moving the virtual object based on a user input. When the user input attempts to move the virtual object in violation of an obstacle, a collision indicator and an input indicator are displayed. The collision indicator is moved based on user input and movement constraints imposed by the obstacle. The input indicator is moved based on user input without movement constraints imposed by the obstacle.
Type: Grant
Filed: October 27, 2016
Date of Patent: June 4, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Semih Energin, Sergio Paolantonio, David Evans, Eric Scott Rehmeyer, Robert Thomas Held, Maxime Ouellet, Anatolie Gavriliuc, Riccardo Giraldi, Andrew Frederick Muehlhausen
-
Patent number: 10249095
Abstract: A technique is described herein for presenting notifications associated with applications in a context-based manner. In one implementation, the technique maintains a data store that provides application annotation information that describes a plurality of anchors. For instance, the application annotation information for an illustrative anchor identifies: a location at which the anchor is virtually placed in an interactive world; an application associated with the anchor; and triggering information that describes a set of one or more triggering conditions to be satisfied to enable presentation of a notification pertaining to the application. In use, the technique presents the notification pertaining to the application in prescribed proximity to the anchor when it is determined that the user's engagement with the interactive world satisfies the anchor's set of triggering conditions. The triggering conditions can specify any combination of spatial factors, temporal factors, user co-presence factors, etc.
Type: Grant
Filed: April 7, 2017
Date of Patent: April 2, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Semih Energin, Anatolie Gavriliuc, Robert Thomas Held, Maxime Ouellet, Riccardo Giraldi, Andrew Frederick Muehlhausen, Sergio Paolantonio
-
Patent number: 10186086
Abstract: An augmented reality head-mounted device includes a gaze detector, a camera, and a communication interface. The gaze detector determines a gaze vector of an eye of a wearer of the augmented reality head-mounted device. The camera images a physical space including a display of a computing device. The communication interface sends a control signal to the computing device in response to a wearer input. The control signal indicates a location at which the gaze vector intersects the display and is useable by the computing device to adjust operation of the computing device.
Type: Grant
Filed: September 2, 2015
Date of Patent: January 22, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Riccardo Giraldi, Anatolie Gavriliuc, Michelle Chua, Andrew Frederick Muehlhausen, Robert Thomas Held, Joseph van den Heuvel
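Finding where the gaze vector meets the display reduces to a ray-plane intersection. A sketch under the simplifying assumption that the display lies in the plane z = display_z in the head-mounted device's coordinate frame (the patent does not fix a coordinate convention; the function name is illustrative):

```python
def gaze_point_on_display(eye, gaze_dir, display_z):
    """Hypothetical sketch: intersect the gaze ray (origin `eye`,
    direction `gaze_dir`) with the display plane z = display_z to get
    the control coordinate sent to the computing device."""
    t = (display_z - eye[2]) / gaze_dir[2]
    if t < 0:
        return None  # display plane is behind the viewer
    return (eye[0] + t * gaze_dir[0], eye[1] + t * gaze_dir[1])

point = gaze_point_on_display((0.0, 0.0, 0.0), (0.5, 0.0, 1.0), 2.0)
```

A full implementation would also map this plane coordinate into display pixels using the camera's view of the screen.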
-
Patent number: 10176820
Abstract: A visualization system with audio capability includes one or more display devices, one or more microphones, one or more speakers, and audio processing circuitry. While a display device displays an image to a user, a microphone inputs an utterance of the user, or a sound from the user's environment, and provides it to the audio processing circuitry. The audio processing circuitry processes the utterance (or other sound) in real-time to add an audio effect associated with the image to increase realism, and outputs the processed utterance (or other sound) to the user via the speaker in real-time, with very low latency.
Type: Grant
Filed: January 6, 2017
Date of Patent: January 8, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Andrew Frederick Muehlhausen, Matthew Johnston, Kasson Crooker
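One concrete example of an image-associated audio effect is an echo matched to a displayed scene (say, a cavern). A minimal sample-domain sketch, with the caveat that the patent covers effects generally and a real pipeline would run on streaming buffers in dedicated audio circuitry, not Python lists:

```python
def add_echo(samples, delay, decay):
    """Hypothetical sketch of one effect pass: mix a delayed, attenuated
    copy of the signal back in, producing a simple echo that could match
    the acoustics of the displayed scene."""
    out = list(samples)
    for i in range(delay, len(out)):
        out[i] += decay * samples[i - delay]
    return out
```

Keeping `delay` short and processing buffer-by-buffer is what makes the "very low latency" requirement achievable in practice.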
-
Publication number: 20180329672
Abstract: A mobile computing device including a housing having a first part and a second part, the first part including a first display and a first forward facing camera, and the second part including a second display and a second forward facing camera, at least one speaker mounted in the housing, and a processor mounted in the housing and configured to display a first graphical user interface element having an associated first audio stream on the first display and to display a second graphical user interface element having an associated second audio stream on the second display, wherein the processor is configured to perform face detection on a first and a second image, adjust an audio setting based on a result of the face detection, and play the first and second audio streams out of the at least one speaker based on the adjusted audio setting.
Type: Application
Filed: June 30, 2017
Publication date: November 15, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Christian Michael SADAK, Adolfo HERNANDEZ SANTISTEBAN, Andrew Frederick MUEHLHAUSEN