Patents by Inventor Aaron Daniel Krauss
Aaron Daniel Krauss has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220256304
Abstract: A wearable spatial audio device is provided. The wearable spatial audio device includes one or more audio speakers, one or more processors, and a storage machine holding instructions executable by the one or more processors. Map data is obtained for a real-world environment that includes one or more dynamic audio objects. A device-specific subset of audio tracks is obtained, and a device-specific spatialized audio mix of the device-specific subset of audio tracks that is based on the map data is obtained. An indication of a change in an environmental condition relative to one or more of the dynamic audio objects is received. The device-specific spatialized audio mix is adjusted based on the change in the environmental condition.
Type: Application
Filed: April 26, 2022
Publication date: August 11, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Arthur Charles TOMLIN, Kendall Clark YORK, Jeffrey SIPKO, Adolfo HERNANDEZ SANTISTEBAN, Aaron Daniel KRAUSS, Andrew Frederick MUEHLHAUSEN
-
Patent number: 11343633
Abstract: A wearable spatial audio device is provided. The wearable spatial audio device includes one or more audio speakers, one or more processors, and a storage machine holding instructions executable by the one or more processors. Map data is obtained for a real-world environment that includes one or more dynamic audio objects. A device-specific subset of audio tracks is obtained, and a device-specific spatialized audio mix of the device-specific subset of audio tracks that is based on the map data is obtained. An indication of a change in an environmental condition relative to one or more of the dynamic audio objects is received. The device-specific spatialized audio mix is adjusted based on the change in the environmental condition.
Type: Grant
Filed: May 27, 2020
Date of Patent: May 24, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Arthur Charles Tomlin, Kendall Clark York, Jeffrey Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
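The adjustment this abstract describes can be pictured with a toy sketch. Everything below (the function name, the dict-of-gains mix, the 6 dB occlusion figure) is an illustrative assumption, not the patented implementation: it just shows a device-specific mix being attenuated when an environmental change is reported relative to a dynamic audio object.

```python
def adjust_mix(track_gains, occlusion_db):
    """Attenuate every track in a device-specific mix by an occlusion level."""
    factor = 10 ** (-occlusion_db / 20)  # dB -> linear amplitude factor
    return {track: gain * factor for track, gain in track_gains.items()}

# A device's current mix: per-track linear gains (hypothetical values).
mix = {"dialogue": 1.0, "ambience": 0.5}

# An environmental change is reported (say, a door closing between the
# listener and the dynamic audio object adds roughly 6 dB of occlusion).
adjusted = adjust_mix(mix, occlusion_db=6.0)
```

In a real system the attenuation would differ per track and per frequency band; a single broadband gain is the simplest stand-in.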
-
Patent number: 10921446
Abstract: Generally, a scanning device performs a sonic scan of a space by generating an ultrasonic impulse and measuring reflected signals as raw audio data. Sonic scan data including raw audio data and an associated scan location is forwarded to a sonic mapping service, which generates and distributes a 3D map of the space called a sonic map. When multiple devices contribute, the map is a collaborative sonic map. The sonic mapping service is advantageously available as a distributed computing service, and can detect acoustic characteristics of the space and/or attribute visual/audio features to elements of a 3D model based on a corresponding detected acoustic characteristic. Various implementations that utilize a sonic map, detected acoustic characteristics, an impacted visual map, and/or an impacted 3D object include mixed reality communications, automatic calibration, relocalization, visualizing materials, rendering 3D geometry, and the like.
Type: Grant
Filed: April 6, 2018
Date of Patent: February 16, 2021
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Jeffrey Ryan Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Priya Ganadas, Arthur C. Tomlin
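The core measurement a sonic scan relies on is echo time-of-flight. A minimal sketch, with an assumed function name and illustrative numbers rather than the patented method: an ultrasonic impulse is emitted, the echo is found in the recorded audio, and the round-trip delay yields a distance estimate.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def echo_distance(emit_time_s, echo_time_s, speed=SPEED_OF_SOUND):
    """Distance to a reflecting surface from a round-trip echo delay."""
    round_trip = echo_time_s - emit_time_s
    return speed * round_trip / 2  # the sound travels there and back

# An echo arriving 20 ms after the impulse implies a surface ~3.43 m away.
d = echo_distance(0.0, 0.020)
```

A sonic mapping service would aggregate many such estimates (tagged with scan locations) into the 3D sonic map; this is only the per-echo building block.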
-
Publication number: 20200288263
Abstract: A wearable spatial audio device is provided. The wearable spatial audio device includes one or more audio speakers, one or more processors, and a storage machine holding instructions executable by the one or more processors. Map data is obtained for a real-world environment that includes one or more dynamic audio objects. A device-specific subset of audio tracks is obtained, and a device-specific spatialized audio mix of the device-specific subset of audio tracks that is based on the map data is obtained. An indication of a change in an environmental condition relative to one or more of the dynamic audio objects is received. The device-specific spatialized audio mix is adjusted based on the change in the environmental condition.
Type: Application
Filed: May 27, 2020
Publication date: September 10, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Arthur Charles TOMLIN, Kendall Clark YORK, Jeffrey SIPKO, Adolfo HERNANDEZ SANTISTEBAN, Aaron Daniel KRAUSS, Andrew Frederick MUEHLHAUSEN
-
Patent number: 10748343
Abstract: A computer device is provided that includes an input device, a sensor device, a display device, and a processor. The processor is configured to detect a physical object in a physical environment based on sensor data received via the sensor device, measure one or more physical parameters of the physical object based on the sensor data, determine a physical behavior of the physical object based on the measured one or more physical parameters, present a graphical representation of the physical behavior of the physical object via the display device, generate a simulation of the physical behavior of the physical object based on the measured one or more physical parameters, receive a user input to modify the one or more physical parameters for the simulation via the input device, and present the simulation with the modified one or more physical parameters via the display device.
Type: Grant
Filed: July 16, 2018
Date of Patent: August 18, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Anthony Robert Menard, Donna Katherine Long, James Michael Ratliff, Aaron Daniel Krauss, Evan L. Jones
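The measure-simulate-modify loop this abstract claims can be sketched with one simple physical behavior. All names and values here are illustrative assumptions: a parameter is "measured" from sensor data, the behavior is simulated, then the user swaps in a modified parameter and the simulation is re-run.

```python
def drop_time(height_m, gravity=9.81):
    """Time for an object to free-fall a given height (no drag): t = sqrt(2h/g)."""
    return (2 * height_m / gravity) ** 0.5

# Parameter measured from the sensor data (hypothetical value).
measured_height = 1.25

# Simulation with the measured parameters as-is.
baseline = drop_time(measured_height)

# User modifies a parameter (here, gravity set to the Moon's ~1.62 m/s^2)
# and the simulation is presented again with the modified value.
modified = drop_time(measured_height, gravity=1.62)
```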
-
Patent number: 10740389
Abstract: Methods and devices for creating a sound log of activities may include receiving a detected sound from at least one sensor on a computer device. The methods and devices may include comparing the detected sound to a plurality of audio patterns stored in a sound database. The methods and devices may include identifying a sound event for the detected sound based at least upon the comparison of the detected sound to the plurality of audio patterns. The methods and devices may include identifying context information that provides a context for the sound event. The methods and devices may include updating a sound log with the sound event and the context information.
Type: Grant
Filed: April 12, 2018
Date of Patent: August 11, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Priya Ganadas, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
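A toy version of the detect-compare-log flow. This is a sketch under heavy assumptions: each "sound" is reduced to a short feature vector and matched by nearest distance, standing in for whatever comparison the claimed sound database actually performs; the context fields are hypothetical.

```python
def closest_pattern(detected, patterns):
    """Return the label whose stored feature vector is nearest the detection."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(patterns, key=lambda label: dist(detected, patterns[label]))

# Stored audio patterns (toy two-dimensional feature vectors).
patterns = {"doorbell": [0.9, 0.1], "dog_bark": [0.2, 0.8]}
sound_log = []

# A sound is detected, identified as an event, and logged with context.
detected = [0.85, 0.2]
event = closest_pattern(detected, patterns)
sound_log.append({"event": event,
                  "context": {"room": "kitchen", "time": "18:02"}})
```

Real systems would compare spectral or learned features rather than raw pairs of numbers; the structure of the loop is the point.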
-
Patent number: 10694311
Abstract: Examples are disclosed relating to providing spatialized audio to multiple users. In one example, a computing device presents spatialized audio to multiple users within an environment via communicative connection to one or more wearable spatial audio output devices. For each communicatively connected wearable spatial audio output device, a user-specific subset of audio tracks is generated from a set of audio tracks for a dynamic audio object positioned within the environment based on one or more user-specific parameters. A location of the wearable spatial audio output device is determined relative to the dynamic audio object, and based upon this location, a device-specific spatialized audio mix is generated that includes the user-specific subset of audio tracks. The device-specific spatialized audio mixes are sent to the wearable spatial audio output devices, and playback of the device-specific spatialized audio mixes is synchronously initiated at each wearable spatial audio output device.
Type: Grant
Filed: March 15, 2018
Date of Patent: June 23, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Arthur Charles Tomlin, Kendall Clark York, Jeffrey Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
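The "device-specific mix from device location" idea can be sketched with basic panning math. This is a minimal illustration with assumed names and formulas (inverse-distance rolloff, linear pan), not the patented spatialization: the point is only that two listeners at different positions relative to the same dynamic audio object receive different mixes.

```python
import math

def device_mix(device_pos, object_pos, ref_distance=1.0):
    """Stereo (left, right) gains for one device relative to one audio object."""
    dx = object_pos[0] - device_pos[0]
    dy = object_pos[1] - device_pos[1]
    distance = math.hypot(dx, dy)
    gain = ref_distance / max(distance, ref_distance)  # inverse-distance rolloff
    pan = math.atan2(dx, dy) / math.pi                 # -1 (left) .. +1 (right)
    return gain * (1 - pan) / 2, gain * (1 + pan) / 2

# Two listeners, one dynamic audio object: each gets its own mix.
mix_a = device_mix((0.0, 0.0), (0.0, 2.0))  # object straight ahead
mix_b = device_mix((2.0, 2.0), (0.0, 2.0))  # object to this listener's left
```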
-
Publication number: 20200020166
Abstract: A computer device is provided that includes an input device, a sensor device, a display device, and a processor. The processor is configured to detect a physical object in a physical environment based on sensor data received via the sensor device, measure one or more physical parameters of the physical object based on the sensor data, determine a physical behavior of the physical object based on the measured one or more physical parameters, present a graphical representation of the physical behavior of the physical object via the display device, generate a simulation of the physical behavior of the physical object based on the measured one or more physical parameters, receive a user input to modify the one or more physical parameters for the simulation via the input device, and present the simulation with the modified one or more physical parameters via the display device.
Type: Application
Filed: July 16, 2018
Publication date: January 16, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Anthony Robert MENARD, Donna Katherine LONG, James Michael RATLIFF, Aaron Daniel KRAUSS, Evan L. JONES
-
Publication number: 20190318033
Abstract: Methods and devices for creating a sound log of activities may include receiving a detected sound from at least one sensor on a computer device. The methods and devices may include comparing the detected sound to a plurality of audio patterns stored in a sound database. The methods and devices may include identifying a sound event for the detected sound based at least upon the comparison of the detected sound to the plurality of audio patterns. The methods and devices may include identifying context information that provides a context for the sound event. The methods and devices may include updating a sound log with the sound event and the context information.
Type: Application
Filed: April 12, 2018
Publication date: October 17, 2019
Inventors: Priya Ganadas, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
-
Publication number: 20190310366
Abstract: Generally, a scanning device performs a sonic scan of a space by generating an ultrasonic impulse and measuring reflected signals as raw audio data. Sonic scan data including raw audio data and an associated scan location is forwarded to a sonic mapping service, which generates and distributes a 3D map of the space called a sonic map. When multiple devices contribute, the map is a collaborative sonic map. The sonic mapping service is advantageously available as a distributed computing service, and can detect acoustic characteristics of the space and/or attribute visual/audio features to elements of a 3D model based on a corresponding detected acoustic characteristic. Various implementations that utilize a sonic map, detected acoustic characteristics, an impacted visual map, and/or an impacted 3D object include mixed reality communications, automatic calibration, relocalization, visualizing materials, rendering 3D geometry, and the like.
Type: Application
Filed: April 6, 2018
Publication date: October 10, 2019
Inventors: Jeffrey Ryan SIPKO, Adolfo HERNANDEZ SANTISTEBAN, Aaron Daniel KRAUSS, Priya GANADAS, Arthur C. TOMLIN
-
Publication number: 20190289417
Abstract: Examples are disclosed relating to providing spatialized audio to multiple users. In one example, a computing device presents spatialized audio to multiple users within an environment via communicative connection to one or more wearable spatial audio output devices. For each communicatively connected wearable spatial audio output device, a user-specific subset of audio tracks is generated from a set of audio tracks for a dynamic audio object positioned within the environment based on one or more user-specific parameters. A location of the wearable spatial audio output device is determined relative to the dynamic audio object, and based upon this location, a device-specific spatialized audio mix is generated that includes the user-specific subset of audio tracks. The device-specific spatialized audio mixes are sent to the wearable spatial audio output devices, and playback of the device-specific spatialized audio mixes is synchronously initiated at each wearable spatial audio output device.
Type: Application
Filed: March 15, 2018
Publication date: September 19, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Arthur Charles Tomlin, Kendall Clark York, Jeffrey Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
-
Patent number: 10317505
Abstract: A computer system is provided that includes one or more processors configured to receive a stream of data from a plurality of network connected devices configured to measure physical parameters, and store a user profile including user settings for a plurality of notification subscriptions associated with physical parameters measured by the plurality of network connected devices. Each notification subscription includes programming logic for a trigger condition for a candidate notification based on measured physical parameters and an associated component sound for the candidate notification. The one or more processors are further configured to determine that trigger conditions for a plurality of candidate notifications are met based on the received stream of data, and generate a composite sound output including a plurality of component sounds associated with the plurality of notifications rendered.
Type: Grant
Filed: March 29, 2018
Date of Patent: June 11, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Jeffrey Sipko, Adolfo Hernandez Santisteban, Priya Ganadas, Ishac Bertran, Andrew Frederick Muehlhausen, Aaron Daniel Krauss
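The subscription-trigger-composite structure can be sketched directly. The field names, trigger lambdas, and sound labels below are assumptions made up for illustration; the sketch only shows conditions being evaluated against one sample of the data stream and the matching component sounds being collected for a composite output.

```python
# Each subscription pairs a trigger condition with a component sound
# (hypothetical fields standing in for the stored user-profile settings).
subscriptions = [
    {"trigger": lambda data: data.get("temp_c", 0) > 30, "sound": "chime_high"},
    {"trigger": lambda data: data.get("door_open", False), "sound": "knock"},
    {"trigger": lambda data: data.get("humidity", 0) > 90, "sound": "drip"},
]

def composite_sounds(data):
    """Component sounds for every subscription whose trigger the data meets."""
    return [s["sound"] for s in subscriptions if s["trigger"](data)]

# One sample from the stream of measured physical parameters.
stream_sample = {"temp_c": 32, "door_open": True, "humidity": 40}
components = composite_sounds(stream_sample)
```

A real renderer would then mix the component waveforms into one output; here the list of component labels is the result.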
-
Publication number: 20190102953
Abstract: Examples disclosed relate to displaying virtual objects. One example provides, on a display device comprising a camera and a display, a method comprising acquiring, via the camera, image data imaging an environment, receiving a user input requesting display of a three-dimensional virtual object, comparing dimensional information for the three-dimensional virtual object to dimensional information for a field of view of the display device, modifying the three-dimensional virtual object based upon comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view to obtain a modified three-dimensional virtual object, and displaying the modified three-dimensional virtual object via the display.
Type: Application
Filed: November 16, 2018
Publication date: April 4, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Megan Ann Lindsay, Michael Scavezze, Aaron Daniel Krauss, Michael Thomas, Richard Wifall, Jeffrey David Smith, Cameron Brown, Charlene Jeune, Cheyne Rory Quin Mathey-Owens
-
Patent number: 10176641
Abstract: Examples disclosed relate to displaying virtual objects. One example provides, on a display device comprising a camera and a display, a method comprising acquiring, via the camera, image data imaging an environment, receiving a user input requesting display of a three-dimensional virtual object, comparing dimensional information for the three-dimensional virtual object to dimensional information for a field of view of the display device, modifying the three-dimensional virtual object based upon comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view to obtain a modified three-dimensional virtual object, and displaying the modified three-dimensional virtual object via the display.
Type: Grant
Filed: October 20, 2016
Date of Patent: January 8, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Megan Ann Lindsay, Michael Scavezze, Aaron Daniel Krauss, Michael Thomas, Richard Wifall, Jeffrey David Smith, Cameron Brown, Charlene Jeune, Cheyne Rory Quin Mathey-Owens
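The dimensional comparison this abstract describes reduces, in the simplest reading, to checking an object's bounding dimensions against the field of view and scaling it down when it would not fit. A minimal sketch under that assumption (function name and extents are invented for illustration, not the patented algorithm):

```python
def fit_to_fov(obj_dims, fov_dims):
    """Uniform scale factor (<= 1.0) so obj_dims fits inside fov_dims."""
    ratios = [f / o for o, f in zip(obj_dims, fov_dims)]
    return min(1.0, *ratios)

obj = (4.0, 2.0, 1.0)  # requested object's width, height, depth (meters)
fov = (2.0, 2.0, 3.0)  # usable field-of-view extents at the target distance

scale = fit_to_fov(obj, fov)
modified = tuple(d * scale for d in obj)  # the "modified" virtual object
```

A uniform scale preserves the object's proportions; an object already smaller than the field of view is left at scale 1.0.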
-
Patent number: 9952656
Abstract: Disclosed are a method and corresponding apparatus to enable a user of a display system to manipulate holographic objects. Multiple holographic user interface objects capable of being independently manipulated by a user are displayed to the user, overlaid on a real-world view of a 3D physical space in which the user is located. In response to a first user action, the holographic user interface objects are made to appear to be combined into a holographic container object that appears at a first location in the 3D physical space. In response to the first user action or a second user action, the holographic container object is made to appear to relocate to a second location in the 3D physical space. The holographic user interface objects are then made to appear to deploy from the holographic container object when the holographic container object appears to be located at the second location.
Type: Grant
Filed: August 21, 2015
Date of Patent: April 24, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Adam Gabriel Poulos, Cameron Graeme Brown, Aaron Daniel Krauss, Marcus Ghaly, Michael Thomas, Jonathan Paulovich, Daniel Joseph McCulloch
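The collect / relocate / deploy flow in this abstract can be modeled as a small container that remembers each object's offset and re-expands the layout at the new location. The class and method names are assumptions for illustration only, not the patented apparatus.

```python
class HologramContainer:
    """Toy model of combining, moving, and redeploying UI objects."""

    def __init__(self, location):
        self.location = location
        self.held = []  # (name, offset-from-container) pairs

    def collect(self, objects):
        """First user action: combine free-floating objects into the container."""
        self.held = [(name, (pos[0] - self.location[0],
                             pos[1] - self.location[1]))
                     for name, pos in objects.items()]

    def relocate(self, new_location):
        """Move the container to a second location in the space."""
        self.location = new_location

    def deploy(self):
        """Re-expand the held objects around the container's current spot."""
        return {name: (self.location[0] + off[0], self.location[1] + off[1])
                for name, off in self.held}

box = HologramContainer((0.0, 0.0))
box.collect({"menu": (1.0, 0.0), "toolbar": (0.0, 2.0)})
box.relocate((5.0, 5.0))
layout = box.deploy()
```

The deployed layout preserves each object's position relative to the container, so the arrangement the user built survives the move.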
-
Publication number: 20170270715
Abstract: Examples disclosed relate to displaying virtual objects. One example provides, on a display device comprising a camera and a display, a method comprising acquiring, via the camera, image data imaging an environment, receiving a user input requesting display of a three-dimensional virtual object, comparing dimensional information for the three-dimensional virtual object to dimensional information for a field of view of the display device, modifying the three-dimensional virtual object based upon comparing the dimensional information for the three-dimensional virtual object to the dimensional information for the field of view to obtain a modified three-dimensional virtual object, and displaying the modified three-dimensional virtual object via the display.
Type: Application
Filed: October 20, 2016
Publication date: September 21, 2017
Inventors: Megan Ann Lindsay, Michael Scavezze, Aaron Daniel Krauss, Michael Thomas, Richard Wifall, Jeffrey David Smith, Cameron Brown, Charlene Jeune, Cheyne Rory Quin Mathey-Owens
-
Publication number: 20170052507
Abstract: Disclosed are a method and corresponding apparatus to enable a user of a display system to manipulate holographic objects. Multiple holographic user interface objects capable of being independently manipulated by a user are displayed to the user, overlaid on a real-world view of a 3D physical space in which the user is located. In response to a first user action, the holographic user interface objects are made to appear to be combined into a holographic container object that appears at a first location in the 3D physical space. In response to the first user action or a second user action, the holographic container object is made to appear to relocate to a second location in the 3D physical space. The holographic user interface objects are then made to appear to deploy from the holographic container object when the holographic container object appears to be located at the second location.
Type: Application
Filed: August 21, 2015
Publication date: February 23, 2017
Inventors: Adam Gabriel Poulos, Cameron Graeme Brown, Aaron Daniel Krauss, Marcus Ghaly, Michael Thomas, Jonathan Paulovich, Daniel Joseph McCulloch