Patents by Inventor Jeffrey Sipko
Jeffrey Sipko has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220256304
Abstract: A wearable spatial audio device is provided. The wearable spatial audio device includes one or more audio speakers, one or more processors, and a storage machine holding instructions executable by the one or more processors. Map data is obtained for a real-world environment that includes one or more dynamic audio objects. A device-specific subset of audio tracks is obtained, and a device-specific spatialized audio mix of the device-specific subset of audio tracks that is based on the map data is obtained. An indication of a change in an environmental condition relative to one or more of the dynamic audio objects is received. The device-specific spatialized audio mix is adjusted based on the change in the environmental condition.
Type: Application
Filed: April 26, 2022
Publication date: August 11, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Arthur Charles Tomlin, Kendall Clark York, Jeffrey Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
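The flow in the abstract above can be sketched in Python. Everything concrete here is an illustrative assumption, not a detail from the patent: the class names, the `distance` field standing in for the map data, and the simple distance-based gain rule used to re-spatialize the mix.

```python
from dataclasses import dataclass, field

@dataclass
class DynamicAudioObject:
    """An audio source in the mapped environment (hypothetical model)."""
    name: str
    distance: float  # metres from the wearer, per the map data

@dataclass
class SpatialAudioDevice:
    """Sketch of a wearable device holding a device-specific mix."""
    tracks: list
    gains: dict = field(default_factory=dict)

    def build_mix(self, objects):
        # Initial device-specific mix: gain falls off with distance
        # (assumed rule; the patent does not specify the mix function).
        for obj in objects:
            self.gains[obj.name] = round(1.0 / (1.0 + obj.distance), 3)

    def on_environment_change(self, obj):
        # Adjust the mix for the dynamic audio object whose
        # environmental condition changed.
        self.gains[obj.name] = round(1.0 / (1.0 + obj.distance), 3)

device = SpatialAudioDevice(tracks=["ambient", "narration"])
fountain = DynamicAudioObject("fountain", distance=4.0)
device.build_mix([fountain])
print(device.gains["fountain"])  # 0.2

fountain.distance = 1.0          # wearer walks closer
device.on_environment_change(fountain)
print(device.gains["fountain"])  # 0.5
```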
-
Patent number: 11343633
Abstract: A wearable spatial audio device is provided. The wearable spatial audio device includes one or more audio speakers, one or more processors, and a storage machine holding instructions executable by the one or more processors. Map data is obtained for a real-world environment that includes one or more dynamic audio objects. A device-specific subset of audio tracks is obtained, and a device-specific spatialized audio mix of the device-specific subset of audio tracks that is based on the map data is obtained. An indication of a change in an environmental condition relative to one or more of the dynamic audio objects is received. The device-specific spatialized audio mix is adjusted based on the change in the environmental condition.
Type: Grant
Filed: May 27, 2020
Date of Patent: May 24, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Arthur Charles Tomlin, Kendall Clark York, Jeffrey Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
-
Patent number: 10825241
Abstract: A wearable device is configured with a one-dimensional depth sensor (e.g., a LIDAR system) that scans a physical environment, in which the wearable device and depth sensor generate a point cloud structure using scanned points of the physical environment to develop blueprints for a negative space of the environment. The negative space includes permanent structures (e.g., walls and floors), in which the blueprints distinguish permanent structures from temporary objects. The depth sensor is affixed in a static position on the wearable device and passively scans a room according to the gaze direction of the user. Over a period of days, weeks, months, or years, the blueprint continues to supplement the point cloud structure and update points therein. Thus, as the user continues to navigate the physical environment, over time, the point cloud data structure develops an accurate blueprint of the environment.
Type: Grant
Filed: March 16, 2018
Date of Patent: November 3, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jeffrey Sipko, Kendall Clark York, John Benjamin Hesketh, Kenneth Liam Kiemele, Bryant Daniel Hawthorne
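One way to read the abstract above is that points observed across many scanning sessions are promoted to permanent structure, while one-off points remain temporary objects. The sketch below assumes that reading; the quantization cell size, the session-count threshold, and all names are hypothetical, not taken from the patent.

```python
from collections import defaultdict

SESSIONS_REQUIRED = 3  # hypothetical persistence threshold

class PointCloudBlueprint:
    """Accumulates depth-sensor scans over many sessions; points seen
    repeatedly are treated as permanent structure (walls, floors),
    the rest as temporary objects."""

    def __init__(self, cell=0.1):
        self.cell = cell
        self.seen = defaultdict(set)  # quantized point -> session ids

    def add_scan(self, session_id, points):
        # Quantize each scanned point so repeat observations of the
        # same surface land in the same cell.
        for x, y, z in points:
            key = (round(x / self.cell), round(y / self.cell), round(z / self.cell))
            self.seen[key].add(session_id)

    def permanent_points(self):
        return {k for k, s in self.seen.items() if len(s) >= SESSIONS_REQUIRED}

bp = PointCloudBlueprint()
wall = [(0.0, y / 10, 1.0) for y in range(5)]
chair = [(1.0, 1.0, 0.5)]
for day in range(3):
    bp.add_scan(day, wall)          # the wall shows up every day
bp.add_scan(2, chair)               # the chair is seen only once
print(len(bp.permanent_points()))   # 5  (only the wall survives)
```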
-
Publication number: 20200288263
Abstract: A wearable spatial audio device is provided. The wearable spatial audio device includes one or more audio speakers, one or more processors, and a storage machine holding instructions executable by the one or more processors. Map data is obtained for a real-world environment that includes one or more dynamic audio objects. A device-specific subset of audio tracks is obtained, and a device-specific spatialized audio mix of the device-specific subset of audio tracks that is based on the map data is obtained. An indication of a change in an environmental condition relative to one or more of the dynamic audio objects is received. The device-specific spatialized audio mix is adjusted based on the change in the environmental condition.
Type: Application
Filed: May 27, 2020
Publication date: September 10, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Arthur Charles Tomlin, Kendall Clark York, Jeffrey Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
-
Patent number: 10694311
Abstract: Examples are disclosed relating to providing spatialized audio to multiple users. In one example, a computing device presents spatialized audio to multiple users within an environment via communicative connection to one or more wearable spatial audio output devices. For each communicatively connected wearable spatial audio output device, a user-specific subset of audio tracks is generated from a set of audio tracks for a dynamic audio object positioned within the environment based on one or more user-specific parameters. A location of the wearable spatial audio output device is determined relative to the dynamic audio object, and based upon this location, a device-specific spatialized audio mix is generated that includes the user-specific subset of audio tracks. The device-specific spatialized audio mixes are sent to the wearable spatial audio output devices, and playback of the device-specific spatialized audio mixes is synchronously initiated at each wearable spatial audio output device.
Type: Grant
Filed: March 15, 2018
Date of Patent: June 23, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Arthur Charles Tomlin, Kendall Clark York, Jeffrey Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
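The per-user pipeline described above can be sketched as follows. The choice of preferred language as the user-specific parameter, the track names, and the azimuth-only "mix" are all illustrative assumptions; the patent only says a user-specific track subset is combined into a device-specific spatialized mix based on the device's location relative to the dynamic audio object.

```python
import math

# Hypothetical track set for one dynamic audio object: a shared music
# bed plus language-tagged narration tracks.
TRACKS = {"music": None, "narration_en": "en", "narration_fr": "fr"}

def user_subset(language):
    """User-specific subset: keep untagged tracks and the user's language."""
    return [t for t, lang in TRACKS.items() if lang in (None, language)]

def device_mix(device_pos, object_pos, tracks):
    """Device-specific mix: tag each track with the azimuth (degrees)
    from this device to the dynamic audio object."""
    dx, dy = object_pos[0] - device_pos[0], object_pos[1] - device_pos[1]
    azimuth = math.degrees(math.atan2(dy, dx))
    return {t: azimuth for t in tracks}

# Two users, two devices, one dynamic audio object at (1, 1).
mix_a = device_mix((0, 0), (1, 1), user_subset("en"))
mix_b = device_mix((2, 0), (1, 1), user_subset("fr"))
print(sorted(mix_a))          # ['music', 'narration_en']
print(round(mix_a["music"]))  # 45
print(round(mix_b["music"]))  # 135
```

Playback of both mixes would then be initiated synchronously so the shared music bed stays aligned across devices.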
-
Patent number: 10674305
Abstract: The disclosed technology provides multi-dimensional audio output by providing a relative physical location of an audio transmitting device relative to an audio outputting device in a shared map of physical space shared between the audio transmitting device and the audio outputting device. An orientation of the audio outputting device relative to the audio transmitting device is determined and an audio signal received from the audio transmitting device via a communication network is processed using the determined orientation of the audio outputting device relative to the audio transmitting device and the relative physical location of the audio transmitting device to create an augmented audio signal. The augmented audio signal is output through at least one audio output on the audio outputting device in a manner indicating a relative physical direction of the audio transmitting device to the audio outputting device in the shared map of the physical space.
Type: Grant
Filed: March 15, 2018
Date of Patent: June 2, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kendall Clark York, Jeffrey Sipko, Aaron Krauss, Andrew F. Muehlhausen, Adolfo Hernandez Santisteban, Arthur C. Tomlin
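The geometry in the abstract above (relative location plus listener orientation yielding a directional output) can be sketched in two steps: compute the transmitter's bearing relative to where the listener is facing, then pan the signal accordingly. The 2D flat-map positions, the sign convention (positive bearing = to the listener's left), and the constant-power pan are assumptions for illustration.

```python
import math

def relative_bearing(listener_pos, listener_heading_deg, source_pos):
    """Bearing of the transmitting device relative to where the
    listener is facing, using positions from the shared spatial map.
    Result is in (-180, 180]; positive means to the listener's left."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    absolute = math.degrees(math.atan2(dy, dx))
    return (absolute - listener_heading_deg + 180) % 360 - 180

def stereo_gains(bearing_deg):
    """Constant-power pan for a two-speaker output: +90 deg is hard
    left, -90 deg is hard right (assumed convention)."""
    b = max(-90.0, min(90.0, bearing_deg))
    theta = math.radians((b + 90) / 2)          # maps to 0..90 degrees
    return round(math.sin(theta), 3), round(math.cos(theta), 3)  # (L, R)

# Listener at the origin facing east (0 deg); transmitter due north.
bearing = relative_bearing((0, 0), 0, (0, 1))
print(bearing)               # 90.0
print(stereo_gains(bearing)) # (1.0, 0.0) -> fully in the left ear
```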
-
Patent number: 10645525
Abstract: Examples are disclosed that relate to devices and methods for sharing geo-located information between different devices. In one example, a method comprises receiving the geo-located information from a first user device having a first data type compatible with a first output component of the device, receiving first sensor data from the first device, determining a location of the geo-located information within a coordinate system in a physical environment, determining that a second user device is located in the physical environment, determining that the second device does not comprise an output component that is compatible with the first data type, transforming the geo-located information into a second data type compatible with a second output component of the second device, determining that the second device is proximate to the location of the geo-located information, and sending the geo-located information to the second device for output by the second output component.
Type: Grant
Filed: July 24, 2019
Date of Patent: May 5, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kenneth Liam Kiemele, Donna Katherine Long, Bryant Daniel Hawthorne, Anthony Ernst, Kendall Clark York, Jeffrey Sipko, Janet Lynn Schneider, Christian Michael Sadak, Stephen G. Latta
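The two key steps in the method above are the data-type transformation and the proximity check before delivery. The sketch below assumes both: the `hologram`/`audio` data types, the converter registry, and the flat-grid distance test are hypothetical, since the patent states only that the information is transformed into a type the second device can output and delivered when that device is proximate.

```python
import math

# Hypothetical converter registry: (source type, target type) -> function.
CONVERTERS = {
    ("hologram", "audio"): lambda info: {"kind": "audio",
                                         "speech": f"Note nearby: {info['text']}"},
}

def deliver(info, info_pos, device, device_pos, radius=25.0):
    """Deliver geo-located info to a device, converting the data type
    when the device's outputs don't support the original type, and
    only when the device is within `radius` metres of the info."""
    if math.dist(info_pos, device_pos) > radius:
        return None                       # not proximate yet
    if info["kind"] in device["outputs"]:
        return info                       # already compatible
    for out in device["outputs"]:
        conv = CONVERTERS.get((info["kind"], out))
        if conv:
            return conv(info)
    raise ValueError("no compatible output component or converter")

note = {"kind": "hologram", "text": "meeting moved to room 12"}
earbuds = {"outputs": ["audio"]}          # second device has no display
print(deliver(note, (10.0, 5.0), earbuds, (12.0, 5.0))["kind"])  # audio
print(deliver(note, (10.0, 5.0), earbuds, (500.0, 5.0)))         # None
```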
-
Patent number: 10591974
Abstract: A mobile computing device is provided that includes a processor, an accelerometer, two or more display devices, and a housing including the processor, the accelerometer, and the two or more display devices. The processor is configured to determine a current user focus indicating that a first display device of the pair of display devices is being viewed by the user, and that a second display device of the pair of display devices is not being viewed by the user; detect a signature gesture input based on accelerometer data received via the accelerometer indicating that the mobile computing device has been rotated more than a threshold degree; determine that the current user focus has changed from the first display device to the second display device based on at least detecting the signature gesture input; and perform a predetermined action based on the current user focus.
Type: Grant
Filed: June 6, 2019
Date of Patent: March 17, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alexandre da Veiga, Roger Sebastian Sylvan, Jeffrey Sipko
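The focus-tracking logic above can be sketched as a small state machine: accumulate rotation from accelerometer data, and treat rotation past a threshold as the signature gesture that flips focus and triggers the predetermined action. The 150-degree threshold, the accumulation scheme, and the "activate display" action are illustrative assumptions.

```python
ROTATION_THRESHOLD_DEG = 150.0  # hypothetical flip threshold

class DualScreenDevice:
    """Tracks which of two displays the user is viewing; rotation past
    the threshold is treated as the signature flip gesture."""

    def __init__(self):
        self.focused = 0          # index of the display being viewed
        self.accumulated = 0.0    # degrees rotated since the last action
        self.last_action = None

    def on_accelerometer(self, delta_deg):
        self.accumulated += delta_deg
        if abs(self.accumulated) > ROTATION_THRESHOLD_DEG:
            self.focused = 1 - self.focused   # focus follows the flip
            self.accumulated = 0.0
            self.perform_action()

    def perform_action(self):
        # Stand-in for the predetermined action, e.g. waking the newly
        # viewed display and sleeping the other.
        self.last_action = f"activate display {self.focused}"

d = DualScreenDevice()
for _ in range(4):
    d.on_accelerometer(45.0)   # user flips the device over
print(d.focused)               # 1
print(d.last_action)           # activate display 1
```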
-
Patent number: 10571279
Abstract: A wearable device is configured with various sensory devices that recurrently monitor and gather data for a physical environment surrounding a user to help locate and track real-world objects. The various heterogeneous sensory devices digitize objects and the physical world. Each sensory device is configured with a threshold data change, in which, when the data picked up by one or more sensory devices surpasses the threshold, a query is performed on each sensor graph or sensory device. The queried sensor graph data is stored within a node in a spatial graph, in which nodes are connected to each other using edges to create spatial relationships between objects and spaces. Objects can be uploaded into an object graph associated with the spatial graph, in which the objects are digitized with each of the available sensors. This digital information can be subsequently used to, for example, locate the object.
Type: Grant
Filed: May 2, 2019
Date of Patent: February 25, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jeffrey Sipko, Kendall Clark York, John Benjamin Hesketh, Hubert Van Hoof
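The threshold-gated querying and the spatial graph described above can be sketched as follows. The single scalar "depth" sensor, the 5.0 threshold, and the node/edge representation are simplified assumptions standing in for the heterogeneous sensory devices and sensor graphs the abstract mentions.

```python
THRESHOLD = 5.0  # hypothetical per-sensor change threshold

class SpatialGraph:
    """Nodes hold queried sensor snapshots; edges create spatial
    relationships between objects and spaces."""
    def __init__(self):
        self.nodes = {}
        self.edges = []

    def add_snapshot(self, node_id, snapshot):
        self.nodes[node_id] = snapshot

    def connect(self, a, b, relation):
        self.edges.append((a, b, relation))

class Sensor:
    """A query fires only when the reading changes past the threshold."""
    def __init__(self, name, graph):
        self.name, self.graph = name, graph
        self.last = None

    def observe(self, value, node_id):
        if self.last is None or abs(value - self.last) > THRESHOLD:
            self.last = value
            self.graph.add_snapshot(node_id, {self.name: value})
            return True   # a query was performed and its data stored
        return False      # below threshold: no query

g = SpatialGraph()
depth = Sensor("depth", g)
print(depth.observe(10.0, "kitchen"))   # True  (first reading)
print(depth.observe(12.0, "kitchen"))   # False (change below threshold)
print(depth.observe(20.0, "kitchen"))   # True  (change > 5.0)
g.connect("kitchen", "hallway", "adjacent")
print(len(g.edges))                     # 1
```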
-
Publication number: 20190349706
Abstract: Examples are disclosed that relate to devices and methods for sharing geo-located information between different devices. In one example, a method comprises receiving the geo-located information from a first user device having a first data type compatible with a first output component of the device, receiving first sensor data from the first device, determining a location of the geo-located information within a coordinate system in a physical environment, determining that a second user device is located in the physical environment, determining that the second device does not comprise an output component that is compatible with the first data type, transforming the geo-located information into a second data type compatible with a second output component of the second device, determining that the second device is proximate to the location of the geo-located information, and sending the geo-located information to the second device for output by the second output component.
Type: Application
Filed: July 24, 2019
Publication date: November 14, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Kenneth Liam Kiemele, Donna Katherine Long, Bryant Daniel Hawthorne, Anthony Ernst, Kendall Clark York, Jeffrey Sipko, Janet Lynn Schneider, Christian Michael Sadak, Stephen G. Latta
-
Publication number: 20190342696
Abstract: Examples are disclosed that relate to devices and methods for sharing geo-located information between different devices. In one example, a method comprises receiving the geo-located information from a first user device having a first data type compatible with a first output component of the device, receiving first sensor data from the first device, determining a location of the geo-located information within a coordinate system in a physical environment, determining that a second user device is located in the physical environment, determining that the second device does not comprise an output component that is compatible with the first data type, transforming the geo-located information into a second data type compatible with a second output component of the second device, determining that the second device is proximate to the location of the geo-located information, and sending the geo-located information to the second device for output by the second output component.
Type: Application
Filed: May 4, 2018
Publication date: November 7, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Kenneth Liam Kiemele, Donna Katherine Long, Bryant Daniel Hawthorne, Anthony Ernst, Kendall Clark York, Jeffrey Sipko, Janet Lynn Schneider, Christian Michael Sadak, Stephen G. Latta
-
Patent number: 10455351
Abstract: Examples are disclosed that relate to devices and methods for sharing geo-located information between different devices. In one example, a method comprises receiving the geo-located information from a first user device having a first data type compatible with a first output component of the device, receiving first sensor data from the first device, determining a location of the geo-located information within a coordinate system in a physical environment, determining that a second user device is located in the physical environment, determining that the second device does not comprise an output component that is compatible with the first data type, transforming the geo-located information into a second data type compatible with a second output component of the second device, determining that the second device is proximate to the location of the geo-located information, and sending the geo-located information to the second device for output by the second output component.
Type: Grant
Filed: May 4, 2018
Date of Patent: October 22, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kenneth Liam Kiemele, Donna Katherine Long, Bryant Daniel Hawthorne, Anthony Ernst, Kendall Clark York, Jeffrey Sipko, Janet Lynn Schneider, Christian Michael Sadak, Stephen G. Latta
-
Publication number: 20190285417
Abstract: A wearable device is configured with various sensory devices that recurrently monitor and gather data for a physical environment surrounding a user to help locate and track real-world objects. The various heterogeneous sensory devices digitize objects and the physical world. Each sensory device is configured with a threshold data change, in which, when the data picked up by one or more sensory devices surpasses the threshold, a query is performed on each sensor graph or sensory device. The queried sensor graph data is stored within a node in a spatial graph, in which nodes are connected to each other using edges to create spatial relationships between objects and spaces. Objects can be uploaded into an object graph associated with the spatial graph, in which the objects are digitized with each of the available sensors. This digital information can be subsequently used to, for example, locate the object.
Type: Application
Filed: May 2, 2019
Publication date: September 19, 2019
Inventors: Jeffrey Sipko, Kendall Clark York, John Benjamin Hesketh, Hubert Van Hoof
-
Publication number: 20190289416
Abstract: The disclosed technology provides multi-dimensional audio output by providing a relative physical location of an audio transmitting device relative to an audio outputting device in a shared map of physical space shared between the audio transmitting device and the audio outputting device. An orientation of the audio outputting device relative to the audio transmitting device is determined and an audio signal received from the audio transmitting device via a communication network is processed using the determined orientation of the audio outputting device relative to the audio transmitting device and the relative physical location of the audio transmitting device to create an augmented audio signal. The augmented audio signal is output through at least one audio output on the audio outputting device in a manner indicating a relative physical direction of the audio transmitting device to the audio outputting device in the shared map of the physical space.
Type: Application
Filed: March 15, 2018
Publication date: September 19, 2019
Inventors: Kendall Clark York, Jeffrey Sipko, Aaron Krauss, Andrew F. Muehlhausen, Adolfo Hernandez Santisteban, Arthur C. Tomlin
-
Publication number: 20190286217
Abstract: A mobile computing device is provided that includes a processor, an accelerometer, two or more display devices, and a housing including the processor, the accelerometer, and the two or more display devices. The processor is configured to determine a current user focus indicating that a first display device of the pair of display devices is being viewed by the user, and that a second display device of the pair of display devices is not being viewed by the user; detect a signature gesture input based on accelerometer data received via the accelerometer indicating that the mobile computing device has been rotated more than a threshold degree; determine that the current user focus has changed from the first display device to the second display device based on at least detecting the signature gesture input; and perform a predetermined action based on the current user focus.
Type: Application
Filed: June 6, 2019
Publication date: September 19, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Alexandre da Veiga, Roger Sebastian Sylvan, Jeffrey Sipko
-
Publication number: 20190289417
Abstract: Examples are disclosed relating to providing spatialized audio to multiple users. In one example, a computing device presents spatialized audio to multiple users within an environment via communicative connection to one or more wearable spatial audio output devices. For each communicatively connected wearable spatial audio output device, a user-specific subset of audio tracks is generated from a set of audio tracks for a dynamic audio object positioned within the environment based on one or more user-specific parameters. A location of the wearable spatial audio output device is determined relative to the dynamic audio object, and based upon this location, a device-specific spatialized audio mix is generated that includes the user-specific subset of audio tracks. The device-specific spatialized audio mixes are sent to the wearable spatial audio output devices, and playback of the device-specific spatialized audio mixes is synchronously initiated at each wearable spatial audio output device.
Type: Application
Filed: March 15, 2018
Publication date: September 19, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Arthur Charles Tomlin, Kendall Clark York, Jeffrey Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
-
Publication number: 20190287296
Abstract: A wearable device is configured with a one-dimensional depth sensor (e.g., a LIDAR system) that scans a physical environment, in which the wearable device and depth sensor generate a point cloud structure using scanned points of the physical environment to develop blueprints for a negative space of the environment. The negative space includes permanent structures (e.g., walls and floors), in which the blueprints distinguish permanent structures from temporary objects. The depth sensor is affixed in a static position on the wearable device and passively scans a room according to the gaze direction of the user. Over a period of days, weeks, months, or years, the blueprint continues to supplement the point cloud structure and update points therein. Thus, as the user continues to navigate the physical environment, over time, the point cloud data structure develops an accurate blueprint of the environment.
Type: Application
Filed: March 16, 2018
Publication date: September 19, 2019
Inventors: Jeffrey Sipko, Kendall Clark York, John Benjamin Hesketh, Kenneth Liam Kiemele, Bryant Daniel Hawthorne
-
Publication number: 20190289396
Abstract: The disclosed technology provides a spatial output device composed of two electronics enclosures that are electrically connected by a flexible electronic connector. The two electronics enclosures are weighted to maintain a balanced position of the flexible connector against a support. The spatial output device has at least one input sensor affixed to one of the two electronics enclosures and an onboard processor affixed to one of the two electronics enclosures. The input sensor is configured to receive monocular input. The onboard processor is configured to process the monocular input to generate a spatial output, where the spatial output provides at least two-dimensional information.
Type: Application
Filed: March 15, 2018
Publication date: September 19, 2019
Inventors: Kendall Clark York, John B. Hesketh, Janet Schneider, Jeffrey Sipko
-
Publication number: 20190212877
Abstract: A computing device is provided that includes a primary display and a secondary display operatively coupled to a processor. The processor may be configured to execute an application program that has a GUI with a single display mode and a selectively displayable multiple display mode. The multiple display mode may include at least a primary view and a secondary view. In a single display mode, the processor may be configured to initially display the GUI on the primary display and not display the GUI on the secondary display. Upon receiving a multidisplay command to display the GUI in the multiple display mode, the processor may transition the application to the multiple display mode in which the primary view is displayed on the primary display, and the secondary view is displayed on the secondary display.
Type: Application
Filed: January 10, 2018
Publication date: July 11, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Jeffrey Sipko, Christian M. Sadak, Aaron D. Krauss, John Benjamin Hesketh, Timothy D. Kviz
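The mode transition described above is simple enough to show as a sketch: the GUI starts in single display mode with nothing on the secondary display, and a multidisplay command splits it into a primary and a secondary view. Class and view names are illustrative, not from the patent.

```python
class MultiDisplayApp:
    """GUI that starts in single display mode and splits into a
    primary view and a secondary view on a multidisplay command."""

    def __init__(self):
        self.mode = "single"
        self.primary_display = "GUI"    # whole GUI on the primary display
        self.secondary_display = None   # nothing shown initially

    def on_multidisplay_command(self):
        # Transition to multiple display mode: primary view stays on
        # the primary display, secondary view goes to the secondary.
        self.mode = "multiple"
        self.primary_display = "primary view"
        self.secondary_display = "secondary view"

app = MultiDisplayApp()
print(app.secondary_display)   # None
app.on_multidisplay_command()
print(app.primary_display)     # primary view
print(app.secondary_display)   # secondary view
```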
-
Patent number: 10331190
Abstract: A mobile computing device is provided that includes a processor, an accelerometer, two or more display devices, and a housing including the processor, the accelerometer, and the two or more display devices. The processor is configured to determine a current user focus indicating that a first display device of the pair of display devices is being viewed by the user, and that a second display device of the pair of display devices is not being viewed by the user; detect a signature gesture input based on accelerometer data received via the accelerometer indicating that the mobile computing device has been rotated more than a threshold degree; determine that the current user focus has changed from the first display device to the second display device based on at least detecting the signature gesture input; and perform a predetermined action based on the current user focus.
Type: Grant
Filed: November 9, 2016
Date of Patent: June 25, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alexandre da Veiga, Roger Sebastian Sylvan, Jeffrey Sipko