Patents by Inventor Jeffrey Sipko

Jeffrey Sipko has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220256304
    Abstract: A wearable spatial audio device is provided. The wearable spatial audio device includes one or more audio speakers, one or more processors, and a storage machine holding instructions executable by the one or more processors. Map data is obtained for a real-world environment that includes one or more dynamic audio objects. A device-specific subset of audio tracks is obtained, and a device-specific spatialized audio mix of the device-specific subset of audio tracks that is based on the map data is obtained. An indication of a change in an environmental condition relative to one or more of the dynamic audio objects is received. The device-specific spatialized audio mix is adjusted based on the change in the environmental condition.
    Type: Application
    Filed: April 26, 2022
    Publication date: August 11, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Arthur Charles TOMLIN, Kendall Clark YORK, Jeffrey SIPKO, Adolfo HERNANDEZ SANTISTEBAN, Aaron Daniel KRAUSS, Andrew Frederick MUEHLHAUSEN
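The adjustment step this abstract describes can be sketched in a few lines of Python. The class, track names, and attenuation factor below are hypothetical illustrations, not from the patent: when an environmental change (here, occlusion) is reported relative to a dynamic audio object, the gains of that object's tracks in the device-specific mix are scaled down.

```python
from dataclasses import dataclass, field

@dataclass
class DynamicAudioObject:
    """A sound-emitting object placed in the mapped real-world environment."""
    name: str
    position: tuple  # (x, y, z) in the shared map

@dataclass
class SpatialAudioMix:
    """Device-specific mix: per-track gains keyed by track name."""
    gains: dict = field(default_factory=dict)

def adjust_mix(mix, obj, env_change):
    """Attenuate the tracks tied to a dynamic audio object when an
    environmental condition changes (e.g. a door closes between the
    listener and the object). The 0.3 factor is illustrative."""
    factor = 0.3 if env_change == "occluded" else 1.0
    for track in list(mix.gains):
        if track.startswith(obj.name):
            mix.gains[track] *= factor
    return mix

# Hypothetical device-specific subset of audio tracks.
mix = SpatialAudioMix(gains={"tv.left": 1.0, "tv.right": 1.0, "radio.main": 0.8})
tv = DynamicAudioObject("tv", (2.0, 0.0, 1.5))
adjust_mix(mix, tv, "occluded")
```

Only the occluded object's tracks are touched; unrelated tracks keep their gains.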
  • Patent number: 11343633
    Abstract: A wearable spatial audio device is provided. The wearable spatial audio device includes one or more audio speakers, one or more processors, and a storage machine holding instructions executable by the one or more processors. Map data is obtained for a real-world environment that includes one or more dynamic audio objects. A device-specific subset of audio tracks is obtained, and a device-specific spatialized audio mix of the device-specific subset of audio tracks that is based on the map data is obtained. An indication of a change in an environmental condition relative to one or more of the dynamic audio objects is received. The device-specific spatialized audio mix is adjusted based on the change in the environmental condition.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: May 24, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Arthur Charles Tomlin, Kendall Clark York, Jeffrey Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
  • Patent number: 10825241
    Abstract: A wearable device is configured with a one-dimensional depth sensor (e.g., a LIDAR system) that scans a physical environment, in which the wearable device and depth sensor generate a point cloud structure using scanned points of the physical environment to develop blueprints for a negative space of the environment. The negative space includes permanent structures (e.g., walls and floors), in which the blueprints distinguish permanent structures from temporary objects. The depth sensor is affixed in a static position on the wearable device and passively scans a room according to the gaze direction of the user. Over a period of days, weeks, months, or years, the device continues to supplement the point cloud structure and update points therein. Thus, as the user continues to navigate the physical environment, over time, the point cloud data structure develops an accurate blueprint of the environment.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: November 3, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jeffrey Sipko, Kendall Clark York, John Benjamin Hesketh, Kenneth Liam Kiemele, Bryant Daniel Hawthorne
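The long-running blueprint accumulation can be illustrated with a coarse voxel-count sketch; the class name, cell size, and permanence threshold are hypothetical, not from the patent. Cells of the environment that keep appearing in depth scans across many sessions are treated as permanent structure (walls, floors), while rarely-seen cells are treated as temporary objects.

```python
from collections import defaultdict

class BlueprintBuilder:
    """Accumulates depth-scan points into a coarse voxel grid; cells
    observed at least `permanence_threshold` times are treated as
    permanent structure, rarely-seen cells as temporary objects."""
    def __init__(self, cell=0.5, permanence_threshold=10):
        self.counts = defaultdict(int)
        self.cell = cell
        self.threshold = permanence_threshold

    def _key(self, point):
        # Quantize a 3-D point to its voxel cell.
        return tuple(int(c // self.cell) for c in point)

    def add_scan(self, points):
        for p in points:
            self.counts[self._key(p)] += 1

    def permanent_cells(self):
        return {k for k, n in self.counts.items() if n >= self.threshold}

b = BlueprintBuilder()
for _ in range(10):                       # ten scanning sessions
    b.add_scan([(0.1, 0.2, 0.0), (4.9, 0.2, 0.0)])  # walls, seen every time
b.add_scan([(2.5, 1.0, 0.0)])             # a temporary object, seen once
```

After repeated sessions, only the repeatedly-observed cells enter the blueprint.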
  • Publication number: 20200288263
    Abstract: A wearable spatial audio device is provided. The wearable spatial audio device includes one or more audio speakers, one or more processors, and a storage machine holding instructions executable by the one or more processors. Map data is obtained for a real-world environment that includes one or more dynamic audio objects. A device-specific subset of audio tracks is obtained, and a device-specific spatialized audio mix of the device-specific subset of audio tracks that is based on the map data is obtained. An indication of a change in an environmental condition relative to one or more of the dynamic audio objects is received. The device-specific spatialized audio mix is adjusted based on the change in the environmental condition.
    Type: Application
    Filed: May 27, 2020
    Publication date: September 10, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Arthur Charles TOMLIN, Kendall Clark YORK, Jeffrey SIPKO, Adolfo HERNANDEZ SANTISTEBAN, Aaron Daniel KRAUSS, Andrew Frederick MUEHLHAUSEN
  • Patent number: 10694311
    Abstract: Examples are disclosed relating to providing spatialized audio to multiple users. In one example, a computing device presents spatialized audio to multiple users within an environment via communicative connection to one or more wearable spatial audio output devices. For each communicatively connected wearable spatial audio output device, a user-specific subset of audio tracks is generated from a set of audio tracks for a dynamic audio object positioned within the environment based on one or more user-specific parameters. A location of the wearable spatial audio output device is determined relative to the dynamic audio object, and based upon this location, a device-specific spatialized audio mix is generated that includes the user-specific subset of audio tracks. The device-specific spatialized audio mixes are sent to the wearable spatial audio output devices, and playback of the device-specific spatialized audio mixes is synchronously initiated at each wearable spatial audio output device.
    Type: Grant
    Filed: March 15, 2018
    Date of Patent: June 23, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Arthur Charles Tomlin, Kendall Clark York, Jeffrey Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
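The two generation steps in this abstract can be pictured with a short sketch; the track format, language-based user parameter, and distance-based gain model are hypothetical stand-ins, not the patent's specification. A user-specific subset is filtered from the object's track set, then a device-specific gain is applied from the device's location relative to the object.

```python
import math

def user_subset(tracks, user_params):
    """Select the user-specific subset of a dynamic audio object's
    tracks, e.g. by the user's preferred language (illustrative)."""
    return [t for t in tracks
            if t.get("lang", "any") in ("any", user_params["lang"])]

def device_mix(subset, device_pos, object_pos):
    """Build a device-specific mix: gain falls off with the device's
    distance from the dynamic audio object (toy 1/(1+d) model)."""
    d = math.dist(device_pos, object_pos)
    gain = 1.0 / (1.0 + d)
    return [{**t, "gain": gain} for t in subset]

tracks = [
    {"name": "narration.en", "lang": "en"},
    {"name": "narration.fr", "lang": "fr"},
    {"name": "music", "lang": "any"},
]
subset = user_subset(tracks, {"lang": "en"})
mix = device_mix(subset, (0.0, 0.0, 0.0), (3.0, 0.0, 0.0))
```

Each connected device would get its own `mix`, after which playback is initiated synchronously across devices.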
  • Patent number: 10674305
    Abstract: The disclosed technology provides multi-dimensional audio output by providing a relative physical location of an audio transmitting device relative to an audio outputting device in a shared map of physical space shared between the audio transmitting device and the audio outputting device. An orientation of the audio outputting device relative to the audio transmitting device is determined and an audio signal received from the audio transmitting device via a communication network is processed using the determined orientation of the audio outputting device relative to the audio transmitting device and the relative physical location of the audio transmitting device to create an augmented audio signal. The augmented audio signal is output through at least one audio output on the audio outputting device in a manner indicating a relative physical direction of the audio transmitting device to the audio outputting device in the shared map of the physical space.
    Type: Grant
    Filed: March 15, 2018
    Date of Patent: June 2, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kendall Clark York, Jeffrey Sipko, Aaron Krauss, Andrew F. Muehlhausen, Adolfo Hernandez Santisteban, Arthur C. Tomlin
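The orientation-based augmentation this abstract describes can be sketched as simple stereo panning from the relative bearing of the transmitting device in the shared map. This is a toy model under assumed 2-D coordinates; the patent does not specify this math.

```python
import math

def pan_for_direction(listener_pos, listener_yaw, source_pos):
    """Return (left_gain, right_gain) that pans an incoming audio
    signal toward the direction of the transmitting device in the
    shared map. Positive relative bearing = source to the right."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    bearing = math.atan2(dy, dx) - listener_yaw
    pan = math.sin(bearing)        # -1 (hard left) .. +1 (hard right)
    left = (1.0 - pan) / 2.0
    right = (1.0 + pan) / 2.0
    return left, right
```

A source directly ahead yields equal gains; a source off to one side shifts the output toward that ear, indicating its physical direction.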
  • Patent number: 10645525
    Abstract: Examples are disclosed that relate to devices and methods for sharing geo-located information between different devices. In one example, a method comprises receiving the geo-located information from a first user device having a first data type compatible with a first output component of the device, receiving first sensor data from the first device, determining a location of the geo-located information within a coordinate system in a physical environment, determining that a second user device is located in the physical environment, determining that the second device does not comprise an output component that is compatible with the first data type, transforming the geo-located information into a second data type compatible with a second output component of the second device, determining that the second device is proximate to the location of the geo-located information, and sending the geo-located information to the second device for output by the second output component.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: May 5, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Kenneth Liam Kiemele, Donna Katherine Long, Bryant Daniel Hawthorne, Anthony Ernst, Kendall Clark York, Jeffrey Sipko, Janet Lynn Schneider, Christian Michael Sadak, Stephen G. Latta
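The data-type transformation step can be sketched with a registry of converter functions keyed by (source type, target type); the text-to-audio converter below is a hypothetical stand-in for, e.g., text-to-speech, and none of the names come from the patent.

```python
def transform(info, converters, target_type):
    """Transform geo-located information into a data type the second
    device can output, using a registry of converter functions."""
    if info["type"] == target_type:
        return info  # already compatible, nothing to do
    key = (info["type"], target_type)
    if key not in converters:
        raise ValueError(f"no converter for {key}")
    return {"type": target_type, "payload": converters[key](info["payload"])}

# Hypothetical registry: text notes can be rendered as audio.
converters = {("text", "audio"): lambda s: f"tts:{s}"}
note = {"type": "text", "payload": "meet here"}
```

When the second device nears the note's location but lacks a text-capable output component, `transform(note, converters, "audio")` produces a payload its audio output can play.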
  • Patent number: 10591974
    Abstract: A mobile computing device is provided that includes a processor, an accelerometer, two or more display devices, and a housing including the processor, the accelerometer, and the two or more display devices. The processor is configured to determine a current user focus indicating that a first display device of the pair of display devices is being viewed by the user and that a second display device of the pair of display devices is not being viewed by the user, detect a signature gesture input based on accelerometer data received via the accelerometer detecting that the mobile computing device has been rotated more than a threshold degree, determine that the current user focus has changed from the first display device to the second display device based on at least detecting the signature gesture input, and perform a predetermined action based on the current user focus.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: March 17, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Alexandre da Veiga, Roger Sebastian Sylvan, Jeffrey Sipko
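Once the accelerometer reports how far the device has rotated, the focus-switching logic reduces to a small state update; the 120-degree threshold and display labels below are illustrative, not from the patent.

```python
def update_focus(focus, rotation_deg, threshold_deg=120.0):
    """If the measured rotation exceeds the threshold, treat it as the
    signature flip gesture and move focus to the other display;
    otherwise keep the current focus."""
    if abs(rotation_deg) > threshold_deg:
        return "second" if focus == "first" else "first"
    return focus
```

A large flip switches focus (after which the device would perform its predetermined action, such as waking the newly viewed display), while small jostles leave it unchanged.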
  • Patent number: 10571279
    Abstract: A wearable device is configured with various sensory devices that recurrently monitor and gather data for a physical environment surrounding a user to help locate and track real-world objects. The various heterogeneous sensory devices digitize objects and the physical world. Each sensory device is configured with a threshold data change, in which, when the data picked up by one or more sensory devices surpasses the threshold, a query is performed on each sensor graph or sensory device. The queried sensor graph data is stored within a node in a spatial graph, in which nodes are connected to each other using edges to create spatial relationships between objects and spaces. Objects can be uploaded into an object graph associated with the spatial graph, in which the objects are digitized with each of the available sensors. This digital information can be subsequently used to, for example, locate the object.
    Type: Grant
    Filed: May 2, 2019
    Date of Patent: February 25, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jeffrey Sipko, Kendall Clark York, John Benjamin Hesketh, Hubert Van Hoof
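A minimal sketch of the threshold-triggered spatial graph described above, with hypothetical sensor names, thresholds, and node/edge representations: a sensor snapshot is stored in a node only when a reading changes by more than that sensor's configured threshold, and edges record spatial relationships between nodes.

```python
class SpatialGraph:
    """Nodes hold queried sensor snapshots; edges encode spatial
    relationships between objects and spaces."""
    def __init__(self):
        self.nodes = {}
        self.edges = []

    def observe(self, sensor, value, last_value, threshold, node_id):
        """When a sensor reading changes by more than its configured
        threshold, query the sensor and store the snapshot in a node.
        Returns True if a query was triggered."""
        if abs(value - last_value) > threshold:
            self.nodes.setdefault(node_id, {})[sensor] = value
            return True
        return False

    def connect(self, node_a, node_b, relation):
        """Record a spatial relationship, e.g. ('kitchen', 'hall', 'adjacent')."""
        self.edges.append((node_a, node_b, relation))

g = SpatialGraph()
g.observe("temperature", 25.0, 20.0, 2.0, "kitchen")   # exceeds threshold
g.observe("temperature", 25.5, 25.0, 2.0, "kitchen")   # within threshold
g.connect("kitchen", "hall", "adjacent")
```

Digitized objects could then be attached to nodes in an associated object graph and located later by querying the stored snapshots.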
  • Publication number: 20190349706
    Abstract: Examples are disclosed that relate to devices and methods for sharing geo-located information between different devices. In one example, a method comprises receiving the geo-located information from a first user device having a first data type compatible with a first output component of the device, receiving first sensor data from the first device, determining a location of the geo-located information within a coordinate system in a physical environment, determining that a second user device is located in the physical environment, determining that the second device does not comprise an output component that is compatible with the first data type, transforming the geo-located information into a second data type compatible with a second output component of the second device, determining that the second device is proximate to the location of the geo-located information, and sending the geo-located information to the second device for output by the second output component.
    Type: Application
    Filed: July 24, 2019
    Publication date: November 14, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Kenneth Liam KIEMELE, Donna Katherine LONG, Bryant Daniel HAWTHORNE, Anthony ERNST, Kendall Clark YORK, Jeffrey SIPKO, Janet Lynn SCHNEIDER, Christian Michael SADAK, Stephen G. LATTA
  • Publication number: 20190342696
    Abstract: Examples are disclosed that relate to devices and methods for sharing geo-located information between different devices. In one example, a method comprises receiving the geo-located information from a first user device having a first data type compatible with a first output component of the device, receiving first sensor data from the first device, determining a location of the geo-located information within a coordinate system in a physical environment, determining that a second user device is located in the physical environment, determining that the second device does not comprise an output component that is compatible with the first data type, transforming the geo-located information into a second data type compatible with a second output component of the second device, determining that the second device is proximate to the location of the geo-located information, and sending the geo-located information to the second device for output by the second output component.
    Type: Application
    Filed: May 4, 2018
    Publication date: November 7, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Kenneth Liam KIEMELE, Donna Katherine LONG, Bryant Daniel HAWTHORNE, Anthony ERNST, Kendall Clark YORK, Jeffrey SIPKO, Janet Lynn SCHNEIDER, Christian Michael SADAK, Stephen G. LATTA
  • Patent number: 10455351
    Abstract: Examples are disclosed that relate to devices and methods for sharing geo-located information between different devices. In one example, a method comprises receiving the geo-located information from a first user device having a first data type compatible with a first output component of the device, receiving first sensor data from the first device, determining a location of the geo-located information within a coordinate system in a physical environment, determining that a second user device is located in the physical environment, determining that the second device does not comprise an output component that is compatible with the first data type, transforming the geo-located information into a second data type compatible with a second output component of the second device, determining that the second device is proximate to the location of the geo-located information, and sending the geo-located information to the second device for output by the second output component.
    Type: Grant
    Filed: May 4, 2018
    Date of Patent: October 22, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Kenneth Liam Kiemele, Donna Katherine Long, Bryant Daniel Hawthorne, Anthony Ernst, Kendall Clark York, Jeffrey Sipko, Janet Lynn Schneider, Christian Michael Sadak, Stephen G. Latta
  • Publication number: 20190285417
    Abstract: A wearable device is configured with various sensory devices that recurrently monitor and gather data for a physical environment surrounding a user to help locate and track real-world objects. The various heterogeneous sensory devices digitize objects and the physical world. Each sensory device is configured with a threshold data change, in which, when the data picked up by one or more sensory devices surpasses the threshold, a query is performed on each sensor graph or sensory device. The queried sensor graph data is stored within a node in a spatial graph, in which nodes are connected to each other using edges to create spatial relationships between objects and spaces. Objects can be uploaded into an object graph associated with the spatial graph, in which the objects are digitized with each of the available sensors. This digital information can be subsequently used to, for example, locate the object.
    Type: Application
    Filed: May 2, 2019
    Publication date: September 19, 2019
    Inventors: Jeffrey SIPKO, Kendall Clark YORK, John Benjamin HESKETH, Hubert VAN HOOF
  • Publication number: 20190289416
    Abstract: The disclosed technology provides multi-dimensional audio output by providing a relative physical location of an audio transmitting device relative to an audio outputting device in a shared map of physical space shared between the audio transmitting device and the audio outputting device. An orientation of the audio outputting device relative to the audio transmitting device is determined and an audio signal received from the audio transmitting device via a communication network is processed using the determined orientation of the audio outputting device relative to the audio transmitting device and the relative physical location of the audio transmitting device to create an augmented audio signal. The augmented audio signal is output through at least one audio output on the audio outputting device in a manner indicating a relative physical direction of the audio transmitting device to the audio outputting device in the shared map of the physical space.
    Type: Application
    Filed: March 15, 2018
    Publication date: September 19, 2019
    Inventors: Kendall Clark YORK, Jeffrey SIPKO, Aaron KRAUSS, Andrew F. MUEHLHAUSEN, Adolfo HERNANDEZ SANTISTEBAN, Arthur C. TOMLIN
  • Publication number: 20190286217
    Abstract: A mobile computing device is provided that includes a processor, an accelerometer, two or more display devices, and a housing including the processor, the accelerometer, and the two or more display devices. The processor is configured to determine a current user focus indicating that a first display device of the pair of display devices is being viewed by the user and that a second display device of the pair of display devices is not being viewed by the user, detect a signature gesture input based on accelerometer data received via the accelerometer detecting that the mobile computing device has been rotated more than a threshold degree, determine that the current user focus has changed from the first display device to the second display device based on at least detecting the signature gesture input, and perform a predetermined action based on the current user focus.
    Type: Application
    Filed: June 6, 2019
    Publication date: September 19, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Alexandre da Veiga, Roger Sebastian Sylvan, Jeffrey Sipko
  • Publication number: 20190289417
    Abstract: Examples are disclosed relating to providing spatialized audio to multiple users. In one example, a computing device presents spatialized audio to multiple users within an environment via communicative connection to one or more wearable spatial audio output devices. For each communicatively connected wearable spatial audio output device, a user-specific subset of audio tracks is generated from a set of audio tracks for a dynamic audio object positioned within the environment based on one or more user-specific parameters. A location of the wearable spatial audio output device is determined relative to the dynamic audio object, and based upon this location, a device-specific spatialized audio mix is generated that includes the user-specific subset of audio tracks. The device-specific spatialized audio mixes are sent to the wearable spatial audio output devices, and playback of the device-specific spatialized audio mixes is synchronously initiated at each wearable spatial audio output device.
    Type: Application
    Filed: March 15, 2018
    Publication date: September 19, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Arthur Charles Tomlin, Kendall Clark York, Jeffrey Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Andrew Frederick Muehlhausen
  • Publication number: 20190287296
    Abstract: A wearable device is configured with a one-dimensional depth sensor (e.g., a LIDAR system) that scans a physical environment, in which the wearable device and depth sensor generate a point cloud structure using scanned points of the physical environment to develop blueprints for a negative space of the environment. The negative space includes permanent structures (e.g., walls and floors), in which the blueprints distinguish permanent structures from temporary objects. The depth sensor is affixed in a static position on the wearable device and passively scans a room according to the gaze direction of the user. Over a period of days, weeks, months, or years, the device continues to supplement the point cloud structure and update points therein. Thus, as the user continues to navigate the physical environment, over time, the point cloud data structure develops an accurate blueprint of the environment.
    Type: Application
    Filed: March 16, 2018
    Publication date: September 19, 2019
    Inventors: Jeffrey SIPKO, Kendall Clark YORK, John Benjamin HESKETH, Kenneth Liam KIEMELE, Bryant Daniel HAWTHORNE
  • Publication number: 20190289396
    Abstract: The disclosed technology provides a spatial output device comprised of two electronics enclosures that are electrically connected by a flexible electronic connector. The two electronics enclosures are weighted to maintain a balanced position of the flexible connector against a support. The spatial output device has at least one input sensor affixed to one of the two electronics enclosures and an onboard processor affixed to one of the two electronics enclosures. The input sensor is configured to receive monocular input. The onboard processor is configured to process the monocular input to generate a spatial output, where the spatial output provides at least two-dimensional information.
    Type: Application
    Filed: March 15, 2018
    Publication date: September 19, 2019
    Inventors: Kendall Clark YORK, John B. HESKETH, Janet SCHNEIDER, Jeffrey SIPKO
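The monocular-input-to-spatial-output processing could, for example, use a pinhole-camera model to recover two-dimensional information (lateral offset and distance) from a single sensor. This is an illustrative model under assumed camera parameters, not the patent's method.

```python
def monocular_to_2d(pixel_x, apparent_size_px, true_size_m, focal_px, image_width):
    """Estimate (lateral offset, distance) in meters from a single
    camera: distance from apparent size via the pinhole model,
    lateral offset from the detection's pixel position."""
    distance = focal_px * true_size_m / apparent_size_px
    offset = (pixel_x - image_width / 2.0) * distance / focal_px
    return offset, distance

# A 0.2 m object seen 100 px wide, centered, with a 500 px focal length.
offset, distance = monocular_to_2d(320, 100, 0.2, 500, 640)
```

From one-dimensional (monocular) input, the onboard processor thus produces at least two-dimensional spatial output.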
  • Publication number: 20190212877
    Abstract: A computing device is provided that includes a primary display and a secondary display operatively coupled to a processor. The processor may be configured to execute an application program that has a GUI with a single display mode and a selectively displayable multiple display mode. The multiple display mode may include at least a primary view and a secondary view. In a single display mode, the processor may be configured to initially display the GUI on the primary display and not display the GUI on the secondary display. Upon receiving a multidisplay command to display the GUI in the multiple display mode, the processor may transition the application to the multiple display mode in which the primary view is displayed on the primary display, and the secondary view is displayed on the secondary display.
    Type: Application
    Filed: January 10, 2018
    Publication date: July 11, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jeffrey SIPKO, Christian M. SADAK, Aaron D. KRAUSS, John Benjamin HESKETH, Timothy D. KVIZ
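The single-to-multiple display mode transition described above is essentially a small state machine; the view names below are hypothetical placeholders.

```python
class AppWindow:
    """GUI that starts in single display mode (shown only on the
    primary display) and splits into primary/secondary views when a
    multidisplay command is received."""
    def __init__(self):
        self.mode = "single"
        self.primary = "full_gui"   # GUI on primary display only
        self.secondary = None       # nothing on secondary display

    def multidisplay(self):
        # Transition to multiple display mode: primary view on the
        # primary display, secondary view on the secondary display.
        self.mode = "multiple"
        self.primary = "primary_view"
        self.secondary = "secondary_view"

w = AppWindow()
w.multidisplay()
```

Before the command, the secondary display shows nothing; after it, each display carries its own view of the same application.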
  • Patent number: 10331190
    Abstract: A mobile computing device is provided that includes a processor, an accelerometer, two or more display devices, and a housing including the processor, the accelerometer, and the two or more display devices. The processor is configured to determine a current user focus indicating that a first display device of the pair of display devices is being viewed by the user and that a second display device of the pair of display devices is not being viewed by the user, detect a signature gesture input based on accelerometer data received via the accelerometer detecting that the mobile computing device has been rotated more than a threshold degree, determine that the current user focus has changed from the first display device to the second display device based on at least detecting the signature gesture input, and perform a predetermined action based on the current user focus.
    Type: Grant
    Filed: November 9, 2016
    Date of Patent: June 25, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Alexandre da Veiga, Roger Sebastian Sylvan, Jeffrey Sipko