Patents by Inventor Kenneth Liam KIEMELE
Kenneth Liam KIEMELE has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20190342696
Abstract: Examples are disclosed that relate to devices and methods for sharing geo-located information between different devices. In one example, a method comprises receiving the geo-located information from a first user device having a first data type compatible with a first output component of the device, receiving first sensor data from the first device, determining a location of the geo-located information within a coordinate system in a physical environment, determining that a second user device is located in the physical environment, determining that the second device does not comprise an output component that is compatible with the first data type, transforming the geo-located information into a second data type compatible with a second output component of the second device, determining that the second device is proximate to the location of the geo-located information, and sending the geo-located information to the second device for output by the second output component.
Type: Application
Filed: May 4, 2018
Publication date: November 7, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Kenneth Liam KIEMELE, Donna Katherine LONG, Bryant Daniel HAWTHORNE, Anthony ERNST, Kendall Clark YORK, Jeffrey SIPKO, Janet Lynn SCHNEIDER, Christian Michael SADAK, Stephen G. LATTA
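As a rough illustration of the claimed flow, the data-type transformation and proximity check might look like the following sketch. Every name, the device classes, and the distance threshold are hypothetical and are not taken from the filing.

```python
# Hypothetical sketch of the claimed sharing flow; names and the
# proximity radius are illustrative, not from the patent.
from dataclasses import dataclass
from math import dist

@dataclass
class Device:
    name: str
    position: tuple       # (x, y) in the shared coordinate system
    output_types: set     # data types its output components accept

def transform(info, target_type):
    """Placeholder conversion, e.g. 3D hologram -> audio description."""
    return {"payload": info["payload"], "type": target_type}

def share_geo_located(info, location, receiver, radius=5.0):
    """Send info to the receiver once it is proximate to the location,
    transforming it if the receiver lacks a compatible output component."""
    if info["type"] not in receiver.output_types:
        info = transform(info, next(iter(receiver.output_types)))
    if dist(receiver.position, location) <= radius:
        return info       # delivered for output on the second device
    return None           # receiver not yet close enough

note = {"type": "hologram", "payload": "Meet here at 3pm"}
phone = Device("phone", (2, 1), {"audio"})     # no hologram display
result = share_geo_located(note, (0, 0), phone)
```

Here the second device cannot render a hologram, so the payload is re-typed for its audio output before delivery.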
-
Publication number: 20190340567
Abstract: A wearable computing device is provided, comprising a camera and a microphone operatively coupled to a processor. Using both camera image data and speech recognition data, an object is detected and classified as an inventory item and inventory event. The inventory item and inventory event are subsequently recorded into an inventory database. Classifiers used to determine the inventory item and inventory event from the image data and speech may be cross-trained based on the relative confidence values associated with each.
Type: Application
Filed: June 26, 2018
Publication date: November 7, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Donna Katherine LONG, Kenneth Liam KIEMELE, Jennifer Jean CHOI, Jamie R. CABACCANG, John Benjamin HESKETH, Bryant Daniel HAWTHORNE, George Oliver JOHNSTON, Anthony ERNST
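The confidence-based cross-training idea can be sketched as follows. The class labels, the confidence margin, and the retraining record format are assumptions for illustration only.

```python
# Illustrative sketch of confidence-based cross-training between a
# vision classifier and a speech classifier; labels and the margin
# are assumptions, not taken from the patent.
def fuse_and_cross_train(vision_pred, speech_pred, margin=0.2):
    """Each prediction is (label, confidence). Return the fused label
    and, when one model is clearly more confident, a training example
    for the weaker model."""
    (v_label, v_conf), (s_label, s_conf) = vision_pred, speech_pred
    if v_conf >= s_conf:
        winner, loser, gap = v_label, "speech", v_conf - s_conf
    else:
        winner, loser, gap = s_label, "vision", s_conf - v_conf
    retrain = {"model": loser, "target": winner} if gap >= margin else None
    return winner, retrain

label, retrain = fuse_and_cross_train(("hammer", 0.9), ("drill", 0.4))
```

When the two models disagree but one is far more confident, its answer becomes a training target for the other; near-ties produce no retraining signal.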
-
Patent number: 10455351
Abstract: Examples are disclosed that relate to devices and methods for sharing geo-located information between different devices. In one example, a method comprises receiving the geo-located information from a first user device having a first data type compatible with a first output component of the device, receiving first sensor data from the first device, determining a location of the geo-located information within a coordinate system in a physical environment, determining that a second user device is located in the physical environment, determining that the second device does not comprise an output component that is compatible with the first data type, transforming the geo-located information into a second data type compatible with a second output component of the second device, determining that the second device is proximate to the location of the geo-located information, and sending the geo-located information to the second device for output by the second output component.
Type: Grant
Filed: May 4, 2018
Date of Patent: October 22, 2019
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Kenneth Liam Kiemele, Donna Katherine Long, Bryant Daniel Hawthorne, Anthony Ernst, Kendall Clark York, Jeffrey Sipko, Janet Lynn Schneider, Christian Michael Sadak, Stephen G. Latta
-
Publication number: 20190312917
Abstract: The technology described herein facilitates web-based co-presence collaboration conferences with user presence indicators that convey actions of users relative to a shared resource. A method for conducting a web-based co-presence collaboration conference includes selecting a form factor for a user presence indicator associated with an action of a first user identified based on data collected at one or more environmental sensors of a first co-presence collaboration device displaying the shared resource. The method further includes transmitting a presentation instruction to a second co-presence collaboration device displaying the shared resource concurrently with the first co-presence collaboration device. The presentation instruction instructs the second co-presence collaboration device to display the user presence indicator at a selected position relative to the shared resource and according to the selected form factor.
Type: Application
Filed: April 5, 2018
Publication date: October 10, 2019
Inventors: Jennifer Jean CHOI, Jamie R. CABACCANG, Kenneth Liam KIEMELE, Priya GANADAS
-
Publication number: 20190287296
Abstract: A wearable device is configured with a one-dimensional depth sensor (e.g., a LIDAR system) that scans a physical environment, in which the wearable device and depth sensor generate a point cloud structure using scanned points of the physical environment to develop blueprints for a negative space of the environment. The negative space includes permanent structures (e.g., walls and floors), in which the blueprints distinguish permanent structures from temporary objects. The depth sensor is affixed in a static position on the wearable device and passively scans a room according to the gaze direction of the user. Over a period of days, weeks, months, or years, the blueprint continues to supplement the point cloud structure and update points therein. Thus, as the user continues to navigate the physical environment, over time, the point cloud data structure develops an accurate blueprint of the environment.
Type: Application
Filed: March 16, 2018
Publication date: September 19, 2019
Inventors: Jeffrey SIPKO, Kendall Clark YORK, John Benjamin HESKETH, Kenneth Liam KIEMELE, Bryant Daniel HAWTHORNE
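One way to picture the accumulation of passive scans into a permanent/temporary distinction is a persistence count over observed points, as in this sketch. The grid representation and the persistence threshold are assumptions, not details from the filing.

```python
# A minimal sketch (assumed representation) of accumulating sparse
# depth samples into a persistence map: points seen across many scans
# are treated as permanent structure, the rest as temporary objects.
from collections import Counter

class Blueprint:
    def __init__(self, permanence_threshold=3):
        self.hits = Counter()
        self.threshold = permanence_threshold

    def add_scan(self, points):
        # Each passive one-dimensional scan contributes a few points
        # along the wearer's gaze direction.
        self.hits.update(points)

    def permanent_structure(self):
        return {p for p, n in self.hits.items() if n >= self.threshold}

bp = Blueprint()
wall = (0, 5)    # re-observed across many days -> permanent
chair = (2, 2)   # observed once -> temporary
for _ in range(4):
    bp.add_scan([wall])
bp.add_scan([chair])
```

Points that keep reappearing over long periods survive the threshold and end up in the blueprint; transient objects do not.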
-
Publication number: 20190272138
Abstract: A computing system is provided, including a plurality of display devices including at least a first display device and a second display device. The computing system may further include one or more sensors configured to detect a first positional state of the first display device relative to the second display device and at least one user. The first positional state may include an angular orientation of the first display device relative to the second display device. The computing system may further include a processor configured to receive the first positional state from the one or more sensors. The processor may be further configured to generate first graphical content based at least in part on the first positional state. The processor may be further configured to transmit the first graphical content for display at the first display device.
Type: Application
Filed: March 5, 2018
Publication date: September 5, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Aaron D. KRAUSS, Jamie R. CABACCANG, Jennifer J. CHOI, Michelle Tze Hiang CHUA, Priya GANADAS, Donna Katherine LONG, Kenneth Liam KIEMELE
-
Publication number: 20190243921
Abstract: A computer system is provided that includes a server configured to store a plurality of location accounts, each location account being associated with a physical space at a recorded geospatial location. The plurality of location accounts utilize shared data definitions of a physical space parameter. Each physical space is equipped with a corresponding on-premise sensor configured to detect measured values for the physical space parameter over time and send to the server a data stream indicating the measured values. The computer system further includes a network portal, via which an authorized user for a location account can selectively choose whether to share the measured values or a summary thereof with other location accounts via the network portal.
Type: Application
Filed: February 5, 2018
Publication date: August 8, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Donna Katherine LONG, Jennifer Jean CHOI, Priya GANADAS, Jamie R. CABACCANG, LaSean Tee SMITH, Kenneth Liam KIEMELE, Evan L. JONES, John Benjamin HESKETH, Bryant Daniel HAWTHORNE
-
Publication number: 20190220697
Abstract: Techniques for generating a machine learning model to detect event instances from physical sensor data, including applying a first machine learning model to first sensor data from a first physical sensor at a location to detect an event instance, determining that a performance metric for use of the first machine learning model is not within an expected parameter, obtaining second sensor data from a second physical sensor during a period of time at the same location as the first physical sensor, obtaining third sensor data from the first physical sensor during the period of time, generating location-specific training data by selecting portions of the third sensor data based on training event instances detected using the second sensor data, training a second ML model using the location-specific training data, and applying the second ML model instead of the first ML model for detecting event instances.
Type: Application
Filed: January 12, 2018
Publication date: July 18, 2019
Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Kenneth Liam KIEMELE, John Benjamin HESKETH, Evan Lewis JONES, James Lewis NANCE, LaSean Tee SMITH
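The retraining loop described above can be sketched in a few lines: a second sensor supplies labels for windows of first-sensor data when the first model underperforms. The metric name, the accuracy bound, and the toy training function are illustrative assumptions.

```python
# Hedged sketch of the claimed retraining loop; the metric, bound,
# and data shapes are illustrative assumptions.
def maybe_retrain(metric, expected_min, first_data, second_data, train_fn):
    """If the first model's metric falls outside the expected range,
    label windows of first-sensor data using events detected from the
    second sensor, then train a replacement model on them."""
    if metric >= expected_min:
        return None                  # first model still acceptable
    training_data = [
        (window, event)
        for window, event in zip(first_data, second_data)
        if event is not None         # keep only labeled windows
    ]
    return train_fn(training_data)

# Toy "training" that just memorizes the labeled windows.
model = maybe_retrain(
    metric=0.6, expected_min=0.8,
    first_data=["w1", "w2", "w3"],
    second_data=["door_open", None, "door_open"],
    train_fn=lambda data: dict(data),
)
```

The resulting second model is location-specific because both the windows and the labels were captured at the same site as the underperforming first model.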
-
APPARATUS AND METHODS OF AUTOMATED TRACKING AND COUNTING OF OBJECTS ON A RESOURCE-CONSTRAINED DEVICE
Publication number: 20190122381
Abstract: The present disclosure provides apparatus and methods for automated tracking and counting of objects in a set of image frames using a resource-constrained device based on analysis of a selected subset of image frames, and based on selectively timing when resource-intensive operations are performed.
Type: Application
Filed: February 21, 2018
Publication date: April 25, 2019
Inventors: Donna Katherine LONG, Arthur Charles TOMLIN, Kenneth Liam KIEMELE, John Benjamin HESKETH
-
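A frame-subsampling loop is one simple way to realize "analysis of a selected subset of image frames"; this sketch runs the expensive detector on every Nth frame and reuses the last result in between. The stride value and detector interface are assumptions, not details from the filing.

```python
# Illustrative frame-subsampling loop: the expensive detector runs
# only on every `stride`-th frame; skipped frames reuse the last
# count. Stride and interface are assumptions, not from the patent.
def count_objects(frames, detect, stride=5):
    """Return a per-frame count, invoking the resource-intensive
    detector only on a selected subset of frames."""
    counts, last = [], 0
    for i, frame in enumerate(frames):
        if i % stride == 0:
            last = detect(frame)   # resource-intensive operation
        counts.append(last)        # cheap reuse on skipped frames
    return counts

frames = list(range(10))
calls = []
def fake_detector(frame):
    calls.append(frame)
    return frame        # pretend the count equals the frame id
result = count_objects(frames, fake_detector)
```

On ten frames with a stride of five, the detector fires only twice, which is the kind of saving a resource-constrained device needs.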
Publication number: 20190065026
Abstract: Examples are disclosed herein that relate to receiving virtual reality input. An example provides a head-mounted display device comprising a sensor system, a display, a logic machine, and a storage machine holding instructions executable by the logic machine. The instructions are executable to execute a 3D virtual reality experience on the head-mounted display device, track, via the sensor system, a touch-sensitive input device, render, on the display, in a 3D location in the 3D virtual reality experience based on the tracking of the touch-sensitive input device, a user interface, receive, via a touch sensor of the touch-sensitive input device, a user input, and, in response to receiving the user input, control the 3D virtual reality experience to thereby vary visual content being rendered on the display.
Type: Application
Filed: August 24, 2017
Publication date: February 28, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Kenneth Liam KIEMELE, Michael Robert THOMAS, Alexandre DA VEIGA, Christian Michael SADAK, Bryant Daniel HAWTHORNE, Aaron D. KRAUSS, Aaron Mackay BURNS
-
Publication number: 20190041651
Abstract: Optimizations are provided for facilitating an improved transition between a real-world environment and a virtual reality environment. Initially, use of an HMD is detected and one or more real-world physical objects within a threshold proximity to the HMD are identified. Subsequently, a replicated environment, which includes virtual representations of the real-world physical object(s), is obtained and rendered in a virtual reality display. The replicated environment is transitioned out of view and a VR environment is subsequently rendered in the virtual reality display. In some instances, rendering of virtual representations of real-world physical objects into the VR environment occurs in response to a detected triggering event.
Type: Application
Filed: August 2, 2017
Publication date: February 7, 2019
Inventors: Kenneth Liam Kiemele, Michael Thomas, Charles W. Lapp, III, Christian Michael Sadak, Thomas Salter
-
Publication number: 20180348518
Abstract: Tracking a user head position detects a change to a new head position and, in response, a remote camera is instructed to move to a next camera position. A camera image frame, having an indication of camera position, is received from the camera. Upon the camera position not aligning with the next camera position, an assembled image frame is formed, using image data from past views, and rendered to appear to the user as if the camera moved in 1:1 alignment with the user's head to the next camera position.
Type: Application
Filed: June 5, 2017
Publication date: December 6, 2018
Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Alexandre DA VEIGA, Roger Sebastian Kevin SYLVAN, Kenneth Liam KIEMELE, Nikolai Michael FAALAND, Aaron Mackay BURNS
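The fallback to image data from past views can be pictured as a cache keyed by camera position, as in this sketch. The one-dimensional positions, the cache structure, and all names are hypothetical simplifications of the claimed method.

```python
# A minimal sketch (hypothetical names) of rendering from cached views
# when the remote camera lags behind the user's head position.
def render_view(head_pos, camera_pos, live_frame, cache):
    """If the camera has caught up to the head position, show the live
    frame; otherwise assemble output from previously captured views so
    motion still appears 1:1 with the head."""
    cache[camera_pos] = live_frame           # remember what was seen where
    if camera_pos == head_pos:
        return live_frame
    return cache.get(head_pos, live_frame)   # reuse a past view if any

cache = {}
# Camera aligned with the head: live frame is shown and cached.
first = render_view(head_pos=0, camera_pos=0, live_frame="frame@0",
                    cache=cache)
# The head moved to position 1, which was captured earlier, but the
# slow camera still reports position 0:
cache[1] = "frame@1"                         # a previously captured view
second = render_view(head_pos=1, camera_pos=0, live_frame="frame@0",
                     cache=cache)
```

While the physical camera catches up, the viewer keeps seeing imagery consistent with their head motion instead of a lagging live feed.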
-
Patent number: 10139631
Abstract: Tracking a user head position detects a change to a new head position and, in response, a remote camera is instructed to move to a next camera position. A camera image frame, having an indication of camera position, is received from the camera. Upon the camera position not aligning with the next camera position, an assembled image frame is formed, using image data from past views, and rendered to appear to the user as if the camera moved in 1:1 alignment with the user's head to the next camera position.
Type: Grant
Filed: June 5, 2017
Date of Patent: November 27, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alexandre Da Veiga, Roger Sebastian Kevin Sylvan, Kenneth Liam Kiemele, Nikolai Michael Faaland, Aaron Mackay Burns
-
Publication number: 20180332205
Abstract: To address issues of capturing and processing images, a mobile computing device is provided. The mobile computing device may include a two-part housing coupled by a hinge, with first and second parts that include first and second displays, respectively. The hinge may permit the displays to rotate throughout a plurality of angular orientations. The mobile computing device may include one or more sensor devices, processor, first camera, and second camera mounted in the housing. The one or more sensor devices may be configured to measure the relative angular displacement of the housing, and the processor may be configured to process images captured by the first and second cameras according to a selected function based upon the relative angular displacement measured by the one or more sensor devices.
Type: Application
Filed: June 23, 2017
Publication date: November 15, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Bryant Daniel HAWTHORNE, Mario Emmanuel MALTEZOS, Christian Michael SADAK, John Benjamin HESKETH, Andrew Austin JACKSON, Adolfo HERNANDEZ SANTISTEBAN, Kenneth Liam KIEMELE, Charlene JEUNE, Jeffrey R. SIPKO
-
Publication number: 20180329521
Abstract: To address the issues of presentation display, a mobile computing device is provided. The mobile computing device may include a two-part housing coupled by a hinge, with first and second parts that include first and second displays, respectively. The displays may rotate around the hinge throughout a plurality of angular orientations. The mobile computing device may include an angle sensor, one or more inertial measurement units, and a processor mounted in the housing. The angle sensor may detect a relative angular orientation of the first and second displays, and the inertial measurement unit may measure a spatial orientation of the device, which together define a posture of the device. The processor may be configured to execute an application program and, based on the posture of the device, select a display mode of the application program that defines a layout of graphical user interface elements displayed on the displays.
Type: Application
Filed: June 27, 2017
Publication date: November 15, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: John Benjamin HESKETH, Mario Emmanuel MALTEZOS, Kenneth Liam KIEMELE, Aaron D. KRAUSS, Charles W. LAPP, III, Charlene JEUNE, Bryant Daniel HAWTHORNE, Jeffrey R. SIPKO
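The posture-to-display-mode mapping can be sketched as a small decision function over the hinge angle and spatial orientation. The mode names and angle bands below are assumptions for illustration; the filing does not specify them.

```python
# Hedged sketch of mapping hinge angle plus spatial orientation to a
# display mode; mode names and angle bands are assumptions.
def select_display_mode(hinge_angle_deg, is_horizontal):
    """Combine the relative angle of the two displays with the
    device's spatial orientation to pick a layout for the UI."""
    if hinge_angle_deg < 180 and is_horizontal:
        return "laptop"    # one display flat, the other raised
    if hinge_angle_deg < 180:
        return "book"      # held upright like an open book
    if hinge_angle_deg == 180:
        return "flat"      # both displays form one surface
    return "tent"          # folded back past flat

mode = select_display_mode(120, is_horizontal=True)
```

The same hinge angle yields different modes depending on how the device is held, which is why the abstract pairs the angle sensor with inertial measurement units.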