Patents by Inventor Michael Edward HARNISCH

Michael Edward HARNISCH has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10949272
    Abstract: The disclosed technology executes a next operation in a set of associated application windows. A first application window and a second application window are added to the set. A first context is generated from content of the first application window. A selection of the content is detected from the first application window. The first context is communicated as input to the second application window, responsive to detecting the selection. The next operation in the second application window is executed using the first context as input to the next operation, responsive to communicating the first context.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: March 16, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Liang Chen, Michael Edward Harnisch, Jose Alberto Rodriguez, Steven Douglas Demar
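    The sketch below is a minimal, purely illustrative rendering of the flow this abstract describes, not the patented implementation; the `AppWindow` and `WindowSet` names, the dictionary-shaped context, and the callback-style next operation are assumptions made for the example.
    ```python
    # Illustrative sketch of the flow described in the abstract above (not the
    # patented implementation): a selection in one window produces a context that
    # is forwarded to a second window, which uses it as input to a next operation.

    class AppWindow:
        def __init__(self, name, next_operation=None):
            self.name = name
            self.next_operation = next_operation  # callable invoked with a context

        def receive_context(self, context):
            # Execute the next operation using the communicated context as input.
            if self.next_operation:
                return self.next_operation(context)


    class WindowSet:
        def __init__(self):
            self.windows = []

        def add(self, window):
            self.windows.append(window)

        def on_selection(self, source, selected_content):
            # Generate a context from the selected content of the source window...
            context = {"source": source.name, "content": selected_content}
            # ...and communicate it to the other windows in the set.
            return [w.receive_context(context) for w in self.windows if w is not source]


    if __name__ == "__main__":
        editor = AppWindow("editor")
        search = AppWindow("search",
                           next_operation=lambda ctx: f"searching for {ctx['content']!r}")
        window_set = WindowSet()
        window_set.add(editor)
        window_set.add(search)
        print(window_set.on_selection(editor, "quarterly report"))
    ```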
  • Patent number: 10860088
    Abstract: Methods and devices for initiating application and system modal control of a computer device based on predicted locations of a hand outside of a field of view of the computer device are disclosed. The method includes receiving hand motion information from a hand positional tracking system that tracks a position of a hand of a user. The method also includes determining that the hand is outside a field of view of the computer device based on the hand motion information. The method further includes predicting a location of the hand while it is outside the field of view. The method also includes interacting with a secondary UX on the display based on the predicted location of the hand.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: December 8, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Felix Gerard Torquil Ifor Andrew, Michael Edward Harnisch, Liang Chen
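    The following sketch illustrates the general idea of the abstract above under simplifying assumptions; the constant-velocity extrapolation, the one-dimensional field-of-view bounds, and the function names are hypothetical and are not taken from the patent.
    ```python
    # Illustrative sketch (hypothetical names, not the patented method): extrapolate
    # a hand position from its last tracked motion once it leaves the field of view,
    # then pick a secondary UX target from the predicted location.

    FIELD_OF_VIEW = (-0.5, 0.5)  # assumed horizontal bounds, in meters

    def predict_position(last_position, velocity, elapsed_seconds):
        # Simple constant-velocity extrapolation of the untracked hand.
        return last_position + velocity * elapsed_seconds

    def in_field_of_view(x):
        return FIELD_OF_VIEW[0] <= x <= FIELD_OF_VIEW[1]

    def secondary_ux_target(predicted_x):
        # Map the predicted (off-screen) location to a secondary UX element.
        return "left-edge panel" if predicted_x < FIELD_OF_VIEW[0] else "right-edge panel"

    if __name__ == "__main__":
        last_x, velocity_x = 0.45, 0.3   # hand near the right edge, moving right
        x = predict_position(last_x, velocity_x, elapsed_seconds=0.5)
        if not in_field_of_view(x):
            print(f"hand predicted at x={x:.2f}; interact with {secondary_ux_target(x)}")
    ```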
  • Patent number: 10824294
    Abstract: Various methods and systems, for implementing three-dimensional resource integration, are provided. 3D resource integration includes integration of 3D resources into different types of functionality, such as operating system, file explorer, application, and augmented reality functionality. In operation, an indication to perform an operation with a 3D object is received. One or more 3D resource controls, associated with the operation, are accessed. The 3D resource control is a defined set of instructions on how to integrate 3D resources with 3D objects for generating 3D-based graphical interfaces associated with application features and operating system features. An input based on one or more control elements of the one or more 3D resource controls is received. The input includes the one or more control elements that operate to generate a 3D-based graphical interface for the operation. Based on receiving the input, the operation is executed with the 3D object and the 3D-based graphical interface.
    Type: Grant
    Filed: May 16, 2017
    Date of Patent: November 3, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Enrico William Guld, Jason A. Carter, Heather Joanne Alekson, Andrew Jackson Klein, David J. W. Seymour, Kathleen P. Mulcahy, Charla M. Pereira, Evan Lewis Jones, William Axel Olsen, Adam Roy Mitchell, Daniel Lee Osborn, Zachary D. Wiesnoski, Struan Andrew Robertson, Michael Edward Harnisch, William Robert Schnurr, Helen Joan Hem Lam, Darren Alexander Bennett, Kin Hang Chu
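    Below is a minimal sketch of how a registry of "3D resource controls" keyed by operation might be modeled, assuming hypothetical names (`ResourceControl3D`, `CONTROL_REGISTRY`) and a dictionary-based interface description; it is illustrative only and not the patented design.
    ```python
    # Illustrative sketch (hypothetical structure, not the patented design): a
    # "3D resource control" is modeled as a named set of control elements that
    # generate a 3D-based graphical interface for a given operation on a 3D object.

    class ResourceControl3D:
        def __init__(self, operation, control_elements):
            self.operation = operation
            self.control_elements = control_elements  # e.g. ["rotate-handle", "angle-readout"]

        def build_interface(self, obj_name):
            # Generate the 3D-based graphical interface for the operation.
            return {"object": obj_name, "operation": self.operation,
                    "elements": list(self.control_elements)}


    CONTROL_REGISTRY = {
        "insert": ResourceControl3D("insert", ["placement-grid", "snap-guide"]),
        "rotate": ResourceControl3D("rotate", ["rotate-handle", "angle-readout"]),
    }


    def execute_operation(operation, obj_name):
        # Access the resource control associated with the requested operation,
        # build its interface, then perform the operation alongside it.
        control = CONTROL_REGISTRY[operation]
        interface = control.build_interface(obj_name)
        return f"executed '{operation}' on {obj_name} with UI {interface['elements']}"


    if __name__ == "__main__":
        print(execute_operation("rotate", "chair.glb"))
    ```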
  • Publication number: 20190384622
    Abstract: In at least one implementation, the disclosed technology provides a method including tracking user activity in a set of associated application windows, including inactive application windows and at least one active application window executing an active application, and generating a prediction of one or more next functions based on the tracked user activity in the set of associated application windows. The one or more next functions are functions of the active application. The method further includes surfacing the one or more next functions by presenting one or more controls for the one or more next functions in a separate contextual tool window of the computing device and detecting user selection of a control of the one or more presented next functions. The method further includes executing the next function corresponding to the selected control in the active application in the set of associated application windows.
    Type: Application
    Filed: June 14, 2018
    Publication date: December 19, 2019
    Inventors: Liang CHEN, Michael Edward HARNISCH, Jose Alberto RODRIGUEZ, Steven Douglas DEMAR
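    A minimal sketch of the prediction-and-surfacing loop described above follows, assuming a simple frequency count over a tracked activity log; the log format, the `Counter`-based ranking, and the control naming are hypothetical choices for illustration.
    ```python
    # Illustrative sketch (hypothetical, frequency-based) of the prediction step
    # described in the abstract above: recent activity across the window set is
    # used to rank likely next functions of the active application.

    from collections import Counter

    def predict_next_functions(activity_log, top_n=3):
        # activity_log: list of (window, function) tuples tracked across the set.
        counts = Counter(function for _window, function in activity_log)
        return [function for function, _count in counts.most_common(top_n)]

    def surface_controls(next_functions):
        # Present one control per predicted function in a contextual tool window.
        return {f"control:{fn}": fn for fn in next_functions}

    if __name__ == "__main__":
        log = [("browser", "copy"), ("editor", "paste"), ("editor", "format-table"),
               ("browser", "copy"), ("editor", "paste")]
        controls = surface_controls(predict_next_functions(log))
        selected = "control:paste"          # user clicks a control in the tool window
        print(f"executing {controls[selected]!r} in the active application")
    ```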
  • Publication number: 20190384657
    Abstract: The disclosed technology executes a next operation in a set of associated application windows. A first application window and a second application window are added to the set. A first context is generated from content of the first application window. A selection of the content is detected from the first application window. The first context is communicated as input to the second application window, responsive to detecting the selection. The next operation in the second application window is executed using the first context as input to the next operation, responsive to communicating the first context.
    Type: Application
    Filed: June 14, 2018
    Publication date: December 19, 2019
    Inventors: Liang CHEN, Michael Edward HARNISCH, Jose Alberto RODRIGUEZ, Steven Douglas DEMAR
  • Publication number: 20190384460
    Abstract: The disclosed technology surfaces application functionality for an object in a user interface of a computing device. A context associated with the object is determined. A contextual tool window of the user interface presents user interfaces for one or more functions of one or more applications, based on the context, without launching any of the one or more applications in an application window. Selection by a user of one of the presented functions is detected through the contextual tool window in the user interface. The selected function is executed on the object without launching any of the one or more applications in an application window.
    Type: Application
    Filed: June 14, 2018
    Publication date: December 19, 2019
    Inventors: Michael Edward HARNISCH, Bojana OSTOJIC, Liang CHEN, Jose Alberto RODRIGUEZ, Steven Douglas DEMAR, Lori Beth KRATZER
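    The sketch below illustrates, under assumed names and a hand-written context-to-function mapping, how a function could be surfaced and executed on an object without launching its parent application; it is not the published implementation.
    ```python
    # Illustrative sketch (hypothetical mapping, not the published design): an
    # object's context selects lightweight functions that can run on the object
    # without launching their parent applications in an application window.

    CONTEXT_FUNCTIONS = {
        "image": {"crop": lambda obj: f"cropped {obj}", "rotate": lambda obj: f"rotated {obj}"},
        "document": {"print": lambda obj: f"printed {obj}", "share": lambda obj: f"shared {obj}"},
    }

    def contextual_tool_window(obj, context):
        # Present the functions available for this context in a tool window.
        return CONTEXT_FUNCTIONS.get(context, {})

    def execute_selected(obj, context, selection):
        functions = contextual_tool_window(obj, context)
        return functions[selection](obj)  # runs in place, no application window launched

    if __name__ == "__main__":
        print(execute_selected("photo.png", context="image", selection="crop"))
    ```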
  • Publication number: 20190384621
    Abstract: The disclosed technology predicts and presents a next operation for a set window of associated application windows. An operation prediction system adds multiple associated application windows to the set window, generates a prediction of one or more next operation options based on the associated application windows of the set window, presents one or more controls for the one or more next operation options in the user interface of the computing device, detects user selection of a control of the presented next operation options, and, responsive to the detection, executes in the set window the next operation option corresponding to the selected control.
    Type: Application
    Filed: June 14, 2018
    Publication date: December 19, 2019
    Inventors: Liang CHEN, Michael Edward HARNISCH, Jose Alberto RODRIGUEZ, Steven Douglas DEMAR
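    A short illustrative sketch follows, assuming a hand-written rule table keyed on the composition of the set window; the rules, names, and operation strings are hypothetical and stand in for whatever prediction logic the publication actually covers.
    ```python
    # Illustrative sketch (hypothetical rules, not the published method): next
    # operation options are predicted from which applications make up the set
    # window, and the option chosen by the user is executed inside that set window.

    PAIR_RULES = {
        frozenset({"browser", "spreadsheet"}): ["import table", "copy link"],
        frozenset({"mail", "calendar"}): ["create meeting", "attach agenda"],
    }

    def predict_operations(set_window_apps):
        # Look up operation options for the combination of apps in the set window.
        return PAIR_RULES.get(frozenset(set_window_apps), ["snap side by side"])

    if __name__ == "__main__":
        options = predict_operations(["browser", "spreadsheet"])
        print("controls presented:", options)
        selected = options[0]               # user selects one of the presented controls
        print(f"executing {selected!r} in the set window")
    ```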
  • Publication number: 20190340821
    Abstract: The described technology provides for user-initiated re-mapping of virtual objects between different surfaces within a field-of-view of a user interacting with a processing device operating in a three-dimensional use mode. According to one implementation, a system disclosed herein includes a virtual content surface re-mapper stored in memory and executable by a processor to receive user input selecting one or more virtual objects presented on a virtual interface of an application; identify one or more surfaces within a field-of-view of the user that are external to the application; and present a surface selection prompt requesting user selection of one of the identified surfaces. Responsive to receipt of a surface selection made via the surface selection prompt, the virtual content surface re-mapper projects the one or more selected virtual objects onto a plane corresponding to the designated surface.
    Type: Application
    Filed: May 24, 2018
    Publication date: November 7, 2019
    Inventors: Liang CHEN, Michael Edward HARNISCH, Jose Alberto RODRIGUEZ, Steven Douglas DEMAR
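    The sketch below shows one way the re-mapping step could be illustrated geometrically, assuming unit surface normals and hypothetical surface data; the projection helper and the prompt handling are examples only, not the published method.
    ```python
    # Illustrative sketch (hypothetical geometry helper, not the published code):
    # a selected virtual object is re-mapped by projecting its position onto the
    # plane of a surface the user picks from those identified in the field of view.

    def project_onto_plane(point, plane_point, plane_normal):
        # Orthogonal projection of a 3D point onto the plane defined by
        # (plane_point, plane_normal); plane_normal is assumed to be unit length.
        px, py, pz = point
        qx, qy, qz = plane_point
        nx, ny, nz = plane_normal
        d = (px - qx) * nx + (py - qy) * ny + (pz - qz) * nz
        return (px - d * nx, py - d * ny, pz - d * nz)

    if __name__ == "__main__":
        surfaces = {
            "desk": {"point": (0.0, 0.7, 0.0), "normal": (0.0, 1.0, 0.0)},
            "wall": {"point": (0.0, 0.0, 2.0), "normal": (0.0, 0.0, -1.0)},
        }
        selection = "wall"                       # response to the surface selection prompt
        virtual_object_position = (0.3, 1.2, 0.5)
        s = surfaces[selection]
        print("re-mapped to",
              project_onto_plane(virtual_object_position, s["point"], s["normal"]))
    ```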
  • Publication number: 20190339767
    Abstract: Methods and devices for initiating application and system modal control of a computer device based on predicted locations of a hand outside of a field of view of the computer device are disclosed. The method includes receiving hand motion information from a hand positional tracking system that tracks a position of a hand of a user. The method also includes determining that the hand is outside a field of view of the computer device based on the hand motion information. The method further includes predicting a location of the hand while it is outside the field of view. The method also includes interacting with a secondary UX on the display based on the predicted location of the hand.
    Type: Application
    Filed: August 31, 2018
    Publication date: November 7, 2019
    Inventors: Felix Gerard Torquil Ifor ANDREW, Michael Edward HARNISCH, Liang CHEN
  • Publication number: 20180113597
    Abstract: Various methods and systems, for implementing three-dimensional resource integration, are provided. 3D resource integration includes integration of 3D resources into different types of functionality, such as operating system, file explorer, application, and augmented reality functionality. In operation, an indication to perform an operation with a 3D object is received. One or more 3D resource controls, associated with the operation, are accessed. The 3D resource control is a defined set of instructions on how to integrate 3D resources with 3D objects for generating 3D-based graphical interfaces associated with application features and operating system features. An input based on one or more control elements of the one or more 3D resource controls is received. The input includes the one or more control elements that operate to generate a 3D-based graphical interface for the operation. Based on receiving the input, the operation is executed with the 3D object and the 3D-based graphical interface.
    Type: Application
    Filed: May 16, 2017
    Publication date: April 26, 2018
    Inventors: Enrico William GULD, Jason A. CARTER, Heather Joanne ALEKSON, Andrew Jackson KLEIN, David J.W. SEYMOUR, Kathleen P. MULCAHY, Charla M. PEREIRA, Evan Lewis JONES, William Axel OLSEN, Adam Roy MITCHELL, Daniel Lee OSBORN, Zachary D. WIESNOSKI, Struan Andrew ROBERTSON, Michael Edward HARNISCH, William Robert SCHNURR, Helen Joan Hem LAM, Darren Alexander BENNETT, Kin Hang CHU