Patents by Inventor Luke Cartwright

Luke Cartwright is named as an inventor on the following patent filings. The listing covers both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11183185
    Abstract: A method performed by a computing system for directing a voice command to a function associated with a visual target includes receiving a set of time-variable sensor-based data streams, including an audio data stream and a targeting data stream. The targeting data stream is stored in a buffer as buffered targeting data. Presence of a spoken utterance is identified within the audio data stream and is associated with a temporal identifier corresponding in time to the set of sensor-based data streams. A voice command corresponding to the spoken utterance is identified. A visual targeting vector within the buffered targeting data and a visual target of that visual targeting vector are identified at a time corresponding to the temporal identifier. The voice command is directed to a function associated with the visual target to generate an output.
    Type: Grant
    Filed: January 9, 2019
    Date of Patent: November 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Luke Cartwright, Alexander James Thaman, Alton Hau Kwong Kwok
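    Illustrative sketch (not the patented implementation): the core mechanism is replaying buffered targeting data at the moment an utterance began. The ring buffer of timestamped gaze vectors below, and the dispatch interface around it, are assumptions for illustration only.
      from bisect import bisect_left
      from collections import deque

      class TargetingBuffer:
          """Ring buffer of (timestamp, gaze_vector) samples; oldest drop first."""
          def __init__(self, capacity=300):
              self.samples = deque(maxlen=capacity)

          def append(self, timestamp, gaze_vector):
              self.samples.append((timestamp, gaze_vector))

          def vector_at(self, timestamp):
              """Return the buffered gaze vector closest in time to `timestamp`."""
              times = [t for t, _ in self.samples]
              i = bisect_left(times, timestamp)
              nearby = [j for j in (i - 1, i) if 0 <= j < len(times)]
              best = min(nearby, key=lambda j: abs(times[j] - timestamp))
              return self.samples[best][1]

      def direct_command(command, utterance_time, buffer, resolve_target, functions):
          """Dispatch `command` to the function of whatever the user was
          looking at when the utterance began (the temporal identifier)."""
          vector = buffer.vector_at(utterance_time)   # look back in the buffer
          target = resolve_target(vector)             # e.g. raycast into the scene
          return functions[target](command)           # route to the target's function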
  • Patent number: 10930275
    Abstract: A computing system is provided. The computing system includes a processor of a display device configured to execute one or more programs. The processor is configured to receive a command from a user by way of natural language input. The processor is configured to identify a set of candidate objects within or adjacent to a user's field of view having associated spatialized regions on which the command can be executed, the set of candidate objects identified at least partially by using a machine learning model. The processor is configured to use visual or audio indicators associated with the candidate objects and query the user for disambiguation input. The processor is configured to receive the disambiguation input from the user that selects a target object and to execute the command on the target object. The processor is configured to train the machine learning model using the disambiguation input and data about the spatialized regions.
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: February 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Luke Cartwright, Richard William Neal
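    Illustrative sketch of the query-and-train loop described above, assuming a generic scoring interface; SceneObject, CandidateModel, and the callbacks are hypothetical stand-ins, since the abstract names no concrete model or API.
      from dataclasses import dataclass

      @dataclass
      class SceneObject:
          name: str
          region: tuple                  # spatialized region, e.g. a bounding volume

      class CandidateModel:
          """Stand-in for the machine learning model in the abstract."""
          def score(self, command: str, region: tuple) -> float:
              return 0.5                 # placeholder confidence
          def update(self, command: str, region: tuple, chosen: bool) -> None:
              pass                       # would learn from disambiguation input

      def resolve_command(command, objects, model, indicate, ask_user):
          # Identify candidate objects whose spatialized regions could take the command.
          candidates = [o for o in objects if model.score(command, o.region) >= 0.5]
          if len(candidates) == 1:
              return candidates[0]
          for obj in candidates:
              indicate(obj)              # visual or audio indicator
          target = ask_user(candidates)  # the user's disambiguation input
          for obj in candidates:         # train on the user's choice
              model.update(command, obj.region, chosen=(obj is target))
          return target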
  • Patent number: 10789952
    Abstract: A computing system is provided. The computing system includes a processor of a display device configured to execute one or more programs. The processor is configured to receive, from a user, a voice command, a first auxiliary input from a first sensor, and a second auxiliary input from a second sensor. The processor is configured to, for each of a plurality of objects in the user's field of view in an environment, determine a first set of probability factors with respect to the first auxiliary input and a second set of probability factors with respect to the second auxiliary input. Each probability factor in the first and second sets indicates a likelihood that respective auxiliary inputs are directed to one of the plurality of objects. The processor is configured to determine a target object based upon the probability factors and execute the command on the target object.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: September 29, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Luke Cartwright, Richard William Neal, Alton Kwok
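    Conceptually, each auxiliary input yields one probability factor per visible object, and the two sets of factors are combined to pick a target. The sketch below assumes a normalize-and-multiply combination rule, which the abstract does not specify; the likelihood callbacks (e.g. for gaze and gesture) are illustrative.
      def probability_factors(objects, aux_input, likelihood):
          """One probability factor per object for a single auxiliary input."""
          raw = [likelihood(aux_input, obj) for obj in objects]
          total = sum(raw) or 1.0
          return [r / total for r in raw]              # normalize to a distribution

      def determine_target(objects, first_aux, second_aux, first_lh, second_lh):
          """Fuse both sets of factors and return the most likely target object."""
          p1 = probability_factors(objects, first_aux, first_lh)    # e.g. gaze
          p2 = probability_factors(objects, second_aux, second_lh)  # e.g. gesture
          combined = [a * b for a, b in zip(p1, p2)]   # assumed product rule
          return objects[max(range(len(objects)), key=combined.__getitem__)]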
  • Publication number: 20200219501
    Abstract: A method performed by a computing system for directing a voice command to a function associated with a visual target includes receiving a set of time-variable sensor-based data streams, including an audio data stream and a targeting data stream. The targeting data stream is stored in a buffer as buffered targeting data. Presence of a spoken utterance is identified within the audio data stream and is associated with a temporal identifier corresponding in time to the set of sensor-based data streams. A voice command corresponding to the spoken utterance is identified. A visual targeting vector within the buffered targeting data and a visual target of that visual targeting vector are identified at a time corresponding to the temporal identifier. The voice command is directed to a function associated with the visual target to generate an output.
    Type: Application
    Filed: January 9, 2019
    Publication date: July 9, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Luke CARTWRIGHT, Alexander James THAMAN, Alton Hau Kwong KWOK
  • Publication number: 20200202849
    Abstract: A computing system is provided. The computing system includes a processor of a display device configured to execute one or more programs. The processor is configured to receive, from a user, a voice command, a first auxiliary input from a first sensor, and a second auxiliary input from a second sensor. The processor is configured to, for each of a plurality of objects in the user's field of view in an environment, determine a first set of probability factors with respect to the first auxiliary input and a second set of probability factors with respect to the second auxiliary input. Each probability factor in the first and second sets indicates a likelihood that respective auxiliary inputs are directed to one of the plurality of objects. The processor is configured to determine a target object based upon the probability factors and execute the command on the target object.
    Type: Application
    Filed: December 20, 2018
    Publication date: June 25, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Luke CARTWRIGHT, Richard William NEAL, Alton KWOK
  • Publication number: 20200193976
    Abstract: A computing system is provided. The computing system includes a processor of a display device configured to execute one or more programs. The processor is configured to receive a command from a user by way of natural language input. The processor is configured to identify a set of candidate objects within or adjacent to a user's field of view having associated spatialized regions on which the command can be executed, the set of candidate objects identified at least partially by using a machine learning model. The processor is configured to use visual or audio indicators associated with the candidate objects and query the user for disambiguation input. The processor is configured to receive the disambiguation input from the user that selects a target object and to execute the command on the target object. The processor is configured to train the machine learning model using the disambiguation input and data about the spatialized regions.
    Type: Application
    Filed: December 18, 2018
    Publication date: June 18, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Luke CARTWRIGHT, Richard William NEAL
  • Patent number: 10600246
    Abstract: A method for presenting a physical environment in a virtual environment includes presenting a virtual environment to a user with a near-eye display, imaging a physical environment of the user, positioning a passthrough portal in the virtual environment, fixing a position of the passthrough portal in the virtual environment relative to the physical environment, and presenting a video feed of the physical environment in the passthrough portal in the virtual environment.
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: March 24, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Luke Cartwright, Marcelo Alonso Mejia Cobo, Misbah Uraizee
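    The key step is anchoring the portal to the physical room rather than to the user's head. A schematic per-frame sketch follows; the 4x4 pose representation and the drawing callback are assumptions, as the abstract does not describe an implementation.
      import numpy as np

      def render_portal(portal_pose_physical, head_pose, camera_frame, draw_quad):
          """Render the passthrough portal fixed relative to the physical room.

          portal_pose_physical: 4x4 pose of the portal in physical-world space
          head_pose: 4x4 pose of the headset in that same space
          """
          # Re-express the portal in view space each frame, so it stays put
          # in the room while the user's head moves around it.
          portal_in_view = np.linalg.inv(head_pose) @ portal_pose_physical
          # Present the live video feed of the physical environment on the portal.
          draw_quad(portal_in_view, texture=camera_frame)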
  • Publication number: 20190385368
    Abstract: A method for presenting a physical environment in a virtual environment includes presenting a virtual environment to a user with a near-eye display, imaging a physical environment of the user, positioning a passthrough portal in the virtual environment, fixing a position of the passthrough portal in the virtual environment relative to the physical environment, and presenting a video feed of the physical environment in the passthrough portal in the virtual environment.
    Type: Application
    Filed: August 9, 2018
    Publication date: December 19, 2019
    Inventors: Luke CARTWRIGHT, Marcelo Alonso MEJIA COBO, Misbah URAIZEE
  • Publication number: 20190385372
    Abstract: A method for presenting a physical environment in a virtual environment includes presenting a virtual environment to a user with a near-eye display, imaging a physical environment of the user, identifying at least one surface in the physical environment, positioning a passthrough portal in the virtual environment, a position of the passthrough portal having a z-distance from the user in the virtual environment that is equal to a z-distance of the at least one surface in the physical environment, and presenting a video feed of the physical environment in the passthrough portal in the virtual environment.
    Type: Application
    Filed: August 9, 2018
    Publication date: December 19, 2019
    Inventors: Luke CARTWRIGHT, Marcelo Alonso MEJIA COBO, Misbah URAIZEE, Nicholas Ferianc KAMUDA
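    This variant adds a depth constraint: the portal sits at the same z-distance as an identified physical surface, so depth cues agree when the feed is shown. A small sketch of one way to compute that placement, treating z-distance as depth along the viewing axis (an assumption):
      import numpy as np

      def place_portal(user_pos, view_dir, surface_point):
          """Place the portal at the z-distance of the physical surface."""
          view_dir = view_dir / np.linalg.norm(view_dir)
          # Depth of the surface along the user's viewing axis.
          z = float(np.dot(surface_point - user_pos, view_dir))
          return user_pos + view_dir * z               # portal center at that depth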
  • Publication number: 20150049085
    Abstract: Methods, systems, and computer-readable media for editing a mesh representing a surface are provided. The method includes receiving a representation of an object. The representation includes the mesh and a plurality of discrete elements comprising one or more boundary elements. The mesh is associated with the one or more boundary elements. The method also includes changing an edited element of the plurality of discrete elements from a boundary element to a non-boundary element or from a non-boundary element to a boundary element. The method also includes locally recalculating a portion of the mesh based on the changing.
    Type: Application
    Filed: August 14, 2014
    Publication date: February 19, 2015
    Inventors: Bjarte Dysvik, Luke Cartwright
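    The point of the local recalculation is to avoid rebuilding the whole surface after a single element changes. A minimal sketch of that edit flow, with hypothetical element and mesh types standing in for the application's actual data model:
      from dataclasses import dataclass

      @dataclass
      class Element:
          index: int
          is_boundary: bool = False

      def toggle_boundary(element, mesh, neighborhood, remesh_region):
          """Flip one element's boundary state and locally recalculate the mesh."""
          element.is_boundary = not element.is_boundary
          # Recompute only the mesh portion touching the edited element;
          # the rest of the surface is left untouched.
          region = neighborhood(mesh, element)
          remesh_region(mesh, region)
          return mesh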