Patents by Inventor Luke Cartwright
Luke Cartwright has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11183185
Abstract: A method performed by a computing system for directing a voice command to a function associated with a visual target includes receiving a set of time-variable sensor-based data streams, including an audio data stream and a targeting data stream. The targeting data stream is stored in a buffer as buffered targeting data. Presence of a spoken utterance is identified within the audio data stream and is associated with a temporal identifier corresponding in time to the set of sensor-based data streams. A voice command corresponding to the spoken utterance is identified. A visual targeting vector within the buffered targeting data and a visual target of that visual targeting vector are identified at a time corresponding to the temporal identifier. The voice command is directed to a function associated with the visual target to generate an output.
Type: Grant
Filed: January 9, 2019
Date of Patent: November 23, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Luke Cartwright, Alexander James Thaman, Alton Hau Kwong Kwok
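The method above is essentially a time-alignment problem: buffer the targeting stream, note when an utterance begins, and look up what the user was targeting at that moment. The Python sketch below illustrates that idea only; it is not the patented implementation, and every name in it (TargetingSample, GazeBuffer, dispatch_command, the handlers mapping) is a hypothetical stand-in.

```python
from bisect import bisect_left
from collections import deque
from dataclasses import dataclass

@dataclass
class TargetingSample:
    timestamp: float   # seconds, on the same clock as the audio stream (assumed)
    target_id: str     # object the targeting vector intersected at that time

class GazeBuffer:
    """Ring buffer of recent targeting samples, searchable by timestamp."""
    def __init__(self, max_samples: int = 300):
        self.samples = deque(maxlen=max_samples)

    def append(self, sample: TargetingSample) -> None:
        self.samples.append(sample)   # samples arrive in time order

    def target_at(self, t: float) -> str:
        """Return the target recorded closest in time to t."""
        times = [s.timestamp for s in self.samples]
        i = bisect_left(times, t)
        candidates = [self.samples[j] for j in (i - 1, i) if 0 <= j < len(times)]
        return min(candidates, key=lambda s: abs(s.timestamp - t)).target_id

def dispatch_command(command: str, utterance_time: float,
                     buffer: GazeBuffer, handlers: dict) -> None:
    """Direct a recognized voice command to the function bound to whatever
    the user was targeting when the utterance began."""
    handlers[buffer.target_at(utterance_time)](command)
```

A real system would share a clock between the audio and targeting pipelines and would resolve the buffered targeting vector against scene geometry rather than storing a precomputed target identifier.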
-
Patent number: 10930275
Abstract: A computing system is provided. The computing system includes a processor of a display device configured to execute one or more programs. The processor is configured to receive a command from a user by way of natural language input. The processor is configured to identify a set of candidate objects within or adjacent to the user's field of view having associated spatialized regions on which the command can be executed, the set of candidate objects identified at least partially by using a machine learning model. The processor is configured to use visual or audio indicators associated with the candidate objects and query the user for disambiguation input. The processor is configured to receive the disambiguation input from the user that selects a target object and execute the command on the target object. The processor is configured to train the machine learning model using the disambiguation input and data about the spatialized regions.
Type: Grant
Filed: December 18, 2018
Date of Patent: February 23, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Luke Cartwright, Richard William Neal
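As a rough illustration of the disambiguation loop in this abstract (rank candidates with a learned model, ask the user, then learn from the choice), here is a minimal sketch. It is not the claimed system; the linear scorer and all names (SpatialRegion, CandidateRanker, query_user, execute) are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SpatialRegion:
    object_id: str
    features: List[float]          # e.g. distance, angular offset, size

class CandidateRanker:
    """Toy learned scorer standing in for the machine learning model:
    a linear score over spatialized-region features, updated from the
    user's disambiguation choices."""
    def __init__(self, n_features: int, lr: float = 0.1):
        self.weights = [0.0] * n_features
        self.lr = lr

    def score(self, region: SpatialRegion) -> float:
        return sum(w * f for w, f in zip(self.weights, region.features))

    def train(self, chosen: SpatialRegion, rejected: List[SpatialRegion]) -> None:
        # Nudge weights toward the chosen region and away from the rejected ones.
        for i, f in enumerate(chosen.features):
            self.weights[i] += self.lr * f
        for r in rejected:
            for i, f in enumerate(r.features):
                self.weights[i] -= self.lr * f / max(len(rejected), 1)

def disambiguate(command: str, candidates: List[SpatialRegion],
                 ranker: CandidateRanker,
                 query_user: Callable[[List[SpatialRegion]], SpatialRegion],
                 execute: Callable[[str, str], None]) -> None:
    # Present the ranked candidates (visual/audio indicators elided) and
    # let the user pick; then execute the command and learn from the pick.
    chosen = query_user(sorted(candidates, key=ranker.score, reverse=True))
    execute(command, chosen.object_id)
    ranker.train(chosen, [c for c in candidates if c is not chosen])
```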
-
Patent number: 10789952
Abstract: A computing system is provided. The computing system includes a processor of a display device configured to execute one or more programs. The processor is configured to receive, from a user, a voice command, a first auxiliary input from a first sensor, and a second auxiliary input from a second sensor. The processor is configured to, for each of a plurality of objects in the user's field of view in an environment, determine a first set of probability factors with respect to the first auxiliary input and a second set of probability factors with respect to the second auxiliary input. Each probability factor in the first and second sets indicates a likelihood that respective auxiliary inputs are directed to one of the plurality of objects. The processor is configured to determine a target object based upon the probability factors and execute the command on the target object.
Type: Grant
Filed: December 20, 2018
Date of Patent: September 29, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Luke Cartwright, Richard William Neal, Alton Kwok
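The core of this abstract is fusing per-object probability factors from two auxiliary inputs (for example gaze and gesture) and executing the voice command on the most likely object. A minimal sketch of that fusion step, under assumed names and with a simple product used as the combination rule, might look like this:

```python
from typing import Callable, Dict, List

def fuse_probabilities(objects: List[str],
                       gaze_factors: Dict[str, float],
                       gesture_factors: Dict[str, float]) -> str:
    """Pick the object whose combined probability factors (here a simple
    product of the two per-object likelihoods) is highest."""
    combined = {obj: gaze_factors.get(obj, 0.0) * gesture_factors.get(obj, 0.0)
                for obj in objects}
    return max(combined, key=combined.get)

def execute_voice_command(command: str, objects: List[str],
                          gaze_factors: Dict[str, float],
                          gesture_factors: Dict[str, float],
                          execute: Callable[[str, str], None]) -> None:
    # Resolve the most likely target, then run the command on it.
    execute(command, fuse_probabilities(objects, gaze_factors, gesture_factors))
```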
-
Publication number: 20200219501
Abstract: A method performed by a computing system for directing a voice command to a function associated with a visual target includes receiving a set of time-variable sensor-based data streams, including an audio data stream and a targeting data stream. The targeting data stream is stored in a buffer as buffered targeting data. Presence of a spoken utterance is identified within the audio data stream and is associated with a temporal identifier corresponding in time to the set of sensor-based data streams. A voice command corresponding to the spoken utterance is identified. A visual targeting vector within the buffered targeting data and a visual target of that visual targeting vector are identified at a time corresponding to the temporal identifier. The voice command is directed to a function associated with the visual target to generate an output.
Type: Application
Filed: January 9, 2019
Publication date: July 9, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Luke Cartwright, Alexander James Thaman, Alton Hau Kwong Kwok
-
Publication number: 20200202849
Abstract: A computing system is provided. The computing system includes a processor of a display device configured to execute one or more programs. The processor is configured to receive, from a user, a voice command, a first auxiliary input from a first sensor, and a second auxiliary input from a second sensor. The processor is configured to, for each of a plurality of objects in the user's field of view in an environment, determine a first set of probability factors with respect to the first auxiliary input and a second set of probability factors with respect to the second auxiliary input. Each probability factor in the first and second sets indicates a likelihood that respective auxiliary inputs are directed to one of the plurality of objects. The processor is configured to determine a target object based upon the probability factors and execute the command on the target object.
Type: Application
Filed: December 20, 2018
Publication date: June 25, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Luke Cartwright, Richard William Neal, Alton Kwok
-
Publication number: 20200193976
Abstract: A computing system is provided. The computing system includes a processor of a display device configured to execute one or more programs. The processor is configured to receive a command from a user by way of natural language input. The processor is configured to identify a set of candidate objects within or adjacent to the user's field of view having associated spatialized regions on which the command can be executed, the set of candidate objects identified at least partially by using a machine learning model. The processor is configured to use visual or audio indicators associated with the candidate objects and query the user for disambiguation input. The processor is configured to receive the disambiguation input from the user that selects a target object and execute the command on the target object. The processor is configured to train the machine learning model using the disambiguation input and data about the spatialized regions.
Type: Application
Filed: December 18, 2018
Publication date: June 18, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Luke Cartwright, Richard William Neal
-
Patent number: 10600246
Abstract: A method for presenting a physical environment in a virtual environment includes presenting a virtual environment to a user with a near-eye display, imaging a physical environment of the user, positioning a passthrough portal in the virtual environment, fixing a position of the passthrough portal in the virtual environment relative to the physical environment, and presenting a video feed of the physical environment in the passthrough portal in the virtual environment.
Type: Grant
Filed: August 9, 2018
Date of Patent: March 24, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Luke Cartwright, Marcelo Alonso Mejia Cobo, Misbah Uraizee
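The key step described here is anchoring the passthrough portal to the physical room rather than to the virtual scene, so it stays put as the virtual environment moves around the user. The sketch below shows only that pose bookkeeping, using invented names (make_pose, portal_in_virtual) and NumPy transforms; a real implementation would live inside the headset's rendering engine and feed camera video into the portal region.

```python
import numpy as np

def make_pose(position, yaw=0.0):
    """4x4 rigid transform from a position and a rotation about the y axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    pose = np.eye(4)
    pose[:3, :3] = [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]
    pose[:3, 3] = position
    return pose

# Portal anchored at a fixed pose in physical (tracking) space.
portal_in_physical = make_pose([0.0, 1.5, -2.0])

def portal_in_virtual(physical_to_virtual: np.ndarray) -> np.ndarray:
    """Each frame, re-express the anchored portal pose in the virtual scene,
    so it stays glued to the physical room even as the virtual world shifts."""
    return physical_to_virtual @ portal_in_physical

# Example: the virtual scene is offset 3 m along x relative to the room.
scene_offset = make_pose([3.0, 0.0, 0.0])
print(portal_in_virtual(scene_offset)[:3, 3])  # portal position in the scene
```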
-
Publication number: 20190385368
Abstract: A method for presenting a physical environment in a virtual environment includes presenting a virtual environment to a user with a near-eye display, imaging a physical environment of the user, positioning a passthrough portal in the virtual environment, fixing a position of the passthrough portal in the virtual environment relative to the physical environment, and presenting a video feed of the physical environment in the passthrough portal in the virtual environment.
Type: Application
Filed: August 9, 2018
Publication date: December 19, 2019
Inventors: Luke Cartwright, Marcelo Alonso Mejia Cobo, Misbah Uraizee
-
Publication number: 20190385372
Abstract: A method for presenting a physical environment in a virtual environment includes presenting a virtual environment to a user with a near-eye display, imaging a physical environment of the user, identifying at least one surface in the physical environment, positioning a passthrough portal in the virtual environment, a position of the passthrough portal having a z-distance from the user in the virtual environment that is equal to a z-distance of the at least one surface in the physical environment, and presenting a video feed of the physical environment in the passthrough portal in the virtual environment.
Type: Application
Filed: August 9, 2018
Publication date: December 19, 2019
Inventors: Luke Cartwright, Marcelo Alonso Mejia Cobo, Misbah Uraizee, Nicholas Ferianc Kamuda
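This variant additionally matches the portal's depth to a detected physical surface, so the portal appears at the same z-distance as, say, a real wall. A small sketch of that placement, not the claimed method, assuming the surface depth is taken as the distance from the user to a point on the detected surface and using invented names (place_portal):

```python
import numpy as np

def place_portal(user_pos_virtual: np.ndarray,
                 view_dir_virtual: np.ndarray,
                 user_pos_physical: np.ndarray,
                 surface_point_physical: np.ndarray) -> np.ndarray:
    """Return a portal position along the user's view direction whose
    distance from the user equals the distance to the physical surface."""
    z_distance = np.linalg.norm(surface_point_physical - user_pos_physical)
    direction = view_dir_virtual / np.linalg.norm(view_dir_virtual)
    return user_pos_virtual + z_distance * direction

portal_pos = place_portal(np.array([0.0, 1.6, 0.0]),    # user in the virtual scene
                          np.array([0.0, 0.0, -1.0]),   # looking down -z
                          np.array([0.0, 1.6, 0.0]),    # user in the room
                          np.array([0.0, 1.6, -2.5]))   # wall 2.5 m away
print(portal_pos)  # portal placed 2.5 m in front of the user in the virtual scene
```

Matching the depth this way keeps the stereo and motion-parallax cues of the portal consistent with the real surface behind it.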
-
Publication number: 20150049085
Abstract: Methods, systems, and computer-readable media for editing a mesh representing a surface are provided. The method includes receiving a representation of an object. The representation includes the mesh and a plurality of discrete elements comprising one or more boundary elements. The mesh is associated with the one or more boundary elements. The method also includes changing an edited element of the plurality of discrete elements from a boundary element to a non-boundary element or from a non-boundary element to a boundary element. The method also includes locally recalculating a portion of the mesh based on the changing.
Type: Application
Filed: August 14, 2014
Publication date: February 19, 2015
Inventors: Bjarte Dysvik, Luke Cartwright
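The abstract describes toggling a discrete element between boundary and non-boundary and then recalculating only the affected portion of the mesh. The following sketch is a toy illustration of that local update, not the published algorithm; LocalMesh, toggle_boundary, and the string-valued patches are hypothetical placeholders for real mesh data structures.

```python
from typing import Dict, Set

class LocalMesh:
    def __init__(self, boundary: Set[str], neighbours: Dict[str, Set[str]]):
        self.boundary = set(boundary)        # ids of boundary elements
        self.neighbours = neighbours         # adjacency between elements
        self.patches: Dict[str, str] = {}    # element id -> cached surface patch

    def toggle_boundary(self, element: str) -> None:
        """Flip an element between boundary and non-boundary, then
        recompute only the mesh portion that touches it."""
        if element in self.boundary:
            self.boundary.discard(element)
        else:
            self.boundary.add(element)
        self._rebuild_patch(element)

    def _rebuild_patch(self, element: str) -> None:
        # Local recalculation: only the edited element and its neighbours
        # are redone; the rest of the mesh is left untouched.
        for e in {element} | self.neighbours.get(element, set()):
            kind = "boundary" if e in self.boundary else "interior"
            self.patches[e] = f"surface patch near {e} ({kind})"

mesh = LocalMesh(boundary={"a"},
                 neighbours={"a": {"b"}, "b": {"a", "c"}, "c": {"b"}})
mesh.toggle_boundary("b")   # b becomes a boundary element
print(mesh.patches)         # only b and its neighbours were recomputed
```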