Patents by Inventor Richard William NEAL

Richard William NEAL has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10930275
    Abstract: A computing system is provided. The computing system includes a processor of a display device configured to execute one or more programs. The processor is configured to receive a command from a user by way of natural language input. The processor is configured to identify a set of candidate objects within or adjacent to a user's field of view having associated spatialized regions on which the command can be executed, the set of candidate objects identified at least partially by using a machine learning model. The processor is configured to use visual or audio indicators associated with the candidate objects and query the user for disambiguation input. The processor is configured to receive the disambiguation input from the user that selects a target object and to execute the command on the target object. The processor is configured to train the machine learning model using the disambiguation input and data about the spatialized regions.
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: February 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Luke Cartwright, Richard William Neal
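
For readers who want a concrete picture of the disambiguation loop described in the abstract above, the following minimal Python sketch shows one way such a flow could be structured. All names here (CandidateObject, score, ask_user, execute, training_log) are hypothetical stand-ins for illustration, not APIs from the patent or from Microsoft; the score callable merely stands in for the machine learning model.

    # Hypothetical sketch of the disambiguation flow; names are illustrative only.
    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class CandidateObject:
        name: str
        spatialized_region: Tuple[float, float, float, float]  # e.g. x, y, width, height in view space

    def disambiguate_and_execute(
        command: str,
        candidates: List[CandidateObject],
        score: Callable[[str, CandidateObject], float],   # stand-in for the ML model
        ask_user: Callable[[List[str]], int],              # presents indicators, returns chosen index
        execute: Callable[[str, CandidateObject], None],
        training_log: List[dict],
    ) -> None:
        # Rank candidate objects for the natural-language command.
        ranked = sorted(candidates, key=lambda c: score(command, c), reverse=True)
        # Present indicators (here simply the names) and ask the user which object was meant.
        choice = ask_user([c.name for c in ranked])
        target = ranked[choice]
        # Execute the command on the selected target object.
        execute(command, target)
        # Record the disambiguation input and region data as a training example.
        training_log.append({
            "command": command,
            "chosen": target.name,
            "region": target.spatialized_region,
            "rejected": [c.name for c in ranked if c is not target],
        })

Logging the user's choice together with the rejected candidates and their spatialized regions is what would allow the model to be retrained on disambiguation outcomes, which is the feedback loop the abstract describes.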
  • Patent number: 10789952
    Abstract: A computing system is provided. The computing system includes a processor of a display device configured to execute one or more programs. The processor is configured to receive, from a user, a voice command, a first auxiliary input from a first sensor, and a second auxiliary input from a second sensor. The processor is configured to, for each of a plurality of objects in the user's field of view in an environment, determine a first set of probability factors with respect to the first auxiliary input and a second set of probability factors with respect to the second auxiliary input. Each probability factor in the first and second sets indicates a likelihood that respective auxiliary inputs are directed to one of the plurality of objects. The processor is configured to determine a target object based upon the probability factors and execute the command on the target object.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: September 29, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Luke Cartwright, Richard William Neal, Alton Kwok
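
As a rough illustration of the probability-factor fusion described in the abstract above, the sketch below combines two per-object likelihoods (for example, one derived from gaze and one from gesture) and selects the most likely target. The function name, the sensor interpretations, and the product-based combination rule are assumptions made for this example; the abstract does not specify a particular combination method.

    # Hypothetical sketch of combining two sets of probability factors per object.
    from typing import Dict, List

    def determine_target(
        objects: List[str],
        first_factors: Dict[str, float],   # e.g. gaze-based likelihood per object (assumed)
        second_factors: Dict[str, float],  # e.g. gesture-based likelihood per object (assumed)
    ) -> str:
        # Combine the two probability factors per object; a simple product treats
        # the two auxiliary inputs as independent pieces of evidence.
        combined = {
            obj: first_factors.get(obj, 0.0) * second_factors.get(obj, 0.0)
            for obj in objects
        }
        # The object with the highest combined likelihood is taken as the target.
        return max(combined, key=combined.get)

    # Example with made-up likelihoods:
    # determine_target(["lamp", "door"], {"lamp": 0.7, "door": 0.3}, {"lamp": 0.9, "door": 0.4})
    # returns "lamp" (0.63 vs 0.12)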
  • Publication number: 20200202849
    Abstract: A computing system is provided. The computing system includes a processor of a display device configured to execute one or more programs. The processor is configured to receive, from a user, a voice command, a first auxiliary input from a first sensor, and a second auxiliary input from a second sensor. The processor is configured to, for each of a plurality of objects in the user's field of view in an environment, determine a first set of probability factors with respect to the first auxiliary input and a second set of probability factors with respect to the second auxiliary input. Each probability factor in the first and second sets indicates a likelihood that respective auxiliary inputs are directed to one of the plurality of objects. The processor is configured to determine a target object based upon the probability factors and execute the command on the target object.
    Type: Application
    Filed: December 20, 2018
    Publication date: June 25, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Luke CARTWRIGHT, Richard William NEAL, Alton KWOK
  • Publication number: 20200193976
    Abstract: A computing system is provided. The computing system includes a processor of a display device configured to execute one or more programs. The processor is configured to receive a command from a user by way of natural language input. The processor is configured to identify a set of candidate objects within or adjacent to a user's field of view having associated spatialized regions on which the command can be executed, the set of candidate objects identified at least partially by using a machine learning model. The processor is configured to use visual or audio indicators associated with the candidate objects and query the user for disambiguation input. The processor is configured to receive the disambiguation input from the user that selects a target object and to execute the command on the target object. The processor is configured to train the machine learning model using the disambiguation input and data about the spatialized regions.
    Type: Application
    Filed: December 18, 2018
    Publication date: June 18, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Luke CARTWRIGHT, Richard William NEAL