Patents by Inventor Peter John Ansell

Peter John Ansell has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11907419
    Abstract: Systems and methods disclosed herein are related to an intelligent UI element selection system using eye-gaze technology. In some example aspects, a UI element selection zone may be determined. The selection zone may be defined as an area surrounding a boundary of the UI element. Gaze input may be received and the gaze input may be compared with the selection zone to determine an intent of the user. The gaze input may comprise one or more gaze locations. Each gaze location may be assigned a value according to its proximity to the UI element and/or its relation to the UI element's selection zone. Each UI element may be assigned a threshold. If the aggregated value of gaze input is equal to or greater than the threshold for the UI element, then the UI element may be selected.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: February 20, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Narasimhan Raghunath, Austin B. Hodges, Fei Su, Akhilesh Kaza, Peter John Ansell, Jonathan T. Campbell, Harish S. Kulkarni
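The selection-zone mechanism above lends itself to a compact illustration. The following Python sketch is not the patented implementation; the element bounds, zone margin, per-sample weights, and threshold are all invented for demonstration.

```python
# Illustrative sketch of gaze-score aggregation against a selection zone.
# All names, weights, and thresholds are assumptions, not the patented design.
from dataclasses import dataclass, field

@dataclass
class UIElement:
    name: str
    x: float; y: float; w: float; h: float    # element bounds
    zone_margin: float = 20.0                 # selection zone extends past the boundary
    threshold: float = 5.0                    # aggregate score needed to select
    score: float = field(default=0.0, init=False)

    def weight(self, gx: float, gy: float) -> float:
        """Score one gaze sample: full weight inside the element, partial
        weight inside the surrounding selection zone, zero elsewhere."""
        if self.x <= gx <= self.x + self.w and self.y <= gy <= self.y + self.h:
            return 1.0
        zx0, zy0 = self.x - self.zone_margin, self.y - self.zone_margin
        zx1, zy1 = self.x + self.w + self.zone_margin, self.y + self.h + self.zone_margin
        if zx0 <= gx <= zx1 and zy0 <= gy <= zy1:
            return 0.5
        return 0.0

    def feed(self, gx: float, gy: float) -> bool:
        """Aggregate a gaze sample; return True once the threshold is met."""
        self.score += self.weight(gx, gy)
        return self.score >= self.threshold

button = UIElement("Send", 100, 100, 80, 30, threshold=4.0)
for gx, gy in [(110, 115), (95, 108), (130, 90), (140, 110), (85, 125), (120, 118)]:
    if button.feed(gx, gy):
        print(f"{button.name} selected")
        break
```

Weighting zone hits at half value keeps near-misses contributing toward selection without letting a single stray glance trigger it.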
  • Patent number: 11880545
    Abstract: Systems and methods disclosed herein relate to assigning dynamic eye-gaze dwell-times. Dynamic dwell-times may be tailored to the individual user. For example, a dynamic dwell-time system may be configured to receive data from the user, such as the duration of time the user takes to execute certain actions within applications (e.g., read a word suggestion before actually selecting it). The dynamic dwell-time system may also prevent users from making unintended selections by providing different dwell times for different buttons. Specifically, on a user interface, longer dwell times may be established for the critical keys (e.g., “close” program key, “send” key, word suggestions, and the like) and shorter dwell times may be established for the less critical keys (e.g., individual character keys on a virtual keyboard, spacebar, backspace, and the like).
    Type: Grant
    Filed: June 24, 2021
    Date of Patent: January 23, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dmytro Rudchenko, Eric N. Badger, Akhilesh Kaza, Jacob Daniel Cohen, Peter John Ansell, Jonathan T. Campbell, Harish S. Kulkarni
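A minimal sketch of the per-key dwell idea, assuming invented durations and an invented `user_scale` adaptation factor (the patent specifies neither):

```python
BASE_DWELL = {"critical": 1.2, "normal": 0.4}    # seconds; assumed values
CRITICAL_KEYS = {"close", "send", "suggestion"}  # assumed "critical" keys

def dwell_time_for(key: str, user_scale: float = 1.0) -> float:
    """Dwell time required to activate `key`. `user_scale` adapts the base
    time to the individual user, e.g. derived from how long they typically
    take to read a word suggestion before selecting it."""
    tier = "critical" if key in CRITICAL_KEYS else "normal"
    return BASE_DWELL[tier] * user_scale

def is_activated(key: str, gaze_started_at: float, now: float,
                 user_scale: float = 1.0) -> bool:
    """True once the gaze has rested on `key` long enough to select it."""
    return now - gaze_started_at >= dwell_time_for(key, user_scale)

print(is_activated("a", 0.0, 0.5))     # True: short dwell for a character key
print(is_activated("send", 0.0, 0.5))  # False: "send" requires a longer dwell
```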
  • Publication number: 20230195224
Abstract: Systems and methods are provided for predicting an eye gaze location of an operator of a computing device. In particular, the method generates an image grid that includes regions of interest based on a facial image. The facial image is based on a received image frame of a video stream that captures the operator using the computing device. The image grid further includes a region that indicates rotation information of the face. The method further uses a combination of trained neural networks to extract features of the regions of interest in the image grid and predict the eye gaze location on the screen of the computing device. The trained set of neural networks includes a convolutional neural network. The method optionally generates head pose pitch, roll, and yaw information to improve the accuracy of predicting the location of an eye gaze.
    Type: Application
    Filed: February 23, 2023
    Publication date: June 22, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jatin SHARMA, Jonathan T. CAMPBELL, Jay C. BEAVERS, Peter John ANSELL
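The grid-of-regions-plus-CNN pipeline can be sketched in a few lines of PyTorch. Everything below (input size, layer widths, the single-network structure) is an assumption for illustration; the patent describes a combination of trained networks rather than this exact architecture.

```python
# Hypothetical sketch: regress an on-screen (x, y) gaze point from an image
# grid of stacked face/eye regions of interest.
import torch
import torch.nn as nn

class GazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, 2),   # predicted (x, y) screen coordinates
        )

    def forward(self, grid: torch.Tensor) -> torch.Tensor:
        # grid: (batch, 3, 64, 64) image grid of regions of interest
        return self.head(self.features(grid))

model = GazeNet()
xy = model(torch.randn(1, 3, 64, 64))  # one predicted gaze point
```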
  • Patent number: 11619993
Abstract: Systems and methods are provided for predicting an eye gaze location of an operator of a computing device. In particular, the method generates an image grid that includes regions of interest based on a facial image. The facial image is based on a received image frame of a video stream that captures the operator using the computing device. The image grid further includes a region that indicates rotation information of the face. The method further uses a combination of trained neural networks to extract features of the regions of interest in the image grid and predict the eye gaze location on the screen of the computing device. The trained set of neural networks includes a convolutional neural network. The method optionally generates head pose pitch, roll, and yaw information to improve the accuracy of predicting the location of an eye gaze.
    Type: Grant
    Filed: April 19, 2021
    Date of Patent: April 4, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jatin Sharma, Jonathan T. Campbell, Jay C. Beavers, Peter John Ansell
  • Publication number: 20220334637
Abstract: Systems and methods are provided for predicting an eye gaze location of an operator of a computing device. In particular, the method generates an image grid that includes regions of interest based on a facial image. The facial image is based on a received image frame of a video stream that captures the operator using the computing device. The image grid further includes a region that indicates rotation information of the face. The method further uses a combination of trained neural networks to extract features of the regions of interest in the image grid and predict the eye gaze location on the screen of the computing device. The trained set of neural networks includes a convolutional neural network. The method optionally generates head pose pitch, roll, and yaw information to improve the accuracy of predicting the location of an eye gaze.
    Type: Application
    Filed: April 19, 2021
    Publication date: October 20, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jatin SHARMA, Jonathan T. CAMPBELL, Jay C. BEAVERS, Peter John ANSELL
  • Publication number: 20220330863
Abstract: Systems and methods are provided for collecting eye-gaze data for training an eye-gaze prediction model. The collecting includes selecting a scan path that passes through a series of regions of a grid on a screen of a computing device, moving a symbol as an eye-gaze target along the scan path, and receiving facial images at eye-gaze points. The eye-gaze points are uniformly distributed within the respective regions. Regions adjacent to the edges and corners of the screen have smaller areas than the other regions. The difference in areas shifts the centers of the regions toward the edges, increasing the density of data closer to the edges. The scan path passes through locations in proximity to the edges and corners of the screen to capture more eye-gaze points in that proximity. The methods interactively enhance variations of the facial images by displaying instructions for the user to perform specific actions associated with the face.
    Type: Application
    Filed: April 19, 2021
    Publication date: October 20, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Jatin SHARMA, Jonathan T. CAMPBELL, Jay C. BEAVERS, Peter John ANSELL
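A rough sketch of the edge-weighted collection grid, assuming an invented `edge_frac` ratio for how much narrower the border regions are:

```python
# Hypothetical sketch: narrower regions at the screen edges pull region
# centers (and thus sampled gaze targets) toward edges and corners.
import random

def region_edges(n: int, edge_frac: float = 0.1) -> list[tuple[float, float]]:
    """Split [0, 1] into n bands where the first and last bands are
    `edge_frac` wide and the interior bands share the remainder."""
    inner = (1.0 - 2 * edge_frac) / (n - 2)
    widths = [edge_frac] + [inner] * (n - 2) + [edge_frac]
    edges, x = [], 0.0
    for w in widths:
        edges.append((x, x + w))
        x += w
    return edges

def scan_path(rows: int = 4, cols: int = 4):
    """Yield one uniformly sampled target point per grid region, row by
    row (a simple stand-in for the patent's scan path)."""
    for y0, y1 in region_edges(rows):
        for x0, x1 in region_edges(cols):
            yield random.uniform(x0, x1), random.uniform(y0, y1)

for gx, gy in scan_path():
    pass  # move the on-screen symbol to (gx, gy) and capture a facial image
```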
  • Publication number: 20210325962
    Abstract: Systems and methods disclosed herein are related to an intelligent UI element selection system using eye-gaze technology. In some example aspects, a UI element selection zone may be determined. The selection zone may be defined as an area surrounding a boundary of the UI element. Gaze input may be received and the gaze input may be compared with the selection zone to determine an intent of the user. The gaze input may comprise one or more gaze locations. Each gaze location may be assigned a value according to its proximity to the UI element and/or its relation to the UI element's selection zone. Each UI element may be assigned a threshold. If the aggregated value of gaze input is equal to or greater than the threshold for the UI element, then the UI element may be selected.
    Type: Application
    Filed: June 29, 2021
    Publication date: October 21, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Narasimhan RAGHUNATH, Austin B. HODGES, Fei SU, Akhilesh KAZA, Peter John ANSELL, Jonathan T. CAMPBELL, Harish S. KULKARNI
  • Publication number: 20210318794
    Abstract: Systems and methods disclosed herein relate to assigning dynamic eye-gaze dwell-times. Dynamic dwell-times may be tailored to the individual user. For example, a dynamic dwell-time system may be configured to receive data from the user, such as the duration of time the user takes to execute certain actions within applications (e.g., read a word suggestion before actually selecting it). The dynamic dwell-time system may also prevent users from making unintended selections by providing different dwell times for different buttons. Specifically, on a user interface, longer dwell times may be established for the critical keys (e.g., “close” program key, “send” key, word suggestions, and the like) and shorter dwell times may be established for the less critical keys (e.g., individual character keys on a virtual keyboard, spacebar, backspace, and the like).
    Type: Application
    Filed: June 24, 2021
    Publication date: October 14, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Dmytro RUDCHENKO, Eric N. BADGER, Akhilesh KAZA, Jacob Daniel COHEN, Peter John ANSELL, Jonathan T. CAMPBELL, Harish S. KULKARNI
  • Patent number: 11079899
    Abstract: Systems and methods disclosed herein relate to assigning dynamic eye-gaze dwell-times. Dynamic dwell-times may be tailored to the individual user. For example, a dynamic dwell-time system may be configured to receive data from the user, such as the duration of time the user takes to execute certain actions within applications (e.g., read a word suggestion before actually selecting it). The dynamic dwell-time system may also prevent users from making unintended selections by providing different dwell times for different buttons. Specifically, on a user interface, longer dwell times may be established for the critical keys (e.g., “close” program key, “send” key, word suggestions, and the like) and shorter dwell times may be established for the less critical keys (e.g., individual character keys on a virtual keyboard, spacebar, backspace, and the like).
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: August 3, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dmytro Rudchenko, Eric N. Badger, Akhilesh Kaza, Jacob Daniel Cohen, Peter John Ansell, Jonathan T. Campbell, Harish S. Kulkarni
  • Patent number: 11073904
    Abstract: Systems and methods disclosed herein are related to an intelligent UI element selection system using eye-gaze technology. In some example aspects, a UI element selection zone may be determined. The selection zone may be defined as an area surrounding a boundary of the UI element. Gaze input may be received and the gaze input may be compared with the selection zone to determine an intent of the user. The gaze input may comprise one or more gaze locations. Each gaze location may be assigned a value according to its proximity to the UI element and/or its relation to the UI element's selection zone. Each UI element may be assigned a threshold. If the aggregated value of gaze input is equal to or greater than the threshold for the UI element, then the UI element may be selected.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: July 27, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Narasimhan Raghunath, Austin B. Hodges, Fei Su, Akhilesh Kaza, Peter John Ansell, Jonathan T. Campbell, Harish S. Kulkarni
  • Patent number: 10496162
    Abstract: The systems and methods described herein assist persons with the use of computers based on eye gaze, and allow such persons to control such computing systems using various eye trackers. The systems and methods described herein use eye trackers to control cursor (or some other indicator) positioning on an operating system using the gaze location reported by the eye tracker. The systems and methods described herein utilize an interaction model that allows control of a computer using eye gaze and dwell. The data from eye trackers provides a gaze location on the screen. The systems and methods described herein control a graphical user interface that is part of an operating system relative to cursor positioning and associated actions such as Left-Click, Right-Click, Double-Click, and the like. The interaction model presents appropriate user interfaces to navigate the user through applications on the computing system.
    Type: Grant
    Filed: July 26, 2017
    Date of Patent: December 3, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Harish Sripad Kulkarni, Dwayne Lamb, Ann M. Paradiso, Eric N. Badger, Jonathan Thomas Campbell, Peter John Ansell, Jacob Daniel Cohen
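One way to picture the dwell-based interaction model is a two-step state machine; the arming flow and the `Action` names below are illustrative assumptions, not the patented design.

```python
# Hypothetical sketch: dwell on an action button to arm an action, then
# dwell on a target location to apply it there.
import enum

class Action(enum.Enum):
    LEFT_CLICK = "left-click"
    RIGHT_CLICK = "right-click"
    DOUBLE_CLICK = "double-click"

class GazeMouse:
    def __init__(self) -> None:
        self.armed: Action | None = None

    def dwell_on_action(self, action: Action) -> None:
        self.armed = action                       # first dwell: choose the action

    def dwell_on_target(self, x: float, y: float) -> str:
        if self.armed is None:
            return f"cursor moved to ({x}, {y})"  # plain gaze just positions
        action, self.armed = self.armed, None
        return f"{action.value} at ({x}, {y})"    # second dwell: apply and disarm

mouse = GazeMouse()
mouse.dwell_on_action(Action.LEFT_CLICK)
print(mouse.dwell_on_target(320, 240))  # "left-click at (320, 240)"
```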
  • Publication number: 20190033964
    Abstract: The systems and methods described herein assist persons with the use of computers based on eye gaze, and allow such persons to control such computing systems using various eye trackers. The systems and methods described herein use eye trackers to control cursor (or some other indicator) positioning on an operating system using the gaze location reported by the eye tracker. The systems and methods described herein utilize an interaction model that allows control of a computer using eye gaze and dwell. The data from eye trackers provides a gaze location on the screen. The systems and methods described herein control a graphical user interface that is part of an operating system relative to cursor positioning and associated actions such as Left-Click, Right-Click, Double-Click, and the like. The interaction model presents appropriate user interfaces to navigate the user through applications on the computing system.
    Type: Application
    Filed: July 26, 2017
    Publication date: January 31, 2019
    Inventors: Harish Sripad Kulkarni, Dwayne Lamb, Ann M. Paradiso, Eric N. Badger, Jonathan Thomas Campbell, Peter John Ansell, Jacob Daniel Cohen
  • Publication number: 20190033965
    Abstract: Systems and methods disclosed herein are related to an intelligent UI element selection system using eye-gaze technology. In some example aspects, a UI element selection zone may be determined. The selection zone may be defined as an area surrounding a boundary of the UI element. Gaze input may be received and the gaze input may be compared with the selection zone to determine an intent of the user. The gaze input may comprise one or more gaze locations. Each gaze location may be assigned a value according to its proximity to the UI element and/or its relation to the UI element's selection zone. Each UI element may be assigned a threshold. If the aggregated value of gaze input is equal to or greater than the threshold for the UI element, then the UI element may be selected.
    Type: Application
    Filed: December 13, 2017
    Publication date: January 31, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Narasimhan RAGHUNATH, Austin B. HODGES, Fei SU, Akhilesh KAZA, Peter John ANSELL, Jonathan T. CAMPBELL, Harish S. KULKARNI
  • Publication number: 20190034057
    Abstract: Systems and methods disclosed herein relate to assigning dynamic eye-gaze dwell-times. Dynamic dwell-times may be tailored to the individual user. For example, a dynamic dwell-time system may be configured to receive data from the user, such as the duration of time the user takes to execute certain actions within applications (e.g., read a word suggestion before actually selecting it). The dynamic dwell-time system may also prevent users from making unintended selections by providing different dwell times for different buttons. Specifically, on a user interface, longer dwell times may be established for the critical keys (e.g., “close” program key, “send” key, word suggestions, and the like) and shorter dwell times may be established for the less critical keys (e.g., individual character keys on a virtual keyboard, spacebar, backspace, and the like).
    Type: Application
    Filed: December 13, 2017
    Publication date: January 31, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Dmytro RUDCHENKO, Eric N. BADGER, Akhilesh KAZA, Jacob Daniel COHEN, Peter John ANSELL, Jonathan T. CAMPBELL, Harish S. KULKARNI
  • Patent number: 9991970
Abstract: Transferring data via audio link is described. In an example, a short sequence of data can be transferred between two devices by encoding the sequence of data as an audio sequence. For example, the audio sequence may be a sequence of tones which vary in dependence on the encoded data. The sequence of data may be encoded by a first device and transmitted using a loudspeaker associated with the first device. At least one mobile communications device can be used to capture the audio sequence, for example using a microphone, and to decode the sequence, retrieving the data encoded therein. In some examples, the encoded data may comprise a shortened URL or other information which can be used to control one or more aspects of the capture device.
    Type: Grant
    Filed: March 16, 2015
    Date of Patent: June 5, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Peter John Ansell
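The audio link can be sketched as mapping each 4-bit symbol to one of sixteen tone frequencies. The frequencies, symbol rate, and lack of framing below are all invented for illustration; a real system would add synchronization and error checking.

```python
# Hypothetical tone encoder: write a short payload (e.g. a shortened URL)
# to a WAV file, one tone per 4-bit symbol.
import math, struct, wave

BASE_HZ, STEP_HZ, RATE, SYMBOL_SEC = 1000.0, 100.0, 44100, 0.05

def encode_tones(data: bytes, path: str = "message.wav") -> None:
    nibbles = [n for b in data for n in (b >> 4, b & 0x0F)]
    frames = bytearray()
    for sym in nibbles:
        freq = BASE_HZ + STEP_HZ * sym  # 16 distinct tone frequencies
        for i in range(int(RATE * SYMBOL_SEC)):
            sample = int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / RATE))
            frames += struct.pack("<h", sample)
    with wave.open(path, "wb") as w:
        w.setnchannels(1); w.setsampwidth(2); w.setframerate(RATE)
        w.writeframes(bytes(frames))

encode_tones(b"aka.ms/x")  # play the file near the capturing device's microphone
```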
  • Patent number: 9870141
Abstract: Gesture recognition is described. In one example, gestures performed by a user of an input device having a touch-sensitive portion are detected using a definition of a number of regions corresponding to zones on the touch-sensitive portion, each region being associated with a distinct set of gestures. Data describing movement of the user's digits on the touch-sensitive portion is received, and an associated region for the data determined. The data is compared to the associated region's set of gestures, and a gesture applicable to the data selected. A command associated with the selected gesture can then be executed. In an example, comparing the data to the set of gestures comprises positioning a threshold for each gesture relative to the start of the digit's movement. The digit's location is compared to each threshold to determine whether a threshold has been crossed, and, if so, the gesture associated with that threshold is selected.
    Type: Grant
    Filed: November 19, 2010
    Date of Patent: January 16, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Peter John Ansell, Shahram Izadi
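A condensed sketch of the zone/threshold scheme, with invented regions, gesture sets, and threshold distances:

```python
# Hypothetical sketch: each touch zone has its own gesture set; a gesture
# fires when movement from the touch-down point crosses its threshold,
# positioned relative to the start of the digit's movement.
REGIONS = {
    "left_zone":  {"swipe_right": (+30.0, "x"), "swipe_up": (-30.0, "y")},
    "right_zone": {"swipe_left":  (-30.0, "x"), "swipe_down": (+30.0, "y")},
}

def region_for(x: float, surface_width: float = 100.0) -> str:
    return "left_zone" if x < surface_width / 2 else "right_zone"

def detect(start: tuple[float, float], current: tuple[float, float]) -> str | None:
    """Return the first gesture whose threshold has been crossed, else None."""
    dx, dy = current[0] - start[0], current[1] - start[1]
    for gesture, (threshold, axis) in REGIONS[region_for(start[0])].items():
        delta = dx if axis == "x" else dy
        if (threshold > 0 and delta >= threshold) or (threshold < 0 and delta <= threshold):
            return gesture
    return None

print(detect((10.0, 50.0), (45.0, 52.0)))  # "swipe_right" in the left zone
```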
  • Publication number: 20170293402
Abstract: The systems and techniques described herein implement an improved gaze-based on-screen keyboard that provides dynamically variable dwell times to increase throughput and reduce errors. Utilizing a language model, the probability that each key of the on-screen keyboard will be the subsequent key can be determined, and based at least in part on this determined probability, a dwell time can be assigned to each key. When used as an iterative process, a minimum dwell time may be gradually reduced as confidence in the subsequent key increases, providing a cascading minimum dwell time.
    Type: Application
    Filed: April 12, 2016
    Publication date: October 12, 2017
    Inventors: Meredith June Morris, Shane Frandon Williams, Mira Eileen Shah, Ann Paradiso, Harish S. Kulkarni, Martez Mott, Jay Curtis Beavers, Jonathan Thomas Campbell, Peter John Ansell
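The probability-to-dwell mapping might look like the following sketch, where the constants and the cascading-floor formula are assumptions rather than the patented method:

```python
# Hypothetical sketch: keys the language model rates as likely next
# keystrokes get shorter dwell times, down to a floor that itself shrinks
# as confidence grows (the "cascading minimum dwell time").
MAX_DWELL, MIN_DWELL = 1.0, 0.25  # seconds; assumed values

def dwell_for_key(p_next: float, confidence: float) -> float:
    """p_next: model probability that this key is the next keystroke.
    confidence: running confidence that cascades the minimum downward."""
    floor = MIN_DWELL * (1.0 - 0.5 * confidence)  # cascading minimum
    return max(floor, MAX_DWELL * (1.0 - p_next))

print(dwell_for_key(p_next=0.9, confidence=0.8))  # 0.15s: likely key, floor-limited
print(dwell_for_key(p_next=0.1, confidence=0.0))  # 0.9s: unlikely key dwells longer
```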
  • Publication number: 20150188643
Abstract: Transferring data via audio link is described. In an example, a short sequence of data can be transferred between two devices by encoding the sequence of data as an audio sequence. For example, the audio sequence may be a sequence of tones which vary in dependence on the encoded data. The sequence of data may be encoded by a first device and transmitted using a loudspeaker associated with the first device. At least one mobile communications device can be used to capture the audio sequence, for example using a microphone, and to decode the sequence, retrieving the data encoded therein. In some examples, the encoded data may comprise a shortened URL or other information which can be used to control one or more aspects of the capture device.
    Type: Application
    Filed: March 16, 2015
    Publication date: July 2, 2015
    Inventor: Peter John Ansell
  • Patent number: 8996370
Abstract: Transferring data via audio link is described. In an example, a short sequence of data can be transferred between two devices by encoding the sequence of data as an audio sequence. For example, the audio sequence may be a sequence of tones which vary in dependence on the encoded data. The sequence of data may be encoded by a first device and transmitted using a loudspeaker associated with the first device. At least one mobile communications device can be used to capture the audio sequence, for example using a microphone, and to decode the sequence, retrieving the data encoded therein. In some examples, the encoded data may comprise a shortened URL or other information which can be used to control one or more aspects of the capture device.
    Type: Grant
    Filed: January 31, 2012
    Date of Patent: March 31, 2015
    Assignee: Microsoft Corporation
    Inventor: Peter John Ansell
  • Publication number: 20140208274
Abstract: Methods and systems for controlling a computing-based device using both input received from a traditional input device (e.g. a keyboard) and hand gestures made on or near a reference object (e.g. the keyboard). In some examples, the hand gestures may comprise one or more hand touch gestures and/or one or more hand air gestures.
    Type: Application
    Filed: January 18, 2013
    Publication date: July 24, 2014
    Applicant: MICROSOFT CORPORATION
    Inventors: Samuel Gavin Smyth, Peter John Ansell, Christopher Jozef O'Prey, Mitchel Alan Goldberg, Jamie Daniel Joseph Shotton, Toby Sharp, Shahram Izadi, Abigail Jane Sellen, Richard Malcolm Banks, Kenton O'Hara, Richard Harry Robert Harper, Eric John Greveson, David Alexander Butler, Stephen E Hodges
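Finally, the combined keyboard-plus-gesture input can be pictured as a single command pipeline that both kinds of events feed into; the event names and decorator-based registry below are illustrative assumptions.

```python
# Hypothetical sketch: keyboard events and hand-gesture events (touch or
# air, made on or near the keyboard) dispatch through one command registry.
from typing import Callable

handlers: dict[str, Callable[[], None]] = {}

def on(event: str):
    def register(fn: Callable[[], None]):
        handlers[event] = fn
        return fn
    return register

@on("key:ctrl+z")
def undo() -> None:
    print("undo via keyboard")

@on("gesture:swipe_left_above_keyboard")
def back() -> None:
    print("navigate back via air gesture")

def dispatch(event: str) -> None:
    if event in handlers:
        handlers[event]()

dispatch("key:ctrl+z")
dispatch("gesture:swipe_left_above_keyboard")
```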