Patents by Inventor John Ansell
John Ansell has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12008159
Abstract: Systems and methods are provided for predicting an eye gaze location of an operator of a computing device. In particular, the method generates an image grid that includes regions of interest based on a facial image. The facial image is based on a received image frame of a video stream that captures the operator using the computing device. The image grid further includes a region that indicates rotation information of the face. The method further uses a combination of trained neural networks to extract features of the regions of interest in the image grid and to predict the eye gaze location on the screen of the computing device. The trained set of neural networks includes a convolutional neural network. The method optionally generates head pose pitch, roll, and yaw information to improve the accuracy of predicting the location of an eye gaze.
Type: Grant
Filed: February 23, 2023
Date of Patent: June 11, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jatin Sharma, Jonathan T. Campbell, Jay C. Beavers, Peter John Ansell
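The image-grid construction this abstract describes can be sketched roughly as follows. This is an illustrative reconstruction, not the patented method: the 2x2 layout, the cell size, and the specific regions chosen (face crop, both eye crops, and a rotation-information map) are all assumptions.

```python
import numpy as np

def build_image_grid(face, left_eye, right_eye, rotation_map, cell=32):
    """Assemble regions of interest into a single image grid: face and
    eye crops plus a region encoding face-rotation information, loosely
    following the abstract. Layout and cell size are illustrative."""
    def fit(img):
        # Nearest-neighbour resize of a 2-D region to cell x cell.
        ys = np.arange(cell) * img.shape[0] // cell
        xs = np.arange(cell) * img.shape[1] // cell
        return img[np.ix_(ys, xs)]
    top = np.hstack([fit(left_eye), fit(right_eye)])
    bottom = np.hstack([fit(face), fit(rotation_map)])
    return np.vstack([top, bottom])  # shape (2*cell, 2*cell)
```

A grid like this would then be fed to the trained networks (including a convolutional neural network, per the abstract) to regress an on-screen gaze location.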
-
Patent number: 11998335
Abstract: Systems and methods are provided for collecting eye-gaze data for training an eye-gaze prediction model. The collecting includes selecting a scan path that passes through a series of regions of a grid on a screen of a computing device, moving a symbol as an eye-gaze target along the scan path, and receiving facial images at eye-gaze points. The eye-gaze points are uniformly distributed within the respective regions. Regions adjacent to the edges and corners of the screen have smaller areas than the other regions. The difference in areas shifts the centers of those regions toward the edges, increasing the density of data closer to the edges. The scan path passes through locations in proximity to the edges and corners of the screen to capture more eye-gaze points in that proximity. The methods interactively enhance variations of the facial images by displaying instructions to the user to perform specific actions associated with the face.
Type: Grant
Filed: April 19, 2021
Date of Patent: June 4, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jatin Sharma, Jonathan T. Campbell, Jay C. Beavers, Peter John Ansell
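The region layout and target sampling can be illustrated with a small sketch. The edge-shrink factor, grid dimensions, and sampling scheme below are assumptions chosen to show the idea, not values from the patent.

```python
import random

def region_edges(screen, n, edge_frac=0.5):
    """Split one screen axis into n regions where the first and last
    (edge) regions are edge_frac times the size of interior regions.
    Smaller edge regions shift region centers toward the screen edges,
    concentrating data there. edge_frac is an assumed parameter."""
    interior = n - 2
    unit = screen / (interior + 2 * edge_frac)
    widths = [edge_frac * unit] + [unit] * interior + [edge_frac * unit]
    edges, x = [0.0], 0.0
    for w in widths:
        x += w
        edges.append(x)
    return edges

def sample_targets(width, height, nx=4, ny=3, seed=0):
    """One uniformly distributed eye-gaze target per grid region; a scan
    path would move the on-screen symbol through these points."""
    rng = random.Random(seed)
    xs, ys = region_edges(width, nx), region_edges(height, ny)
    return [(rng.uniform(xs[i], xs[i + 1]), rng.uniform(ys[j], ys[j + 1]))
            for j in range(ny) for i in range(nx)]
```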
-
Publication number: 20240152205
Abstract: Systems and methods disclosed herein are related to an intelligent UI element selection system using eye-gaze technology. In some example aspects, a UI element selection zone may be determined. The selection zone may be defined as an area surrounding a boundary of the UI element. Gaze input may be received and compared with the selection zone to determine an intent of the user. The gaze input may comprise one or more gaze locations. Each gaze location may be assigned a value according to its proximity to the UI element and/or its relation to the UI element's selection zone. Each UI element may be assigned a threshold. If the aggregated value of gaze input is equal to or greater than the threshold for the UI element, then the UI element may be selected.
Type: Application
Filed: January 17, 2024
Publication date: May 9, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Narasimhan RAGHUNATH, Austin B. HODGES, Fei SU, Akhilesh KAZA, Peter John ANSELL, Jonathan T. CAMPBELL, Harish S. KULKARNI
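The accumulate-and-threshold selection the abstract describes can be sketched as below. The linear falloff, the 20-pixel zone margin, and the threshold value are illustrative assumptions; the publication does not specify these details here.

```python
def gaze_score(gaze, element, zone_margin=20.0):
    """Score one gaze location against a UI element: 1.0 inside the
    element's bounds, a value decaying linearly with distance inside the
    surrounding selection zone, and 0.0 outside the zone. The falloff
    and margin are illustrative, not from the publication."""
    x, y = gaze
    left, top, right, bottom = element
    if left <= x <= right and top <= y <= bottom:
        return 1.0
    # Distance from the element's boundary rectangle.
    dx = max(left - x, 0.0, x - right)
    dy = max(top - y, 0.0, y - bottom)
    d = (dx * dx + dy * dy) ** 0.5
    return 1.0 - d / zone_margin if d <= zone_margin else 0.0

def is_selected(gazes, element, threshold=3.0):
    """Select the element once the aggregated gaze value meets or
    exceeds the element's threshold, as the abstract describes."""
    return sum(gaze_score(g, element) for g in gazes) >= threshold
```

Per the abstract, each element could carry its own threshold, so accidental selection of sensitive elements can be made harder than selection of routine ones.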
-
Patent number: 11934531
Abstract: An apparatus includes a memory and a processor. The memory stores descriptions of known vulnerabilities and information generated by a monitoring subsystem. Each description of a known vulnerability identifies software components that are associated with the known vulnerability. The monitoring subsystem monitors software programs that are installed within a computer system. The information includes descriptions of issues that are associated with the software programs. The processor generates a set of mappings, based on a comparison between the text describing the known software vulnerabilities and the text describing the issues. Each mapping associates a software program that is associated with an issue with a known software vulnerability. The processor also uses a machine learning algorithm to predict that a given software program is associated with a particular software vulnerability.
Type: Grant
Filed: February 25, 2021
Date of Patent: March 19, 2024
Assignee: Bank of America Corporation
Inventors: Benjamin John Ansell, Yuvraj Singh, Min Cao, Ra Uf Ridzuan Bin Ma Arof, Hemant Meenanath Patil, Pallavi Yerra, Kaushik Mitra Chowdhury
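The text-comparison mapping step can be sketched with a simple similarity measure. Jaccard token overlap and the `min_sim` cutoff below are stand-ins for whatever comparison the patent actually claims, and the data structures are assumed for illustration.

```python
def jaccard(a, b):
    """Token-overlap similarity between two free-text descriptions
    (a stand-in for the patent's text comparison)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def map_issues_to_vulnerabilities(issues, vulns, min_sim=0.2):
    """For each monitored program's issue description, map it to the
    known vulnerability whose description is most textually similar,
    provided the similarity clears min_sim (an assumed cutoff)."""
    mappings = {}
    for program, issue_text in issues.items():
        best = max(vulns, key=lambda v: jaccard(issue_text, vulns[v]))
        if jaccard(issue_text, vulns[best]) >= min_sim:
            mappings[program] = best
    return mappings
```

The abstract's second stage, a machine learning model predicting vulnerability associations directly, would sit on top of mappings like these as training signal.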
-
Patent number: 11907419
Abstract: Systems and methods disclosed herein are related to an intelligent UI element selection system using eye-gaze technology. In some example aspects, a UI element selection zone may be determined. The selection zone may be defined as an area surrounding a boundary of the UI element. Gaze input may be received and compared with the selection zone to determine an intent of the user. The gaze input may comprise one or more gaze locations. Each gaze location may be assigned a value according to its proximity to the UI element and/or its relation to the UI element's selection zone. Each UI element may be assigned a threshold. If the aggregated value of gaze input is equal to or greater than the threshold for the UI element, then the UI element may be selected.
Type: Grant
Filed: June 29, 2021
Date of Patent: February 20, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Narasimhan Raghunath, Austin B. Hodges, Fei Su, Akhilesh Kaza, Peter John Ansell, Jonathan T. Campbell, Harish S. Kulkarni
-
Patent number: 11880545
Abstract: Systems and methods disclosed herein relate to assigning dynamic eye-gaze dwell-times. Dynamic dwell-times may be tailored to the individual user. For example, a dynamic dwell-time system may be configured to receive data from the user, such as the duration of time the user takes to execute certain actions within applications (e.g., read a word suggestion before actually selecting it). The dynamic dwell-time system may also prevent users from making unintended selections by providing different dwell times for different buttons. Specifically, on a user interface, longer dwell times may be established for the critical keys (e.g., "close" program key, "send" key, word suggestions, and the like) and shorter dwell times may be established for the less critical keys (e.g., individual character keys on a virtual keyboard, spacebar, backspace, and the like).
Type: Grant
Filed: June 24, 2021
Date of Patent: January 23, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Dmytro Rudchenko, Eric N. Badger, Akhilesh Kaza, Jacob Daniel Cohen, Peter John Ansell, Jonathan T. Campbell, Harish S. Kulkarni
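A minimal sketch of per-key dynamic dwell times follows. The key names come from the abstract's examples; all timing constants and the blending rule are assumptions, not values from the patent.

```python
# Keys the abstract calls out as critical (longer dwell to avoid
# unintended selection); character keys get a shorter dwell.
CRITICAL_KEYS = {"close", "send", "word-suggestion"}

def dwell_time_ms(key, observed_read_ms=None,
                  base_ms=400, critical_ms=900):
    """Return a per-key dwell time: longer for critical keys, shorter
    for ordinary character keys, optionally adapted toward how long
    this user is observed to take (e.g., reading a word suggestion).
    Constants and the simple-average adaptation are illustrative."""
    t = critical_ms if key in CRITICAL_KEYS else base_ms
    if observed_read_ms is not None:
        # Blend toward the user's observed timing; a stand-in for
        # whatever per-user tailoring the system applies.
        t = (t + observed_read_ms) / 2
    return t
```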
-
Publication number: 20230195224
Abstract: Systems and methods are provided for predicting an eye gaze location of an operator of a computing device. In particular, the method generates an image grid that includes regions of interest based on a facial image. The facial image is based on a received image frame of a video stream that captures the operator using the computing device. The image grid further includes a region that indicates rotation information of the face. The method further uses a combination of trained neural networks to extract features of the regions of interest in the image grid and to predict the eye gaze location on the screen of the computing device. The trained set of neural networks includes a convolutional neural network. The method optionally generates head pose pitch, roll, and yaw information to improve the accuracy of predicting the location of an eye gaze.
Type: Application
Filed: February 23, 2023
Publication date: June 22, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Jatin SHARMA, Jonathan T. CAMPBELL, Jay C. BEAVERS, Peter John ANSELL
-
Patent number: 11619993
Abstract: Systems and methods are provided for predicting an eye gaze location of an operator of a computing device. In particular, the method generates an image grid that includes regions of interest based on a facial image. The facial image is based on a received image frame of a video stream that captures the operator using the computing device. The image grid further includes a region that indicates rotation information of the face. The method further uses a combination of trained neural networks to extract features of the regions of interest in the image grid and to predict the eye gaze location on the screen of the computing device. The trained set of neural networks includes a convolutional neural network. The method optionally generates head pose pitch, roll, and yaw information to improve the accuracy of predicting the location of an eye gaze.
Type: Grant
Filed: April 19, 2021
Date of Patent: April 4, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jatin Sharma, Jonathan T. Campbell, Jay C. Beavers, Peter John Ansell
-
Publication number: 20220334637
Abstract: Systems and methods are provided for predicting an eye gaze location of an operator of a computing device. In particular, the method generates an image grid that includes regions of interest based on a facial image. The facial image is based on a received image frame of a video stream that captures the operator using the computing device. The image grid further includes a region that indicates rotation information of the face. The method further uses a combination of trained neural networks to extract features of the regions of interest in the image grid and to predict the eye gaze location on the screen of the computing device. The trained set of neural networks includes a convolutional neural network. The method optionally generates head pose pitch, roll, and yaw information to improve the accuracy of predicting the location of an eye gaze.
Type: Application
Filed: April 19, 2021
Publication date: October 20, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Jatin SHARMA, Jonathan T. CAMPBELL, Jay C. BEAVERS, Peter John ANSELL
-
Publication number: 20220330863
Abstract: Systems and methods are provided for collecting eye-gaze data for training an eye-gaze prediction model. The collecting includes selecting a scan path that passes through a series of regions of a grid on a screen of a computing device, moving a symbol as an eye-gaze target along the scan path, and receiving facial images at eye-gaze points. The eye-gaze points are uniformly distributed within the respective regions. Regions adjacent to the edges and corners of the screen have smaller areas than the other regions. The difference in areas shifts the centers of those regions toward the edges, increasing the density of data closer to the edges. The scan path passes through locations in proximity to the edges and corners of the screen to capture more eye-gaze points in that proximity. The methods interactively enhance variations of the facial images by displaying instructions to the user to perform specific actions associated with the face.
Type: Application
Filed: April 19, 2021
Publication date: October 20, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Jatin SHARMA, Jonathan T. CAMPBELL, Jay C. BEAVERS, Peter John ANSELL
-
Publication number: 20220269791
Abstract: An apparatus includes a memory and a processor. The memory stores descriptions of known vulnerabilities and information generated by a monitoring subsystem. Each description of a known vulnerability identifies software components that are associated with the known vulnerability. The monitoring subsystem monitors software programs that are installed within a computer system. The information includes descriptions of issues that are associated with the software programs. The processor generates a set of mappings, based on a comparison between the text describing the known software vulnerabilities and the text describing the issues. Each mapping associates a software program that is associated with an issue with a known software vulnerability. The processor also uses a machine learning algorithm to predict that a given software program is associated with a particular software vulnerability.
Type: Application
Filed: February 25, 2021
Publication date: August 25, 2022
Inventors: Benjamin John Ansell, Yuvraj Singh, Min Cao, Ra Uf Ridzuan Bin Ma Arof, Hemant Meenanath Patil, Pallavi Yerra, Kaushik Mitra Chowdhury
-
Publication number: 20210325962
Abstract: Systems and methods disclosed herein are related to an intelligent UI element selection system using eye-gaze technology. In some example aspects, a UI element selection zone may be determined. The selection zone may be defined as an area surrounding a boundary of the UI element. Gaze input may be received and compared with the selection zone to determine an intent of the user. The gaze input may comprise one or more gaze locations. Each gaze location may be assigned a value according to its proximity to the UI element and/or its relation to the UI element's selection zone. Each UI element may be assigned a threshold. If the aggregated value of gaze input is equal to or greater than the threshold for the UI element, then the UI element may be selected.
Type: Application
Filed: June 29, 2021
Publication date: October 21, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Narasimhan RAGHUNATH, Austin B. HODGES, Fei SU, Akhilesh KAZA, Peter John ANSELL, Jonathan T. CAMPBELL, Harish S. KULKARNI
-
Publication number: 20210318794
Abstract: Systems and methods disclosed herein relate to assigning dynamic eye-gaze dwell-times. Dynamic dwell-times may be tailored to the individual user. For example, a dynamic dwell-time system may be configured to receive data from the user, such as the duration of time the user takes to execute certain actions within applications (e.g., read a word suggestion before actually selecting it). The dynamic dwell-time system may also prevent users from making unintended selections by providing different dwell times for different buttons. Specifically, on a user interface, longer dwell times may be established for the critical keys (e.g., "close" program key, "send" key, word suggestions, and the like) and shorter dwell times may be established for the less critical keys (e.g., individual character keys on a virtual keyboard, spacebar, backspace, and the like).
Type: Application
Filed: June 24, 2021
Publication date: October 14, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Dmytro RUDCHENKO, Eric N. BADGER, Akhilesh KAZA, Jacob Daniel COHEN, Peter John ANSELL, Jonathan T. CAMPBELL, Harish S. KULKARNI
-
Patent number: 11079899
Abstract: Systems and methods disclosed herein relate to assigning dynamic eye-gaze dwell-times. Dynamic dwell-times may be tailored to the individual user. For example, a dynamic dwell-time system may be configured to receive data from the user, such as the duration of time the user takes to execute certain actions within applications (e.g., read a word suggestion before actually selecting it). The dynamic dwell-time system may also prevent users from making unintended selections by providing different dwell times for different buttons. Specifically, on a user interface, longer dwell times may be established for the critical keys (e.g., "close" program key, "send" key, word suggestions, and the like) and shorter dwell times may be established for the less critical keys (e.g., individual character keys on a virtual keyboard, spacebar, backspace, and the like).
Type: Grant
Filed: December 13, 2017
Date of Patent: August 3, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Dmytro Rudchenko, Eric N. Badger, Akhilesh Kaza, Jacob Daniel Cohen, Peter John Ansell, Jonathan T. Campbell, Harish S. Kulkarni
-
Patent number: 11073904
Abstract: Systems and methods disclosed herein are related to an intelligent UI element selection system using eye-gaze technology. In some example aspects, a UI element selection zone may be determined. The selection zone may be defined as an area surrounding a boundary of the UI element. Gaze input may be received and compared with the selection zone to determine an intent of the user. The gaze input may comprise one or more gaze locations. Each gaze location may be assigned a value according to its proximity to the UI element and/or its relation to the UI element's selection zone. Each UI element may be assigned a threshold. If the aggregated value of gaze input is equal to or greater than the threshold for the UI element, then the UI element may be selected.
Type: Grant
Filed: December 13, 2017
Date of Patent: July 27, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Narasimhan Raghunath, Austin B. Hodges, Fei Su, Akhilesh Kaza, Peter John Ansell, Jonathan T. Campbell, Harish S. Kulkarni
-
Patent number: 10496162
Abstract: The systems and methods described herein assist persons with the use of computers based on eye gaze, and allow such persons to control such computing systems using various eye trackers. The systems and methods described herein use eye trackers to control cursor (or some other indicator) positioning on an operating system using the gaze location reported by the eye tracker. The systems and methods described herein utilize an interaction model that allows control of a computer using eye gaze and dwell. The data from eye trackers provides a gaze location on the screen. The systems and methods described herein control a graphical user interface that is part of an operating system relative to cursor positioning and associated actions such as Left-Click, Right-Click, Double-Click, and the like. The interaction model presents appropriate user interfaces to navigate the user through applications on the computing system.
Type: Grant
Filed: July 26, 2017
Date of Patent: December 3, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Harish Sripad Kulkarni, Dwayne Lamb, Ann M Paradiso, Eric N Badger, Jonathan Thomas Campbell, Peter John Ansell, Jacob Daniel Cohen
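The gaze-and-dwell interaction model can be sketched as a single step function: the cursor follows the reported gaze location, and once the user has dwelt long enough, the currently armed action fires. The dwell threshold and the step-function framing are illustrative assumptions; the action names come from the abstract.

```python
# Actions named in the abstract; the armed action would be chosen via
# an on-screen UI in the real interaction model.
ACTIONS = ("Left-Click", "Right-Click", "Double-Click")

def gaze_step(gaze_xy, dwell_ms, armed_action="Left-Click",
              dwell_threshold_ms=600):
    """One step of a dwell-based interaction model (a toy sketch).
    Returns the new cursor position (the gaze location reported by the
    eye tracker) and the action to fire, or None while still dwelling.
    The 600 ms threshold is an assumed value."""
    if armed_action not in ACTIONS:
        raise ValueError(f"unknown action: {armed_action}")
    fired = armed_action if dwell_ms >= dwell_threshold_ms else None
    return gaze_xy, fired
```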
-
Publication number: 20190033965
Abstract: Systems and methods disclosed herein are related to an intelligent UI element selection system using eye-gaze technology. In some example aspects, a UI element selection zone may be determined. The selection zone may be defined as an area surrounding a boundary of the UI element. Gaze input may be received and compared with the selection zone to determine an intent of the user. The gaze input may comprise one or more gaze locations. Each gaze location may be assigned a value according to its proximity to the UI element and/or its relation to the UI element's selection zone. Each UI element may be assigned a threshold. If the aggregated value of gaze input is equal to or greater than the threshold for the UI element, then the UI element may be selected.
Type: Application
Filed: December 13, 2017
Publication date: January 31, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Narasimhan RAGHUNATH, Austin B. HODGES, Fei SU, Akhilesh KAZA, Peter John ANSELL, Jonathan T. CAMPBELL, Harish S. KULKARNI
-
Publication number: 20190033964
Abstract: The systems and methods described herein assist persons with the use of computers based on eye gaze, and allow such persons to control such computing systems using various eye trackers. The systems and methods described herein use eye trackers to control cursor (or some other indicator) positioning on an operating system using the gaze location reported by the eye tracker. The systems and methods described herein utilize an interaction model that allows control of a computer using eye gaze and dwell. The data from eye trackers provides a gaze location on the screen. The systems and methods described herein control a graphical user interface that is part of an operating system relative to cursor positioning and associated actions such as Left-Click, Right-Click, Double-Click, and the like. The interaction model presents appropriate user interfaces to navigate the user through applications on the computing system.
Type: Application
Filed: July 26, 2017
Publication date: January 31, 2019
Inventors: Harish Sripad Kulkarni, Dwayne Lamb, Ann M. Paradiso, Eric N. Badger, Jonathan Thomas Campbell, Peter John Ansell, Jacob Daniel Cohen
-
Publication number: 20190034057
Abstract: Systems and methods disclosed herein relate to assigning dynamic eye-gaze dwell-times. Dynamic dwell-times may be tailored to the individual user. For example, a dynamic dwell-time system may be configured to receive data from the user, such as the duration of time the user takes to execute certain actions within applications (e.g., read a word suggestion before actually selecting it). The dynamic dwell-time system may also prevent users from making unintended selections by providing different dwell times for different buttons. Specifically, on a user interface, longer dwell times may be established for the critical keys (e.g., "close" program key, "send" key, word suggestions, and the like) and shorter dwell times may be established for the less critical keys (e.g., individual character keys on a virtual keyboard, spacebar, backspace, and the like).
Type: Application
Filed: December 13, 2017
Publication date: January 31, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Dmytro RUDCHENKO, Eric N. BADGER, Akhilesh KAZA, Jacob Daniel COHEN, Peter John ANSELL, Jonathan T. CAMPBELL, Harish S. KULKARNI
-
Patent number: 9991970
Abstract: Transferring data via audio link is described. In an example, a short sequence of data can be transferred between two devices by encoding the sequence of data as an audio sequence. For example, the audio sequence may be a sequence of tones which vary in dependence on the encoded data. The sequence of data may be encoded by a first device and transmitted using a loudspeaker associated with the first device. At least one mobile communications device can be used to capture the audio sequence, for example using a microphone, and to decode the sequence, retrieving the data encoded therein. In some examples the encoded data may comprise a shortened URL or other information which can be used to control one or more aspects of the capture device.
Type: Grant
Filed: March 16, 2015
Date of Patent: June 5, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventor: Peter John Ansell
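The tones-varying-with-data idea can be illustrated with a toy encoding: map each 4-bit nibble of the payload to one of 16 tone frequencies, and invert the mapping on the receiving side. The tone alphabet, nibble granularity, and frequency values are assumptions for illustration; the patent does not specify them here.

```python
# A toy tone alphabet: one frequency (Hz) per 4-bit nibble value.
# These frequencies are illustrative, not from the patent.
FREQS = [1000 + 100 * i for i in range(16)]

def encode(data: bytes):
    """Encode a short byte sequence as a list of tone frequencies,
    high nibble first, for playback through a loudspeaker."""
    tones = []
    for b in data:
        tones.append(FREQS[b >> 4])
        tones.append(FREQS[b & 0x0F])
    return tones

def decode(tones):
    """Recover the original bytes from a captured tone-frequency
    sequence (assuming perfect frequency detection)."""
    nibbles = [FREQS.index(t) for t in tones]
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))
```

A real receiver would additionally need to detect each tone's frequency from microphone samples (e.g., via an FFT or Goertzel filter) before this decoding step applies.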