Patents by Inventor Mudit Agrawal
Mudit Agrawal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20210406578

Abstract: A “Stroke Untangler” composes handwritten messages from handwritten strokes representing overlapping letters or partial letter segments that are drawn on a touchscreen device or touch-sensitive surface. These overlapping strokes are automatically untangled and then segmented and combined into one or more letters, words, or phrases. Advantageously, segmentation and composition are performed without requiring user gestures, timeouts, or other inputs to delimit characters within words, and without using handwriting recognition-based techniques to guide untangling and composing of the overlapping strokes to form characters. In other words, the user draws multiple overlapping strokes. Those strokes are then automatically segmented and combined into one or more corresponding characters. Text recognition of the resulting characters is then performed. Further, the segmentation and combination are performed in real time, thereby enabling real-time rendering of the resulting characters in a user interface window.

Type: Application
Filed: September 14, 2021
Publication date: December 30, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Kenneth Paul Hinckley, Wolf Kienzle, Mudit Agrawal
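The untangle-then-segment pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the patented method: the patent's segmentation model is learned from stroke geometry, whereas the `naive_new_letter` distance rule below is an invented stand-in, and the 15-pixel threshold and 40-pixel letter spacing are arbitrary assumptions.

```python
def naive_new_letter(prev, cur):
    # Stand-in for the patent's segmentation step: guess that a stroke
    # starting far from where the previous stroke ended begins a new
    # letter. (A real system infers this from stroke geometry without
    # handwriting recognition; this distance rule is only illustrative.)
    px, py = prev[-1]
    cx, cy = cur[0]
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5 > 15

def segment(strokes, is_new_letter=naive_new_letter):
    # Group a sequence of overlapping strokes into per-letter groups,
    # with no timeouts or delimiter gestures required from the user.
    letters = [[strokes[0]]]
    for prev, cur in zip(strokes, strokes[1:]):
        if is_new_letter(prev, cur):
            letters.append([cur])
        else:
            letters[-1].append(cur)
    return letters

def untangle(letters, spacing=40):
    # Lay the segmented letters out left-to-right so they can be
    # rendered in real time and handed to a text recognizer.
    return [[[(x + i * spacing, y) for x, y in stroke] for stroke in letter]
            for i, letter in enumerate(letters)]
```

A caller would feed raw strokes to `segment`, then pass `untangle`'s output to an ordinary handwriting recognizer — recognition happens only after composition, matching the order described in the abstract.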
-
Patent number: 10657401

Abstract: Apparatus and methods of biometric object spoof detection are configured to receive at least first and second images, including a biometric object, captured at a first and a second time in response to a first and a second incident light, respectively. The first and second incident light is emitted from at least one light source at substantially the same wavelength, but with different sets of illumination characteristics. Further, the apparatus and methods are configured to determine a first set and a corresponding second set of reflection intensity features based on at least a part of the first and second images, respectively, and to determine a set of reflection intensity difference features based on an intensity difference therebetween. Additionally, the apparatus and methods are configured to classify the biometric object as a fake object or a real object based on at least one of the reflection intensity difference features.

Type: Grant
Filed: February 21, 2018
Date of Patent: May 19, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Mudit Agrawal, Akihiro Tsukada
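The two-capture comparison described above can be sketched in a few lines. This is a toy, assumed reading of the pipeline: the patent does not specify which reflection intensity features are used, so mean and variance of grayscale values stand in here, and the threshold is invented.

```python
def intensity_features(img):
    # Reflection intensity features of one capture: mean and variance of
    # grayscale values. (A stand-in feature set; the patent leaves the
    # exact features open.)
    flat = [v for row in img for v in row]
    mean = sum(flat) / len(flat)
    var = sum((v - mean) ** 2 for v in flat) / len(flat)
    return (mean, var)

def difference_features(img_a, img_b):
    # Element-wise differences between the two captures' feature sets.
    return tuple(abs(a - b) for a, b in
                 zip(intensity_features(img_a), intensity_features(img_b)))

def classify(img_a, img_b, mean_diff_threshold=10.0):
    # Intuition: a curved, live surface reflects the two illumination
    # patterns quite differently, while a flat printout reflects them
    # almost identically. Threshold value is arbitrary for illustration.
    return ("real" if difference_features(img_a, img_b)[0] > mean_diff_threshold
            else "fake")
```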
-
Patent number: 10380418

Abstract: A first image set that includes a plurality of 2D images of an eye of a user is collected. Two or more sets of corresponding pixels are generated from the plurality of 2D images. One or more 3D features of an iris of the user are extracted based on repeatable differences in reflectance among each set of corresponding pixels. A test set of 3D features for a submitted eye is generated based on a second image set, the second image set including a plurality of 2D images of the submitted eye. Based on a comparison of the one or more extracted 3D features and the test set of 3D features, an indication is made as to whether the second image set is representative of the user's eye.

Type: Grant
Filed: June 19, 2017
Date of Patent: August 13, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Mudit Agrawal, Akihiro Tsukada
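A rough sketch of the comparison step, under stated assumptions: the frames are already pixel-aligned, and the per-pixel reflectance range across the stack stands in for the patent's 3D iris features (the actual feature extraction is not specified in the abstract). The tolerance is arbitrary.

```python
def reflectance_signature(image_stack):
    # image_stack: pixel-aligned 2-D grayscale frames of the same eye taken
    # under varying illumination. The per-pixel reflectance range is a toy
    # proxy for "repeatable differences in reflectance among corresponding
    # pixels" — a flat photo shows almost none, a textured iris shows a lot.
    h, w = len(image_stack[0]), len(image_stack[0][0])
    return [[max(f[y][x] for f in image_stack) - min(f[y][x] for f in image_stack)
             for x in range(w)] for y in range(h)]

def matches(sig_enrolled, sig_test, tol=5.0):
    # Mean absolute difference between signatures, against a tolerance.
    diffs = [abs(a - b) for ra, rb in zip(sig_enrolled, sig_test)
             for a, b in zip(ra, rb)]
    return sum(diffs) / len(diffs) < tol
```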
-
Publication number: 20180365490

Abstract: A first image set that includes a plurality of 2D images of an eye of a user is collected. Two or more sets of corresponding pixels are generated from the plurality of 2D images. One or more 3D features of an iris of the user are extracted based on repeatable differences in reflectance among each set of corresponding pixels. A test set of 3D features for a submitted eye is generated based on a second image set, the second image set including a plurality of 2D images of the submitted eye. Based on a comparison of the one or more extracted 3D features and the test set of 3D features, an indication is made as to whether the second image set is representative of the user's eye.

Type: Application
Filed: June 19, 2017
Publication date: December 20, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Mudit Agrawal, Akihiro Tsukada
-
Publication number: 20180349721

Abstract: Apparatus and methods of biometric object spoof detection are configured to receive at least first and second images, including a biometric object, captured at a first and a second time in response to a first and a second incident light, respectively. The first and second incident light is emitted from at least one light source at substantially the same wavelength, but with different sets of illumination characteristics. Further, the apparatus and methods are configured to determine a first set and a corresponding second set of reflection intensity features based on at least a part of the first and second images, respectively, and to determine a set of reflection intensity difference features based on an intensity difference therebetween. Additionally, the apparatus and methods are configured to classify the biometric object as a fake object or a real object based on at least one of the reflection intensity difference features.

Type: Application
Filed: February 21, 2018
Publication date: December 6, 2018
Inventors: Mudit Agrawal, Akihiro Tsukada
-
Patent number: 10140011

Abstract: User inputs can indicate an intent of a user to target a location on a display. In order to determine a targeted point based on a user input, a computing device can receive an indication of at least one point, an indication of a width, and an indication of a height. The computing device can estimate a portion of the display based on the indication of the at least one point, the indication of the width, and the indication of the height. The computing device can also determine the targeted point based on a location of the at least one point and based on a location of a portion of one or more objects within the estimated portion of the display.

Type: Grant
Filed: August 12, 2011
Date of Patent: November 27, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jen Anderson, Eric Christian Brown, Jennifer Teed, Goran Predovic, Bruce Edward James, Fei Su, Maybelle Lippert, Mudit Agrawal
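The point-plus-width-plus-height targeting described above can be sketched as follows. The object representation and the nearest-centre tie-break are illustrative assumptions; the abstract only requires that the targeted point be determined from the reported point and from objects inside the estimated display portion.

```python
def determine_target(point, width, height, objects):
    # point: reported touch location; width/height: size of the estimated
    # contact region; objects: hypothetical name -> (x, y) centres of
    # on-screen targets. Field layout is illustrative, not from the patent.
    px, py = point
    left, top = px - width / 2, py - height / 2
    inside = {n: (x, y) for n, (x, y) in objects.items()
              if left <= x <= left + width and top <= y <= top + height}
    if not inside:
        return point  # nothing targetable nearby; keep the raw point
    # Snap to the nearest object centre inside the estimated region.
    _, best = min(inside.items(),
                  key=lambda kv: (kv[1][0] - px) ** 2 + (kv[1][1] - py) ** 2)
    return best
```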
-
Patent number: 10049272

Abstract: Examples are disclosed herein that relate to user authentication. One example provides a biometric identification system comprising an iris illuminator; an image sensor configured to capture light reflected from the irises of a user as a result of those irises being illuminated by the iris illuminator; a drive circuit configured to drive the iris illuminator in a first mode and a second mode that each cause the irises to be illuminated differently, the first and second modes thereby yielding a first mode output and a second mode output at the image sensor, respectively; and a processor configured to process at least one of the first mode output and the second mode output and, in response to such processing, select one of the first mode and the second mode for use in performing an iris authentication on the user.

Type: Grant
Filed: September 24, 2015
Date of Patent: August 14, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Mudit Agrawal, Karlton David Powell, Christopher Maurice Mei
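The mode-selection step — process the two outputs, then pick one illumination mode for authentication — might look like the sketch below. The glare metric is an assumed quality proxy (the patent does not specify the selection criterion), and the saturation cutoff is arbitrary.

```python
def glare_fraction(img, saturation=250):
    # Fraction of near-saturated pixels: a toy proxy for how unusable an
    # iris capture is (e.g. glare off eyeglasses under one illumination mode).
    flat = [v for row in img for v in row]
    return sum(v >= saturation for v in flat) / len(flat)

def select_mode(first_mode_output, second_mode_output):
    # Pick whichever illumination mode produced the cleaner capture;
    # authentication would then proceed using only that mode.
    return ("first" if glare_fraction(first_mode_output)
            <= glare_fraction(second_mode_output) else "second")
```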
-
Publication number: 20180129897

Abstract: A “Stroke Untangler” composes handwritten messages from handwritten strokes representing overlapping letters or partial letter segments that are drawn on a touchscreen device or touch-sensitive surface. These overlapping strokes are automatically untangled and then segmented and combined into one or more letters, words, or phrases. Advantageously, segmentation and composition are performed without requiring user gestures, timeouts, or other inputs to delimit characters within words, and without using handwriting recognition-based techniques to guide untangling and composing of the overlapping strokes to form characters. In other words, the user draws multiple overlapping strokes. Those strokes are then automatically segmented and combined into one or more corresponding characters. Text recognition of the resulting characters is then performed. Further, the segmentation and combination are performed in real time, thereby enabling real-time rendering of the resulting characters in a user interface window.

Type: Application
Filed: January 8, 2018
Publication date: May 10, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Kenneth Paul Hinckley, Wolf Kienzle, Mudit Agrawal
-
Patent number: 9916502

Abstract: Embodiments are disclosed for eye tracking systems and methods. An example eye tracking system comprises a plurality of light sources and a camera configured to capture an image of light from the light sources as reflected from an eye. The eye tracking system further comprises a logic device and a storage device storing instructions executable by the logic device to acquire frames of eye tracking data by iteratively projecting light from different combinations of the plurality of light sources and capturing an image of the eye during projection of each combination. The instructions may be further executable to select a combination of light sources for eye tracking based on a determination of occlusion detected in the image arising from a transparent or semi-transparent optical structure positioned between the eye and the camera, and to project light via the selected combination of light sources for eye tracking.

Type: Grant
Filed: August 22, 2016
Date of Patent: March 13, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Mudit Agrawal, Vaibhav Thukral, Ibrahim Eden, David Nister, Shivkumar Swaminathan
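The iterate-and-select loop over light-source combinations can be sketched as below. The occlusion metric is a toy stand-in (the patent does not define how occlusion is scored), and `capture` is an assumed callback representing "drive this LED combination and grab a frame".

```python
from itertools import combinations

def occlusion_score(image):
    # Toy proxy for detected occlusion: fraction of saturated pixels,
    # as glints reflected off glasses between the eye and the camera
    # tend to saturate the sensor.
    flat = [v for row in image for v in row]
    return sum(v >= 250 for v in flat) / len(flat)

def select_light_combo(capture, sources, k=2):
    # capture(combo) is assumed to drive that combination of LEDs and
    # return the resulting eye image; try each combination and keep the
    # one whose frame shows the least occlusion.
    return min(combinations(sources, k),
               key=lambda combo: occlusion_score(capture(combo)))
```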
-
Patent number: 9881224

Abstract: A “Stroke Untangler” composes handwritten messages from handwritten strokes representing overlapping letters or partial letter segments that are drawn on a touchscreen device or touch-sensitive surface. These overlapping strokes are automatically untangled and then segmented and combined into one or more letters, words, or phrases. Advantageously, segmentation and composition are performed without requiring user gestures, timeouts, or other inputs to delimit characters within words, and without using handwriting recognition-based techniques to guide untangling and composing of the overlapping strokes to form characters. In other words, the user draws multiple overlapping strokes. Those strokes are then automatically segmented and combined into one or more corresponding characters. Text recognition of the resulting characters is then performed. Further, the segmentation and combination are performed in real time, thereby enabling real-time rendering of the resulting characters in a user interface window.

Type: Grant
Filed: December 17, 2013
Date of Patent: January 30, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Wolf Kienzle, Kenneth Paul Hinckley, Mudit Agrawal
-
Publication number: 20170091548

Abstract: Examples are disclosed herein that relate to user authentication. One example provides a biometric identification system comprising an iris illuminator; an image sensor configured to capture light reflected from the irises of a user as a result of those irises being illuminated by the iris illuminator; a drive circuit configured to drive the iris illuminator in a first mode and a second mode that each cause the irises to be illuminated differently, the first and second modes thereby yielding a first mode output and a second mode output at the image sensor, respectively; and a processor configured to process at least one of the first mode output and the second mode output and, in response to such processing, select one of the first mode and the second mode for use in performing an iris authentication on the user.

Type: Application
Filed: September 24, 2015
Publication date: March 30, 2017
Inventors: Mudit Agrawal, Karlton David Powell, Christopher Maurice Mei
-
Publication number: 20160358009

Abstract: Embodiments are disclosed for eye tracking systems and methods. An example eye tracking system comprises a plurality of light sources and a camera configured to capture an image of light from the light sources as reflected from an eye. The eye tracking system further comprises a logic device and a storage device storing instructions executable by the logic device to acquire frames of eye tracking data by iteratively projecting light from different combinations of the plurality of light sources and capturing an image of the eye during projection of each combination. The instructions may be further executable to select a combination of light sources for eye tracking based on a determination of occlusion detected in the image arising from a transparent or semi-transparent optical structure positioned between the eye and the camera, and to project light via the selected combination of light sources for eye tracking.

Type: Application
Filed: August 22, 2016
Publication date: December 8, 2016
Applicant: Microsoft Technology Licensing, LLC
Inventors: Mudit Agrawal, Vaibhav Thukral, Ibrahim Eden, David Nister, Shivkumar Swaminathan
-
Patent number: 9454699

Abstract: Embodiments are disclosed for eye tracking systems and methods. An example eye tracking system comprises a plurality of light sources and a camera configured to capture an image of light from the light sources as reflected from an eye. The eye tracking system further comprises a logic device and a storage device storing instructions executable by the logic device to acquire frames of eye tracking data by iteratively projecting light from different combinations of the plurality of light sources and capturing an image of the eye during projection of each combination. The instructions may be further executable to select a combination of light sources for eye tracking based on a determination of occlusion detected in the image arising from a transparent or semi-transparent optical structure positioned between the eye and the camera, and to project light via the selected combination of light sources for eye tracking.

Type: Grant
Filed: April 29, 2014
Date of Patent: September 27, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Mudit Agrawal, Vaibhav Thukral, Ibrahim Eden, David Nister, Shivkumar Swaminathan
-
Patent number: 9405379

Abstract: Techniques for identifying inadvertent user input, such as inadvertent touch contact or air input, are described. The techniques may include classifying a touch contact or air input as intentional or unintentional based on contextual information related to the touch contact, the air input, or a device via which the touch contact or air input was received. In some examples, the contextual information may indicate how a user is interacting with the device, such as a position of the user's hand, a location of the touch contact on a touch surface, the path of the user's touch trajectory, an application with which the user may be interacting, keyboard input history of the user, and so on. When the user input is classified as unintentional, the techniques may refrain from performing an action that is generally triggered by the user input.

Type: Grant
Filed: June 13, 2013
Date of Patent: August 2, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: David Abzarian, Ross W Nichols, Fei Su, Mudit Agrawal
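A contextual classifier of the kind described might be sketched as a simple scoring rule. The field names (`near_edge`, `typing_recently`, `contact_area_mm2`) and thresholds are hypothetical examples of the cues the abstract lists, not names from the patent; a production system could equally use a trained model over the same signals.

```python
def classify_touch(contact):
    # contact: dict of contextual signals about one touch event.
    # Each cue nudges the verdict toward "unintentional".
    score = 0
    if contact.get("near_edge"):                  # palm resting at the bezel
        score += 1
    if contact.get("typing_recently"):            # stray touch mid-typing
        score += 1
    if contact.get("contact_area_mm2", 0) > 150:  # large blob, palm-like
        score += 1
    return "unintentional" if score >= 2 else "intentional"
```

When the verdict is "unintentional", the caller would simply drop the event rather than dispatch the action it would normally trigger — the "refrain from performing an action" behavior in the abstract.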
-
Patent number: 9262076

Abstract: The user experience of a computing device is improved through an operating system that processes inputs from a soft keyboard to provide information that can be used to accurately determine the keys a user intended to strike while typing. For each detected tap, the operating system provides a probability that one or more keys were the intended target for the user. These probabilities may be computed from probability distribution functions that are dynamically determined based on user and/or system factors, such as typing rate and keyboard style or layout. Other components may use the probabilities to select a key corresponding to a detected keyboard tap as representing the intended user input. The selection may be made based on the probabilities alone or in combination with contextual factors that yield an overall probability that a detected tap arose from a user targeting a specific key.

Type: Grant
Filed: September 12, 2011
Date of Patent: February 16, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Reed L. Townsend, Mudit Agrawal, Andrey Borissov Batchvarov, Fei Su
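A per-tap key distribution combined with a contextual prior can be sketched as below. The Gaussian distribution, the `sigma` value, and the way the prior is folded in are illustrative assumptions; the patent only requires some dynamically determined probability distribution function per tap.

```python
import math

def key_probabilities(tap, key_centers, sigma=12.0):
    # Gaussian fall-off with distance from each key centre. In a real
    # system sigma would be tuned per user and keyboard (e.g. widened
    # at fast typing rates); 12.0 here is an arbitrary choice.
    weights = {k: math.exp(-((tap[0] - x) ** 2 + (tap[1] - y) ** 2)
                           / (2 * sigma ** 2))
               for k, (x, y) in key_centers.items()}
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

def pick_key(tap, key_centers, context_prior=None):
    # Combine the per-tap distribution with an optional contextual prior
    # (e.g. from a language model) before choosing the intended key.
    probs = key_probabilities(tap, key_centers)
    if context_prior:
        probs = {k: p * context_prior.get(k, 1e-6) for k, p in probs.items()}
    return max(probs, key=probs.get)
```

Note how a strong enough contextual prior can override raw tap geometry, which is exactly the "probabilities alone or in combination with contextual factors" choice the abstract describes.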
-
Publication number: 20150310253

Abstract: Embodiments are disclosed for eye tracking systems and methods. An example eye tracking system comprises a plurality of light sources and a camera configured to capture an image of light from the light sources as reflected from an eye. The eye tracking system further comprises a logic device and a storage device storing instructions executable by the logic device to acquire frames of eye tracking data by iteratively projecting light from different combinations of the plurality of light sources and capturing an image of the eye during projection of each combination. The instructions may be further executable to select a combination of light sources for eye tracking based on a determination of occlusion detected in the image arising from a transparent or semi-transparent optical structure positioned between the eye and the camera, and to project light via the selected combination of light sources for eye tracking.

Type: Application
Filed: April 29, 2014
Publication date: October 29, 2015
Inventors: Mudit Agrawal, Vaibhav Thukral, Ibrahim Eden, David Nister, Shivkumar Swaminathan
-
Publication number: 20150169975

Abstract: A “Stroke Untangler” composes handwritten messages from handwritten strokes representing overlapping letters or partial letter segments that are drawn on a touchscreen device or touch-sensitive surface. These overlapping strokes are automatically untangled and then segmented and combined into one or more letters, words, or phrases. Advantageously, segmentation and composition are performed without requiring user gestures, timeouts, or other inputs to delimit characters within words, and without using handwriting recognition-based techniques to guide untangling and composing of the overlapping strokes to form characters. In other words, the user draws multiple overlapping strokes. Those strokes are then automatically segmented and combined into one or more corresponding characters. Text recognition of the resulting characters is then performed. Further, the segmentation and combination are performed in real time, thereby enabling real-time rendering of the resulting characters in a user interface window.

Type: Application
Filed: December 17, 2013
Publication date: June 18, 2015
Applicant: Microsoft Corporation
Inventors: Wolf Kienzle, Kenneth Paul Hinckley, Mudit Agrawal
-
Publication number: 20140368436

Abstract: Techniques for identifying inadvertent user input, such as inadvertent touch contact or air input, are described. The techniques may include classifying a touch contact or air input as intentional or unintentional based on contextual information related to the touch contact, the air input, or a device via which the touch contact or air input was received. In some examples, the contextual information may indicate how a user is interacting with the device, such as a position of the user's hand, a location of the touch contact on a touch surface, the path of the user's touch trajectory, an application with which the user may be interacting, keyboard input history of the user, and so on. When the user input is classified as unintentional, the techniques may refrain from performing an action that is generally triggered by the user input.

Type: Application
Filed: June 13, 2013
Publication date: December 18, 2014
Inventors: David Abzarian, Ross W. Nichols, Fei Su, Mudit Agrawal
-
Publication number: 20130067382

Abstract: The user experience of a computing device is improved through an operating system that processes inputs from a soft keyboard to provide information that can be used to accurately determine the keys a user intended to strike while typing. For each detected tap, the operating system provides a probability that one or more keys were the intended target for the user. These probabilities may be computed from probability distribution functions that are dynamically determined based on user and/or system factors, such as typing rate and keyboard style or layout. Other components may use the probabilities to select a key corresponding to a detected keyboard tap as representing the intended user input. The selection may be made based on the probabilities alone or in combination with contextual factors that yield an overall probability that a detected tap arose from a user targeting a specific key.

Type: Application
Filed: September 12, 2011
Publication date: March 14, 2013
Applicant: Microsoft Corporation
Inventors: Reed L. Townsend, Mudit Agrawal, Andrey Borissov Batchvarov, Fei Su
-
Publication number: 20130038540

Abstract: User inputs can indicate an intent of a user to target a location on a display. In order to determine a targeted point based on a user input, a computing device can receive an indication of at least one point, an indication of a width, and an indication of a height. The computing device can estimate a portion of the display based on the indication of the at least one point, the indication of the width, and the indication of the height. The computing device can also determine the targeted point based on a location of the at least one point and based on a location of a portion of one or more objects within the estimated portion of the display.

Type: Application
Filed: August 12, 2011
Publication date: February 14, 2013
Applicant: Microsoft Corporation
Inventors: Jen Anderson, Eric Christian Brown, Jennifer Teed, Goran Predovic, Bruce Edward James, Fei Su, Maybelle Lippert, Mudit Agrawal