Patents by Inventor Mudit Agrawal

Mudit Agrawal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210406578
    Abstract: A “Stroke Untangler” composes handwritten messages from handwritten strokes representing overlapping letters or partial letter segments that are drawn on a touchscreen device or touch-sensitive surface. These overlapping strokes are automatically untangled and then segmented and combined into one or more letters, words, or phrases. Advantageously, segmentation and composition are performed without requiring user gestures, timeouts, or other inputs to delimit characters within words, and without using handwriting recognition-based techniques to guide the untangling and composing of the overlapping strokes to form characters. In other words, the user draws multiple overlapping strokes, which are then automatically segmented and combined into one or more corresponding characters. Text recognition of the resulting characters is then performed. Further, segmentation and combination are performed in real time, enabling real-time rendering of the resulting characters in a user interface window. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: September 14, 2021
    Publication date: December 30, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Kenneth Paul Hinckley, Wolf Kienzle, Mudit Agrawal
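
For illustration only, the Python sketch below shows one plausible reading of the untangling step described in the entry above: time-ordered strokes are grouped into letters by a simple jump-distance heuristic and then offset left to right. The Stroke class, the threshold, and the heuristic itself are assumptions made for this sketch; the abstract does not disclose the actual segmentation rule.

```python
# Toy illustration of stroke untangling (not the patented algorithm).
import math
from dataclasses import dataclass

@dataclass
class Stroke:
    points: list  # (x, y) tuples in drawing order

def segment_strokes(strokes, jump_threshold=40.0):
    """Group time-ordered overlapping strokes into per-letter clusters.
    Assumption: within one letter, a stroke starts near where the
    previous stroke ended; a large jump back signals a new letter
    drawn on top of the previous one."""
    letters, current = [], []
    for stroke in strokes:
        if current:
            px, py = current[-1].points[-1]
            sx, sy = stroke.points[0]
            if math.hypot(sx - px, sy - py) > jump_threshold:
                letters.append(current)
                current = []
        current.append(stroke)
    if current:
        letters.append(current)
    return letters

def untangle(letters, spacing=60.0):
    """Lay the segmented letters out left to right, turning the
    overlapped input into a readable sequence."""
    return [[Stroke([(x + i * spacing, y) for x, y in s.points])
             for s in letter]
            for i, letter in enumerate(letters)]

if __name__ == "__main__":
    # Two letters drawn in the same screen region: an 'L', then a 'T'.
    raw = [
        Stroke([(0, 0), (0, 40), (20, 40)]),  # 'L'
        Stroke([(0, 0), (20, 0)]),            # 'T' crossbar (big jump back)
        Stroke([(10, 0), (10, 40)]),          # 'T' stem (starts nearby)
    ]
    letters = segment_strokes(raw)
    print(len(letters))                        # -> 2
    print(untangle(letters)[1][0].points[0])   # -> (60.0, 0)
```

Note that nothing here recognizes characters; as the abstract stresses, recognition runs only after the strokes have been untangled.
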
  • Patent number: 10657401
    Abstract: Apparatus and methods of biometric object spoof detection are configured to receive at least first and second images, including a biometric object, captured at a first and a second time in response to first and second incident light, respectively. The first and second incident light is emitted from at least one light source at substantially the same wavelength, but with different sets of illumination characteristics. Further, the apparatus and methods are configured to determine a first set and a corresponding second set of reflection intensity features, based on at least a part of the first and second images respectively, and to determine a set of reflection intensity difference features based on the intensity difference between them. Additionally, the apparatus and methods are configured to classify the biometric object as a fake object or a real object based on at least one of the reflection intensity difference features. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: February 21, 2018
    Date of Patent: May 19, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mudit Agrawal, Akihiro Tsukada
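
The reflection-difference mechanism above can be sketched in a few lines. In the hedged example below, the mean/standard-deviation features, the fixed threshold, and the function names (reflection_features, classify) are all invented for illustration; the patent does not disclose this particular classifier.

```python
# Minimal sketch of reflection-intensity-difference spoof detection.
import numpy as np

def reflection_features(image):
    # Per-image reflection intensity features: simply the mean and
    # standard deviation of pixel intensity (an assumption).
    return np.array([image.mean(), image.std()])

def difference_features(img_a, img_b):
    # Features of the *difference* in reflection intensity between two
    # captures at the same wavelength but different illumination settings.
    return np.abs(reflection_features(img_a) - reflection_features(img_b))

def classify(img_a, img_b, threshold=5.0):
    """Real tissue responds differently to the two illumination settings
    than a flat reproduction does; a fake tends to show a smaller
    intensity change. The threshold is made up for this demo."""
    diff = difference_features(img_a, img_b)
    return "real" if diff[0] > threshold else "fake"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    live_a = rng.normal(120, 10, (64, 64))   # capture 1: diffuse light
    live_b = rng.normal(100, 10, (64, 64))   # capture 2: oblique light
    photo_a = rng.normal(110, 10, (64, 64))  # a printout reflects both
    photo_b = rng.normal(109, 10, (64, 64))  # settings almost equally
    print(classify(live_a, live_b))    # -> real
    print(classify(photo_a, photo_b))  # -> fake
```
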
  • Patent number: 10380418
    Abstract: A first image set that includes a plurality of 2D images of an eye of a user is collected. Two or more sets of corresponding pixels are generated from the plurality of 2D images. One or more 3D features of an iris of the user are extracted based on repeatable differences in reflectance among each set of corresponding pixels. A test set of 3D features for a submitted eye is generated based on a second image set, the second image set including a plurality of 2D images of the submitted eye. Based on a comparison of the one or more extracted 3D features and the test set of 3D features, an indication is made as to whether the second image set is representative of the user's eye. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: June 19, 2017
    Date of Patent: August 13, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mudit Agrawal, Akihiro Tsukada
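
A hedged sketch of the comparison step follows. Treating each pixel's cross-capture intensity profile as the "3D feature", and matching by correlation with a 0.9 cutoff, are illustrative simplifications; the abstract does not specify the feature representation or the comparison rule.

```python
# Sketch: compare enrollment and test image sets by reflectance
# differences across corresponding pixels (illustrative only).
import numpy as np

def extract_features(image_set):
    # image_set: (n_images, h, w) array of aligned 2D captures.
    # The differences in reflectance across captures at each pixel act
    # as a stand-in for the iris's 3D structure.
    stack = np.asarray(image_set, dtype=float)
    return np.diff(stack, axis=0)  # (n_images - 1, h, w) difference maps

def match(enrolled, submitted, threshold=0.9):
    """Return True if the submitted eye's reflectance-difference
    features correlate strongly with the enrolled ones. The cutoff
    is an arbitrary demo value."""
    a = extract_features(enrolled).ravel()
    b = extract_features(submitted).ravel()
    return np.corrcoef(a, b)[0, 1] > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    base = rng.normal(100, 20, (4, 32, 32))          # enrollment captures
    same_eye = base + rng.normal(0, 1, base.shape)   # noisy re-capture
    other_eye = rng.normal(100, 20, base.shape)      # a different eye
    print(match(base, same_eye))   # -> True
    print(match(base, other_eye))  # -> False
```
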
  • Publication number: 20180365490
    Abstract: A first image set that includes a plurality of 2D images of an eye of a user is collected. Two or more sets of corresponding pixels are generated from the plurality of 2D images. One or more 3D features of an iris of the user are extracted based on repeatable differences in reflectance among each set of corresponding pixels. A test set of 3D features for a submitted eye is generated based on a second image set, the second image set including a plurality of 2D images of the submitted eye. Based on a comparison of the one or more extracted 3D features and the test set of 3D features, an indication is made as to whether the second image set is representative of the user's eye.
    Type: Application
    Filed: June 19, 2017
    Publication date: December 20, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Mudit Agrawal, Akihiro Tsukada
  • Publication number: 20180349721
    Abstract: Apparatus and methods of biometric object spoof detection are configured to receive at least first and second images, including a biometric object, captured at a first and a second time in response to first and second incident light, respectively. The first and second incident light is emitted from at least one light source at substantially the same wavelength, but with different sets of illumination characteristics. Further, the apparatus and methods are configured to determine a first set and a corresponding second set of reflection intensity features, based on at least a part of the first and second images respectively, and to determine a set of reflection intensity difference features based on the intensity difference between them. Additionally, the apparatus and methods are configured to classify the biometric object as a fake object or a real object based on at least one of the reflection intensity difference features.
    Type: Application
    Filed: February 21, 2018
    Publication date: December 6, 2018
    Inventors: Mudit Agrawal, Akihiro Tsukada
  • Patent number: 10140011
    Abstract: User inputs can indicate a user's intent to target a location on a display. To determine the targeted point from a user input, a computing device can receive an indication of at least one point, an indication of a width, and an indication of a height. The computing device can estimate a portion of the display based on those indications, and can then determine the targeted point based on the location of the at least one point and on the locations of portions of one or more objects within the estimated portion of the display. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: August 12, 2011
    Date of Patent: November 27, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jen Anderson, Eric Christian Brown, Jennifer Teed, Goran Predovic, Bruce Edward James, Fei Su, Maybelle Lippert, Mudit Agrawal
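
A toy version of this touch-targeting idea appears below: the reported point plus the contact width and height define a region, and the target snaps to a hit-testable object inside that region. Rectangles as objects and the nearest-center tie-break are assumptions for the sketch, not the claimed method.

```python
# Toy touch-target resolution from point + contact width/height.
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def center(self):
        return (self.x + self.w / 2, self.y + self.h / 2)

    def intersects(self, other):
        return not (self.x + self.w < other.x or other.x + other.w < self.x or
                    self.y + self.h < other.y or other.y + other.h < self.y)

def targeted_point(point, width, height, objects):
    # Estimate the display portion covered by the contact.
    px, py = point
    region = Rect(px - width / 2, py - height / 2, width, height)
    # Among objects overlapping that region, pick the one whose center
    # is closest to the reported point; fall back to the raw point.
    hits = [o for o in objects if o.intersects(region)]
    if not hits:
        return point
    best = min(hits, key=lambda o: (o.center()[0] - px) ** 2 +
                                   (o.center()[1] - py) ** 2)
    return best.center()

if __name__ == "__main__":
    buttons = [Rect(0, 0, 40, 40), Rect(100, 0, 40, 40)]
    # A fat-finger contact between the buttons, nearer the first one:
    print(targeted_point((45, 20), 30, 30, buttons))  # -> (20.0, 20.0)
```
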
  • Patent number: 10049272
    Abstract: Examples are disclosed herein that relate to user authentication. One example provides a biometric identification system comprising an iris illuminator; an image sensor configured to capture light reflected from the irises of a user as a result of those irises being illuminated by the iris illuminator; a drive circuit configured to drive the iris illuminator in a first mode and a second mode that each cause the irises to be illuminated differently, thereby yielding a first mode output and a second mode output at the image sensor, respectively; and a processor configured to process at least one of the first mode output and the second mode output and, in response to such processing, select one of the first mode and the second mode for use in performing iris authentication on the user. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: September 24, 2015
    Date of Patent: August 14, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mudit Agrawal, Karlton David Powell, Christopher Maurice Mei
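
The mode-selection logic above can be illustrated with a short sketch: capture one frame per drive mode, score each frame, and authenticate with the better-scoring mode. The glare/underexposure quality metric and the frame_quality and select_mode names are assumptions for this demo.

```python
# Sketch: pick the illumination mode whose capture looks most usable.
import numpy as np

def frame_quality(frame, glare_level=250, dark_level=10):
    # Penalize saturated (glare) and underexposed pixels; the usable
    # fraction of the frame stands in for capture quality.
    usable = (frame > dark_level) & (frame < glare_level)
    return usable.mean()

def select_mode(mode_outputs):
    """mode_outputs: dict mapping mode name -> captured frame.
    Returns the mode whose frame scores highest for matching."""
    return max(mode_outputs, key=lambda m: frame_quality(mode_outputs[m]))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    frame_a = rng.integers(0, 256, (48, 64))   # mode A: heavy glare
    frame_a[:10] = 255                         # saturated band
    frame_b = rng.integers(20, 240, (48, 64))  # mode B: well exposed
    print(select_mode({"mode_a": frame_a, "mode_b": frame_b}))  # -> mode_b
```
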
  • Publication number: 20180129897
    Abstract: A “Stroke Untangler” composes handwritten messages from handwritten strokes representing overlapping letters or partial letter segments that are drawn on a touchscreen device or touch-sensitive surface. These overlapping strokes are automatically untangled and then segmented and combined into one or more letters, words, or phrases. Advantageously, segmentation and composition are performed without requiring user gestures, timeouts, or other inputs to delimit characters within words, and without using handwriting recognition-based techniques to guide the untangling and composing of the overlapping strokes to form characters. In other words, the user draws multiple overlapping strokes, which are then automatically segmented and combined into one or more corresponding characters. Text recognition of the resulting characters is then performed. Further, segmentation and combination are performed in real time, enabling real-time rendering of the resulting characters in a user interface window.
    Type: Application
    Filed: January 8, 2018
    Publication date: May 10, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Kenneth Paul Hinckley, Wolf Kienzle, Mudit Agrawal
  • Patent number: 9916502
    Abstract: Embodiments are disclosed for eye tracking systems and methods. An example eye tracking system comprises a plurality of light sources and a camera configured to capture an image of light from the light sources as reflected from an eye. The eye tracking system further comprises a logic device and a storage device storing instructions executable by the logic device to acquire frames of eye tracking data by iteratively projecting light from different combinations of the plurality of light sources and capturing an image of the eye during projection of each combination. The instructions may be further executable to select a combination of light sources for eye tracking based on a determination of occlusion detected in the image, arising from a transparent or semi-transparent optical structure positioned between the eye and the camera, and to project light via the selected combination of light sources for eye tracking. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: March 13, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mudit Agrawal, Vaibhav Thukral, Ibrahim Eden, David Nister, Shivkumar Swaminathan
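
The combination search above lends itself to a short sketch: try each LED subset, measure how much of the eye image is washed out by stray reflections (for example from glasses between camera and eye), and keep the cleanest subset. The saturation-based occlusion measure and the simulated capture function are illustrative assumptions.

```python
# Sketch: choose the LED combination with the least occlusion.
from itertools import combinations
import numpy as np

def occlusion_score(image, saturation=250):
    # Fraction of saturated pixels: large specular blobs from glasses
    # wash out the glints needed for gaze estimation.
    return (np.asarray(image) >= saturation).mean()

def select_led_combination(leds, capture, min_leds=2):
    """Iterate over LED subsets, capture a frame for each (via the
    caller-supplied capture(combo) function), and return the subset
    with the least detected occlusion."""
    best_combo, best_score = None, float("inf")
    for r in range(min_leds, len(leds) + 1):
        for combo in combinations(leds, r):
            score = occlusion_score(capture(combo))
            if score < best_score:
                best_combo, best_score = combo, score
    return best_combo

if __name__ == "__main__":
    rng = np.random.default_rng(3)

    def fake_capture(combo):
        # Simulated camera: LED 0 bounces off the glasses and floods
        # part of the frame with saturated pixels.
        frame = rng.integers(0, 200, (32, 32))
        if 0 in combo:
            frame[:8, :8] = 255
        return frame

    print(select_led_combination([0, 1, 2, 3], fake_capture))
    # -> a combination avoiding LED 0, e.g. (1, 2)
```
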
  • Patent number: 9881224
    Abstract: A “Stroke Untangler” composes handwritten messages from handwritten strokes representing overlapping letters or partial letter segments that are drawn on a touchscreen device or touch-sensitive surface. These overlapping strokes are automatically untangled and then segmented and combined into one or more letters, words, or phrases. Advantageously, segmentation and composition are performed without requiring user gestures, timeouts, or other inputs to delimit characters within words, and without using handwriting recognition-based techniques to guide the untangling and composing of the overlapping strokes to form characters. In other words, the user draws multiple overlapping strokes, which are then automatically segmented and combined into one or more corresponding characters. Text recognition of the resulting characters is then performed. Further, segmentation and combination are performed in real time, enabling real-time rendering of the resulting characters in a user interface window.
    Type: Grant
    Filed: December 17, 2013
    Date of Patent: January 30, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Wolf Kienzle, Kenneth Paul Hinckley, Mudit Agrawal
  • Publication number: 20170091548
    Abstract: Examples are disclosed herein that relate to user authentication. One example provides a biometric identification system comprising an iris illuminator; an image sensor configured to capture light reflected from the irises of a user as a result of those irises being illuminated by the iris illuminator; a drive circuit configured to drive the iris illuminator in a first mode and a second mode that each cause the irises to be illuminated differently, thereby yielding a first mode output and a second mode output at the image sensor, respectively; and a processor configured to process at least one of the first mode output and the second mode output and, in response to such processing, select one of the first mode and the second mode for use in performing iris authentication on the user.
    Type: Application
    Filed: September 24, 2015
    Publication date: March 30, 2017
    Inventors: Mudit Agrawal, Karlton David Powell, Christopher Maurice Mei
  • Publication number: 20160358009
    Abstract: Embodiments are disclosed for eye tracking systems and methods. An example eye tracking system comprises a plurality of light sources and a camera configured to capture an image of light from the light sources as reflected from an eye. The eye tracking system further comprises a logic device and a storage device storing instructions executable by the logic device to acquire frames of eye tracking data by iteratively projecting light from different combinations of the plurality of light sources and capturing an image of the eye during projection of each combination. The instructions may be further executable to select a combination of light sources for eye tracking based on a determination of occlusion detected in the image, arising from a transparent or semi-transparent optical structure positioned between the eye and the camera, and to project light via the selected combination of light sources for eye tracking.
    Type: Application
    Filed: August 22, 2016
    Publication date: December 8, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Mudit Agrawal, Vaibhav Thukral, Ibrahim Eden, David Nister, Shivkumar Swaminathan
  • Patent number: 9454699
    Abstract: Embodiments are disclosed for eye tracking systems and methods. An example eye tracking system comprises a plurality of light sources and a camera configured to capture an image of light from the light sources as reflected from an eye. The eye tracking system further comprises a logic device and a storage device storing instructions executable by the logic device to acquire frames of eye tracking data by iteratively projecting light from different combinations of the plurality of light sources and capturing an image of the eye during projection of each combination. The instructions may be further executable to select a combination of light sources for eye tracking based on a determination of occlusion detected in the image, arising from a transparent or semi-transparent optical structure positioned between the eye and the camera, and to project light via the selected combination of light sources for eye tracking.
    Type: Grant
    Filed: April 29, 2014
    Date of Patent: September 27, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mudit Agrawal, Vaibhav Thukral, Ibrahim Eden, David Nister, Shivkumar Swaminathan
  • Patent number: 9405379
    Abstract: Techniques for identifying inadvertent user input, such as inadvertent touch contact or air input, are described. The techniques may include classifying a touch contact or air input as intentional or unintentional based on contextual information related to the touch contact, the air input, or the device via which the input was received. In some examples, the contextual information may indicate how a user is interacting with the device, such as the position of the user's hand, the location of the touch contact on a touch surface, the path of the user's touch trajectory, the application with which the user may be interacting, the user's keyboard input history, and so on. When the user input is classified as unintentional, the techniques may refrain from performing the action that the input would otherwise trigger. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: June 13, 2013
    Date of Patent: August 2, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David Abzarian, Ross W. Nichols, Fei Su, Mudit Agrawal
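
In the spirit of the entry above, the toy classifier below combines contextual signals about a touch into an intentionality decision and suppresses unintentional contacts. The particular features, weights, and cutoff are invented for this sketch; the patent does not disclose a specific scoring scheme here.

```python
# Toy intentional-vs-inadvertent touch classifier (illustrative only).
from dataclasses import dataclass

@dataclass
class TouchContext:
    near_screen_edge: bool  # palm contacts cluster near edges
    contact_area: float     # cm^2; palms are large, fingertips small
    typing_burst: bool      # user was actively typing on a keyboard
    moving_fast: bool       # part of a deliberate swipe trajectory

def is_intentional(ctx, cutoff=0.5):
    # Weighted vote over contextual evidence (weights are assumptions).
    score = 1.0
    if ctx.near_screen_edge:
        score -= 0.3
    if ctx.contact_area > 2.0:
        score -= 0.4
    if ctx.typing_burst:
        score -= 0.2  # stray thumb touches are common while typing
    if ctx.moving_fast:
        score += 0.2
    return score >= cutoff

def handle_touch(ctx, action):
    # Refrain from triggering the action for unintentional input.
    if is_intentional(ctx):
        action()
    # else: silently ignore the contact

if __name__ == "__main__":
    palm = TouchContext(near_screen_edge=True, contact_area=4.0,
                        typing_burst=True, moving_fast=False)
    tap = TouchContext(near_screen_edge=False, contact_area=0.8,
                       typing_burst=False, moving_fast=False)
    handle_touch(palm, lambda: print("palm triggered"))  # no output
    handle_touch(tap, lambda: print("tap triggered"))    # tap triggered
```
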
  • Patent number: 9262076
    Abstract: The user experience of a computing device is improved through an operating system that processes input from a soft keyboard to provide information that can be used to accurately determine the keys a user intended to strike while typing. For each detected tap, the operating system provides a probability that one or more keys were the intended target. These probabilities may be computed from probability distribution functions that are dynamically determined based on user and/or system factors, such as typing rate and keyboard style or layout. Other components may use the probabilities to select the key corresponding to a detected tap as the intended user input. The selection may be made from the probabilities alone or in combination with contextual factors that yield an overall probability that a detected tap arose from the user targeting a specific key. (An illustrative sketch follows this entry.)
    Type: Grant
    Filed: September 12, 2011
    Date of Patent: February 16, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Reed L. Townsend, Mudit Agrawal, Andrey Borissov Batchvarov, Fei Su
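
The per-tap probabilities described above can be sketched with a 2D Gaussian around each key center whose spread widens at faster typing rates, combined with a contextual prior. The three-key layout, the sigma scaling, and the priors are demo assumptions, not the patented distributions.

```python
# Sketch: per-tap key probabilities plus contextual priors.
import math

KEY_CENTERS = {"q": (0, 0), "w": (40, 0), "e": (80, 0)}  # toy layout, px

def tap_likelihoods(tap, typing_rate_cps, base_sigma=15.0):
    # Faster typing -> sloppier taps -> a wider probability distribution.
    sigma = base_sigma * (1.0 + 0.2 * typing_rate_cps)
    weights = {}
    for key, (cx, cy) in KEY_CENTERS.items():
        d2 = (tap[0] - cx) ** 2 + (tap[1] - cy) ** 2
        weights[key] = math.exp(-d2 / (2 * sigma * sigma))
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

def pick_key(tap, typing_rate_cps, context_prior):
    """Combine the tap's geometric probabilities with contextual
    priors (e.g. from typed history) into an overall probability."""
    likes = tap_likelihoods(tap, typing_rate_cps)
    scores = {k: likes[k] * context_prior.get(k, 1.0) for k in likes}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    tap = (18, 0)  # landed between 'q' and 'w', slightly nearer 'q'
    # Context says 'w' is far more plausible as the next character:
    prior = {"q": 0.05, "w": 0.9, "e": 0.05}
    print(pick_key(tap, typing_rate_cps=6.0, context_prior=prior))  # -> w
```

The point of the design, as the abstract notes, is that the operating system exposes probabilities per key rather than committing to a single hit-tested key, so downstream components can fold in their own context.
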
  • Publication number: 20150310253
    Abstract: Embodiments are disclosed for eye tracking systems and methods. An example eye tracking system comprises a plurality of light sources and a camera configured to capture an image of light from the light sources as reflected from an eye. The eye tracking system further comprises a logic device and a storage device storing instructions executable by the logic device to acquire frames of eye tracking data by iteratively projecting light from different combinations of the plurality of light sources and capturing an image of the eye during projection of each combination. The instructions may be further executable to select a combination of light sources for eye tracking based on a determination of occlusion detected in the image, arising from a transparent or semi-transparent optical structure positioned between the eye and the camera, and to project light via the selected combination of light sources for eye tracking.
    Type: Application
    Filed: April 29, 2014
    Publication date: October 29, 2015
    Inventors: Mudit Agrawal, Vaibhav Thukral, Ibrahim Eden, David Nister, Shivkumar Swaminathan
  • Publication number: 20150169975
    Abstract: A “Stroke Untangler” composes handwritten messages from handwritten strokes representing overlapping letters or partial letter segments that are drawn on a touchscreen device or touch-sensitive surface. These overlapping strokes are automatically untangled and then segmented and combined into one or more letters, words, or phrases. Advantageously, segmentation and composition are performed without requiring user gestures, timeouts, or other inputs to delimit characters within words, and without using handwriting recognition-based techniques to guide the untangling and composing of the overlapping strokes to form characters. In other words, the user draws multiple overlapping strokes, which are then automatically segmented and combined into one or more corresponding characters. Text recognition of the resulting characters is then performed. Further, segmentation and combination are performed in real time, enabling real-time rendering of the resulting characters in a user interface window.
    Type: Application
    Filed: December 17, 2013
    Publication date: June 18, 2015
    Applicant: Microsoft Corporation
    Inventors: Wolf Kienzle, Kenneth Paul Hinckley, Mudit Agrawal
  • Publication number: 20140368436
    Abstract: Techniques for identifying inadvertent user input, such as inadvertent touch contact or air input, are described. The techniques may include classifying a touch contact or air input as intentional or unintentional based on contextual information related to the touch contact, the air input, or the device via which the input was received. In some examples, the contextual information may indicate how a user is interacting with the device, such as the position of the user's hand, the location of the touch contact on a touch surface, the path of the user's touch trajectory, the application with which the user may be interacting, the user's keyboard input history, and so on. When the user input is classified as unintentional, the techniques may refrain from performing the action that the input would otherwise trigger.
    Type: Application
    Filed: June 13, 2013
    Publication date: December 18, 2014
    Inventors: David Abzarian, Ross W. Nichols, Fei Su, Mudit Agrawal
  • Publication number: 20130067382
    Abstract: The user experience of a computing device is improved through an operating system that processes input from a soft keyboard to provide information that can be used to accurately determine the keys a user intended to strike while typing. For each detected tap, the operating system provides a probability that one or more keys were the intended target. These probabilities may be computed from probability distribution functions that are dynamically determined based on user and/or system factors, such as typing rate and keyboard style or layout. Other components may use the probabilities to select the key corresponding to a detected tap as the intended user input. The selection may be made from the probabilities alone or in combination with contextual factors that yield an overall probability that a detected tap arose from the user targeting a specific key.
    Type: Application
    Filed: September 12, 2011
    Publication date: March 14, 2013
    Applicant: Microsoft Corporation
    Inventors: Reed L. Townsend, Mudit Agrawal, Andrey Borissov Batchvarov, Fei Su
  • Publication number: 20130038540
    Abstract: User inputs can indicate a user's intent to target a location on a display. To determine the targeted point from a user input, a computing device can receive an indication of at least one point, an indication of a width, and an indication of a height. The computing device can estimate a portion of the display based on those indications, and can then determine the targeted point based on the location of the at least one point and on the locations of portions of one or more objects within the estimated portion of the display.
    Type: Application
    Filed: August 12, 2011
    Publication date: February 14, 2013
    Applicant: Microsoft Corporation
    Inventors: Jen Anderson, Eric Christian Brown, Jennifer Teed, Goran Predovic, Bruce Edward James, Fei Su, Maybelle Lippert, Mudit Agrawal