Patents by Inventor Andrew David Wilson
Andrew David Wilson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11429272
Abstract: A multi-factor probabilistic model evaluates user input to determine if the user input was intended for an on-screen user interface control. When user input is received, a probability is computed that the user input was intended for each on-screen user interface control. The user input is then associated with the user interface control that has the highest computed probability. The probability that user input was intended for each user interface control may be computed utilizing a multitude of factors, including the probability that the user input is near each user interface control, the probability that the motion of the user input is consistent with the user interface control, the probability that the shape of the user input is consistent with the user interface control, and the probability that the size of the user input is consistent with the user interface control.
Type: Grant
Filed: March 26, 2010
Date of Patent: August 30, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventor: Andrew David Wilson
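The abstract describes scoring every on-screen control against several independent factors and routing the input to the highest-scoring control. A minimal sketch of that idea, assuming hypothetical control fields (`x`, `y`, `radius`) and per-control fit functions that stand in for the motion, shape, and size factors (none of these names come from the patent):

```python
import math

def select_target(touch, controls):
    """Score each on-screen control and return the most probable target.

    Each factor is a probability in [0, 1]; multiplying them mirrors the
    multi-factor combination the abstract describes. Field names here are
    illustrative assumptions.
    """
    def proximity(touch, control):
        # Gaussian falloff with distance from the control's center.
        dx = touch["x"] - control["x"]
        dy = touch["y"] - control["y"]
        return math.exp(-(dx * dx + dy * dy) / (2.0 * control["radius"] ** 2))

    best, best_p = None, 0.0
    for control in controls:
        # Combine the independent factor probabilities for this control.
        p = (proximity(touch, control)
             * control["motion_fit"](touch)
             * control["shape_fit"](touch)
             * control["size_fit"](touch))
        if p > best_p:
            best, best_p = control, p
    return best
```

A touch landing near one of two buttons is then attributed to the nearer one, even though it did not land exactly on either.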
-
Patent number: 11221669
Abstract: Systems and methods related to engaging with a virtual assistant via ancillary input are provided. Ancillary input may refer to non-verbal, non-tactile input based on eye-gaze data and/or eye-gaze attributes, including but not limited to facial recognition data, motion or gesture detection, eye-contact data, head-pose or head-position data, and the like. Thus, to initiate and/or maintain interaction with a virtual assistant, a user need not articulate an attention word or words. Rather, the user may initiate and/or maintain interaction with a virtual assistant more naturally and may even include the virtual assistant in a human conversation with multiple speakers. The virtual assistant engagement system may utilize at least one machine-learning algorithm to more accurately determine whether a user desires to engage with and/or maintain interaction with a virtual assistant. Various hardware configurations associated with a virtual assistant device may allow for both near-field and far-field engagement.
Type: Grant
Filed: December 20, 2017
Date of Patent: January 11, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ryen William White, Andrew David Wilson, Gregg Robert Wygonik, Nirupama Chandrasekaran, Sean Edward Andrist
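The abstract says a machine-learning algorithm decides whether the user wants to engage the assistant from ancillary (non-verbal, non-tactile) signals. One plausible shape for such a decision is a logistic model over gaze and head-pose features; the feature names and weights below are illustrative assumptions, not details from the patent:

```python
import math

def engagement_score(features, weights, bias=0.0):
    """Combine ancillary-input features (eye contact, head pose toward the
    device, etc.) into an engagement probability with a logistic model.

    `features` and `weights` are dicts keyed by feature name; the model form
    is a sketch of "at least one machine-learning algorithm", not the
    patent's actual classifier.
    """
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))
```

With all weights at zero the score is 0.5; positive evidence such as sustained eye contact pushes it toward 1, at which point the assistant could begin or continue listening without an attention word.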
-
Publication number: 20190324528
Abstract: A head-mounted display device provides offset adjustments for gaze points provided by an eye-tracking component. In a model generation phase, heuristics are used to estimate a gaze point of the user based on the gaze point provided by the eye-tracking component and features that are visible in the field of view of the user. The features may include objects, edges, faces, and text. If the estimated gaze point is different than the gaze point that was provided by the eye-tracking component, the difference is used to train a model along with a confidence value that reflects the strength of the estimated gaze point. In an adjustment phase, when the user is using an application that relies on the eye-tracking component, the generated model is used to determine offsets to adjust the gaze points that are provided by the eye-tracking component.
Type: Application
Filed: April 20, 2018
Publication date: October 24, 2019
Inventors: Shane Frandon Williams, James Tichenor, Sophie Stellmach, Andrew David Wilson
-
Patent number: 10409381
Abstract: Aspects relate to detecting gestures that relate to a desired action, wherein the detected gestures are common across users and/or devices within a surface computing environment. Inferred intentions and goals based on context, history, affordances, and objects are employed to interpret gestures. Where there is uncertainty in the intention of the gestures for a single device or across multiple devices, independent or coordinated communication of uncertainty, or engagement of users through signaling and/or information gathering, can occur.
Type: Grant
Filed: August 10, 2015
Date of Patent: September 10, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Meredith June Morris, Eric J. Horvitz, Andrew David Wilson, F. David Jones, Stephen E. Hodges, Kenneth P. Hinckley, David Alexander Butler, Ian M. Sands, V. Kevin Russ, Hrvoje Benko, Shawn R. LeProwse, Shahram Izadi, William Ben Kunz
-
Publication number: 20190187787
Abstract: Systems and methods related to engaging with a virtual assistant via ancillary input are provided. Ancillary input may refer to non-verbal, non-tactile input based on eye-gaze data and/or eye-gaze attributes, including but not limited to facial recognition data, motion or gesture detection, eye-contact data, head-pose or head-position data, and the like. Thus, to initiate and/or maintain interaction with a virtual assistant, a user need not articulate an attention word or words. Rather, the user may initiate and/or maintain interaction with a virtual assistant more naturally and may even include the virtual assistant in a human conversation with multiple speakers. The virtual assistant engagement system may utilize at least one machine-learning algorithm to more accurately determine whether a user desires to engage with and/or maintain interaction with a virtual assistant. Various hardware configurations associated with a virtual assistant device may allow for both near-field and far-field engagement.
Type: Application
Filed: December 20, 2017
Publication date: June 20, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Ryen William White, Andrew David Wilson, Gregg Robert Wygonik, Nirupama Chandrasekaran, Sean Edward Andrist
-
Patent number: 9652042
Abstract: Architecture for implementing a perceptual user interface. The architecture comprises alternative modalities for controlling computer application programs and manipulating on-screen objects through hand gestures or a combination of hand gestures and verbal commands. The perceptual user interface system includes a tracking component that detects object characteristics of at least one of a plurality of objects within a scene and tracks the respective object. Detection of object characteristics is based at least in part upon image comparison of a plurality of images relative to a coarse mapping of the images. A seeding component iteratively seeds the tracking component with object hypotheses based upon the presence of the object characteristics and the image comparison. A filtering component selectively removes the tracked object from the object hypotheses and/or at least one object hypothesis from the set of object hypotheses based upon predetermined removal criteria.
Type: Grant
Filed: February 12, 2010
Date of Patent: May 16, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Andrew David Wilson, Nuria M. Oliver
-
Patent number: 9613261
Abstract: Three-dimensional (3-D) spatial image data may be received that is associated with at least one arm motion of an actor based on free-form movements of at least one hand of the actor, based on natural gesture motions of the at least one hand. A plurality of sequential 3-D spatial representations that each include 3-D spatial map data corresponding to a 3-D posture and position of the hand at sequential instances of time during the free-form movements may be determined, based on the received 3-D spatial image data. An integrated 3-D model may be generated, via a spatial object processor, based on incrementally integrating the 3-D spatial map data included in the determined sequential 3-D spatial representations and comparing a threshold time value with model time values indicating numbers of instances of time spent by the hand occupying a plurality of 3-D spatial regions during the free-form movements.
Type: Grant
Filed: August 11, 2014
Date of Patent: April 4, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Andrew David Wilson, Christian Holz
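The core of this abstract is incremental integration: count how long the hand occupies each 3-D spatial region across sequential frames, then keep regions whose dwell time exceeds a threshold. A minimal sketch, assuming each frame is represented as a set of occupied voxel indices (a simplification of the patent's 3-D spatial map data):

```python
def integrate_occupancy(frames, threshold):
    """Accumulate, per 3-D grid cell, the number of time instances the hand
    occupied that cell, then keep cells whose dwell count meets a threshold.

    `frames` is a sequence of sets of (i, j, k) voxel indices occupied at
    each instance of time -- an illustrative stand-in for the sequential
    3-D spatial representations the abstract describes.
    """
    counts = {}
    for occupied in frames:
        for voxel in occupied:
            counts[voxel] = counts.get(voxel, 0) + 1
    # The integrated model keeps only regions the hand dwelt in long enough,
    # which suppresses transit motion and preserves deliberately traced shape.
    return {voxel for voxel, n in counts.items() if n >= threshold}
```

Raising the threshold trims cells the hand merely passed through, leaving the shape the actor deliberately traced.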
-
Patent number: 9509981
Abstract: Architecture that combines multiple depth cameras and multiple projectors to cover a specified space (e.g., a room). The cameras and projectors are calibrated, allowing the development of a multi-dimensional (e.g., 3D) model of the objects in the space, as well as the ability to project graphics in a controlled fashion on the same objects. The architecture incorporates the depth data from all depth cameras, as well as color information, into a unified multi-dimensional model in combination with calibrated projectors. In order to provide visual continuity when transferring objects between different locations in the space, the user's body can provide a canvas on which to project this interaction. As the user moves body parts in the space, without any other object, the body parts can serve as temporary “screens” for “in-transit” data.
Type: Grant
Filed: May 19, 2014
Date of Patent: November 29, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Andrew David Wilson, Hrvoje Benko
-
Patent number: 9430093
Abstract: One or more techniques and/or systems are provided for monitoring interactions by an input object with an interactive interface projected onto an interface object. That is, an input object (e.g., a finger) and an interface object (e.g., a wall, a hand, a notepad, etc.) may be identified and tracked in real-time using depth data (e.g., depth data extracted from images captured by a depth camera). An interactive interface (e.g., a calculator, an email program, a keyboard, etc.) may be projected onto the interface object, such that the input object may be used to interact with the interactive interface. For example, the input object may be tracked to determine whether the input object is touching or hovering above the interface object and/or a projected portion of the interactive interface. If the input object is in a touch state, then a corresponding event associated with the interactive interface may be invoked.
Type: Grant
Filed: December 31, 2013
Date of Patent: August 30, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Chris Harrison, Hrvoje Benko, Andrew David Wilson
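Distinguishing touch from hover with a depth camera typically reduces to comparing the input object's depth with the interface object's surface depth at the same location. A simplified sketch of that classification step; the 8 mm threshold and function names are illustrative assumptions, not values from the patent:

```python
def touch_state(input_depth_mm, surface_depth_mm, touch_eps_mm=8.0):
    """Classify the input object (e.g., a fingertip) as touching or hovering.

    `input_depth_mm` is the depth-camera distance to the fingertip and
    `surface_depth_mm` the distance to the interface object's surface at the
    same pixel; a fingertip within `touch_eps_mm` of the surface is treated
    as a touch, which would then invoke the projected control's event.
    """
    gap = surface_depth_mm - input_depth_mm  # how far the finger floats above the surface
    return "touch" if gap <= touch_eps_mm else "hover"
```

In practice the threshold has to absorb depth-sensor noise: too small and real touches flicker to hover, too large and near-hovers register as touches.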
-
Patent number: 9171454
Abstract: The claimed subject matter relates to an architecture that can facilitate rich interaction with and/or management of environmental components included in an environment. The architecture can exist in whole or in part in a housing that can resemble a wand or similar object. The architecture can utilize one or more sensors from a collection of sensors to determine an orientation or gesture in connection with the wand, and can further issue an instruction to update a state of an environmental component based upon the orientation. In addition, the architecture can include an advisor component to provide contextual and/or comprehensive guidance in an intuitive manner.
Type: Grant
Filed: November 14, 2007
Date of Patent: October 27, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Andrew David Wilson, James E. Allard, Michael H. Cohen, Steven Drucker, Yu-Ting Kuo
-
Patent number: 9134798
Abstract: Aspects relate to detecting gestures that relate to a desired action, wherein the detected gestures are common across users and/or devices within a surface computing environment. Inferred intentions and goals based on context, history, affordances, and objects are employed to interpret gestures. Where there is uncertainty in the intention of the gestures for a single device or across multiple devices, independent or coordinated communication of uncertainty, or engagement of users through signaling and/or information gathering, can occur.
Type: Grant
Filed: December 15, 2008
Date of Patent: September 15, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Meredith June Morris, Eric J. Horvitz, Andrew David Wilson, F. David Jones, Stephen E. Hodges, Kenneth P. Hinckley, David Alexander Butler, Ian M. Sands, V. Kevin Russ, Hrvoje Benko, Shawn R. LeProwse, Shahram Izadi, William Ben Kunz
-
Publication number: 20150199480
Abstract: Various technologies described herein pertain to controlling performance of a health assessment of a user in an entertainment venue. Data in a health record of the user is accessed, where the health record is retained in computer-readable storage. The user is located at the entertainment venue, and the entertainment venue includes an attraction. A health parameter of the user to be measured as part of the health assessment performed in the entertainment venue is selected based on the data in the health record of the user. Further, an interaction between the user and the attraction of the entertainment venue is controlled based on the health parameter to be measured. Data indicative of the health parameter of the user is computed based on a signal output by a sensor. The signal is output by the sensor during the interaction between the user and the attraction of the entertainment venue.
Type: Application
Filed: December 11, 2014
Publication date: July 16, 2015
Inventors: Paul Henry Dietz, Desney S. Tan, Timothy Scott Saponas, Daniel Scott Morris, Andrew David Wilson
-
Publication number: 20150199484
Abstract: Various technologies described herein pertain to adjusting recommended dosages of a medication for a user in a non-clinical environment. The medication can be identified and an indication of a symptom of the user desirably managed by the medication can be received. An initial recommended dosage of the medication can be determined based on static data of the user and the symptom. Dynamic data indicative of efficacy of the medication for the user over time in the non-clinical environment can be collected from sensor(s) in the non-clinical environment. The dynamic data indicative of the efficacy of the medication can include data indicative of the symptom and data indicative of a side effect of the user resulting from the medication. A subsequent recommended dosage of the medication can be refined based on the static data of the user and the dynamic data indicative of the efficacy of the medication for the user.
Type: Application
Filed: September 29, 2014
Publication date: July 16, 2015
Inventors: Daniel Scott Morris, Desney S. Tan, Timothy Scott Saponas, Paul Henry Dietz, Andrew David Wilson
-
Publication number: 20150196209
Abstract: Various technologies described herein pertain to sensing cardiovascular risk factors of a user. A chair includes one or more sensors configured to output signals indicative of conditions at site(s) on a body of a user. A seat of the chair, a back of the chair, and/or arms of the chair can include the sensor(s). Moreover, the chair includes a collection circuit configured to receive the signals from the sensor(s). A risk factor evaluation component is configured to detect a pulse wave velocity of the user based on the signals from the sensor(s). The risk factor evaluation component is further configured to perform a pulse wave analysis of the user based on a morphology of a pulse pressure waveform of the user, and the pulse pressure waveform is detected based on the signals from the sensor(s).
Type: Application
Filed: October 24, 2014
Publication date: July 16, 2015
Inventors: Daniel Scott Morris, Desney S. Tan, Timothy Scott Saponas, Paul Henry Dietz, Andrew David Wilson, Alice Jane Bernheim Brush, Erin Rebecca Griffiths
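Pulse wave velocity, the quantity the chair's evaluation component detects, is conventionally defined as the path length between two sensing sites divided by the pulse transit time between them. A minimal sketch of that standard definition (the abstract does not describe the chair's specific signal processing):

```python
def pulse_wave_velocity(distance_m, t_proximal_s, t_distal_s):
    """Estimate pulse wave velocity (m/s) from two sensing sites.

    `distance_m` is the arterial path length between the sites (e.g., a
    sensor in the chair back and one in an armrest), and the two timestamps
    mark when the pulse wave arrives at each site.
    """
    transit = t_distal_s - t_proximal_s  # pulse transit time
    if transit <= 0:
        raise ValueError("distal pulse must arrive after proximal pulse")
    return distance_m / transit
```

Higher values indicate stiffer arteries, which is why PWV serves as a cardiovascular risk factor.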
-
Patent number: 8982061
Abstract: In embodiments of angular contact geometry, touch input sensor data is recognized as a touch input on a touch-screen display, such as a touch-screen display integrated in a mobile phone or portable computing device. A sensor map is generated from the touch input sensor data, and the sensor map represents the touch input. The sensor map can be generated as a two-dimensional array of elements that correlate to sensed contact from a touch input. An ellipse can then be determined that approximately encompasses elements of the sensor map, and the ellipse represents a contact shape of the touch input.
Type: Grant
Filed: May 2, 2011
Date of Patent: March 17, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Weidong Zhao, David A. Stevens, Aleksandar Uzelac, Takahiro Shigemitsu, Andrew David Wilson, Nigel Stuart Keam
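A common way to derive an ellipse from a 2-D array of sensed contact intensities is via intensity-weighted image moments: the centroid gives the ellipse center, and the eigenvalues of the 2×2 covariance give its axes. This is a standard technique offered as a sketch; the abstract does not state that the patent uses this exact method:

```python
import math

def contact_ellipse(sensor_map):
    """Fit an ellipse to a 2-D array of touch-sensor intensities.

    Returns (cx, cy, major, minor, angle): centroid, axis lengths
    (2 * sqrt of the covariance eigenvalues), and orientation in radians.
    """
    total = cx = cy = 0.0
    for y, row in enumerate(sensor_map):
        for x, w in enumerate(row):
            total += w
            cx += w * x
            cy += w * y
    cx /= total
    cy /= total
    sxx = syy = sxy = 0.0
    for y, row in enumerate(sensor_map):
        for x, w in enumerate(row):
            sxx += w * (x - cx) ** 2
            syy += w * (y - cy) ** 2
            sxy += w * (x - cx) * (y - cy)
    sxx, syy, sxy = sxx / total, syy / total, sxy / total
    # Eigenvalues of the 2x2 covariance matrix give the squared semi-axes.
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    major = 2.0 * math.sqrt(max(tr / 2 + disc, 0.0))
    minor = 2.0 * math.sqrt(max(tr / 2 - disc, 0.0))
    angle = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return cx, cy, major, minor, angle
```

The ellipse orientation is what makes "angular contact geometry" useful: an elongated, tilted contact can distinguish a flattened thumb from a fingertip tap.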
-
Patent number: 8952894
Abstract: The claimed subject matter provides a system and/or a method that facilitates detecting a plurality of inputs simultaneously. A laser component can be coupled to a line generating (LG) optic that can create a laser line from an infrared (IR) laser spot, wherein the laser component and line generating (LG) optic emit a plane of IR light. A camera device can capture a portion of imagery within an area covered by the plane of light. The camera device can be coupled to an IR-pass filter that can block visible light and pass IR light in order to detect a break in the emitted plane of IR light. An image processing component can ascertain a location of the break within the area covered by the emitted plane of IR light.
Type: Grant
Filed: May 12, 2008
Date of Patent: February 10, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventor: Andrew David Wilson
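The image-processing step here amounts to finding where the imaged IR line is interrupted. Reduced to one dimension, that is a search for contiguous runs of pixels along the line whose intensity drops below a threshold; this 1-D sketch is an illustrative simplification of the component the abstract describes:

```python
def find_breaks(line_intensities, threshold):
    """Locate breaks in an imaged IR laser line.

    A break is a contiguous run of pixels whose intensity falls below
    `threshold`, indicating an object (e.g., a finger) intersecting the
    emitted plane of IR light. Returns (start, end) pixel index pairs,
    inclusive; multiple pairs correspond to simultaneous inputs.
    """
    breaks, start = [], None
    for i, v in enumerate(line_intensities):
        if v < threshold and start is None:
            start = i          # break begins
        elif v >= threshold and start is not None:
            breaks.append((start, i - 1))  # break ends
            start = None
    if start is not None:      # break runs to the end of the scanline
        breaks.append((start, len(line_intensities) - 1))
    return breaks
```

Because each break is reported independently, several fingers crossing the plane at once yield several (start, end) pairs, which is how the system detects a plurality of inputs simultaneously.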
-
Publication number: 20150030236
Abstract: Three-dimensional (3-D) spatial image data may be received that is associated with at least one arm motion of an actor based on free-form movements of at least one hand of the actor, based on natural gesture motions of the at least one hand. A plurality of sequential 3-D spatial representations that each include 3-D spatial map data corresponding to a 3-D posture and position of the hand at sequential instances of time during the free-form movements may be determined, based on the received 3-D spatial image data. An integrated 3-D model may be generated, via a spatial object processor, based on incrementally integrating the 3-D spatial map data included in the determined sequential 3-D spatial representations and comparing a threshold time value with model time values indicating numbers of instances of time spent by the hand occupying a plurality of 3-D spatial regions during the free-form movements.
Type: Application
Filed: August 11, 2014
Publication date: January 29, 2015
Inventors: Andrew David Wilson, Christian Holz
-
Patent number: 8928579
Abstract: Concepts and technologies are described herein for interacting with an omni-directionally projected display. The omni-directionally projected display includes, in some embodiments, visual information projected on a display surface by way of an omni-directional projector. A user is able to interact with the projected visual information using gestures in free space, voice commands, and/or other tools, structures, and commands. The visual information can be projected omni-directionally, to provide a user with an immersive interactive experience with the projected display. The concepts and technologies disclosed herein can support more than one interacting user. Thus, the concepts and technologies disclosed herein may be employed to provide a number of users with immersive interactions with projected visual information.
Type: Grant
Filed: February 22, 2010
Date of Patent: January 6, 2015
Inventors: Andrew David Wilson, Hrvoje Benko
-
Publication number: 20140253692
Abstract: Architecture that combines multiple depth cameras and multiple projectors to cover a specified space (e.g., a room). The cameras and projectors are calibrated, allowing the development of a multi-dimensional (e.g., 3D) model of the objects in the space, as well as the ability to project graphics in a controlled fashion on the same objects. The architecture incorporates the depth data from all depth cameras, as well as color information, into a unified multi-dimensional model in combination with calibrated projectors. In order to provide visual continuity when transferring objects between different locations in the space, the user's body can provide a canvas on which to project this interaction. As the user moves body parts in the space, without any other object, the body parts can serve as temporary “screens” for “in-transit” data.
Type: Application
Filed: May 19, 2014
Publication date: September 11, 2014
Applicant: Microsoft Corporation
Inventors: Andrew David Wilson, Hrvoje Benko
-
Patent number: 8811719
Abstract: Three-dimensional (3-D) spatial image data may be received that is associated with at least one arm motion of an actor based on free-form movements of at least one hand of the actor, based on natural gesture motions of the at least one hand. A plurality of sequential 3-D spatial representations that each include 3-D spatial map data corresponding to a 3-D posture and position of the hand at sequential instances of time during the free-form movements may be determined, based on the received 3-D spatial image data. An integrated 3-D model may be generated, via a spatial object processor, based on incrementally integrating the 3-D spatial map data included in the determined sequential 3-D spatial representations and comparing a threshold time value with model time values indicating numbers of instances of time spent by the hand occupying a plurality of 3-D spatial regions during the free-form movements.
Type: Grant
Filed: April 29, 2011
Date of Patent: August 19, 2014
Assignee: Microsoft Corporation
Inventors: Andrew David Wilson, Christian Holz