Patents by Inventor Desney S Tan

Desney S Tan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20130165160
    Abstract: Systems and methods are described relating to determining a specified time period of non-movement in a mobile device and presenting an indication of location of the mobile device at least partially based on the specified time period of non-movement. Additionally, systems and methods are described relating to means for determining a specified time period of non-movement in a mobile device and means for presenting an indication of location of the mobile device at least partially based on the specified time period of non-movement.
    Type: Application
    Filed: December 30, 2011
    Publication date: June 27, 2013
    Inventors: Paramvir Bahl, Douglas C. Burger, Ranveer Chandra, Matthew G. Dyor, William Gates, Pablos Holman, Roderick A. Hyde, Muriel Y. Ishikawa, Jordin T. Kare, Royce A. Levien, Richard T. Lord, Robert W. Lord, Mark A. Malamud, Craig J. Mundie, Nathan P. Myhrvold, Tim Paek, John D. Rinaldo, JR., Desney S. Tan, Clarence T. Tegreene, Charles Whitmer, Lowell L. Wood, JR., Victoria Y. H. Wood, Lin Zhong
  • Publication number: 20130154919
    Abstract: The description relates to user control gestures. One example allows a speaker and a microphone to perform a first functionality. The example simultaneously utilizes the speaker and the microphone to perform a second functionality. The second functionality comprises capturing sound signals that originated from the speaker with the microphone and detecting Doppler shift in the sound signals. It correlates the Doppler shift with a user control gesture performed proximate to the computer and maps the user control gesture to a control function.
    Type: Application
    Filed: December 20, 2011
    Publication date: June 20, 2013
    Applicant: Microsoft Corporation
    Inventors: Desney S. Tan, Shwetak Patel, Daniel S. Morris, Sidhant Gupta
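As an illustration (not part of the filing), the Doppler-shift gesture detection this abstract describes might be sketched as follows. The sample rate, pilot-tone frequency, and shift threshold are assumed values chosen for the example, not taken from the patent:

```python
import numpy as np

FS = 44100        # sample rate (Hz) -- assumed value for illustration
PILOT = 18000     # inaudible pilot tone emitted by the speaker (Hz)

def doppler_gesture(mic_buffer, fs=FS, pilot=PILOT, threshold_hz=20):
    """Classify a motion gesture from the Doppler shift of the pilot tone.

    Returns 'toward', 'away', or 'none' depending on whether the dominant
    frequency near the pilot is shifted up, down, or not at all.
    """
    spectrum = np.abs(np.fft.rfft(mic_buffer * np.hanning(len(mic_buffer))))
    freqs = np.fft.rfftfreq(len(mic_buffer), d=1.0 / fs)
    # Search a narrow band around the pilot for the strongest component.
    band = (freqs > pilot - 500) & (freqs < pilot + 500)
    peak = freqs[band][np.argmax(spectrum[band])]
    shift = peak - pilot
    if shift > threshold_hz:
        return 'toward'   # motion toward the mic raises the reflected frequency
    if shift < -threshold_hz:
        return 'away'
    return 'none'

# Simulate a reflection shifted up by 60 Hz (hand approaching).
t = np.arange(4096) / FS
buf = np.sin(2 * np.pi * (PILOT + 60) * t)
print(doppler_gesture(buf))   # → toward
```

The mapped control function (scroll, page turn, etc.) would then be looked up from the returned gesture label.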
  • Publication number: 20130142347
    Abstract: Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance a user's ability to operate or function in a transportation-related context as a pedestrian or a vehicle operator. In one embodiment, the AEFS is configured to perform vehicular threat detection based at least in part on analyzing audio signals. An example AEFS receives data that represents an audio signal emitted by a vehicle. The AEFS analyzes the audio signal to determine vehicular threat information, such as that the vehicle may collide with the user. The AEFS then informs the user of the determined vehicular threat information, such as by transmitting a warning to a wearable device configured to present the warning to the user.
    Type: Application
    Filed: January 31, 2012
    Publication date: June 6, 2013
    Inventors: Richard T. Lord, Robert W. Lord, Nathan P. Myhrvold, Clarence T. Tegreene, Roderick A. Hyde, Lowell L. Wood, JR., Muriel Y. Ishikawa, Victoria Y.H. Wood, Charles Whitmer, Paramvir Bahl, Douglas C. Burger, Ranveer Chandra, William H. Gates, III, Paul Holman, Jordin T. Kare, Craig J. Mundie, Tim Paek, Desney S. Tan, Lin Zhong, Matthew G. Dyor
  • Publication number: 20130142393
    Abstract: Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance a user's ability to operate or function in a transportation-related context as a pedestrian or a vehicle operator. In one embodiment, the AEFS is configured to perform vehicular threat detection based at least in part on analyzing image data. An example AEFS receives data that represents an image of a vehicle. The AEFS analyzes the received data to determine vehicular threat information, such as that the vehicle may collide with the user. The AEFS then informs the user of the determined vehicular threat information, such as by transmitting a warning to a wearable device configured to present the warning to the user.
    Type: Application
    Filed: February 28, 2012
    Publication date: June 6, 2013
    Inventors: Richard T. Lord, Robert W. Lord, Nathan P. Myhrvold, Clarence T. Tegreene, Roderick A. Hyde, Lowell L. Wood, JR., Muriel Y. Ishikawa, Victoria Y.H. Wood, Charles Whitmer, Paramvir Bahl, Douglas C. Burger, Ranveer Chandra, William H. Gates, III, Paul Holman, Jordin T. Kare, Craig J. Mundie, Tim Paek, Desney S. Tan, Lin Zhong, Matthew G. Dyor
  • Publication number: 20130142365
    Abstract: Techniques for sensory enhancement and augmentation are described. Some embodiments provide an audible assistance facilitator system (“AAFS”) configured to provide audible assistance to a user via a hearing device. In one embodiment, the AAFS receives data that represents an utterance of a speaker received by a hearing device of the user, such as a hearing aid, smart phone, media device, or the like. The AAFS identifies the speaker based on the received data, such as by performing speaker recognition. The AAFS determines speaker-related information associated with the identified speaker, such as by determining an identifier (e.g., name or title) of the speaker, by locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AAFS then informs the user of the speaker-related information, such as by causing an audio representation of the speaker-related information to be output via the hearing device.
    Type: Application
    Filed: December 1, 2011
    Publication date: June 6, 2013
    Inventors: Richard T. Lord, Robert W. Lord, Nathan P. Myhrvold, Clarence T. Tegreene, Roderick A. Hyde, Lowell L. Wood, JR., Muriel Y. Ishikawa, Victoria Y.H. Wood, Charles Whitmer, Paramvir Bahl, Douglas C. Burger, Ranveer Chandra, William H. Gates, III, Paul Holman, Jordin T. Kare, Craig J. Mundie, Tim Paek, Desney S. Tan, Lin Zhong, Matthew G. Dyor
  • Publication number: 20130144603
    Abstract: Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance voice conferencing among multiple speakers. Some embodiments of the AEFS enhance voice conferencing by recording and presenting voice conference history information based on speaker-related information. The AEFS receives data that represents utterances of multiple speakers who are engaging in a voice conference with one another. The AEFS then determines speaker-related information, such as by identifying a current speaker, locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AEFS records conference history information (e.g., a transcript) based on the determined speaker-related information.
    Type: Application
    Filed: February 15, 2012
    Publication date: June 6, 2013
    Inventors: Richard T. Lord, Robert W. Lord, Nathan P. Myhrvold, Clarence T. Tegreene, Roderick A. Hyde, Lowell L. Wood, JR., Muriel Y. Ishikawa, Victoria Y.H. Wood, Charles Whitmer, Paramvir Bahl, Douglas C. Burger, Ranveer Chandra, William H. Gates, III, Paul Holman, Jordin T. Kare, Craig J. Mundie, Tim Paek, Desney S. Tan, Lin Zhong, Matthew G. Dyor
  • Publication number: 20130144595
    Abstract: Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to automatically translate utterances from a first to a second language, based on speaker-related information determined from speaker utterances and/or other sources of information. In one embodiment, the AEFS receives data that represents an utterance of a speaker in a first language, the utterance obtained by a hearing device of the user, such as a hearing aid, smart phone, media player/device, or the like. The AEFS then determines speaker-related information associated with the identified speaker, such as by determining demographic information (e.g., gender, language, country/region of origin) and/or identifying information (e.g., name or title) of the speaker. The AEFS translates the utterance in the first language into a message in a second language, based on the determined speaker-related information.
    Type: Application
    Filed: December 29, 2011
    Publication date: June 6, 2013
    Inventors: Richard T. Lord, Robert W. Lord, Nathan P. Myhrvold, Clarence T. Tegreene, Roderick A. Hyde, Lowell L. Wood, JR., Muriel Y. Ishikawa, Victoria Y.H. Wood, Charles Whitmer, Paramvir Bahl, Douglas C. Burger, Ranveer Chandra, William H. Gates, III, Paul Holman, Jordin T. Kare, Craig J. Mundie, Tim Paek, Desney S. Tan, Lin Zhong, Matthew G. Dyor
  • Publication number: 20130141576
    Abstract: Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance a user's ability to operate or function in a transportation-related context as a pedestrian or a vehicle operator. In one embodiment, the AEFS is configured to perform vehicular threat detection based on information received at a road-based device, such as a sensor or processor that is deployed at the side of a road. An example AEFS receives, at a road-based device, information about a first vehicle that is proximate to the road-based device. The AEFS analyzes the received information to determine threat information, such as that the vehicle may collide with the user. The AEFS then informs the user of the determined threat information, such as by transmitting a warning to a wearable device configured to present the warning to the user.
    Type: Application
    Filed: March 20, 2012
    Publication date: June 6, 2013
    Inventors: Richard T. Lord, Robert W. Lord, Nathan P. Myhrvold, Clarence T. Tegreene, Roderick A. Hyde, Lowell L. Wood, JR., Muriel Y. Ishikawa, Victoria Y.H. Wood, Charles Whitmer, Paramvir Bahl, Douglas C. Burger, Ranveer Chandra, William H. Gates, III, Paul Holman, Jordin T. Kare, Craig J. Mundie, Tim Paek, Desney S. Tan, Lin Zhong, Matthew G. Dyor
  • Publication number: 20130144623
    Abstract: Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to determine and present speaker-related information based on speaker utterances. In one embodiment, the AEFS receives data that represents an utterance of a speaker received by a hearing device of the user, such as a hearing aid, smart phone, media player/device, or the like. The AEFS identifies the speaker based on the received data, such as by performing speaker recognition. The AEFS determines speaker-related information associated with the identified speaker, such as by determining an identifier (e.g., name or title) of the speaker, by locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AEFS then informs the user of the speaker-related information, such as by presenting the speaker-related information on a display of the hearing device or some other device accessible to the user.
    Type: Application
    Filed: December 13, 2011
    Publication date: June 6, 2013
    Inventors: Richard T. Lord, Robert W. Lord, Nathan P. Myhrvold, Clarence T. Tegreene, Roderick A. Hyde, Lowell L. Wood, JR., Muriel Y. Ishikawa, Victoria Y.H. Wood, Charles Whitmer, Paramvir Bahl, Douglas C. Burger, Ranveer Chandra, William H. Gates, III, Paul Holman, Jordin T. Kare, Craig J. Mundie, Tim Paek, Desney S. Tan, Lin Zhong, Matthew G. Dyor
  • Publication number: 20130144490
    Abstract: Techniques for ability enhancement are described. In some embodiments, devices and systems located in a transportation network share threat information with one another, in order to enhance a user's ability to operate or function in a transportation-related context. In one embodiment, a process in a vehicle receives threat information from a remote device, the threat information based on information about objects or conditions proximate to the remote device. The process then determines that the threat information is relevant to the safe operation of the vehicle. Then, the process modifies operation of the vehicle based on the threat information, such as by presenting a message to the operator of the vehicle and/or controlling the vehicle itself.
    Type: Application
    Filed: March 29, 2012
    Publication date: June 6, 2013
    Inventors: Richard T. Lord, Robert W. Lord, Nathan P. Myhrvold, Clarence T. Tegreene, Roderick A. Hyde, Lowell L. Wood, JR., Muriel Y. Ishikawa, Victoria Y.H. Wood, Charles Whitmer, Paramvir Bahl, Douglas C. Burger, Ranveer Chandra, William H. Gates, III, Paul Holman, Jordin T. Kare, Craig J. Mundie, Tim Paek, Desney S. Tan, Lin Zhong, Matthew G. Dyor
  • Publication number: 20130144619
    Abstract: Techniques for ability enhancement are described. Some embodiments provide an ability enhancement facilitator system (“AEFS”) configured to enhance voice conferencing among multiple speakers. In one embodiment, the AEFS receives data that represents utterances of multiple speakers who are engaging in a voice conference with one another. The AEFS then determines speaker-related information, such as by identifying a current speaker, locating an information item (e.g., an email message, document) associated with the speaker, or the like. The AEFS then informs a user of the speaker-related information, such as by presenting the speaker-related information on a display of a conferencing device associated with the user.
    Type: Application
    Filed: January 23, 2012
    Publication date: June 6, 2013
    Inventors: Richard T. Lord, Robert W. Lord, Nathan P. Myhrvold, Clarence T. Tegreene, Roderick A. Hyde, Lowell L. Wood, JR., Muriel Y. Ishikawa, Victoria Y.H. Wood, Charles Whitmer, Paramvir Bahl, Douglas C. Burger, Ranveer Chandra, William H. Gates, III, Paul Holman, Jordin T. Kare, Craig J. Mundie, Tim Paek, Desney S. Tan, Lin Zhong, Matthew G. Dyor
  • Patent number: 8421634
    Abstract: Described is using the human body as an input mechanism to a computing device. A sensor set is coupled to part of a human body. The sensor set detects mechanical (e.g., bio-acoustic) energy transmitted through the body as a result of an action performed by the body, such as a user finger tap or flick. The sensor output data (e.g., signals) are processed to determine what action was taken. For example, the gesture may be a finger tap, and the output data may indicate which finger was tapped, what surface the finger was tapped on, or where on the body the finger was tapped.
    Type: Grant
    Filed: December 4, 2009
    Date of Patent: April 16, 2013
    Assignee: Microsoft Corporation
    Inventors: Desney S. Tan, Dan Morris, Christopher Harrison
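As an illustration (not part of the grant), classifying where a tap landed from body-coupled sensor signals could be sketched as a feature extraction plus nearest-centroid match. The sensor count, labels, and training scheme below are invented for the example; the patent itself does not specify this classifier:

```python
import numpy as np

def tap_features(signals):
    """Condense multi-channel bio-acoustic signals into a feature vector.

    `signals` is an (n_sensors, n_samples) array; the feature is the
    per-sensor energy share, since taps at different body locations excite
    the sensors in characteristically different ratios.
    """
    energy = np.sum(signals.astype(float) ** 2, axis=1)
    return energy / energy.sum()   # normalize so tap strength doesn't matter

class TapClassifier:
    """Nearest-centroid classifier over tap feature vectors."""
    def fit(self, examples):
        # examples: {label: list of (n_sensors, n_samples) arrays}
        self.centroids = {lbl: np.mean([tap_features(s) for s in sigs], axis=0)
                          for lbl, sigs in examples.items()}
        return self

    def predict(self, signals):
        f = tap_features(signals)
        return min(self.centroids,
                   key=lambda lbl: np.linalg.norm(f - self.centroids[lbl]))

rng = np.random.default_rng(0)

def fake_tap(hot_sensor, rng):
    # Simulated tap: one sensor picks up much more energy than the others.
    scale = np.ones((3, 1))
    scale[hot_sensor] = 4.0
    return rng.normal(size=(3, 256)) * scale

clf = TapClassifier().fit({
    'forearm': [fake_tap(0, rng) for _ in range(5)],
    'wrist':   [fake_tap(1, rng) for _ in range(5)],
})
print(clf.predict(fake_tap(0, rng)))   # → forearm
```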
  • Publication number: 20130082978
    Abstract: Embodiments of the present invention relate to systems, methods and computer storage media for detecting user input in an extended interaction space of a device, such as a handheld device. The method and system allow for utilizing a first sensor of the device sensing in a positive z-axis space of the device to detect a first input, such as a user's non-device-contacting gesture. The method and system also contemplate utilizing a second sensor of the device sensing in a negative z-axis space of the device to detect a second input. Additionally, the method and system contemplate updating a user interface presented on a display in response to detecting the first input by the first sensor in the positive z-axis space and detecting the second input by the second sensor in the negative z-axis space.
    Type: Application
    Filed: September 30, 2011
    Publication date: April 4, 2013
    Applicant: Microsoft Corporation
    Inventors: Eric Horvitz, Kenneth P. Hinckley, Hrvoje Benko, Desney S. Tan
  • Publication number: 20130086674
    Abstract: Embodiments of the present invention relate to systems, methods, and computer storage media for identifying, authenticating, and authorizing a user to a device. A dynamic image, such as a video captured by a depth camera, is received. The dynamic image provides data from which both geometric information and motion information of a portion of the user may be identified. Consequently, a geometric attribute is identified from the geometric information. A motion attribute may also be identified from the motion information. The geometric attribute is compared to one or more geometric attributes associated with authorized users. Additionally, the motion attribute may be compared to one or more motion attributes associated with the authorized users. A determination may be made that the user is an authorized user. As such the user is authorized to utilize functions of the device.
    Type: Application
    Filed: September 30, 2011
    Publication date: April 4, 2013
    Applicant: Microsoft Corporation
    Inventors: Eric Horvitz, Desney S. Tan, James Chia-Ming Liu
  • Patent number: 8392229
    Abstract: A system that can enable the atomization of application functionality in connection with an activity-centric system is provided. The system can be utilized as a programmatic tool that decomposes an application's constituent functionality into atoms thereafter monitoring and aggregating atoms with respect to a particular activity. In doing so, the functionality of the system can be scaled based upon complexity and needs of the activity. Additionally, the system can be employed to monetize the atoms or activity capabilities based upon respective use.
    Type: Grant
    Filed: June 24, 2011
    Date of Patent: March 5, 2013
    Assignee: Microsoft Corporation
    Inventors: Steven W. Macbeth, Roland L. Fernandez, Brian R. Meyers, Desney S. Tan, George G. Robertson, Nuria M. Oliver, Oscar E. Murillo, Elin R. Pedersen
  • Patent number: 8364514
    Abstract: A unique monitoring system and method is provided that involves monitoring user activity in order to facilitate managing and optimizing the utilization of various system resources. In particular, the system can monitor user activity, detect when users need assistance with their specific activities, and identify at least one other user that can assist them. Assistance can be in the form of answering questions, providing guidance to the user as the user completes the activity, or completing the activity such as in the case of taking on an assigned activity. In addition, the system can aggregate activity data across users and/or devices. As a result, problems with activity templates or activities themselves can be more readily identified, user performance can be readily compared, and users can communicate and exchange information regarding similar activity experiences. Furthermore, synchronicity and time-sensitive scheduling of activities between users can be facilitated and improved overall.
    Type: Grant
    Filed: June 27, 2006
    Date of Patent: January 29, 2013
    Assignee: Microsoft Corporation
    Inventors: Steven W. Macbeth, Roland L. Fernandez, Brian R. Meyers, Desney S. Tan, George G. Robertson, Nuria M. Oliver, Oscar E. Murillo, Mary P. Czerwinski
  • Patent number: 8306940
    Abstract: A real-time visual feedback ensemble classifier generator and method for interactively generating an optimal ensemble classifier using a user interface. Embodiments of the real-time visual feedback ensemble classifier generator and method use a weight adjustment operation and a partitioning operation in the interactive generation process. In addition, the generator and method include a user interface that provides real-time visual feedback to a user so that the user can see how the weight adjustment and partitioning operations affect the overall accuracy of the ensemble classifier. Using the user interface and the interactive controls available on the user interface, a user can iteratively use one or both of the weight adjustment operation and partitioning operation to generate an optimized ensemble classifier.
    Type: Grant
    Filed: March 20, 2009
    Date of Patent: November 6, 2012
    Assignee: Microsoft Corporation
    Inventors: Bongshin Lee, Ashish Kapoor, Desney S. Tan, Justin Talbot
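As an illustration (not from the grant itself), the core of the weight-adjustment feedback loop is recomputing ensemble accuracy from a weighted combination of the member classifiers' outputs each time the user moves a weight. The toy classifiers and data below are invented for the example:

```python
import numpy as np

def ensemble_accuracy(probas, weights, labels):
    """Accuracy of a weighted linear combination of classifier outputs.

    probas: (n_classifiers, n_samples, n_classes) per-classifier class
    probabilities; weights: one scalar per classifier. Recomputing this
    after every weight adjustment is what drives the real-time visual
    feedback on overall ensemble accuracy.
    """
    combined = np.tensordot(weights, probas, axes=1)   # (n_samples, n_classes)
    return np.mean(np.argmax(combined, axis=1) == labels)

# Two toy classifiers on 4 samples, 2 classes. Classifier 0 is right on
# samples 0-2; classifier 1 is right on samples 2-3.
probas = np.array([
    [[.9, .1], [.8, .2], [.2, .8], [.6, .4]],
    [[.4, .6], [.3, .7], [.6, .4], [.1, .9]],
])
labels = np.array([0, 0, 1, 1])

print(ensemble_accuracy(probas, [1.0, 0.0], labels))   # classifier 0 alone: 0.75
print(ensemble_accuracy(probas, [0.6, 0.4], labels))   # blended weights: 1.0
```

Moving weight from one classifier to the other changes which samples the combined vote gets right, which is exactly the effect the interface visualizes.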
  • Publication number: 20120197876
    Abstract: Described herein are technologies pertaining to automatic generation of an executive summary (explanation) of a medical event in an electronic medical record (EMR) of a patient. A medical event in the EMR is automatically identified, and a search is conducted over a document corpus based upon the identified medical event. A document retrieved as a result of the search is analyzed for a portion of text to act as an executive summary for the medical event. Each portion of text in the document is assigned a score, and the portion of text assigned the highest score is utilized as the executive summary for the medical event.
    Type: Application
    Filed: February 1, 2011
    Publication date: August 2, 2012
    Applicant: Microsoft Corporation
    Inventors: Daniel Scott Morris, Desney S. Tan, Lauren Gabrielle Wilcox-Patterson, Gregory R. Smith, Amy Kathleen Karlson, Asta Jane Roseway
  • Publication number: 20120183206
    Abstract: An interactive concept learning image search technique that allows end-users to quickly create their own rules for re-ranking images based on the image characteristics of the images. The image characteristics can include visual characteristics as well as semantic features or characteristics, or may include a combination of both. End-users can then rank or re-rank any current or future image search results according to their rule or rules. End-users provide examples of images each rule should match and examples of images the rule should reject. The technique learns the common image characteristics of the examples, and any current or future image search results can then be ranked or re-ranked according to the learned rules.
    Type: Application
    Filed: March 24, 2012
    Publication date: July 19, 2012
    Applicant: Microsoft Corporation
    Inventors: Desney S. Tan, Ashish Kapoor, Simon A. J. Winder, James A. Fogarty
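As an illustration (not the patented method itself), learning a re-ranking rule from images the rule should match and images it should reject can be sketched as scoring each result by its distance to the rejected examples minus its distance to the matched ones. The two-dimensional features and example values are invented for the sketch:

```python
import numpy as np

def learn_rule(pos, neg):
    """Learn a re-ranking rule from example images the rule should match
    (pos) and reject (neg), each given as a list of feature vectors."""
    return np.mean(pos, axis=0), np.mean(neg, axis=0)

def rerank(images, rule):
    """Return result indices ordered by how much closer each image sits
    to the matched examples than to the rejected ones."""
    pos_c, neg_c = rule
    score = lambda f: np.linalg.norm(f - neg_c) - np.linalg.norm(f - pos_c)
    return sorted(range(len(images)), key=lambda i: -score(images[i]))

# Hypothetical feature: [brightness, saturation]; the user's examples say
# they want bright, washed-out shots ranked first.
rule = learn_rule(pos=[np.array([0.9, 0.1]), np.array([0.8, 0.2])],
                  neg=[np.array([0.2, 0.9]), np.array([0.3, 0.8])])
results = [np.array([0.25, 0.85]),   # dark, saturated
           np.array([0.85, 0.15]),   # bright, washed out
           np.array([0.5, 0.5])]     # in between
print(rerank(results, rule))   # → [1, 2, 0]
```

Because the rule is stored independently of any one query, it can re-rank future search results as well, as the abstract describes.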
  • Publication number: 20120162057
    Abstract: A human input system is described herein that provides an interaction modality that utilizes the human body as an antenna to receive electromagnetic noise that exists in various environments. By observing the properties of the noise picked up by the body, the system can infer human input on and around existing surfaces and objects. Home power lines have been shown to be a relatively good transmitting antenna that creates a particularly noisy environment. The human input system leverages the body as a receiving antenna and electromagnetic noise modulation for gestural interaction. It is possible to robustly recognize touched locations on an uninstrumented home wall using no specialized sensors. The receiving device for which the human body is the antenna can be built into common, widely available electronics, such as mobile phones or other devices the user is likely to commonly carry.
    Type: Application
    Filed: December 22, 2010
    Publication date: June 28, 2012
    Applicant: Microsoft Corporation
    Inventors: Desney S. Tan, Daniel S. Morris, Gabriel A. Cohn, Shwetak N. Patel
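As an illustration (not part of the filing), recognizing a touched wall location from body-coupled electromagnetic noise could be sketched as matching the observed noise spectrum against stored per-location fingerprints. The sample rate, harmonics, and location labels are invented for the example:

```python
import numpy as np

def em_fingerprint(samples):
    """Normalized magnitude spectrum of the noise picked up by the body."""
    mag = np.abs(np.fft.rfft(samples))
    return mag / np.linalg.norm(mag)

def locate_touch(samples, known):
    """Match the observed fingerprint against stored per-location ones.

    Touching different wall positions couples different mixes of
    power-line harmonics into the body, so the closest stored spectrum
    (by cosine similarity) identifies the touched location.
    """
    f = em_fingerprint(samples)
    return max(known, key=lambda loc: np.dot(f, known[loc]))

t = np.arange(512) / 1000.0   # assumed 1 kHz sampling for the sketch
# Hypothetical calibration: each location emphasizes a different harmonic
# of the 60 Hz mains hum.
known = {
    'switch': em_fingerprint(np.sin(2*np.pi*60*t) + 0.3*np.sin(2*np.pi*180*t)),
    'outlet': em_fingerprint(np.sin(2*np.pi*60*t) + 0.8*np.sin(2*np.pi*300*t)),
}
reading = (np.sin(2*np.pi*60*t) + 0.3*np.sin(2*np.pi*180*t)
           + 0.05*np.random.default_rng(1).normal(size=512))
print(locate_touch(reading, known))   # → switch
```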