Patents by Inventor Sheridan Martin Small
Sheridan Martin Small has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10223832
Abstract: The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud-based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view; if not, an identifier and a position indicator of the person in the location are output. Directional sensors on the display device may also be used for determining a position of the person. Cloud-based software can identify and track the positions of people based on image and non-image data from display devices in the location.
Type: Grant
Filed: September 25, 2015
Date of Patent: March 5, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kevin A. Geisner, Darren Bennett, Relja Markovic, Stephen G. Latta, Daniel J. McCulloch, Jason Scott, Ryan L. Hastings, Alex Aben-Athar Kipman, Andrew John Fuller, Jeffrey Neil Margolis, Kathryn Stone Perez, Sheridan Martin Small
-
Patent number: 10019962
Abstract: A user interface includes a virtual object having an appearance in context with a real environment of a user using a see-through, near-eye augmented reality display device system. A virtual object type and at least one real-world object are selected based on compatibility criteria for forming a physical connection, such as attachment, support, or integration, of the virtual object with the at least one real object. Other appearance characteristics of the virtual object, e.g. color, size, or shape, are selected to satisfy compatibility criteria with the selected at least one real object. Additionally, a virtual object type and appearance characteristics of the virtual object may be selected based on a social context of the user, a personal context of the user, or both.
Type: Grant
Filed: August 17, 2011
Date of Patent: July 10, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: James C. Liu, Anton O. Andrews, Benjamin I. Vaught, Craig R. Maitlen, Christopher M. Novak, Sheridan Martin Small
-
Patent number: 9583032
Abstract: Technology is disclosed herein to help a user navigate through large amounts of content while wearing a see-through, near-eye, mixed reality display device such as a head-mounted display (HMD). The user can use a physical object such as a book to navigate through content being presented in the HMD. In one embodiment, a book has markers on the pages that allow the system to organize the content. The book could have real content, but it could also be blank other than the markers. As the user flips through the book, the system recognizes the markers and presents content associated with the respective marker in the HMD.
Type: Grant
Filed: June 5, 2012
Date of Patent: February 28, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Mathew J. Lamb, Ben J. Sugden, Robert L. Crocco, Jr., Brian E. Keane, Christopher E. Miles, Kathryn Stone Perez, Laura K. Massey, Alex Aben-Athar Kipman, Sheridan Martin Small, Stephen G. Latta
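The core mechanism in this abstract — detected page markers keyed to content shown in the HMD — can be sketched as a simple lookup. The marker IDs and content strings below are hypothetical illustrations, not values from the patent; a real system would detect markers in camera image data and render the content in the display.

```python
# Hypothetical mapping from page-marker IDs to the content registered to them.
MARKER_CONTENT = {
    "marker-01": "Chapter 1: Introduction",
    "marker-02": "Chapter 2: Display Optics",
    "marker-03": "Chapter 3: Gesture Input",
}

def content_for_page(detected_marker_id):
    """Return the content registered to a detected page marker, or None."""
    return MARKER_CONTENT.get(detected_marker_id)

# As the user flips to a page, the recognized marker selects the content.
print(content_for_page("marker-02"))  # Chapter 2: Display Optics
```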
-
Patent number: 9342610
Abstract: A see-through head-mounted display (HMD) device provides an augmented reality image which is associated with a real-world object, such as a picture frame, wall, or billboard. Initially, the object is identified by the user, e.g., based on the user gazing at the object for a period of time, making a gesture such as pointing at the object, and/or providing a verbal command. The location and visual characteristics of the object are determined by a front-facing camera of the HMD device and stored in a record. The user selects from among candidate data streams, such as a web page, game feed, video, or stock ticker. Subsequently, when the user is in the location of the object and looks at the object, the HMD device matches the visual characteristics to the record to identify the data stream, and displays corresponding augmented reality images registered to the object.
Type: Grant
Filed: August 25, 2011
Date of Patent: May 17, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: James Chia-Ming Liu, Anton Oguzhan Alford Andrews, Craig R. Maitlen, Christopher M. Novak, Darren A. Bennett, Sheridan Martin Small
-
Publication number: 20160086382
Abstract: The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud-based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view; if not, an identifier and a position indicator of the person in the location are output. Directional sensors on the display device may also be used for determining a position of the person. Cloud-based software can identify and track the positions of people based on image and non-image data from display devices in the location.
Type: Application
Filed: September 25, 2015
Publication date: March 24, 2016
Inventors: Kevin A. Geisner, Darren Bennett, Relja Markovic, Stephen G. Latta, Daniel J. McCulloch, Jason Scott, Ryan L. Hastings, Alex Aben-Athar Kipman, Andrew John Fuller, Jeffrey Neil Margolis, Kathryn Stone Perez, Sheridan Martin Small
-
Patent number: 9229231
Abstract: The technology provides for updating printed content with personalized virtual data using a see-through, near-eye, mixed reality display device system. A printed content item, for example a book or magazine, is identified from image data captured by cameras on the display device, and user selection of a printed content selection within the printed content item is identified based on physical action user input, for example eye gaze or a gesture. Virtual data is selected from available virtual data for the printed content selection based on user profile data, and the display device system displays the selected virtual data in a position registered to the position of the printed content selection. In some examples, a task related to the printed content item is determined based on physical action user input, and personalized virtual data is displayed registered to the printed content item in accordance with the task.
Type: Grant
Filed: January 9, 2012
Date of Patent: January 5, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheridan Martin Small, Alex Aben-Athar Kipman, Benjamin I. Vaught, Kathryn Stone Perez
-
Patent number: 9182815
Abstract: The technology provides embodiments for making static printed content being viewed through a see-through, mixed reality display device system more dynamic with display of virtual data. A printed content item, for example a book or magazine, is identified from image data captured by cameras on the display device, and user selection of a printed content selection within the printed content item is identified based on physical action user input, for example eye gaze or a gesture. A task in relation to the printed content selection can also be determined based on physical action user input. Virtual data for the printed content selection is displayed in accordance with the task. Additionally, virtual data can be linked to a work embodied in a printed content item. Furthermore, a virtual version of the printed material may be displayed at a more comfortable reading position and with improved visibility of the content.
Type: Grant
Filed: December 7, 2011
Date of Patent: November 10, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheridan Martin Small, Alex Aben-Athar Kipman, Benjamin I. Vaught, Kathryn Stone Perez
-
Patent number: 9183807
Abstract: The technology provides embodiments for displaying virtual data as printed content by a see-through, near-eye, mixed reality display device system. One or more literary content items registered to a reading object in a field of view of the display device system are displayed with print layout characteristics. Print layout characteristics from a publisher of each literary content item are selected if available. The reading object has a type, such as a magazine, book, journal, or newspaper, and may be a real object or a virtual object displayed by the display device system. The reading object type of the virtual object is based on a reading object type associated with a literary content item to be displayed. Virtual augmentation data registered to a literary content item is displayed responsive to detecting user physical action in image data. An example of a physical action is a page flipping gesture.
Type: Grant
Filed: January 10, 2012
Date of Patent: November 10, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheridan Martin Small, Alex Aben-Athar Kipman, Benjamin I. Vaught, Kathryn Stone Perez
-
Patent number: 9153195
Abstract: The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud-based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view; if not, an identifier and a position indicator of the person in the location are output. Directional sensors on the display device may also be used for determining a position of the person. Cloud-based software can identify and track the positions of people based on image and non-image data from display devices in the location.
Type: Grant
Filed: January 30, 2012
Date of Patent: October 6, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kevin A. Geisner, Darren Bennett, Relja Markovic, Stephen G. Latta, Daniel J. McCulloch, Jason Scott, Ryan L. Hastings, Alex Aben-Athar Kipman, Andrew John Fuller, Jeffrey Neil Margolis, Kathryn Stone Perez, Sheridan Martin Small
-
Patent number: 9146398
Abstract: Techniques are provided for displaying electronic communications using a head-mounted display (HMD). Each electronic communication may be displayed as a physical object that identifies its specific type or nature, so the user is able to process the electronic communications more efficiently. In some aspects, computer vision allows the user to interact with the representations of the physical objects. One embodiment includes accessing electronic communications and determining physical objects that are representative of at least a subset of the electronic communications. A head-mounted display (HMD) is instructed how to display a representation of the physical objects in this embodiment.
Type: Grant
Filed: July 12, 2011
Date of Patent: September 29, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Stephen G. Latta, Sheridan Martin Small, James C. Liu, Benjamin I. Vaught, Darren Bennett
-
Patent number: 9122053
Abstract: Technology is described for providing realistic occlusion between a virtual object displayed by a head mounted, augmented reality display system and a real object visible to the user's eyes through the display. A spatial occlusion in a user field of view of the display is typically a three dimensional occlusion determined based on a three dimensional space mapping of real and virtual objects. An occlusion interface between a real object and a virtual object can be modeled at a level of detail determined based on criteria such as distance within the field of view, display size or position with respect to a point of gaze. Technology is also described for providing three dimensional audio occlusion based on an occlusion between a real object and a virtual object in the user environment.
Type: Grant
Filed: April 10, 2012
Date of Patent: September 1, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kevin A. Geisner, Brian J. Mount, Stephen G. Latta, Daniel J. McCulloch, Kyungsuk David Lee, Ben J. Sugden, Jeffrey N. Margolis, Kathryn Stone Perez, Sheridan Martin Small, Mark J. Finocchio, Robert L. Crocco, Jr.
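The abstract's idea of choosing a level of detail for the occlusion interface from criteria such as distance, display size, and position relative to the point of gaze can be sketched as a small scoring function. The thresholds and detail labels below are illustrative assumptions, not values from the patent; the intent is only that nearer, larger, and closer-to-gaze occlusions get finer modeling.

```python
def occlusion_detail_level(distance_m, display_size_px, gaze_offset_deg):
    """Pick a level of detail for modeling an occlusion interface.

    Each criterion from the abstract contributes one point when it favors
    finer modeling; thresholds here are hypothetical.
    """
    score = 0
    if distance_m < 2.0:          # occlusion is near the user
        score += 1
    if display_size_px > 100:     # occlusion covers a large display area
        score += 1
    if gaze_offset_deg < 10.0:    # occlusion is close to the point of gaze
        score += 1
    return ("coarse", "medium", "fine", "fine")[score]
```

For example, a nearby, large occlusion under the user's gaze would be modeled at the finest level, while a distant, small one off to the side would get only a coarse model.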
-
Patent number: 8667519
Abstract: A system for generating passive and anonymous feedback of multimedia content viewed by users is disclosed. The multimedia content may include recorded video content, video-on-demand content, television content, television programs, advertisements, commercials, music, movies, video clips, and other on-demand media content. One or more of the users in a field of view of a capture device connected to the computing device are identified. An engagement level of the users to multimedia content being viewed by the users is determined by tracking movements, gestures, postures and facial expressions performed by the users. A report of the response to viewed multimedia content is generated based on the movements, gestures, postures and facial expressions performed by the users. The report is provided to rating agencies, content providers and advertisers.
Type: Grant
Filed: November 12, 2010
Date of Patent: March 4, 2014
Assignee: Microsoft Corporation
Inventors: Sheridan Martin Small, Andrew Fuller, Avi Bar-Zeev, Kathryn Stone Perez
-
Patent number: 8640021
Abstract: A system and method are disclosed for delivering content customized to the specific user or users interacting with the system. The system includes one or more modules for recognizing an identity of a user. These modules may include for example a gesture recognition engine, a facial recognition engine, a body language recognition engine and a voice recognition engine. The user may also be carrying a mobile device such as a smart phone which identifies the user. One or more of these modules may cooperate to identify a user, and then customize the user's content based on the user's identity. In particular, the system receives user preferences indicating the content a user wishes to receive and the conditions under which it is to be received. Based on the user preferences and recognition of a user identity and/or other traits, the system presents content customized for a particular user.
Type: Grant
Filed: November 12, 2010
Date of Patent: January 28, 2014
Assignee: Microsoft Corporation
Inventors: Kathryn Stone Perez, Andrew Fuller, Avi Bar-Zeev, Sheridan Martin Small
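The abstract's notion of several recognition engines cooperating to identify a user can be sketched as combining per-engine results. The engine names, user names, and majority-vote rule below are illustrative assumptions; the patent describes cooperation among engines without prescribing this particular rule.

```python
from collections import Counter

def identify_user(engine_results):
    """Combine per-engine identifications into one user identity.

    engine_results maps an engine name (e.g. "face", "voice", "gesture")
    to the user it identified, or None if that engine found no match.
    Returns the most commonly identified user, or None if no engine
    produced an identification.
    """
    votes = Counter(u for u in engine_results.values() if u is not None)
    if not votes:
        return None
    return votes.most_common(1)[0][0]
```

Once an identity is chosen, the system could look up that user's stored preferences to select which content to present.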
-
Publication number: 20130321255
Abstract: Technology is disclosed herein to help a user navigate through large amounts of content while wearing a see-through, near-eye, mixed reality display device such as a head-mounted display (HMD). The user can use a physical object such as a book to navigate through content being presented in the HMD. In one embodiment, a book has markers on the pages that allow the system to organize the content. The book could have real content, but it could also be blank other than the markers. As the user flips through the book, the system recognizes the markers and presents content associated with the respective marker in the HMD.
Type: Application
Filed: June 5, 2012
Publication date: December 5, 2013
Inventors: Mathew J. Lamb, Ben J. Sugden, Robert L. Crocco, Jr., Brian E. Keane, Christopher E. Miles, Kathryn Stone Perez, Laura K. Massey, Alex Aben-Athar Kipman, Sheridan Martin Small, Stephen G. Latta
-
Publication number: 20130174213
Abstract: A system for automatically sharing virtual objects between different mixed reality environments is described. In some embodiments, a see-through head-mounted display device (HMD) automatically determines a privacy setting associated with another HMD by inferring a particular social relationship with a person associated with the other HMD (e.g., inferring that the person is a friend or acquaintance). The particular social relationship may be inferred by considering the distance to the person associated with the other HMD, the type of environment (e.g., at home or work), and particular physical interactions involving the person (e.g., handshakes or hugs). The HMD may subsequently transmit one or more virtual objects associated with the privacy setting to the other HMD. The HMD may also receive and display one or more other virtual objects from the other HMD based on the privacy setting.
Type: Application
Filed: November 29, 2012
Publication date: July 4, 2013
Inventors: James Liu, Stephen Latta, Anton O.A. Andrews, Benjamin Isaac Vaught, Sheridan Martin Small
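The inference step this abstract describes — deriving a social relationship from distance, environment type, and physical interactions — can be sketched as a small rule set. The specific thresholds, environment labels, and relationship categories below are illustrative assumptions, not rules from the publication.

```python
def infer_relationship(distance_m, environment, interaction):
    """Infer a social relationship with another HMD's wearer.

    The cues (distance, environment, physical interaction) come from the
    abstract; the rules and labels here are hypothetical.
    """
    if interaction in ("hug", "handshake") and environment == "home":
        return "friend"        # close physical interaction at home
    if interaction == "handshake" or distance_m < 1.5:
        return "acquaintance"  # some interaction, or standing close by
    return "stranger"          # default: no private sharing
```

The inferred relationship would then select a privacy setting that governs which virtual objects the HMD transmits to, or accepts from, the other device.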
-
Publication number: 20130147836
Abstract: The technology provides embodiments for making static printed content being viewed through a see-through, mixed reality display device system more dynamic with display of virtual data. A printed content item, for example a book or magazine, is identified from image data captured by cameras on the display device, and user selection of a printed content selection within the printed content item is identified based on physical action user input, for example eye gaze or a gesture. A task in relation to the printed content selection can also be determined based on physical action user input. Virtual data for the printed content selection is displayed in accordance with the task. Additionally, virtual data can be linked to a work embodied in a printed content item. Furthermore, a virtual version of the printed material may be displayed at a more comfortable reading position and with improved visibility of the content.
Type: Application
Filed: December 7, 2011
Publication date: June 13, 2013
Inventors: Sheridan Martin Small, Alex Aben-Athar Kipman, Benjamin I. Vaught, Kathryn Stone Perez
-
Publication number: 20130147687
Abstract: The technology provides embodiments for displaying virtual data as printed content by a see-through, near-eye, mixed reality display device system. One or more literary content items registered to a reading object in a field of view of the display device system are displayed with print layout characteristics. Print layout characteristics from a publisher of each literary content item are selected if available. The reading object has a type, such as a magazine, book, journal, or newspaper, and may be a real object or a virtual object displayed by the display device system. The reading object type of the virtual object is based on a reading object type associated with a literary content item to be displayed. Virtual augmentation data registered to a literary content item is displayed responsive to detecting user physical action in image data. An example of a physical action is a page flipping gesture.
Type: Application
Filed: January 10, 2012
Publication date: June 13, 2013
Inventors: Sheridan Martin Small, Alex Aben-Athar Kipman, Benjamin I. Vaught, Kathryn Stone Perez
-
Publication number: 20130147838
Abstract: The technology provides for updating printed content with personalized virtual data using a see-through, near-eye, mixed reality display device system. A printed content item, for example a book or magazine, is identified from image data captured by cameras on the display device, and user selection of a printed content selection within the printed content item is identified based on physical action user input, for example eye gaze or a gesture. Virtual data is selected from available virtual data for the printed content selection based on user profile data, and the display device system displays the selected virtual data in a position registered to the position of the printed content selection. In some examples, a task related to the printed content item is determined based on physical action user input, and personalized virtual data is displayed registered to the printed content item in accordance with the task.
Type: Application
Filed: January 9, 2012
Publication date: June 13, 2013
Inventors: Sheridan Martin Small, Alex Aben-Athar Kipman, Benjamin I. Vaught, Kathryn Stone Perez
-
Publication number: 20130050258
Abstract: A see-through head-mounted display (HMD) device provides an augmented reality image which is associated with a real-world object, such as a picture frame, wall, or billboard. Initially, the object is identified by the user, e.g., based on the user gazing at the object for a period of time, making a gesture such as pointing at the object, and/or providing a verbal command. The location and visual characteristics of the object are determined by a front-facing camera of the HMD device and stored in a record. The user selects from among candidate data streams, such as a web page, game feed, video, or stock ticker. Subsequently, when the user is in the location of the object and looks at the object, the HMD device matches the visual characteristics to the record to identify the data stream, and displays corresponding augmented reality images registered to the object.
Type: Application
Filed: August 25, 2011
Publication date: February 28, 2013
Inventors: James Chia-Ming Liu, Anton Oguzhan Alford Andrews, Craig R. Maitlen, Christopher M. Novak, Darren A. Bennett, Sheridan Martin Small
-
Publication number: 20130044130
Abstract: The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud-based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view; if not, an identifier and a position indicator of the person in the location are output. Directional sensors on the display device may also be used for determining a position of the person. Cloud-based software can identify and track the positions of people based on image and non-image data from display devices in the location.
Type: Application
Filed: January 30, 2012
Publication date: February 21, 2013
Inventors: Kevin A. Geisner, Darren Bennett, Relja Markovic, Stephen G. Latta, Daniel J. McCulloch, Jason Scott, Ryan L. Hastings, Alex Aben-Athar Kipman, Andrew John Fuller, Jeffrey Neil Margolis, Kathryn Stone Perez, Sheridan Martin Small