Patents by Inventor Alex Aben-Athar Kipman
Alex Aben-Athar Kipman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9268406
Abstract: Technology is described for providing a virtual spectator experience for a user of a personal A/V apparatus including a near-eye, augmented reality (AR) display. A position volume of an event object participating in an event in a first 3D coordinate system for a first location is received and mapped to a second position volume in a second 3D coordinate system at a second location remote from where the event is occurring. A display field of view of the near-eye AR display at the second location is determined, and real-time 3D virtual data representing the one or more event objects positioned within the display field of view is displayed in the near-eye AR display. A user may select a viewing position from which to view the event. Additionally, virtual data of a second user may be displayed at a position relative to a first user.
Type: Grant
Filed: June 29, 2012
Date of Patent: February 23, 2016
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Kevin A. Geisner, Kathryn Stone Perez, Stephen G. Latta, Ben J. Sugden, Benjamin I. Vaught, Alex Aben-Athar Kipman, Michael J. Scavezze, Daniel J. McCulloch, Darren Bennett, Jason Scott, Ryan L. Hastings, Brian E. Keane, Christopher E. Miles, Robert L. Crocco, Jr., Mathew J. Lamb
-
Patent number: 9256987
Abstract: Methods for tracking the head position of an end user of a head-mounted display device (HMD) relative to the HMD are described. In some embodiments, the HMD may determine an initial head tracking vector associated with an initial head position of the end user relative to the HMD, determine one or more head tracking vectors corresponding with one or more subsequent head positions of the end user relative to the HMD, track head movements of the end user over time based on the initial head tracking vector and the one or more head tracking vectors, and adjust positions of virtual objects displayed to the end user based on the head movements. In some embodiments, the resolution and/or number of virtual objects generated and displayed to the end user may be modified based on a degree of head movement of the end user relative to the HMD.
Type: Grant
Filed: June 24, 2013
Date of Patent: February 9, 2016
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Nathan Ackerman, Drew Steedly, Andy Hodge, Alex Aben-Athar Kipman
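The abstract above describes scaling the number of displayed virtual objects by the degree of head movement. A minimal sketch of that idea, assuming a simple angular comparison between head-pose vectors; the function names, the 60-degree falloff, and the linear scaling are illustrative assumptions, not the patented method:

```python
import math

def angle_between(v1, v2):
    """Angle in radians between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    # Clamp to acos's valid domain to guard against rounding error.
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def detail_level(initial_vec, current_vec, max_objects=10):
    """Fewer virtual objects the further the head has moved from its initial pose."""
    movement = angle_between(initial_vec, current_vec)  # radians
    # No movement -> full detail; 60 degrees or more -> minimum detail.
    fraction = max(0.0, 1.0 - movement / math.radians(60))
    return max(1, round(max_objects * fraction))
```

A tracker would call `detail_level` each frame with the stored initial head tracking vector and the latest one, and cap the render list accordingly.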
-
Patent number: 9235051
Abstract: A see-through head mounted display apparatus includes a display and a processor. The processor determines geo-located positions of points of interest within a field of view and generates markers indicating that information regarding an associated real-world object is available to the user. Markers are rendered in the display relative to the geo-located position and the field of view of the user. When a user selects a marker through a user gesture, the device displays a near-field virtual object having a visual tether to the marker simultaneously with the marker. The user may interact with the marker to view, add or delete information associated with the point of interest.
Type: Grant
Filed: June 18, 2013
Date of Patent: January 12, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Tom G. Salter, Ben J. Sugden, Daniel Deptford, Robert L. Crocco, Jr., Brian E. Keane, Laura K. Massey, Alex Aben-Athar Kipman, Peter Tobias Kinnebrew, Nicholas Ferianc Kamuda
-
Patent number: 9230368
Abstract: A system and method are disclosed for displaying virtual objects in a mixed reality environment in a way that is optimal and most comfortable for a user to interact with the virtual objects. When a user is moving through the mixed reality environment, the virtual objects may remain world-locked, so that the user can move around and explore the virtual objects from different perspectives. When the user is motionless in the mixed reality environment, the virtual objects may rotate to face the user so that the user can easily view and interact with the virtual objects.
Type: Grant
Filed: May 23, 2013
Date of Patent: January 5, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Brian E. Keane, Ben J. Sugden, Robert L. Crocco, Jr., Daniel Deptford, Tom G. Salter, Laura K. Massey, Alex Aben-Athar Kipman, Peter Tobias Kinnebrew, Nicholas Ferianc Kamuda
-
Patent number: 9229231
Abstract: The technology provides for updating printed content with personalized virtual data using a see-through, near-eye, mixed reality display device system. A printed content item, for example a book or magazine, is identified from image data captured by cameras on the display device, and user selection of a printed content selection within the printed content item is identified based on physical action user input, for example eye gaze or a gesture. Virtual data is selected from available virtual data for the printed content selection based on user profile data, and the display device system displays the selected virtual data in a position registered to the position of the printed content selection. In some examples, a task related to the printed content item is determined based on physical action user input, and personalized virtual data is displayed registered to the printed content item in accordance with the task.
Type: Grant
Filed: January 9, 2012
Date of Patent: January 5, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheridan Martin Small, Alex Aben-Athar Kipman, Benjamin I. Vaught, Kathryn Stone Perez
-
Publication number: 20150370528
Abstract: An audio/visual system (e.g., such as an entertainment console or other computing device) plays a base audio track, such as a portion of a pre-recorded song or notes from one or more instruments. Using a depth camera or other sensor, the system automatically detects that a user (or a portion of the user) enters a first collision volume of a plurality of collision volumes. Each collision volume of the plurality of collision volumes is associated with a different audio stem. In one example, an audio stem is a sound from a subset of instruments playing a song, a portion of a vocal track for a song, or notes from one or more instruments. In response to automatically detecting that the user (or a portion of the user) entered the first collision volume, the appropriate audio stem associated with the first collision volume is added to the base audio track or removed from the base audio track.
Type: Application
Filed: August 31, 2015
Publication date: December 24, 2015
Inventors: Jason Flaks, Rudy Jacobus Poot, Alex Aben-Athar Kipman, Chris Miles, Andrew John Fuller, Jeffrey Neil Margolis
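The collision-volume mechanism in this abstract (and in the related grant 9123316 below) maps regions of space to audio stems that toggle into the mix when entered. A hedged sketch of that structure, assuming spherical volumes and a toggle-on-entry rule; the class and method names are illustrative, not from the filing:

```python
class CollisionVolume:
    """A spherical region of space tied to one audio stem."""
    def __init__(self, center, radius, stem):
        self.center, self.radius, self.stem = center, radius, stem

    def contains(self, point):
        # Compare squared distances to avoid a square root per check.
        dist_sq = sum((p - c) ** 2 for p, c in zip(point, self.center))
        return dist_sq <= self.radius ** 2

class StemMixer:
    """A base audio track plus the set of stems currently mixed in."""
    def __init__(self, base_track):
        self.base_track = base_track
        self.active_stems = set()

    def on_user_entered(self, volume):
        """Entering a volume toggles its stem into or out of the mix."""
        if volume.stem in self.active_stems:
            self.active_stems.remove(volume.stem)
        else:
            self.active_stems.add(volume.stem)
```

A depth-camera loop would call `contains` on each tracked body point and fire `on_user_entered` on the transition from outside to inside a volume.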
-
Patent number: 9213163
Abstract: The technology provides for automatic alignment of a see-through near-eye, mixed reality device with an inter-pupillary distance (IPD). A determination is made as to whether a see-through, near-eye, mixed reality display device is aligned with an IPD of a user. If the display device is not aligned with the IPD, the display device is automatically adjusted. In some examples, the alignment determination is based on determinations of whether an optical axis of each display optical system positioned to be seen through by a respective eye is aligned with a pupil of the respective eye in accordance with an alignment criteria. The pupil alignment may be determined based on an arrangement of gaze detection elements for each display optical system including at least one sensor for capturing data of the respective eye and the captured data. The captured data may be image data, image and glint data, and glint data only.
Type: Grant
Filed: August 30, 2011
Date of Patent: December 15, 2015
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: John R. Lewis, Yichen Wei, Robert L. Crocco, Benjamin I. Vaught, Kathryn Stone Perez, Alex Aben-Athar Kipman
-
Patent number: 9202443
Abstract: A see-through head-mounted display and method for operating the display to optimize performance of the display by referencing a user profile automatically. The identity of the user is determined by performing an iris scan and recognition of the user, enabling user profile information to be retrieved and used to enhance the user's experience with the see-through head-mounted display. The user profile may contain user preferences regarding services providing augmented reality images to the see-through head-mounted display, as well as display adjustment information optimizing the position of display elements in the see-through head-mounted display.
Type: Grant
Filed: November 29, 2012
Date of Patent: December 1, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kathryn Stone Perez, Bob Crocco, Jr., John R. Lewis, Ben Vaught, Alex Aben-Athar Kipman
-
Patent number: 9183676
Abstract: Technology is described for displaying a collision between objects by an augmented reality display device system. A collision between a real object and a virtual object is identified based on three dimensional space position data of the objects. At least one effect on at least one physical property of the real object, such as a change in surface shape, is determined based on the physical properties of the real object and the physical interaction characteristics of the collision. Simulation image data is generated and displayed simulating the effect on the real object by the augmented reality display. Virtual objects under control of different executing applications can also interact with one another in collisions.
Type: Grant
Filed: April 27, 2012
Date of Patent: November 10, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Daniel J. McCulloch, Stephen G. Latta, Brian J. Mount, Kevin A. Geisner, Roger Sebastian Kevin Sylvan, Arnulfo Zepeda Navratil, Jason Scott, Jonathan T. Steed, Ben J. Sugden, Britta Silke Hummel, Kyungsuk David Lee, Mark J. Finocchio, Alex Aben-Athar Kipman, Jeffrey N. Margolis
-
Patent number: 9183807
Abstract: The technology provides embodiments for displaying virtual data as printed content by a see-through, near-eye, mixed reality display device system. One or more literary content items registered to a reading object in a field of view of the display device system are displayed with print layout characteristics. Print layout characteristics from a publisher of each literary content item are selected if available. The reading object has a type like a magazine, book, journal or newspaper and may be a real object or a virtual object displayed by the display device system. The reading object type of the virtual object is based on a reading object type associated with a literary content item to be displayed. Virtual augmentation data registered to a literary content item is displayed responsive to detecting user physical action in image data. An example of a physical action is a page flipping gesture.
Type: Grant
Filed: January 10, 2012
Date of Patent: November 10, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheridan Martin Small, Alex Aben-Athar Kipman, Benjamin I. Vaught, Kathryn Stone Perez
-
Patent number: 9182815
Abstract: The technology provides embodiments for making static printed content being viewed through a see-through, mixed reality display device system more dynamic with display of virtual data. A printed content item, for example a book or magazine, is identified from image data captured by cameras on the display device, and user selection of a printed content selection within the printed content item is identified based on physical action user input, for example eye gaze or a gesture. A task in relation to the printed content selection can also be determined based on physical action user input. Virtual data for the printed content selection is displayed in accordance with the task. Additionally, virtual data can be linked to a work embodied in a printed content item. Furthermore, a virtual version of the printed material may be displayed at a more comfortable reading position and with improved visibility of the content.
Type: Grant
Filed: December 7, 2011
Date of Patent: November 10, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheridan Martin Small, Alex Aben-Athar Kipman, Benjamin I. Vaught, Kathryn Stone Perez
-
Publication number: 20150310665
Abstract: Embodiments are described herein for determining a stabilization plane to reduce errors that occur when a homographic transformation is applied to a scene including 3D geometry and/or multiple non-coplanar planes. Such embodiments can be used, e.g., when displaying an image on a head mounted display (HMD) device, but are not limited thereto. In an embodiment, a rendered image is generated, a gaze location of a user is determined, and a stabilization plane, associated with a homographic transformation, is determined based on the determined gaze location. This can involve determining, based on the user's gaze location, variables of the homographic transformation that define the stabilization plane. The homographic transformation is applied to the rendered image to thereby generate an updated image, and at least a portion of the updated image is then displayed.
Type: Application
Filed: April 29, 2014
Publication date: October 29, 2015
Inventors: Ashraf Ayman Michail, Roger Sebastian Kevin Sylvan, Quentin Simon Charles Miller, Alex Aben-Athar Kipman
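The abstract above rests on a standard geometric fact: a homography that re-projects a rendered image for a new head pose is exact only for points on one plane, so the plane is placed at the depth of the user's gaze. A hedged sketch using the textbook plane-induced homography H = R - t nᵀ/d in normalized (calibrated) image coordinates; this is the general formula from multi-view geometry, not the variables defined in the filing:

```python
def outer(u, v):
    """Outer product of two 3-vectors as a 3x3 matrix."""
    return [[ui * vj for vj in v] for ui in u]

def stabilization_homography(R, t, gaze_depth, n=(0.0, 0.0, 1.0)):
    """H = R - t n^T / d for a stabilization plane with normal n at depth d.

    R is the 3x3 head rotation, t the head translation, both since render time.
    Points on the plane at gaze_depth are re-projected exactly; others approximately.
    """
    scaled = outer(t, n)
    return [[R[i][j] - scaled[i][j] / gaze_depth for j in range(3)]
            for i in range(3)]

def warp_point(H, x, y):
    """Apply the homography to one normalized image point (homogeneous coords)."""
    p = [sum(h * c for h, c in zip(row, (x, y, 1.0))) for row in H]
    return p[0] / p[2], p[1] / p[2]
```

Moving the gaze point nearer or farther changes `gaze_depth`, which changes H and thus where the re-projection error of off-plane content lands.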
-
Patent number: 9153195
Abstract: The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud-based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view; if not, an identifier and a position indicator of the person in the location are output. Directional sensors on the display device may also be used for determining a position of the person. Cloud-based executing software can identify and track the positions of people based on image and non-image data from display devices in the location.
Type: Grant
Filed: January 30, 2012
Date of Patent: October 6, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kevin A. Geisner, Darren Bennett, Relja Markovic, Stephen G. Latta, Daniel J. McCulloch, Jason Scott, Ryan L. Hastings, Alex Aben-Athar Kipman, Andrew John Fuller, Jeffrey Neil Margolis, Kathryn Stone Perez, Sheridan Martin Small
-
Patent number: 9128520
Abstract: A collaborative on-demand system allows a user of a head-mounted display device (HMDD) to obtain assistance with an activity from a qualified service provider. In a session, the user and service provider exchange camera-captured images and augmented reality images. A gaze-detection capability of the HMDD allows the user to mark areas of interest in a scene. The service provider can similarly mark areas of the scene, as well as provide camera-captured images of the service provider's hand or arm pointing to or touching an object of the scene. The service provider can also select an animation or text to be displayed on the HMDD. A server can match user requests with qualified service providers which meet parameters regarding fee, location, rating and other preferences. Or, service providers can review open requests and self-select appropriate requests, initiating contact with a user.
Type: Grant
Filed: March 30, 2012
Date of Patent: September 8, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kevin A Geisner, Kathryn Stone Perez, Stephen G. Latta, Ben J Sugden, Benjamin I Vaught, Jeffrey B Cole, Alex Aben-Athar Kipman, Ian D McIntyre, Daniel McCulloch
-
Patent number: 9123316
Abstract: An audio/visual system (e.g., such as an entertainment console or other computing device) plays a base audio track, such as a portion of a pre-recorded song or notes from one or more instruments. Using a depth camera or other sensor, the system automatically detects that a user (or a portion of the user) enters a first collision volume of a plurality of collision volumes. Each collision volume of the plurality of collision volumes is associated with a different audio stem. In one example, an audio stem is a sound from a subset of instruments playing a song, a portion of a vocal track for a song, or notes from one or more instruments. In response to automatically detecting that the user (or a portion of the user) entered the first collision volume, the appropriate audio stem associated with the first collision volume is added to the base audio track or removed from the base audio track.
Type: Grant
Filed: December 27, 2010
Date of Patent: September 1, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jason Flaks, Rudy Jacobus Poot, Alex Aben-Athar Kipman, Chris Miles, Andrew John Fuller, Jeffrey Neil Margolis
-
Patent number: 9116220
Abstract: Techniques are provided for synchronization of sensor signals between devices. One or more of the devices may collect sensor data. The device may create a sensor signal from the sensor data, which it may make available to other devices under a publisher/subscriber model. The other devices may subscribe to sensor signals they choose. A device could be a provider or a consumer of the sensor signals. A device may have a layer of code between an operating system and software applications that processes the data for the applications. The processing may include such actions as synchronizing the data in a sensor signal to a local time clock, predicting future values for data in a sensor signal, and providing data samples for a sensor signal at a frequency that an application requests, among other actions.
Type: Grant
Filed: December 27, 2010
Date of Patent: August 25, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Shao Liu, Mark Finocchio, Avi Bar-Zeev, Jeffrey Margolis, Jason Flaks, Robert Crocco, Jr., Alex Aben-Athar Kipman
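One of the services the abstract above names is providing data samples for a sensor signal at the frequency an application requests. A minimal sketch of that resampling step using linear interpolation between timestamped samples; the function name and list-of-tuples signal format are assumptions for illustration, not the patented interface:

```python
def resample(samples, rate_hz):
    """Resample a sensor signal to a requested frequency.

    samples: list of (timestamp_seconds, value) pairs sorted by timestamp.
    Returns interpolated values at 1/rate_hz intervals over the signal's span.
    """
    if len(samples) < 2:
        return [v for _, v in samples]
    start, end = samples[0][0], samples[-1][0]
    step = 1.0 / rate_hz
    out, i = [], 0
    t = start
    while t <= end:
        # Advance to the segment [samples[i], samples[i+1]] containing t.
        while i + 1 < len(samples) - 1 and samples[i + 1][0] <= t:
            i += 1
        (t0, v0), (t1, v1) = samples[i], samples[i + 1]
        frac = (t - t0) / (t1 - t0)
        out.append(v0 + frac * (v1 - v0))
        t += step
    return out
```

The same segment search extended past the last sample would give the "predicting future values" behavior (extrapolation), at the cost of accuracy.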
-
Patent number: 9116666
Abstract: Techniques are provided for allowing a user to select a region within virtual imagery, such as a hologram, being presented in an HMD. The user could select the region by using their hands to form a closed loop such that from the perspective of the user, the closed loop corresponds to the region the user wishes to select. The user could select the region by using a prop, such as a picture frame. In response to the selection, the selected region could be presented using a different rendering technique than other regions of the virtual imagery. Various rendering techniques such as zooming, filtering, etc. could be applied to the selected region. The identification of the region by the user could also serve as a selection of an element in that portion of the virtual image.
Type: Grant
Filed: June 1, 2012
Date of Patent: August 25, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Tom G. Salter, Alex Aben-Athar Kipman, Ben J. Sugden, Robert L. Crocco, Jr., Brian E. Keane, Christopher E. Miles, Kathryn Stone Perez, Laura K. Massey, Mathew J. Lamb
-
Patent number: 9110504
Abstract: The technology provides various embodiments for gaze determination within a see-through, near-eye, mixed reality display device. In some embodiments, the boundaries of a gaze detection coordinate system can be determined from a spatial relationship between a user eye and gaze detection elements such as illuminators and at least one light sensor positioned on a support structure such as an eyeglasses frame. The gaze detection coordinate system allows for determination of a gaze vector from each eye based on data representing glints on the user eye, or a combination of image and glint data. A point of gaze may be determined in a three-dimensional user field of view including real and virtual objects. The spatial relationship between the gaze detection elements and the eye may be checked and may trigger a re-calibration of training data sets if the boundaries of the gaze detection coordinate system have changed.
Type: Grant
Filed: March 15, 2013
Date of Patent: August 18, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: John R. Lewis, Yichen Wei, Robert L. Crocco, Benjamin I. Vaught, Alex Aben-Athar Kipman, Kathryn Stone Perez
-
Patent number: 9105210
Abstract: A system for identifying an AR tag and determining a location for a virtual object within an augmented reality environment corresponding with the AR tag is described. In some environments, including those with viewing obstructions, the identity of the AR tag and the location of a corresponding virtual object may be determined by aggregating individual identity and location determinations from a plurality of head-mounted display devices (HMDs). The virtual object may comprise a shared virtual object that is viewable from each of the plurality of HMDs as existing at a shared location within the augmented reality environment. The shared location may comprise a weighted average of individual location determinations from each of the plurality of HMDs. By aggregating and analyzing individual identity and location determinations, a particular HMD of the plurality of HMDs may display a virtual object without having to identify a corresponding AR tag directly.
Type: Grant
Filed: June 29, 2012
Date of Patent: August 11, 2015
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Mathew J. Lamb, Ben J. Sugden, Robert L. Crocco, Jr., Brian E. Keane, Christopher E. Miles, Kathryn Stone Perez, Laura K. Massey, Alex Aben-Athar Kipman
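The shared-location step in the abstract above is a weighted average of per-HMD position estimates. A short sketch of that aggregation, assuming each device reports a 3D position plus a weight (standing in for whatever confidence measure the system uses; the weighting scheme itself is not specified here):

```python
def shared_location(estimates):
    """Aggregate per-HMD location estimates into one shared position.

    estimates: list of ((x, y, z), weight) pairs, one per head-mounted display.
    Returns the weighted average position as an (x, y, z) tuple.
    """
    total = sum(w for _, w in estimates)
    return tuple(
        sum(pos[axis] * w for pos, w in estimates) / total
        for axis in range(3)
    )
```

An HMD with an obstructed view of the AR tag could contribute a low weight, or simply adopt the shared result without contributing, matching the abstract's point that a device need not identify the tag directly.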
-
Publication number: 20150220231
Abstract: A system for generating and displaying holographic visual aids associated with a story to an end user of a head-mounted display device while the end user is reading the story or perceiving the story being read aloud is described. The story may be embodied within a reading object (e.g., a book) in which words of the story may be displayed to the end user. The holographic visual aids may include a predefined character animation that is synchronized to a portion of the story corresponding with the character being animated. A reading pace of a portion of the story may be used to control the playback speed of the predefined character animation in real-time such that the character is perceived to be lip-syncing the story being read aloud. In some cases, an existing book without predetermined AR tags may be augmented with holographic visual aids.
Type: Application
Filed: April 15, 2015
Publication date: August 6, 2015
Inventors: Brian E. Keane, Ben J. Sugden, Robert L. Crocco, Jr., Christopher E. Miles, Kathryn Stone Perez, Laura K. Massey, Mathew J. Lamb, Alex Aben-Athar Kipman
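The pacing idea in the last abstract, driving the animation's playback speed from the measured reading pace so the character appears to lip-sync, can be sketched as a simple ratio. The nominal 150 words-per-minute baseline and the clamping range are illustrative assumptions, not values from the filing:

```python
def playback_speed(words_read, elapsed_seconds, nominal_wpm=150.0):
    """Scale factor for the character animation; 1.0 means the authored speed.

    words_read: words of the story covered in the measurement window.
    elapsed_seconds: duration of that window.
    """
    if elapsed_seconds <= 0:
        return 1.0
    measured_wpm = words_read / (elapsed_seconds / 60.0)
    speed = measured_wpm / nominal_wpm
    # Clamp so a pause or a skimmed page neither freezes nor races the character.
    return max(0.25, min(2.0, speed))
```

Re-evaluating this each second over a sliding window of recognized or gaze-tracked words would give the real-time adjustment the abstract describes.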