Patents by Inventor Robert L. Crocco

Robert L. Crocco has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20140002491
    Abstract: Techniques are provided for rendering, in a see-through, near-eye mixed reality display, a virtual object within a virtual hole, window or cutout. The virtual hole, window or cutout may appear to be within some real world physical object such as a book, table, etc. The virtual object may appear to be just below the surface of the physical object. In a sense, the virtual world could be considered to be a virtual container that provides developers with additional locations for presenting virtual objects. For example, rather than rendering a virtual object, such as a lamp, in a mixed reality display such that it appears to sit on top of a real world desk, the virtual object is rendered such that it appears to be located below the surface of the desk.
    Type: Application
    Filed: June 29, 2012
    Publication date: January 2, 2014
    Inventors: Mathew J. Lamb, Ben J. Sugden, Robert L. Crocco, Jr., Brian E. Keane, Christopher E. Miles, Kathryn Stone Perez, Laura K. Massey, Alex Aben-Athar Kipman, Jeffrey Neil Margolis
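
The placement idea in 20140002491 can be illustrated with a minimal sketch: given the real surface as a plane and a cutout region on it, offset the virtual object below the surface and render it only where it projects into the cutout. The plane/cutout representation and all function names below are illustrative assumptions, not the patent's actual method.

```python
# Minimal sketch of placing a virtual object just below a real surface,
# assuming the surface is known as a plane (point + unit normal) from a
# scene mesh. Names and the circular-cutout model are illustrative only.
import numpy as np

def place_below_surface(surface_point, surface_normal, depth_below):
    """Return a world-space position offset below the real surface."""
    n = np.asarray(surface_normal, dtype=float)
    n /= np.linalg.norm(n)
    return np.asarray(surface_point, dtype=float) - depth_below * n

def inside_cutout(point, cutout_center, cutout_radius, surface_normal):
    """Test whether a point projects into a circular cutout on the surface."""
    n = np.asarray(surface_normal, dtype=float)
    n /= np.linalg.norm(n)
    delta = np.asarray(point, dtype=float) - np.asarray(cutout_center, dtype=float)
    in_plane = delta - np.dot(delta, n) * n      # project onto the surface plane
    return np.linalg.norm(in_plane) <= cutout_radius

# Example: a lamp placed 10 cm below a desk top at z = 0.75 m.
lamp_pos = place_below_surface([0.0, 0.0, 0.75], [0.0, 0.0, 1.0], 0.10)
print(lamp_pos, inside_cutout(lamp_pos, [0.0, 0.0, 0.75], 0.3, [0.0, 0.0, 1.0]))
```
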
  • Publication number: 20140002495
    Abstract: A system for identifying an AR tag and determining a location for a virtual object within an augmented reality environment corresponding with the AR tag is described. In some environments, including those with viewing obstructions, the identity of the AR tag and the location of a corresponding virtual object may be determined by aggregating individual identity and location determinations from a plurality of head-mounted display devices (HMDs). The virtual object may comprise a shared virtual object that is viewable from each of the plurality of HMDs as existing at a shared location within the augmented reality environment. The shared location may comprise a weighted average of individual location determinations from each of the plurality of HMDs. By aggregating and analyzing individual identity and location determinations, a particular HMD of the plurality of HMDs may display a virtual object without having to identify a corresponding AR tag directly.
    Type: Application
    Filed: June 29, 2012
    Publication date: January 2, 2014
    Inventors: Mathew J. Lamb, Ben J. Sugden, Robert L. Crocco, Jr., Brian E. Keane, Christopher E. Miles, Kathryn Stone Perez, Laura K. Massey, Alex Aben-Athar Kipman
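
The shared-location idea in 20140002495 reduces, in its simplest form, to a weighted average of the per-device estimates. A minimal sketch, assuming each HMD reports a 3D position and a confidence weight (both hypothetical inputs):

```python
# Minimal sketch of fusing per-HMD estimates of a shared virtual object's
# location into one weighted-average position. All names are illustrative.
import numpy as np

def fuse_locations(estimates):
    """estimates: list of (position_xyz, weight) tuples from individual HMDs."""
    positions = np.array([p for p, _ in estimates], dtype=float)
    weights = np.array([w for _, w in estimates], dtype=float)
    return (weights[:, None] * positions).sum(axis=0) / weights.sum()

# Three devices see the same AR tag; the occluded one reports low confidence.
shared = fuse_locations([((1.02, 0.0, 2.0), 0.9),
                         ((0.98, 0.1, 2.1), 0.8),
                         ((1.40, 0.3, 2.4), 0.1)])
print(shared)   # position every HMD can use, even one that never saw the tag
```
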
  • Publication number: 20130321255
    Abstract: Technology is disclosed herein to help a user navigate through large amounts of content while wearing a see-through, near-eye, mixed reality display device such as a head mounted display (HMD). The user can use a physical object such as a book to navigate through content being presented in the HMD. In one embodiment, a book has markers on the pages that allow the system to organize the content. The book may contain real content, or it may be blank except for the markers. As the user flips through the book, the system recognizes the markers and presents the content associated with each marker in the HMD.
    Type: Application
    Filed: June 5, 2012
    Publication date: December 5, 2013
    Inventors: Mathew J. Lamb, Ben J. Sugden, Robert L. Crocco, Jr., Brian E. Keane, Christopher E. Miles, Kathryn Stone Perez, Laura K. Massey, Alex Aben-Athar Kipman, Sheridan Martin Small, Stephen G. Latta
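
A minimal sketch of the marker-to-content mapping described in 20130321255, assuming some recognizer yields a marker ID per page; the IDs and content table are purely illustrative:

```python
# Map recognized page markers to HMD content as pages are flipped.
CONTENT_BY_MARKER = {
    "marker_001": "Chapter 1 overlay",
    "marker_002": "Chapter 2 overlay",
}

def content_for_page(marker_id, default="(no content registered)"):
    """Look up the content associated with the marker seen on the current page."""
    return CONTENT_BY_MARKER.get(marker_id, default)

for seen in ["marker_001", "marker_002", "marker_999"]:
    print(seen, "->", content_for_page(seen))
```
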
  • Publication number: 20130321462
    Abstract: Techniques are provided for allowing a user to select a region within virtual imagery, such as a hologram, being presented in an HMD. The user could select the region by using their hands to form a closed loop that, from the user's perspective, corresponds to the region they wish to select, or by using a prop such as a picture frame. In response to the selection, the selected region could be presented using a different rendering technique than other regions of the virtual imagery. Various rendering techniques such as zooming, filtering, etc. could be applied to the selected region. The identification of the region by the user could also serve as a selection of an element in that portion of the virtual image.
    Type: Application
    Filed: June 1, 2012
    Publication date: December 5, 2013
    Inventors: Tom G. Salter, Alex Aben-Athar Kipman, Ben J. Sugden, Robert L. Crocco, Jr., Brian E. Keane, Christopher E. Miles, Kathryn Stone Perez, Laura K. Massey, Mathew J. Lamb
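
The region selection in 20130321462 can be sketched as turning the hand-formed closed loop into a pixel mask to which an alternate rendering (zoom, filter, etc.) is applied. The bounding-box approximation and names below are illustrative only:

```python
# Flag which pixels of the virtual image fall inside the user's closed loop
# (here approximated by the loop's bounding box).
import numpy as np

def selection_mask(loop_points, image_shape):
    """loop_points: Nx2 pixel coordinates of the closed loop seen by the HMD."""
    pts = np.asarray(loop_points, dtype=float)
    (x0, y0), (x1, y1) = pts.min(axis=0), pts.max(axis=0)
    ys, xs = np.mgrid[0:image_shape[0], 0:image_shape[1]]
    return (xs >= x0) & (xs <= x1) & (ys >= y0) & (ys <= y1)

mask = selection_mask([(40, 30), (90, 32), (88, 80), (42, 78)], (120, 160))
print(mask.sum(), "pixels would get the alternate rendering (e.g. zoom)")
```
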
  • Publication number: 20130307856
    Abstract: A system for generating and displaying holographic visual aids associated with a story to an end user of a head-mounted display device while the end user is reading the story or perceiving the story being read aloud is described. The story may be embodied within a reading object (e.g., a book) in which words of the story may be displayed to the end user. The holographic visual aids may include a predefined character animation that is synchronized to a portion of the story corresponding with the character being animated. A reading pace of a portion of the story may be used to control the playback speed of the predefined character animation in real-time such that the character is perceived to be lip-syncing the story being read aloud. In some cases, an existing book without predetermined AR tags may be augmented with holographic visual aids.
    Type: Application
    Filed: May 16, 2012
    Publication date: November 21, 2013
    Inventors: Brian E. Keane, Ben J. Sugden, Robert L. Crocco, Jr., Christopher E. Miles, Kathryn Stone Perez, Laura K. Massey, Mathew J. Lamb, Alex Aben-Athar Kipman
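
The pacing idea shared by 20130307856 and 20130307855 amounts to scaling the animation's playback rate by the observed reading pace. A minimal sketch, with the nominal words-per-minute value as an illustrative assumption:

```python
# Scale a predefined character animation's playback rate to the reading pace
# so the character appears to lip-sync the passage being read aloud.
def playback_rate(words_read, elapsed_seconds, nominal_wpm=150.0):
    """Ratio of the observed reading pace to the pace the clip was authored for."""
    observed_wpm = 60.0 * words_read / max(elapsed_seconds, 1e-6)
    return observed_wpm / nominal_wpm

# A slow reader (100 wpm) plays the clip at ~0.67x; a fast one (200 wpm) at ~1.33x.
print(playback_rate(50, 30.0), playback_rate(100, 30.0))
```
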
  • Publication number: 20130307855
    Abstract: A system for generating and displaying holographic visual aids associated with a story to an end user of a head-mounted display device while the end user is reading the story or perceiving the story being read aloud is described. The story may be embodied within a reading object (e.g., a book) in which words of the story may be displayed to the end user. The holographic visual aids may include a predefined character animation that is synchronized to a portion of the story corresponding with the character being animated. A reading pace of a portion of the story may be used to control the playback speed of the predefined character animation in real-time such that the character is perceived to be lip-syncing the story being read aloud. In some cases, an existing book without predetermined AR tags may be augmented with holographic visual aids.
    Type: Application
    Filed: May 16, 2012
    Publication date: November 21, 2013
    Inventors: Mathew J. Lamb, Ben J. Sugden, Robert L. Crocco, Jr., Brian E. Keane, Christopher E. Miles, Kathryn Stone Perez, Laura K. Massey, Alex Aben-Athar Kipman
  • Publication number: 20130300653
    Abstract: The technology provides various embodiments for gaze determination within a see-through, near-eye, mixed reality display device. In some embodiments, the boundaries of a gaze detection coordinate system can be determined from a spatial relationship between a user eye and gaze detection elements such as illuminators and at least one light sensor positioned on a support structure such as an eyeglasses frame. The gaze detection coordinate system allows for determination of a gaze vector from each eye based on data representing glints on the user eye, or a combination of image and glint data. A point of gaze may be determined in a three-dimensional user field of view including real and virtual objects. The spatial relationship between the gaze detection elements and the eye may be checked and may trigger a re-calibration of training data sets if the boundaries of the gaze detection coordinate system have changed.
    Type: Application
    Filed: July 12, 2013
    Publication date: November 14, 2013
    Inventors: John R. Lewis, Yichen Wei, Robert L. Crocco, Benjamin I. Vaught, Alex Aben-Athar Kipman, Kathryn Stone Perez
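
Once a gaze vector is available for each eye, as in 20130300653 and the related filings below, a 3D point of gaze can be estimated as the point mutually closest to the two gaze rays. A minimal sketch with made-up eye positions and directions standing in for what the glint and image data would actually yield:

```python
# Estimate a 3D point of gaze as the midpoint of the shortest segment between
# the two eyes' gaze rays (they rarely intersect exactly).
import numpy as np

def point_of_gaze(origin_l, dir_l, origin_r, dir_r):
    """Closest-approach midpoint of the left and right gaze rays."""
    o1, d1 = np.asarray(origin_l, float), np.asarray(dir_l, float)
    o2, d2 = np.asarray(origin_r, float), np.asarray(dir_r, float)
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    t1 = (b * e - c * d) / denom if denom > 1e-9 else 0.0
    t2 = (a * e - b * d) / denom if denom > 1e-9 else 0.0
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

# Eyes 64 mm apart, both converging on a point roughly 1 m ahead.
print(point_of_gaze([-0.032, 0, 0], [0.032, 0, 1.0], [0.032, 0, 0], [-0.032, 0, 1.0]))
```
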
  • Publication number: 20130286178
    Abstract: The technology provides various embodiments for gaze determination within a see-through, near-eye, mixed reality display device. In some embodiments, the boundaries of a gaze detection coordinate system can be determined from a spatial relationship between a user eye and gaze detection elements such as illuminators and at least one light sensor positioned on a support structure such as an eyeglasses frame. The gaze detection coordinate system allows for determination of a gaze vector from each eye based on data representing glints on the user eye, or a combination of image and glint data. A point of gaze may be determined in a three-dimensional user field of view including real and virtual objects. The spatial relationship between the gaze detection elements and the eye may be checked and may trigger a re-calibration of training data sets if the boundaries of the gaze detection coordinate system have changed.
    Type: Application
    Filed: March 15, 2013
    Publication date: October 31, 2013
    Inventors: John R. Lewis, Yichen Wei, Robert L. Crocco, Benjamin I. Vaught, Alex Aben-Athar Kipman, Kathryn Stone Perez
  • Patent number: 8487838
    Abstract: The technology provides various embodiments for gaze determination within a see-through, near-eye, mixed reality display device. In some embodiments, the boundaries of a gaze detection coordinate system can be determined from a spatial relationship between a user eye and gaze detection elements such as illuminators and at least one light sensor positioned on a support structure such as an eyeglasses frame. The gaze detection coordinate system allows for determination of a gaze vector from each eye based on data representing glints on the user eye, or a combination of image and glint data. A point of gaze may be determined in a three-dimensional user field of view including real and virtual objects. The spatial relationship between the gaze detection elements and the eye may be checked and may trigger a re-calibration of training data sets if the boundaries of the gaze detection coordinate system have changed.
    Type: Grant
    Filed: August 30, 2011
    Date of Patent: July 16, 2013
    Inventors: John R. Lewis, Yichen Wei, Robert L. Crocco, Benjamin I. Vaught, Alex Aben-Athar Kipman, Kathryn Stone Perez
  • Publication number: 20130154918
    Abstract: Systems, methods, and computer media for estimating user eye gaze are provided. A plurality of images of a user's eye are acquired. At least one image of at least part of the user's field of view is acquired. At least one gaze target area in the user's field of view is determined based on the plurality of images of the user's eye. An enhanced user eye gaze is then estimated by narrowing a database of eye information and corresponding known gaze lines to a subset of the eye information having gaze lines corresponding to a gaze target area. User eye information derived from the images of the user's eye is then compared with the narrowed subset of the eye information, and an enhanced estimated user eye gaze is identified as the known gaze line of a matching eye image.
    Type: Application
    Filed: December 20, 2011
    Publication date: June 20, 2013
    Inventors: Benjamin Isaac Vaught, Robert L. Crocco, Jr., John Lewis, Jian Sun, Yichen Wei
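
The narrow-then-match strategy of 20130154918 can be sketched as filtering the database to entries whose known gaze lines fall in the detected target area, then returning the gaze of the nearest eye-feature match. The feature vectors and target-area predicate below are illustrative stand-ins:

```python
# Narrow a database of eye information with known gaze lines to the detected
# gaze target area, then return the gaze line of the closest eye match.
import numpy as np

def enhanced_gaze(query_features, database, in_target_area):
    """database: list of (feature_vector, gaze_line); in_target_area: predicate."""
    subset = [(f, g) for f, g in database if in_target_area(g)]
    if not subset:
        subset = database                      # fall back to the full database
    feats = np.array([f for f, _ in subset], dtype=float)
    dists = np.linalg.norm(feats - np.asarray(query_features, float), axis=1)
    return subset[int(np.argmin(dists))][1]

db = [([0.1, 0.9], "gaze toward screen"), ([0.8, 0.2], "gaze toward keyboard")]
print(enhanced_gaze([0.15, 0.85], db, lambda g: "screen" in g))
```
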
  • Publication number: 20130114043
    Abstract: The technology provides various embodiments for controlling brightness of a see-through, near-eye, mixed reality display device based on the light intensity of what the user is gazing at. The opacity of the display can be altered, such that external light is reduced if the wearer is looking at a bright object. The wearer's pupil size may be determined and used to adjust the brightness used to display images, as well as the opacity of the display. A suitable balance between opacity and brightness used to display images may be determined that allows real and virtual objects to be seen clearly, while not causing damage or discomfort to the wearer's eyes.
    Type: Application
    Filed: November 4, 2011
    Publication date: May 9, 2013
    Inventors: Alexandru O. Balan, Ryan L. Hastings, Stephen G. Latta, Michael J. Scavezze, Daniel J. McCulloch, Derek L. Knee, Brian J. Mount, Kevin A. Geisner, Robert L. Crocco
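
A minimal sketch of the brightness/opacity trade-off in 20130114043, assuming the scene luminance at the gaze point and the pupil diameter are already measured; the mapping constants are illustrative assumptions, not the patent's control law:

```python
# Trade off display opacity against image brightness using the scene light
# level at the gaze point and the measured pupil size. Constants are illustrative.
def opacity_and_brightness(scene_luminance, pupil_mm,
                           max_luminance=10000.0, pupil_range=(2.0, 8.0)):
    """Return (opacity 0..1, display brightness 0..1)."""
    glare = min(scene_luminance / max_luminance, 1.0)   # how bright the gazed-at object is
    opacity = glare                                     # block more external light when it is bright
    lo, hi = pupil_range
    dilation = (min(max(pupil_mm, lo), hi) - lo) / (hi - lo)
    brightness = 0.3 + 0.7 * (1.0 - dilation) * (0.5 + 0.5 * glare)
    return opacity, brightness

print(opacity_and_brightness(8000.0, 3.0))   # bright scene, constricted pupils
print(opacity_and_brightness(50.0, 7.0))     # dim scene, dilated pupils
```
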
  • Publication number: 20130083173
    Abstract: Technology is described for providing a virtual spectator experience for a user of a personal A/V apparatus including a near-eye, augmented reality (AR) display. A position volume of an event object participating in an event in a first 3D coordinate system for a first location is received and mapped to a second position volume in a second 3D coordinate system at a second location remote from where the event is occurring. A display field of view of the near-eye AR display at the second location is determined, and real-time 3D virtual data representing the event objects positioned within the display field of view is displayed in the near-eye AR display. A user may select a viewing position from which to view the event. Additionally, virtual data of a second user may be displayed at a position relative to a first user.
    Type: Application
    Filed: June 29, 2012
    Publication date: April 4, 2013
    Inventors: Kevin A. Geisner, Kathryn Stone Perez, Stephen G. Latta, Ben J. Sugden, Benjamin I. Vaught, Alex Aben-Athar Kipman, Michael J. Scavezze, Daniel J. McCulloch, Darren Bennett, Jason Scott, Ryan L. Hastings, Brian E. Keane, Christopher E. Miles, Robert L. Crocco, Jr., Mathew J. Lamb
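
The coordinate mapping in 20130083173 can be sketched as a similarity transform from the event site's frame into the viewer's frame, followed by a field-of-view test. The rotation, scale, translation, and cone half-angle below are illustrative values:

```python
# Re-express an event object's position in the remote viewer's frame, then
# test whether it falls inside the display field of view.
import numpy as np

def map_to_viewer_frame(p_event, rotation, scale, translation):
    """p_event: 3D position in the event-site frame -> position in the viewer frame."""
    p = np.asarray(rotation, float) @ np.asarray(p_event, float)
    return scale * p + np.asarray(translation, float)

def in_display_fov(p_viewer, half_angle_deg=15.0):
    """Rough test: is the mapped point inside a cone around the viewer's +z axis?"""
    p = np.asarray(p_viewer, float)
    if p[2] <= 0:
        return False
    return np.degrees(np.arctan2(np.hypot(p[0], p[1]), p[2])) <= half_angle_deg

identity = np.eye(3)
player = map_to_viewer_frame([10.0, 0.0, 50.0], identity, 0.02, [0.0, 0.0, 1.0])
print(player, in_display_fov(player))
```
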
  • Publication number: 20130050642
    Abstract: The technology provides for automatic alignment of a see-through near-eye, mixed reality device with an inter-pupillary distance (IPD). A determination is made as to whether a see-through, near-eye, mixed reality display device is aligned with an IPD of a user. If the display device is not aligned with the IPD, the display device is automatically adjusted. In some examples, the alignment determination is based on determinations of whether an optical axis of each display optical system positioned to be seen through by a respective eye is aligned with a pupil of the respective eye in accordance with an alignment criteria. The pupil alignment may be determined based on an arrangement of gaze detection elements for each display optical system including at least one sensor for capturing data of the respective eye and the captured data. The captured data may be image data, image and glint data, and glint data only.
    Type: Application
    Filed: August 30, 2011
    Publication date: February 28, 2013
    Inventors: John R. Lewis, Yichen Wei, Robert L. Crocco, Benjamin I. Vaught, Kathryn Stone Perez, Alex Aben-Athar Kipman
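
A minimal sketch of the alignment determination in 20130050642: compare each detected pupil position with its display optical axis and flag misalignment beyond a tolerance. The millimetre values and tolerance are illustrative assumptions:

```python
# Check whether each display optical axis lines up with the respective pupil.
def axis_offsets(pupil_positions_mm, optical_axis_positions_mm):
    """Per-eye horizontal offset between pupil and display optical axis."""
    return {eye: pupil_positions_mm[eye] - optical_axis_positions_mm[eye]
            for eye in pupil_positions_mm}

def aligned_with_ipd(offsets_mm, tolerance_mm=1.0):
    return all(abs(off) <= tolerance_mm for off in offsets_mm.values())

offsets = axis_offsets({"left": -33.5, "right": 31.0},
                       {"left": -32.0, "right": 32.0})
print(offsets, aligned_with_ipd(offsets))   # left eye is 1.5 mm off -> not aligned
```
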
  • Publication number: 20130050070
    Abstract: The technology provides various embodiments for gaze determination within a see-through, near-eye, mixed reality display device. In some embodiments, the boundaries of a gaze detection coordinate system can be determined from a spatial relationship between a user eye and gaze detection elements such as illuminators and at least one light sensor positioned on a support structure such as an eyeglasses frame. The gaze detection coordinate system allows for determination of a gaze vector from each eye based on data representing glints on the user eye, or a combination of image and glint data. A point of gaze may be determined in a three-dimensional user field of view including real and virtual objects. The spatial relationship between the gaze detection elements and the eye may be checked and may trigger a re-calibration of training data sets if the boundaries of the gaze detection coordinate system have changed.
    Type: Application
    Filed: August 30, 2011
    Publication date: February 28, 2013
    Inventors: John R. Lewis, Yichen Wei, Robert L. Crocco, Benjamin I. Vaught, Alex Aben-Athar Kipman, Kathryn Stone Perez
  • Publication number: 20130050432
    Abstract: Technology is disclosed for enhancing the experience of a user wearing a see-through, near-eye, mixed reality display device. Based on an arrangement of gaze detection elements on each display optical system for each eye of the display device, a gaze vector is determined for each eye, and a current user focal region is determined from the gaze vectors. Virtual objects are displayed at their respective focal regions in the user's field of view for a natural sight view. Additionally, one or more objects of interest to a user may be identified. The identification may be based on a user intent to interact with the object. For example, the intent may be determined based on gaze duration. Augmented content may be projected over or next to an object, real or virtual. Additionally, a real or virtual object intended for interaction may be zoomed in or out.
    Type: Application
    Filed: August 30, 2011
    Publication date: February 28, 2013
    Inventors: Kathryn Stone Perez, Benjamin I. Vaught, John R. Lewis, Robert L. Crocco, Alex Aben-Athar Kipman
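
The gaze-duration cue in 20130050432 can be sketched as a dwell-time test over a stream of (timestamp, gazed-object) samples; the threshold and sample data are illustrative:

```python
# Infer intent to interact from sustained gaze on the same object.
def object_of_interest(gaze_samples, dwell_seconds=1.5):
    """gaze_samples: time-ordered (timestamp, object_id) pairs from gaze tracking."""
    current, start = None, None
    for t, obj in gaze_samples:
        if obj != current:
            current, start = obj, t
        elif current is not None and t - start >= dwell_seconds:
            return current                      # sustained gaze -> intent to interact
    return None

samples = [(0.0, "poster"), (0.4, "lamp"), (0.9, "lamp"), (1.6, "lamp"), (2.1, "lamp")]
print(object_of_interest(samples))              # "lamp" after ~1.7 s of dwell
```
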
  • Publication number: 20130050833
    Abstract: The technology provides for adjusting a see-through, near-eye, mixed reality display device for alignment with an inter-pupillary distance (IPD) of a user by different examples of display adjustment mechanisms. The see-through, near-eye, mixed reality system includes for each eye a display optical system having an optical axis. Each display optical system is positioned to be seen through by a respective eye, and is supported on a respective movable support structure. A display adjustment mechanism attached to the display device also connects with each movable support structure for moving the structure. A determination is automatically made as to whether the display device is aligned with an IPD of a user. If not aligned, one or more adjustment values for a position of at least one of the display optical systems are automatically determined. The display adjustment mechanism moves the at least one display optical system in accordance with the adjustment values.
    Type: Application
    Filed: August 30, 2011
    Publication date: February 28, 2013
    Inventors: John R. Lewis, Kathryn Stone Perez, Robert L. Crocco, Alex Aben-Athar Kipman
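
Continuing from the alignment sketch above, 20130050833 covers actually moving the display optical systems. A minimal sketch that converts per-eye offsets into step commands for a hypothetical motor-driven adjustment mechanism:

```python
# Turn automatically determined adjustment values into movement commands for
# a display adjustment mechanism that travels in fixed-size steps.
STEP_MM = 0.25     # hypothetical travel per actuator step

def adjustment_steps(offsets_mm):
    """Convert per-eye offsets (mm) into signed step counts that cancel them."""
    return {eye: -round(off / STEP_MM) for eye, off in offsets_mm.items()}

print(adjustment_steps({"left": -1.5, "right": -1.0}))
# {'left': 6, 'right': 4}: move each display optical system back under its pupil
```
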
  • Publication number: 20120206452
    Abstract: Technology is described for providing realistic occlusion between a virtual object displayed by a head mounted, augmented reality display system and a real object visible to the user's eyes through the display. A spatial occlusion in a user field of view of the display is typically a three dimensional occlusion determined based on a three dimensional space mapping of real and virtual objects. An occlusion interface between a real object and a virtual object can be modeled at a level of detail determined based on criteria such as distance within the field of view, display size or position with respect to a point of gaze. Technology is also described for providing three dimensional audio occlusion based on an occlusion between a real object and a virtual object in the user environment.
    Type: Application
    Filed: April 10, 2012
    Publication date: August 16, 2012
    Inventors: Kevin A. Geisner, Brian J. Mount, Stephen G. Latta, Daniel J. McCulloch, Kyungsuk David Lee, Ben J. Sugden, Jeffrey N. Margolis, Kathryn Stone Perez, Sheridan Martin Small, Mark J. Finocchio, Robert L. Crocco, Jr.
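
The level-of-detail selection in 20120206452 can be sketched as choosing a coarser occlusion-boundary model when the boundary is far away or far from the point of gaze; the cut-offs and labels below are illustrative assumptions:

```python
# Pick a level of detail for modeling an occlusion interface from its distance
# and its angular offset from the point of gaze.
def occlusion_lod(distance_m, gaze_offset_deg):
    """Coarser geometry for far-away or peripheral occlusion boundaries."""
    if distance_m < 1.5 and gaze_offset_deg < 10.0:
        return "per-pixel silhouette"       # user is looking right at the boundary
    if distance_m < 5.0:
        return "simplified mesh"
    return "bounding volume only"

print(occlusion_lod(1.0, 5.0), "|", occlusion_lod(3.0, 25.0), "|", occlusion_lod(12.0, 40.0))
```
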
  • Publication number: 20110316853
    Abstract: Described herein is a telepresence system in which a real-time virtual hologram of a user is displayed at a remote display screen and is rendered from a vantage point that is different from the vantage point from which images of the user are captured via a video camera. The virtual hologram is based at least in part upon data acquired from a sensor unit at the location of the user. The sensor unit includes a color video camera that captures 2-D images of the user, including surface features of the user. The sensor unit also includes a depth sensor that captures 3-D geometry data indicative of the relative position of surfaces on the user in 3-D space. The virtual hologram is rendered to orient the gaze of its eyes toward the eyes of a second user viewing the remote display screen.
    Type: Application
    Filed: June 23, 2010
    Publication date: December 29, 2011
    Applicant: Microsoft Corporation
    Inventors: Avi Bar-Zeev, Christian F. Huitema, Alex Aben-Athar Kipman, Robert L. Crocco, Jr., John Allen Tardif, Eric G. Lang
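
The vantage-point change in 20110316853 can be sketched with a standard depth-camera reprojection: back-project a depth pixel to 3D, shift to a virtual camera pose, and re-project. The intrinsics and pose below are illustrative, not the system's calibration:

```python
# Re-render a captured user from a different vantage point using depth data.
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Depth-camera pixel (u, v) with depth (m) -> 3D point in camera coordinates."""
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def project(point, fx, fy, cx, cy):
    """3D point in camera coordinates -> pixel coordinates of the virtual camera."""
    x, y, z = point
    return fx * x / z + cx, fy * y / z + cy

fx = fy = 525.0
cx, cy = 320.0, 240.0                           # hypothetical sensor intrinsics
p = backproject(400.0, 200.0, 1.2, fx, fy, cx, cy)
virtual_cam_shift = np.array([0.10, 0.0, 0.0])  # virtual camera 10 cm to the side
print(project(p - virtual_cam_shift, fx, fy, cx, cy))
```
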