Patents by Inventor Daniel J. McCulloch

Daniel J. McCulloch has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20130328927
    Abstract: A system for generating a virtual gaming environment based on features identified within a real-world environment, and adapting the virtual gaming environment over time as the features identified within the real-world environment change is described. Utilizing the technology described, a person wearing a head-mounted display device (HMD) may walk around a real-world environment and play a virtual game that is adapted to that real-world environment. For example, the HMD may identify environmental features within a real-world environment such as five grassy areas and two cars, and then spawn virtual monsters based on the location and type of the environmental features identified. The location and type of the environmental features identified may vary depending on the particular real-world environment in which the HMD exists, and therefore each virtual game may look different depending on that environment.
    Type: Application
    Filed: November 29, 2012
    Publication date: December 12, 2013
    Inventors: Brian J. Mount, Jason Scott, Ryan L. Hastings, Darren Bennett, Stephen G. Latta, Daniel J. McCulloch, Kevin A. Geisner, Jonathan T. Steed, Michael J. Scavezze
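    The feature-driven spawning described in the abstract above could be sketched roughly as follows. The spawn table, function names, and coordinates are hypothetical illustrations, not taken from the application:

```python
# Hypothetical sketch: each environmental feature the HMD identifies
# (a type plus a 2D location) is mapped to a kind of virtual monster
# spawned at that location. The table entries are illustrative only.

SPAWN_TABLE = {
    "grass": "swamp_creature",  # grassy areas spawn ground monsters
    "car": "metal_golem",       # cars spawn armored monsters
}

def spawn_monsters(features):
    """Map detected real-world features to virtual monsters at their locations."""
    monsters = []
    for feature_type, location in features:
        kind = SPAWN_TABLE.get(feature_type)
        if kind is not None:  # unknown feature types spawn nothing
            monsters.append({"kind": kind, "pos": location})
    return monsters

# An environment with two grassy areas and one car:
scene = [("grass", (1.0, 2.0)), ("grass", (4.0, 0.5)), ("car", (9.0, 3.0))]
monsters = spawn_monsters(scene)
```

    Because the spawn decisions are driven entirely by the scanned feature list, the same game logic produces a different layout in every real-world environment, which is the adaptivity the abstract describes.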
  • Publication number: 20130328925
    Abstract: A system and method are disclosed for interpreting user focus on virtual objects in a mixed reality environment. Using inference, express gestures and heuristic rules, the present system determines which of the virtual objects the user is likely focused on and interacting with. At that point, the present system may emphasize the selected virtual object over other virtual objects, and interact with the selected virtual object in a variety of ways.
    Type: Application
    Filed: June 12, 2012
    Publication date: December 12, 2013
    Inventors: Stephen G. Latta, Adam G. Poulos, Daniel J. McCulloch, Jeffrey Cole, Wei Zhang
  • Publication number: 20130328762
    Abstract: Technology is described for controlling a virtual object displayed by a near-eye, augmented reality display with a real controller device. User input data is received from a real controller device requesting an action to be performed by the virtual object. A user perspective of the virtual object being displayed by the near-eye, augmented reality display is determined. The user input data requesting the action to be performed by the virtual object is applied based on the user perspective, and the action is displayed from the user perspective. The virtual object to be controlled by the real controller device may be identified based on user input data which may be from a natural user interface (NUI). A user selected force feedback object may also be identified, and the identification may also be based on NUI input data.
    Type: Application
    Filed: June 12, 2012
    Publication date: December 12, 2013
    Inventors: Daniel J. McCulloch, Arnulfo Zepeda Navratil, Jonathan T. Steed, Ryan L. Hastings, Jason Scott, Brian J. Mount, Holly A. Hirzel, Darren Bennett, Michael J. Scavezze
  • Publication number: 20130326364
    Abstract: A system and method are disclosed for positioning and sizing virtual objects in a mixed reality environment in a way that is optimal and most comfortable for a user to interact with the virtual objects.
    Type: Application
    Filed: May 31, 2012
    Publication date: December 5, 2013
    Inventors: Stephen G. Latta, Adam G. Poulos, Daniel J. McCulloch, Wei Zhang
  • Publication number: 20130321390
    Abstract: A system and method are disclosed for augmenting a reading experience in a mixed reality environment. In response to predefined verbal or physical gestures, the mixed reality system is able to answer a user's questions or provide additional information relating to what the user is reading. Responses may be displayed to the user on virtual display slates in a border or around the reading material without obscuring text or interfering with the user's reading experience.
    Type: Application
    Filed: May 31, 2012
    Publication date: December 5, 2013
    Inventors: Stephen G. Latta, Ryan L. Hastings, Cameron G. Brown, Aaron Krauss, Daniel J. McCulloch, Ben J. Sugden
  • Publication number: 20130286004
    Abstract: Technology is described for displaying a collision between objects by an augmented reality display device system. A collision between a real object and a virtual object is identified based on three dimensional space position data of the objects. At least one effect on at least one physical property of the real object is determined based on physical properties of the real object, like a change in surface shape, and physical interaction characteristics of the collision. Simulation image data is generated and displayed simulating the effect on the real object by the augmented reality display. Virtual objects under control of different executing applications can also interact with one another in collisions.
    Type: Application
    Filed: April 27, 2012
    Publication date: October 31, 2013
    Inventors: Daniel J. McCulloch, Stephen G. Latta, Brian J. Mount, Kevin A. Geisner, Roger Sebastian Kevin Sylvan, Arnulfo Zepeda Navratil, Jason Scott, Jonathan T. Steed, Ben J. Sugden, Britta Silke Hummel, Kyungsuk David Lee, Mark J. Finocchio, Alex Aben-Athar Kipman, Jeffrey N. Margolis
  • Publication number: 20130282345
    Abstract: A system for generating and updating a 3D model of a structure as the structure is being constructed or modified is described. The structure may comprise a building or non-building structure such as a bridge, parking garage, or roller coaster. The 3D model may include virtual objects depicting physical components or other construction elements of the structure. Each construction element may be associated with physical location information that may be analyzed over time in order to detect movement of the construction element and to predict when movement of the construction element may cause a code or regulation to be violated. In some cases, a see-through HMD may be utilized by a construction worker while constructing or modifying a structure in order to verify that the placement of a construction element complies with various building codes or regulations in real-time.
    Type: Application
    Filed: April 24, 2012
    Publication date: October 24, 2013
    Inventors: Daniel J. McCulloch, Ryan L. Hastings, Jason Scott, Holly A. Hirzel, Brian J. Mount
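    The movement-prediction idea above, in a deliberately simplified form, amounts to extrapolating a construction element's drift toward a code tolerance. The function, the constant-rate assumption, and the units are hypothetical, not from the application:

```python
def time_to_violation(displacement, drift_rate, tolerance):
    """Predict time until a tracked construction element's displacement
    exceeds the allowed tolerance, assuming a constant drift rate.

    Returns None if the element is not drifting toward a violation;
    returns 0.0 if the tolerance is already exceeded. Units (e.g.
    millimeters and millimeters per day) are illustrative.
    """
    if drift_rate <= 0.0:
        return None  # stationary or drifting back toward compliance
    return max((tolerance - displacement) / drift_rate, 0.0)
```

    For example, under this model a beam measured 2 mm out of position and drifting at 0.5 mm per day would violate a 5 mm tolerance in 6 days, giving the HMD a concrete figure to surface to the construction worker.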
  • Publication number: 20130137076
    Abstract: Technology disclosed herein provides for use of HMDs in a classroom setting. Technology disclosed herein provides for HMD use for holographic instruction. In one embodiment, the HMD is used for social coaching. User profile information may be used to tailor instruction to a specific user based on known skills, learning styles, and/or characteristics. One or more individuals may be monitored based on sensor data. The sensor data may come from an HMD. The monitoring may be analyzed to determine how to enhance an experience. The experience may be enhanced by presenting an image in at least one head mounted display worn by the one or more individuals.
    Type: Application
    Filed: November 30, 2011
    Publication date: May 30, 2013
    Inventors: Kathryn Stone Perez, Kevin A. Geisner, Ben J. Sugden, Daniel J. McCulloch, John Clavin
  • Publication number: 20130114043
    Abstract: The technology provides various embodiments for controlling brightness of a see-through, near-eye, mixed reality display device based on light intensity of what the user is gazing at. The opacity of the display can be altered, such that external light is reduced if the wearer is looking at a bright object. The wearer's pupil size may be determined and used to adjust the brightness used to display images, as well as the opacity of the display. A suitable balance between opacity and brightness used to display images may be determined that allows real and virtual objects to be seen clearly, while not causing damage or discomfort to the wearer's eyes.
    Type: Application
    Filed: November 4, 2011
    Publication date: May 9, 2013
    Inventors: Alexandru O. Balan, Ryan L. Hastings, Stephen G. Latta, Michael J. Scavezze, Daniel J. McCulloch, Derek L. Knee, Brian J. Mount, Kevin A. Geisner, Robert L. Crocco
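    One way to picture the opacity/brightness balance described above is as a pair of clamped mappings from ambient light and pupil size. The thresholds and the linear form are illustrative guesses, not values or methods from the application:

```python
def balance_display(ambient_lux, pupil_mm,
                    max_lux=10000.0, min_pupil_mm=2.0, max_pupil_mm=8.0):
    """Return (opacity, image_brightness), each in [0, 1].

    Brighter surroundings raise display opacity so external light is
    reduced; a more constricted pupil (a bright-adapted wearer) permits
    brighter virtual imagery without discomfort.
    """
    # Opacity scales with ambient light, saturating at max_lux.
    opacity = min(ambient_lux / max_lux, 1.0)
    # 1.0 = fully constricted pupil, 0.0 = fully dilated.
    constriction = (max_pupil_mm - pupil_mm) / (max_pupil_mm - min_pupil_mm)
    # Keep a brightness floor of 0.5 so virtual objects stay visible.
    brightness = max(0.0, min(1.0, 0.5 + 0.5 * constriction))
    return opacity, brightness
```

    In direct sunlight with a constricted pupil this sketch drives both opacity and image brightness to their maxima; in a dim room with a dilated pupil, opacity drops toward zero and virtual imagery is dimmed toward the floor value.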
  • Publication number: 20130083173
    Abstract: Technology is described for providing a virtual spectator experience for a user of a personal A/V apparatus including a near-eye, augmented reality (AR) display. A position volume of an event object participating in an event in a first 3D coordinate system for a first location is received and mapped to a second position volume in a second 3D coordinate system at a second location remote from where the event is occurring. A display field of view of the near-eye AR display at the second location is determined, and real-time 3D virtual data representing the one or more event objects which are positioned within the display field of view are displayed in the near-eye AR display. A user may select a viewing position from which to view the event. Additionally, virtual data of a second user may be displayed at a position relative to a first user.
    Type: Application
    Filed: June 29, 2012
    Publication date: April 4, 2013
    Inventors: Kevin A. Geisner, Kathryn Stone Perez, Stephen G. Latta, Ben J. Sugden, Benjamin I. Vaught, Alex Aben-Athar Kipman, Michael J. Scavezze, Daniel J. McCulloch, Darren Bennett, Jason Scott, Ryan L. Hastings, Brian E. Keane, Christopher E. Miles, Robert L. Crocco, Jr., Mathew J. Lamb
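    The mapping from the event's coordinate system to the remote viewer's coordinate system, plus the field-of-view check, could be sketched as a translate-and-scale transform with a box visibility test. The functions, the 1:50 scale, and the axis-aligned box are hypothetical simplifications, not the application's method:

```python
def map_to_remote(pos_venue, venue_origin, remote_origin, scale):
    """Translate and scale a venue-space position into the remote
    viewer's room coordinates (a stand-in for the abstract's mapping
    between the first and second 3D coordinate systems)."""
    return tuple(remote_origin[i] + scale * (pos_venue[i] - venue_origin[i])
                 for i in range(3))

def in_view(pos, view_min, view_max):
    """Axis-aligned box test standing in for the display field-of-view
    check: only event objects inside the box get rendered."""
    return all(view_min[i] <= pos[i] <= view_max[i] for i in range(3))

# A player at (50, 0, 30) on the field, re-rendered on a tabletop whose
# origin sits at (1.0, 0.8, 0.5) in the viewer's room, at 1:50 scale:
p = map_to_remote((50.0, 0.0, 30.0), (0.0, 0.0, 0.0), (1.0, 0.8, 0.5), 0.02)
```

    Changing `remote_origin` corresponds to the user selecting a different viewing position for the event, as the abstract describes.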
  • Publication number: 20130083008
    Abstract: A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction.
    Type: Application
    Filed: March 27, 2012
    Publication date: April 4, 2013
    Inventors: Kevin A. Geisner, Stephen G. Latta, Ben J. Sugden, Benjamin I. Vaught, Alex Aben-Athar Kipman, Kathryn Stone Perez, Ryan L. Hastings, Darren Bennett, Daniel J. McCulloch, John Clavin, Jennifer A. Karr, Adam G. Poulos, Brian J. Mount
  • Publication number: 20130083062
    Abstract: A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction.
    Type: Application
    Filed: March 27, 2012
    Publication date: April 4, 2013
    Inventors: Kevin A. Geisner, Stephen G. Latta, Ben J. Sugden, Benjamin I. Vaught, Alex Aben-Athar Kipman, Kathryn Stone Perez, Ryan L. Hastings, Darren Bennett, Daniel J. McCulloch, John Clavin, Jason Scott
  • Publication number: 20130083064
    Abstract: Technology is described for resource management based on data including image data of a resource captured by at least one capture device of at least one personal audiovisual (A/V) apparatus including a near-eye, augmented reality (AR) display. A resource is automatically identified from image data captured by at least one capture device of at least one personal A/V apparatus and object reference data. A location in which the resource is situated and a 3D space position or volume of the resource in the location is tracked. A property of the resource is also determined from the image data and tracked. A function of a resource may also be stored for determining whether the resource is usable for a task. Responsive to notification criteria for the resource being satisfied, image data related to the resource is displayed on the near-eye AR display.
    Type: Application
    Filed: June 27, 2012
    Publication date: April 4, 2013
    Inventors: Kevin A. Geisner, Kathryn Stone Perez, Stephen G. Latta, Ben J. Sugden, Benjamin I. Vaught, Alex Aben-Athar Kipman, Jeffrey A. Kohler, Daniel J. McCulloch
  • Publication number: 20130083007
    Abstract: A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction.
    Type: Application
    Filed: March 27, 2012
    Publication date: April 4, 2013
    Inventors: Kevin A. Geisner, Stephen G. Latta, Ben J. Sugden, Benjamin I. Vaught, Alex Aben-Athar Kipman, Kathryn Stone Perez, Ryan L. Hastings, Daniel J. McCulloch, Arthur C. Tomlin, Jennifer A. Karr
  • Publication number: 20130044130
    Abstract: The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view. An identifier and a position indicator of the person in the location are output if not. Directional sensors on the display device may also be used for determining a position of the person. Cloud based executing software can identify and track the positions of people based on image and non-image data from display devices in the location.
    Type: Application
    Filed: January 30, 2012
    Publication date: February 21, 2013
    Inventors: Kevin A. Geisner, Darren Bennett, Relja Markovic, Stephen G. Latta, Daniel J. McCulloch, Jason Scott, Ryan L. Hastings, Alex Aben-Athar Kipman, Andrew John Fuller, Jeffrey Neil Margolis, Kathryn Stone Perez, Sheridan Martin Small
  • Publication number: 20120206452
    Abstract: Technology is described for providing realistic occlusion between a virtual object displayed by a head mounted, augmented reality display system and a real object visible to the user's eyes through the display. A spatial occlusion in a user field of view of the display is typically a three dimensional occlusion determined based on a three dimensional space mapping of real and virtual objects. An occlusion interface between a real object and a virtual object can be modeled at a level of detail determined based on criteria such as distance within the field of view, display size or position with respect to a point of gaze. Technology is also described for providing three dimensional audio occlusion based on an occlusion between a real object and a virtual object in the user environment.
    Type: Application
    Filed: April 10, 2012
    Publication date: August 16, 2012
    Inventors: Kevin A. Geisner, Brian J. Mount, Stephen G. Latta, Daniel J. McCulloch, Kyungsuk David Lee, Ben J. Sugden, Jeffrey N. Margolis, Kathryn Stone Perez, Sheridan Martin Small, Mark J. Finocchio, Robert L. Crocco, Jr.
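    The level-of-detail selection for the occlusion interface described in the last abstract could be pictured as a simple threshold policy on distance from the point of gaze. The threshold values and the three-tier scheme are illustrative assumptions, not taken from the application:

```python
def occlusion_lod(distance_from_gaze_m, near=0.1, mid=0.3):
    """Choose a level of detail for modeling the occlusion interface
    between a real object and a virtual object, based on distance from
    the wearer's point of gaze. Thresholds (meters) are illustrative."""
    if distance_from_gaze_m <= near:
        return "high"    # near gaze: model the boundary precisely
    if distance_from_gaze_m <= mid:
        return "medium"  # mid-range: an approximate boundary
    return "low"         # peripheral: a coarse silhouette suffices
```

    The payoff of such a policy is that expensive, precise occlusion geometry is only computed where the wearer is actually looking, while peripheral occlusions get a cheap approximation.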