Patents by Inventor Chris McKenzie
Chris McKenzie has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10803663
Abstract: A method for depth sensor aided estimation of virtual reality environment boundaries includes generating depth data at a depth sensor of an electronic device based on a local environment proximate the electronic device. A set of initial boundary data is estimated based on the depth data, wherein the set of initial boundary data defines an exterior boundary of a virtual bounded floor plan. The virtual bounded floor plan is generated based at least in part on the set of initial boundary data. Additionally, a relative pose of the electronic device within the virtual bounded floor plan is determined and a collision warning is displayed on a display of the electronic device based on the relative pose.
Type: Grant
Filed: August 2, 2017
Date of Patent: October 13, 2020
Assignee: GOOGLE LLC
Inventors: Zhaoguang Wang, Mugur Marculescu, Chris McKenzie, Ambrus Csaszar, Ivan Dryanovski
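The collision-warning step in this abstract amounts to comparing the device's tracked pose against the edges of the virtual bounded floor plan. The sketch below is an illustration of that idea only, not the patented implementation: it assumes a 2D polygon boundary, a point pose, and a hypothetical warning threshold of 0.5 meters.

```python
import math

def distance_to_boundary(pose_xy, boundary):
    """Shortest distance from a 2D device position to the edges of a boundary polygon."""
    px, py = pose_xy
    best = float("inf")
    n = len(boundary)
    for i in range(n):
        ax, ay = boundary[i]
        bx, by = boundary[(i + 1) % n]
        # Project the point onto segment AB, clamping to the segment's endpoints.
        abx, aby = bx - ax, by - ay
        t = ((px - ax) * abx + (py - ay) * aby) / (abx * abx + aby * aby)
        t = max(0.0, min(1.0, t))
        cx, cy = ax + t * abx, ay + t * aby
        best = min(best, math.hypot(px - cx, py - cy))
    return best

def collision_warning(pose_xy, boundary, threshold=0.5):
    """True when the tracked pose is within `threshold` meters of the boundary."""
    return distance_to_boundary(pose_xy, boundary) < threshold
```

For example, in a 4 m x 4 m room a pose at the center is safely far from every wall, while a pose 0.2 m from a wall would trigger the warning.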
-
Patent number: 10795449
Abstract: Methods and apparatus using gestures to share private windows in shared virtual environments are disclosed herein. An example method includes detecting a gesture of a user in a virtual environment associated with a private window in the virtual environment, the private window associated with the user, determining whether the gesture represents a signal to share the private window with another, and, when the gesture represents a signal to share the private window, changing the status of the private window to a shared window.
Type: Grant
Filed: December 9, 2016
Date of Patent: October 6, 2020
Assignee: GOOGLE LLC
Inventors: Alexander James Faaborg, Chris McKenzie
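The method in this abstract is a gate on a window's sharing status: a recognized share gesture from the window's owner flips it from private to shared. A minimal sketch of that state change follows; the gesture names and the `VirtualWindow` type are hypothetical, invented here for illustration.

```python
from dataclasses import dataclass

# Hypothetical gesture labels assumed to signal sharing intent.
SHARE_GESTURES = {"push_toward", "open_palm_sweep"}

@dataclass
class VirtualWindow:
    owner: str
    shared: bool = False

def handle_gesture(gesture, window, user):
    """Mark a private window shared when its owner performs a share gesture."""
    if user == window.owner and gesture in SHARE_GESTURES:
        window.shared = True
    return window.shared
```

Gestures from users other than the owner, or gestures not in the share set, leave the window private.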
-
Patent number: 10606344
Abstract: In a system for dynamic switching and merging of head, gesture and touch input in virtual reality, focus may be set on a first virtual object in response to a first input implementing one of a number of different input modes. The first object may then be manipulated in the virtual world in response to a second input implementing another input mode. In response to a third input, focus may be shifted from the first object to a second object if, for example, a priority value of the third input is higher than a priority value of the first input. If the priority value of the third input is less than that of the first input, focus may remain on the first object. In response to certain trigger inputs, a display of virtual objects may be shifted between a far field display and a near field display to accommodate a particular mode of interaction with the virtual objects.
Type: Grant
Filed: September 13, 2018
Date of Patent: March 31, 2020
Assignee: GOOGLE LLC
Inventors: Alexander James Faaborg, Manuel Christian Clement, Chris McKenzie
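The focus-shifting rule described here (a later input steals focus only if its priority exceeds that of the input that established focus) can be sketched as a small focus manager. The priority table is an assumption for illustration; the patent does not specify these particular modes or values.

```python
# Assumed priority ordering: higher value wins focus.
INPUT_PRIORITY = {"head_gaze": 1, "gesture": 2, "touch": 3}

class FocusManager:
    def __init__(self):
        self.focused = None          # currently focused virtual object
        self.focus_priority = 0      # priority of the input that set focus

    def request_focus(self, obj, input_mode):
        """Shift focus only when the new input outranks the one holding focus."""
        priority = INPUT_PRIORITY[input_mode]
        if self.focused is None or priority > self.focus_priority:
            self.focused = obj
            self.focus_priority = priority
        return self.focused
```

With this table, a touch input that focused an object cannot be preempted by a later head-gaze input, matching the "focus may remain on the first object" branch of the abstract.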
-
Patent number: 10496156
Abstract: A system and method of operating an audio visual system generating an immersive virtual experience may include generating, by a head-mounted audio visual device, a virtual world immersive experience within a virtual space while physically moving within a physical space, displaying, by the head-mounted audio visual device within the virtual space, a visual target marker indicating a target location in the physical space, receiving, by the head-mounted audio visual device, a teleport control signal, and moving a virtual location of the head-mounted audio visual device within the virtual space from a first virtual location to a second virtual location in response to receiving the teleport control signal.
Type: Grant
Filed: May 15, 2017
Date of Patent: December 3, 2019
Assignee: GOOGLE LLC
Inventors: Robbie Tilton, Robert Carl Jagnow, Alexander James Faaborg, Chris McKenzie
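The teleport flow in this abstract is aim-then-commit: a target marker is displayed, and the virtual location jumps to it when the teleport control signal arrives. A toy sketch of that two-step interaction, with invented names, assuming positions are plain coordinate tuples:

```python
class Teleporter:
    def __init__(self, start):
        self.virtual_location = start
        self.target = None

    def aim(self, target):
        """Record the location shown by the visual target marker."""
        self.target = target

    def on_teleport_signal(self):
        """Move the virtual location to the aimed target, consuming the marker."""
        if self.target is not None:
            self.virtual_location = self.target
            self.target = None
        return self.virtual_location
```

A second teleport signal with no new aim is a no-op, which is one reasonable way to guard against accidental double triggers.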
-
Patent number: 10347053
Abstract: Techniques disclosed herein involve adaptively or dynamically displaying virtual objects in a virtual reality (VR) environment, and representations, within the VR environment, of physical objects in the physical environment, i.e., outside the VR environment, in order to alert users within the VR environment. For example, if a projected movement of a user indicates that the user will move close to a physical object in the physical world, the representation of the physical object changes from an un-displayed state, in which the physical object is not visible in the VR environment, to a displayed state in which the physical object is at least partially depicted inside the VR environment. In this way, what is displayed inside the VR environment can include both virtual objects as well as representations of physical objects from the physical space.
Type: Grant
Filed: May 17, 2017
Date of Patent: July 9, 2019
Assignee: GOOGLE LLC
Inventors: Chris McKenzie, Adam Glazier, Clayton Woodward Bavor, Jr.
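The trigger here is projected movement: if the user's predicted path brings them near a physical object, that object's representation switches to its displayed state. The sketch below assumes the simplest projection (linear extrapolation of current velocity over a one-second horizon) and a hypothetical proximity radius; it is an illustration, not the patented technique.

```python
import math

def projected_position(position, velocity, horizon=1.0):
    """Linear prediction of the user's position `horizon` seconds from now."""
    return tuple(p + v * horizon for p, v in zip(position, velocity))

def objects_to_display(user_pos, user_vel, physical_objects, radius=0.75):
    """Names of physical objects the projected path brings within `radius` meters."""
    future = projected_position(user_pos, user_vel)
    shown = []
    for name, obj_pos in physical_objects.items():
        if math.dist(future, obj_pos) < radius:
            shown.append(name)
    return shown
```

Everything outside the returned list stays in the un-displayed state, so the VR scene mixes virtual objects with only the physical obstacles that currently matter.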
-
Patent number: 10334076
Abstract: In a system for pairing a first device and a second device in a virtual reality environment, the first device may be a sending device, and the second device may be a receiving device. The sending device may transmit an electromagnetic signal that is received by the receiving device. The receiving device may process the electromagnetic signal to verify physical proximity of the receiving device and the transmitting device, and extract identification information related to the sending device for pairing. The receiving device may display one or more virtual pairing indicators to be manipulated to verify the user's intention to pair the first and second devices.
Type: Grant
Filed: February 21, 2017
Date of Patent: June 25, 2019
Assignee: GOOGLE LLC
Inventors: Chris McKenzie, Murphy Stein, Alan Browning
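The receiving device's job splits into two checks: confirm proximity from the received signal, then extract the sender's identification. One common proxy for proximity is received signal strength; the sketch below assumes that approach with an invented RSSI threshold and payload shape, purely to illustrate the gate-then-extract structure.

```python
# Assumed threshold: RSSI values closer to 0 dBm indicate a nearer transmitter.
NEAR_RSSI_DBM = -50

def verify_and_extract(signal):
    """Return the sender's device ID only when signal strength implies proximity."""
    if signal["rssi_dbm"] < NEAR_RSSI_DBM:
        return None  # too weak: sender is not verifiably nearby
    return signal["payload"]["device_id"]
```

A `None` result would suppress the virtual pairing indicators entirely, so the user-confirmation step only ever appears for nearby senders.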
-
Publication number: 20190179497
Abstract: Methods, apparatus, and computer-readable media are described herein related to a user interface (UI) for a computing device, such as head-mountable device (HMD). The UI allows a user of the HMD to navigate through a timeline of ordered screens or cards shown on the graphic display of the HMD. The cards on the timeline may be chronologically ordered based on times associated with each card. Numerous cards may be added to the timeline such that a user may scroll through the timeline to search for a specific card. The HMD may be configured to group cards on the timeline. The cards may be grouped by multiple time periods and by various content types within each respective time period. The cards may also be grouped based on durations between the present/on-going time period and each respective time period.
Type: Application
Filed: February 14, 2019
Publication date: June 13, 2019
Inventors: Chris McKenzie, Antonio Bernardo Monteiro Costa, Richard Dennis The
-
Combining gaze input and touch surface input for user interfaces in augmented and/or virtual reality
Patent number: 10275023
Abstract: In a virtual reality system, an optical tracking device may detect and track a user's eye gaze direction and/or movement, and/or sensors may detect and track a user's head gaze direction and/or movement, relative to virtual user interfaces displayed in a virtual environment. A processor may process the detected gaze direction and/or movement as a user input, and may translate the user input into a corresponding interaction in the virtual environment. Gaze directed swipes on a virtual keyboard displayed in the virtual environment may be detected and tracked, and translated into a corresponding text input, either alone or together with user input(s) received by the controller. The user may also interact with other types of virtual interfaces in the virtual environment using gaze direction and movement to provide an input, either alone or together with a controller input.
Type: Grant
Filed: December 21, 2016
Date of Patent: April 30, 2019
Assignee: GOOGLE LLC
Inventors: Chris McKenzie, Chun Yat Frank Li, Hayes S. Raffle
-
Patent number: 10254923
Abstract: Methods, apparatus, and computer-readable media are described herein related to a user interface (UI) for a computing device, such as head-mountable device (HMD). The UI allows a user of the HMD to navigate through a timeline of ordered screens or cards shown on the graphic display of the HMD. The cards on the timeline may be chronologically ordered based on times associated with each card. Numerous cards may be added to the timeline such that a user may scroll through the timeline to search for a specific card. The HMD may be configured to group cards on the timeline. The cards may be grouped by multiple time periods and by various content types within each respective time period. The cards may also be grouped based on durations between the present/on-going time period and each respective time period.
Type: Grant
Filed: December 18, 2015
Date of Patent: April 9, 2019
Assignee: Google LLC
Inventors: Chris McKenzie, Antonio Bernardo Monteiro Costa, Richard Dennis The
-
Publication number: 20190043259
Abstract: A method for depth sensor aided estimation of virtual reality environment boundaries includes generating depth data at a depth sensor of an electronic device based on a local environment proximate the electronic device. A set of initial boundary data is estimated based on the depth data, wherein the set of initial boundary data defines an exterior boundary of a virtual bounded floor plan. The virtual bounded floor plan is generated based at least in part on the set of initial boundary data. Additionally, a relative pose of the electronic device within the virtual bounded floor plan is determined and a collision warning is displayed on a display of the electronic device based on the relative pose.
Type: Application
Filed: August 2, 2017
Publication date: February 7, 2019
Inventors: Zhaoguang Wang, Mugur Marculescu, Chris McKenzie, Ambrus Csaszar, Ivan Dryanovski
-
Publication number: 20190033989
Abstract: A method for generating virtual reality environment boundaries includes receiving, at a depth sensor of an electronic device, depth data from a local environment proximate the electronic device. Further, a set of outer boundary data is received that defines an exterior boundary of a virtual bounded floor plan. A virtual bounded floor plan is generated based at least in part on the set of outer boundary data and the depth data from the local environment. Further, a relative pose of the electronic device within the virtual bounded floor plan is determined and a collision warning is displayed on a display of the electronic device based on the relative pose.
Type: Application
Filed: July 31, 2017
Publication date: January 31, 2019
Inventors: Zhaoguang Wang, Mugur Marculescu, Chris McKenzie, Ambrus Csaszar, Ivan Dryanovski
-
Publication number: 20190011979
Abstract: In a system for dynamic switching and merging of head, gesture and touch input in virtual reality, focus may be set on a first virtual object in response to a first input implementing one of a number of different input modes. The first object may then be manipulated in the virtual world in response to a second input implementing another input mode. In response to a third input, focus may be shifted from the first object to a second object if, for example, a priority value of the third input is higher than a priority value of the first input. If the priority value of the third input is less than that of the first input, focus may remain on the first object. In response to certain trigger inputs, a display of virtual objects may be shifted between a far field display and a near field display to accommodate a particular mode of interaction with the virtual objects.
Type: Application
Filed: September 13, 2018
Publication date: January 10, 2019
Inventors: Alexander James Faaborg, Manuel Christian Clement, Chris McKenzie
-
Patent number: 10101803
Abstract: In a system for dynamic switching and merging of head, gesture and touch input in virtual reality, a virtual object may be selected by a user in response to a first input implementing one of a number of different input modes. Once selected, with focus established on the first object by the first input, the first object may be manipulated in the virtual world in response to a second input implementing another of the different input modes. In response to a third input, another object may be selected, and focus may be shifted from the first object to the second object in response to a third input if, for example, a priority value of the third input is higher than a priority value of the first input that established focus on the first object. If the priority value of the third input is less than the priority value of the first input that established focus on the first object, focus may remain on the first object.
Type: Grant
Filed: August 26, 2015
Date of Patent: October 16, 2018
Assignee: Google LLC
Inventors: Alexander James Faaborg, Manuel Christian Clement, Chris McKenzie
-
Publication number: 20170337750
Abstract: Techniques disclosed herein involve adaptively or dynamically displaying virtual objects in a virtual reality (VR) environment, and representations, within the VR environment, of physical objects in the physical environment, i.e., outside the VR environment, in order to alert users within the VR environment. For example, if a projected movement of a user indicates that the user will move close to a physical object in the physical world, the representation of the physical object changes from an un-displayed state, in which the physical object is not visible in the VR environment, to a displayed state in which the physical object is at least partially depicted inside the VR environment. In this way, what is displayed inside the VR environment can include both virtual objects as well as representations of physical objects from the physical space.
Type: Application
Filed: May 17, 2017
Publication date: November 23, 2017
Inventors: Chris McKenzie, Adam Glazier, Clayton Woodward Bavor, Jr.
-
Publication number: 20170336863
Abstract: A system and method of operating an audio visual system generating an immersive virtual experience may include generating, by a head-mounted audio visual device, a virtual world immersive experience within a virtual space while physically moving within a physical space, displaying, by the head-mounted audio visual device within the virtual space, a visual target marker indicating a target location in the physical space, receiving, by the head-mounted audio visual device, a teleport control signal, and moving a virtual location of the head-mounted audio visual device within the virtual space from a first virtual location to a second virtual location in response to receiving the teleport control signal.
Type: Application
Filed: May 15, 2017
Publication date: November 23, 2017
Inventors: Robbie Tilton, Robert Carl Jagnow, Alexander James Faaborg, Chris McKenzie
-
Combining gaze input and touch surface input for user interfaces in augmented and/or virtual reality
Publication number: 20170322623
Abstract: In a virtual reality system, an optical tracking device may detect and track a user's eye gaze direction and/or movement, and/or sensors may detect and track a user's head gaze direction and/or movement, relative to virtual user interfaces displayed in a virtual environment. A processor may process the detected gaze direction and/or movement as a user input, and may translate the user input into a corresponding interaction in the virtual environment. Gaze directed swipes on a virtual keyboard displayed in the virtual environment may be detected and tracked, and translated into a corresponding text input, either alone or together with user input(s) received by the controller. The user may also interact with other types of virtual interfaces in the virtual environment using gaze direction and movement to provide an input, either alone or together with a controller input.
Type: Application
Filed: December 21, 2016
Publication date: November 9, 2017
Inventors: Chris McKenzie, Chun Yat Frank Li, Hayes S. Raffle
-
Publication number: 20170244811
Abstract: In a system for pairing a first device and a second device in a virtual reality environment, the first device may be a sending device, and the second device may be a receiving device. The sending device may transmit an electromagnetic signal that is received by the receiving device. The receiving device may process the electromagnetic signal to verify physical proximity of the receiving device and the transmitting device, and extract identification information related to the sending device for pairing. The receiving device may display one or more virtual pairing indicators to be manipulated to verify the user's intention to pair the first and second devices.
Type: Application
Filed: February 21, 2017
Publication date: August 24, 2017
Inventors: Chris McKenzie, Murphy Stein, Alan Browning
-
Publication number: 20170168585
Abstract: Methods and apparatus using gestures to share private windows in shared virtual environments are disclosed herein. An example method includes detecting a gesture of a user in a virtual environment associated with a private window in the virtual environment, the private window associated with the user, determining whether the gesture represents a signal to share the private window with another, and, when the gesture represents a signal to share the private window, changing the status of the private window to a shared window.
Type: Application
Filed: December 9, 2016
Publication date: June 15, 2017
Inventors: Alexander James Faaborg, Chris McKenzie
-
Publication number: 20170103574
Abstract: A system and method of operating an audio visual system generating an immersive virtual experience may detect when a user approaches a physical boundary of a real world space, and may generate an alert indicating the proximity of the physical boundary. Activity in and interaction with the immersive virtual experience may be temporarily paused as the user completes a physical re-orientation in the real world space in response to the alert. Upon detection of completion of the physical re-orientation in the real world space, activity in and interaction with the immersive virtual experience may resume at the point at which activity was temporarily paused. This may provide for relatively continuous movement in the immersive virtual experience within the boundaries of the real world space.
Type: Application
Filed: October 13, 2015
Publication date: April 13, 2017
Inventors: Alexander James Faaborg, Chris McKenzie, Adam Glazier
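The pause-and-resume behavior in this abstract is a small state machine: near-boundary triggers a pause, and a completed re-orientation resumes activity where it stopped. A one-axis sketch under those assumptions, with an invented warning margin:

```python
class ReorientationPauser:
    """Pause VR activity near a physical boundary; resume after re-orientation."""

    def __init__(self, room_half_width, warn_margin=0.5):
        self.limit = room_half_width - warn_margin
        self.paused = False

    def update(self, x, reoriented=False):
        """Advance the state machine given the user's position along one axis."""
        if not self.paused and abs(x) > self.limit:
            self.paused = True        # alert: user is approaching the wall
        elif self.paused and reoriented:
            self.paused = False       # resume at the point activity stopped
        return self.paused
```

Because the virtual experience only ever pauses and resumes (it never resets), the user perceives relatively continuous movement despite the finite physical room.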
-
Patent number: 9607440
Abstract: In one aspect, an HMD is disclosed that provides a technique for generating a composite image representing the view of a wearer of the HMD. The HMD may include a display and a front-facing camera, and may be configured to perform certain functions. For instance, the HMD may be configured to make a determination that a trigger event occurred and responsively both generate a first image that is indicative of content displayed on the display, and cause the camera to capture a second image that is indicative of a real-world field-of-view associated with the HMD. Further, the HMD may be configured to generate a composite image that combines the generated first image and the captured second image.
Type: Grant
Filed: September 7, 2016
Date of Patent: March 28, 2017
Assignee: Google Inc.
Inventors: Richard The, Max Benjamin Braun, Chris McKenzie, Alexander Hanbing Chen
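The final step of this abstract, combining the display capture with the camera frame, could be as simple as an alpha blend. The sketch below assumes that approach, with images represented as nested lists of RGB tuples rather than any particular imaging library; the patent does not specify how the two images are combined.

```python
def composite(display_img, camera_img, alpha=0.6):
    """Alpha-blend rendered display content over the camera's real-world view.

    Both images are equal-sized nested lists of (r, g, b) tuples;
    `alpha` is the weight given to the display content.
    """
    out = []
    for drow, crow in zip(display_img, camera_img):
        out.append([tuple(int(alpha * d + (1 - alpha) * c)
                          for d, c in zip(dp, cp))
                    for dp, cp in zip(drow, crow)])
    return out
```

At `alpha=0.5` a pure-red display pixel over a pure-blue camera pixel blends to an even purple, showing both the rendered content and the real-world field of view in one frame.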