Patents by Inventor Mark DOCHTERMANN
Mark DOCHTERMANN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11922636
Abstract: An electronic device places an augmented reality object in an image of a real environment based on a pose of the electronic device and based on image segmentation. The electronic device includes a camera that captures images of the real environment and sensors, such as an inertial measurement unit (IMU), that capture a pose of the electronic device. The electronic device selects an augmented reality (AR) object from a memory, segments a captured image of the real environment into foreground pixels and background pixels, and composites an image for display wherein the AR object is placed between the foreground pixels and the background pixels. As the pose of the electronic device changes, the electronic device maintains the relative position of the AR object with respect to the real environment in images for display.
Type: Grant
Filed: October 9, 2019
Date of Patent: March 5, 2024
Assignee: GOOGLE LLC
Inventors: David Bond, Mark Dochtermann
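The compositing step described in this abstract can be illustrated with a short sketch. This is not code from the patent; the function name, array shapes, and alpha-blending scheme are illustrative assumptions about one way to place an AR layer between segmented foreground and background pixels.

```python
import numpy as np

def composite_with_ar(frame, fg_mask, ar_layer, ar_alpha):
    """Composite a rendered AR object between the background and
    foreground pixels of a segmented camera frame.

    frame    -- HxWx3 uint8 image of the real environment
    fg_mask  -- HxW bool mask, True where a pixel was segmented as foreground
    ar_layer -- HxWx3 uint8 rendering of the AR object for the current pose
    ar_alpha -- HxW float coverage of the AR object in [0, 1]
    """
    out = frame.astype(np.float64)
    a = ar_alpha[..., None]
    # Blend the AR object over the scene, covering background pixels...
    out = a * ar_layer.astype(np.float64) + (1.0 - a) * out
    # ...then restore foreground pixels so they occlude the AR object.
    out[fg_mask] = frame[fg_mask]
    return out.astype(frame.dtype)
```

Rendering `ar_layer` for the device's current IMU-derived pose each frame is what keeps the object's position fixed relative to the real environment as the device moves.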
-
Patent number: 11436808
Abstract: Disclosed are various embodiments for selecting augmented reality (AR) objects based on contextual cues associated with an image captured by a camera associated with an electronic device. Contextual cues are obtained at an electronic device and AR objects are identified from a memory associated with the electronic device. The electronic device implements a processor employing image segmentation techniques to combine the identified AR objects with the captured image and render the combined image for display at a display associated with the electronic device.
Type: Grant
Filed: October 9, 2019
Date of Patent: September 6, 2022
Assignee: GOOGLE LLC
Inventors: Diane Wang, Paulo Coelho, Tarik Hany Abdel-Gawad, Matthew Gilgenbach, Jackson Lango, Douglas Muir, Mark Dochtermann, Suddhasattwa Bose, Ashley Pinnick, Drew Skillman, Samantha Raja, Steven Toh, Brian Collins, Jay Steele
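The cue-based selection step can be sketched as a simple ranking over stored AR objects. This is a hypothetical illustration, not the patent's method: the tag-overlap scoring, the `catalog` structure, and the function name are all assumed for the example.

```python
def select_ar_objects(cues, catalog, top_k=3):
    """Rank stored AR objects by how many contextual cues they match
    and return the names of the best candidates.

    cues    -- set of contextual cue strings derived from the captured image
    catalog -- list of {"name": str, "tags": set} entries held in memory
    """
    scored = []
    for obj in catalog:
        score = len(cues & obj["tags"])  # cues shared with this object
        if score > 0:
            scored.append((score, obj["name"]))
    # Highest score first; ties broken alphabetically for determinism.
    scored.sort(key=lambda item: (-item[0], item[1]))
    return [name for _, name in scored[:top_k]]
```

The selected objects would then be composited into the captured image using segmentation, as in the compositing entry above.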
-
Publication number: 20220012896
Abstract: An electronic device places an augmented reality object in an image of a real environment based on a pose of the electronic device and based on image segmentation. The electronic device includes a camera that captures images of the real environment and sensors, such as an inertial measurement unit (IMU), that capture a pose of the electronic device. The electronic device selects an augmented reality (AR) object from a memory, segments a captured image of the real environment into foreground pixels and background pixels, and composites an image for display wherein the AR object is placed between the foreground pixels and the background pixels. As the pose of the electronic device changes, the electronic device maintains the relative position of the AR object with respect to the real environment in images for display.
Type: Application
Filed: October 9, 2019
Publication date: January 13, 2022
Inventors: David BOND, Mark DOCHTERMANN
-
Publication number: 20210383609
Abstract: Disclosed are various embodiments for selecting augmented reality (AR) objects based on contextual cues associated with an image captured by a camera associated with an electronic device. Contextual cues are obtained at an electronic device and AR objects are identified from a memory associated with the electronic device. The electronic device implements a processor employing image segmentation techniques to combine the identified AR objects with the captured image and render the combined image for display at a display associated with the electronic device.
Type: Application
Filed: October 9, 2019
Publication date: December 9, 2021
Inventors: Diane WANG, Paulo COELHO, Tarik Hany ABDEL-GAWAD, Matthew GILGENBACH, Jackson LANGO, Douglas MUIR, Mark DOCHTERMANN, Suddhasattwa BOSE, Ashley PINNICK, Drew SKILLMAN, Samantha RAJA, Steven TOH, Brian COLLINS, Jay STEELE
-
Patent number: 11164367
Abstract: Systems and methods for generating illumination effects for inserted luminous content, which may include augmented reality content that appears to emit light and is inserted into an image of a physical space. The content may include a polygonal mesh, which may be defined in part by a skeleton that has multiple joints. Examples may include generating a bounding box on a surface plane for the inserted content, determining an illumination center point location on the surface plane based on the content, generating an illumination entity based on the bounding box and the illumination center point location, and rendering the illumination entity using illumination values determined based on the illumination center point location. Examples may also include determining illumination contribution values for some of the joints, combining the illumination contribution values to generate illumination values for pixels, and rendering another illumination entity using the illumination values.
Type: Grant
Filed: July 17, 2019
Date of Patent: November 2, 2021
Assignee: Google LLC
Inventors: Ivan Neulander, Mark Dochtermann
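The per-joint contribution and combination steps can be sketched numerically. This is a minimal illustration under assumed details: the inverse-square falloff, the 2D surface-plane coordinates, and both function names are choices made for the example, not taken from the patent.

```python
import math

def joint_contribution(point, joint, intensity, falloff=1.0):
    """Illumination contribution of one skeleton joint at a point on
    the surface plane, decaying with squared distance from the joint."""
    dx, dy = point[0] - joint[0], point[1] - joint[1]
    return intensity / (1.0 + falloff * (dx * dx + dy * dy))

def pixel_illumination(point, joints):
    """Combine per-joint contributions into a single illumination value
    for one pixel of the illumination entity on the surface plane.

    joints -- list of ((x, y), intensity) pairs for the content skeleton
    """
    return sum(joint_contribution(point, pos, i) for pos, i in joints)
```

Evaluating `pixel_illumination` across a quad derived from the content's bounding box would yield the per-pixel values used to render the illumination entity.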
-
Publication number: 20210019935
Abstract: Systems and methods for generating illumination effects for inserted luminous content, which may include augmented reality content that appears to emit light and is inserted into an image of a physical space. The content may include a polygonal mesh, which may be defined in part by a skeleton that has multiple joints. Examples may include generating a bounding box on a surface plane for the inserted content, determining an illumination center point location on the surface plane based on the content, generating an illumination entity based on the bounding box and the illumination center point location, and rendering the illumination entity using illumination values determined based on the illumination center point location. Examples may also include determining illumination contribution values for some of the joints, combining the illumination contribution values to generate illumination values for pixels, and rendering another illumination entity using the illumination values.
Type: Application
Filed: July 17, 2019
Publication date: January 21, 2021
Inventors: Ivan Neulander, Mark Dochtermann
-
Publication number: 20200342681
Abstract: In a system and method providing for interaction with virtual objects, and interaction between virtual objects, in an augmented reality, mixed reality, or virtual reality environment, detected conditions may trigger animated responses from virtual objects placed in a view of a physical environment. The detected conditions may include the detection of a user within a set threshold distance or proximity, the detection of another virtual object within a set threshold placement distance or proximity, the detection of particular environmental conditions in the view of the physical environment, and other such factors. Specific behavioral animations of the virtual objects may be triggered in response to detection of specific virtual objects and/or other conditions.
Type: Application
Filed: July 7, 2020
Publication date: October 29, 2020
Inventors: Alan Joyce, Douglas Muir, Mark Dochtermann, Bryan Woods, Tarik Abdel-Gawad
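The trigger-detection logic in this abstract (and its related entries below) can be sketched with distance checks. This is an illustrative sketch only; the thresholds, 2D positions, and animation names are assumptions, and environmental-condition triggers are omitted for brevity.

```python
import math

def detect_triggers(objects, user_pos, user_dist=1.0, pair_dist=0.5):
    """Return, per virtual object, the behavioral animations triggered
    by detected conditions: a user within a threshold distance, or
    another virtual object placed within a threshold distance.

    objects  -- list of (name, (x, y)) virtual objects in the view
    user_pos -- (x, y) position of the user
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    triggers = {name: [] for name, _ in objects}
    # Condition 1: user within the threshold proximity of an object.
    for name, pos in objects:
        if dist(pos, user_pos) < user_dist:
            triggers[name].append("react_to_user")
    # Condition 2: another virtual object within the placement threshold.
    for i, (name_a, pos_a) in enumerate(objects):
        for name_b, pos_b in objects[i + 1:]:
            if dist(pos_a, pos_b) < pair_dist:
                triggers[name_a].append("react_to_" + name_b)
                triggers[name_b].append("react_to_" + name_a)
    return triggers
```

Each trigger would map to a specific behavioral animation played by the corresponding virtual object.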
-
Patent number: 10748342
Abstract: In a system and method providing for interaction with virtual objects, and interaction between virtual objects, in an augmented reality, mixed reality, or virtual reality environment, detected conditions may trigger animated responses from virtual objects placed in a view of a physical environment. The detected conditions may include the detection of a user within a set threshold distance or proximity, the detection of another virtual object within a set threshold placement distance or proximity, the detection of particular environmental conditions in the view of the physical environment, and other such factors. Specific behavioral animations of the virtual objects may be triggered in response to detection of specific virtual objects and/or other conditions.
Type: Grant
Filed: June 19, 2018
Date of Patent: August 18, 2020
Assignee: GOOGLE LLC
Inventors: Alan Joyce, Douglas Muir, Mark Dochtermann, Bryan Woods, Tarik Abdel-Gawad
-
Publication number: 20190385371
Abstract: In a system and method providing for interaction with virtual objects, and interaction between virtual objects, in an augmented reality, mixed reality, or virtual reality environment, detected conditions may trigger animated responses from virtual objects placed in a view of a physical environment. The detected conditions may include the detection of a user within a set threshold distance or proximity, the detection of another virtual object within a set threshold placement distance or proximity, the detection of particular environmental conditions in the view of the physical environment, and other such factors. Specific behavioral animations of the virtual objects may be triggered in response to detection of specific virtual objects and/or other conditions.
Type: Application
Filed: June 19, 2018
Publication date: December 19, 2019
Inventors: Alan Joyce, Douglas Muir, Mark Dochtermann, Bryan Woods, Tarik Abdel-Gawad
-
Publication number: 20170287215
Abstract: Systems and methods are described for generating a virtual reality experience including generating a user interface with a plurality of regions on a display in a head-mounted display device. The head-mounted display device housing may include at least one pass-through camera device. The systems and methods can include obtaining image content from the at least one pass-through camera device and displaying a plurality of virtual objects in a first region of the plurality of regions in the user interface, the first region substantially filling a field of view of the display in the head-mounted display device. In response to detecting a change in a head position of a user operating the head-mounted display device, the methods and systems can initiate display of updated image content in a second region of the user interface.
Type: Application
Filed: March 29, 2016
Publication date: October 5, 2017
Inventors: Paul Albert LALONDE, Mark DOCHTERMANN, Alexander James FAABORG, Ryan OVERBECK
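The update rule for the second region can be modeled with a small state machine. This is a hypothetical sketch: the class name, the yaw-only pose model, and the threshold value are assumptions made for illustration, not details from the application.

```python
class TwoRegionUi:
    """Minimal model of a two-region HMD interface: region one shows
    virtual objects and substantially fills the field of view; region
    two is refreshed with pass-through camera content when the user's
    head position changes enough."""

    def __init__(self, yaw_threshold_deg=10.0):
        self.yaw_threshold = yaw_threshold_deg
        self.last_yaw = None
        self.region_two = None

    def on_head_pose(self, yaw_deg, camera_frame):
        """Refresh region two if the head has turned past the
        threshold; return True when updated content was displayed."""
        if self.last_yaw is None or abs(yaw_deg - self.last_yaw) >= self.yaw_threshold:
            self.last_yaw = yaw_deg
            self.region_two = camera_frame
            return True
        return False
```

A real head-mounted display would track full 6-DoF pose, but a single yaw angle is enough to show the change-detection idea.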