Patents by Inventor Brian Mullins

Brian Mullins has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200035012
Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for adjusting the depth of AR content on a head-up display (HUD). A viewing device identifies, based on sensor data, a physical object visible through a transparent display of a vehicle. The sensor data indicates an initial distance of the physical object from the vehicle. The viewing device gathers virtual content corresponding to the physical object and generates an initial presentation of the virtual content based on the initial distance. The viewing device presents the initial presentation of the virtual content on the transparent display at a position on the transparent display corresponding to the physical object. The viewing device determines, based on updated sensor data, an updated distance of the physical object and generates an updated presentation of the virtual content based on the updated distance. The viewing device presents the updated presentation of the virtual content on the transparent display of the vehicle.
    Type: Application
    Filed: July 9, 2019
    Publication date: January 30, 2020
    Inventors: Brian Mullins, Jamieson Christmas
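The distance-driven update loop this abstract describes can be sketched as below. This is a minimal illustration, not the patented method: the function name, the inverse-distance scaling heuristic, and the parameters are all assumptions introduced for clarity.

```python
def present_virtual_content(distance_m, base_size_px=100.0, reference_distance_m=10.0):
    """Scale an AR overlay inversely with the tracked object's distance,
    so nearer objects get larger overlays (illustrative heuristic only)."""
    scale = reference_distance_m / max(distance_m, 0.1)  # avoid division by zero
    return {"size_px": base_size_px * scale, "distance_m": distance_m}

# Initial presentation at 20 m, then an updated one as the object nears 5 m.
initial = present_virtual_content(20.0)
updated = present_virtual_content(5.0)
```

Re-running the same generation step with each new distance reading mirrors the abstract's initial/updated presentation cycle.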
  • Patent number: 10489975
    Abstract: An augmented reality (AR) display application generates mapped visualization content overlaid on a real world physical environment. The AR display application receives sensor feeds, location information, and orientation information from wearable devices within the environment. A tessellation surface is visually mapped to surfaces of the environment based on a depth-based point cloud. A texture is applied to the tessellation surface and the tessellation may be viewed overlaying the surfaces of the environment via a wearable device.
    Type: Grant
    Filed: January 3, 2018
    Date of Patent: November 26, 2019
    Assignee: DAQRI, LLC
    Inventors: Erick Mendez, Dominik Schnitzer, Bernhard Jung, Clemens Birklbauer, Kai Zhou, Kiyoung Kim, Daniel Wagner, Roy Lawrence Ashok Inigo, Frank Chester Irving, Jr., Brian Mullins, Lucas Kazansky, Jonathan Trevor Freeman
  • Patent number: 10445937
Abstract: A first display device determines a first device configuration of the first display device and a first context of a first AR application operating on the first display device. The first display device detects a second display device that comprises a head-up display (HUD) system of a vehicle. A second device configuration of the second display device and a second context of a second AR application operating on the second display device are determined. The first display device generates a first collaboration configuration for the first display device and a second collaboration configuration for the second display device. A task of the first AR application is allocated to the second AR application based on the first collaboration configuration. The second display device performs the allocated task based on the second collaboration configuration.
    Type: Grant
    Filed: October 26, 2017
    Date of Patent: October 15, 2019
    Assignee: DAQRI, LLC
    Inventor: Brian Mullins
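The task-allocation step in this abstract can be illustrated with a capability-based sketch. The capability model, field names, and return values here are hypothetical; the patent does not specify this representation.

```python
def allocate_task(task, first_config, second_config):
    """Assign a task to whichever device's collaboration configuration
    lists the required capability (hypothetical capability model)."""
    if task["needs"] in second_config["capabilities"]:
        return "second"
    if task["needs"] in first_config["capabilities"]:
        return "first"
    return None  # neither device can perform the task

# Illustrative configurations for a head-mounted device and a vehicle HUD.
hmd = {"capabilities": {"gesture_input"}}
hud = {"capabilities": {"wide_fov_render", "speed_overlay"}}
```

A speed-overlay task would be handed to the HUD, while a gesture-input task stays on the head-mounted device.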
  • Patent number: 10379345
    Abstract: An augmented-reality device comprises an optical sensor and a transparent display. The augmented-reality device detects, using the optical sensor, a first physical display located within a display distance of the augmented-reality device. The first physical display is connected to a computer. The augmented-reality device generates a virtual display configured to operate as a second physical display. The computer controls the second physical display. The augmented-reality device displays the virtual display in the transparent display. The virtual display appears adjacent to the first physical display.
    Type: Grant
    Filed: May 15, 2017
    Date of Patent: August 13, 2019
    Assignee: DAQRI, LLC
    Inventor: Brian Mullins
  • Patent number: 10366495
Abstract: A device for multi-spectrum segmentation for computer vision is described. A first optical sensor operates within a first spectrum range and generates first image data corresponding to a first image captured by the first optical sensor. A second optical sensor operates within a second spectrum range different from the first spectrum range and generates second image data corresponding to a second image captured by the second optical sensor. The device identifies a first region in the first image, maps a first portion of the first image data to a second portion of the second image data, and provides the second portion of the second image data to a server that generates augmented reality content based on the second portion of the second image data. The device displays the augmented reality content.
    Type: Grant
    Filed: January 26, 2018
    Date of Patent: July 30, 2019
    Assignee: DAQRI, LLC
    Inventors: Brian Mullins, Nalin Senthamil, Eric Douglas Lundquist
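The cross-sensor region mapping this abstract mentions can be approximated by a plain rescaling between image resolutions. This assumes the two sensors share a field of view and ignores calibration and lens distortion, so it is only a sketch of the idea, not the patented mapping.

```python
def map_region(region, first_shape, second_shape):
    """Map a rectangle (x, y, w, h) found in the first sensor's image to
    pixel coordinates in the second sensor's image by pure rescaling
    (assumes aligned sensors; no calibration or distortion handling)."""
    x, y, w, h = region
    sx = second_shape[0] / first_shape[0]  # horizontal scale factor
    sy = second_shape[1] / first_shape[1]  # vertical scale factor
    return (x * sx, y * sy, w * sx, h * sy)
```

The mapped rectangle is what would be cropped from the second sensor's data and sent to the server.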
  • Patent number: 10347046
    Abstract: A first augmented-reality (AR) device comprises an optical sensor, a geographic location sensor, an orientation sensor, and a display. The first AR device accesses a first geographic location of the first AR device and an orientation of the first AR device and generates a picture taken at the first geographic location of the first AR device and associated with the orientation of the first AR device. The first AR device retrieves, from a server, transportation information from a second AR device in a vehicle. The server assigns the second AR device to the first AR device. The first AR device forms transportation AR content based on the transportation information and displays the transportation AR content in the display based on the first geographic location and orientation of the first AR device, the transportation information, and the picture generated at the first geographic location of the first AR device.
    Type: Grant
    Filed: June 16, 2017
    Date of Patent: July 9, 2019
    Assignee: DAQRI, LLC
    Inventor: Brian Mullins
  • Patent number: 10347030
Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for adjusting the depth of AR content on a head-up display (HUD). A viewing device identifies, based on sensor data, a physical object visible through a transparent display of a vehicle. The sensor data indicates an initial distance of the physical object from the vehicle. The viewing device gathers virtual content corresponding to the physical object and generates an initial presentation of the virtual content based on the initial distance. The viewing device presents the initial presentation of the virtual content on the transparent display at a position on the transparent display corresponding to the physical object. The viewing device determines, based on updated sensor data, an updated distance of the physical object and generates an updated presentation of the virtual content based on the updated distance. The viewing device presents the updated presentation of the virtual content on the transparent display of the vehicle.
    Type: Grant
    Filed: May 15, 2017
    Date of Patent: July 9, 2019
    Assignee: Envisics Ltd
    Inventors: Brian Mullins, Jamieson Christmas
  • Publication number: 20190197876
    Abstract: Techniques of tracking a user's location are disclosed. In some embodiments, a mobile device captures first sensor data using at least one sensor, determines that a predetermined hazard criteria is not satisfied by an environment of a user of the mobile device, suppresses transmission of a representation of the captured first sensor data to a remote computing device based on the determination that the predetermined hazard criteria is not satisfied, captures second sensor data using the sensor(s), determines that the predetermined hazard criteria is satisfied by the environment of the user, and transmits a representation of the captured second sensor data to the remote computing device based on the determination that the predetermined hazard criteria is satisfied by the environment of the user.
    Type: Application
    Filed: December 22, 2017
    Publication date: June 27, 2019
    Inventors: Brian Mullins, Christopher Broaddus
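The suppress-or-transmit behavior in this abstract reduces to gating transmission on a hazard criterion. The threshold, the `hazard_score` field, and the callback interface below are illustrative assumptions, not details from the application.

```python
def maybe_transmit(sensor_sample, send, hazard_threshold=0.8):
    """Transmit a sensor sample to the remote device only when the
    hazard criterion is satisfied; otherwise suppress it.
    The scoring field and threshold are hypothetical."""
    if sensor_sample["hazard_score"] >= hazard_threshold:
        send(sensor_sample)
        return True
    return False

# Collect transmitted samples in a list standing in for the remote device.
sent = []
maybe_transmit({"hazard_score": 0.9}, sent.append)  # hazardous: transmitted
maybe_transmit({"hazard_score": 0.2}, sent.append)  # benign: suppressed
```

Suppressing the benign samples keeps location data off the network unless the environment actually looks hazardous.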
  • Publication number: 20190147663
    Abstract: A contextual local image recognition module of a device retrieves a primary content dataset from a server and then generates and updates a contextual content dataset based on an image captured with the device. The device stores the primary content dataset and the contextual content dataset. The primary content dataset comprises a first set of images and corresponding virtual object models. The contextual content dataset comprises a second set of images and corresponding virtual object models retrieved from the server.
    Type: Application
    Filed: January 16, 2019
    Publication date: May 16, 2019
    Inventor: Brian Mullins
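The two-tier dataset this abstract describes behaves like a primary cache fetched once from the server plus a contextual cache that grows as images are captured. The class and method names below are illustrative, not from the patent.

```python
class ContextualRecognizer:
    """Primary content dataset retrieved up front; contextual content
    dataset built and updated from captured images (illustrative names)."""

    def __init__(self, primary):
        self.primary = dict(primary)  # image key -> virtual object model
        self.contextual = {}

    def recognize(self, image, fetch_from_server):
        # Check the locally stored primary dataset first.
        if image in self.primary:
            return self.primary[image]
        # Fall back to the contextual dataset, fetching on a miss.
        if image not in self.contextual:
            self.contextual[image] = fetch_from_server(image)
        return self.contextual[image]

recognizer = ContextualRecognizer({"logo": "model_a"})
server_calls = []
fetch = lambda img: server_calls.append(img) or f"model_{img}"
```

A repeated capture of the same new image hits the contextual dataset instead of the server a second time.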
  • Publication number: 20190147587
Abstract: A mobile device identifies a user task provided by an augmented reality application at a mobile device. The mobile device identifies a first physical tool valid for performing the user task from a tool compliance library based on the user task. The mobile device detects and identifies a second physical tool present at the mobile device. The mobile device determines whether the second physical tool matches the first physical tool. The mobile device displays augmented reality content that identifies at least one of a missing physical tool, an unmatched physical tool, or a matched physical tool based on whether the second physical tool matches the first physical tool.
    Type: Application
    Filed: January 14, 2019
    Publication date: May 16, 2019
    Inventor: Brian Mullins
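The three outcome categories the abstract names (missing, unmatched, matched) fall out of a simple set comparison between the tools the compliance library requires and the tools detected at the device. This is a sketch under that assumption; the tool names are made up.

```python
def check_tools(required, detected):
    """Compare tools required for a task against tools detected at the
    device, returning the three categories the abstract names."""
    required, detected = set(required), set(detected)
    return {
        "matched": sorted(required & detected),    # required and present
        "missing": sorted(required - detected),    # required but absent
        "unmatched": sorted(detected - required),  # present but not required
    }

result = check_tools(["torque_wrench", "multimeter"], ["multimeter", "hammer"])
```

Each category would then drive a different piece of AR content, e.g. highlighting the missing torque wrench.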
  • Publication number: 20190124391
    Abstract: A server receives, from a first display device of a first user, first content data, first sensor data, and a request for assistance identifying a context of the first display device. The server identifies a second display device of a second user based on the context of the first display device. The server receives second content data and second sensor data from the second display device. The first content data is synchronized with the second content data based on the first and second sensor data. Playback parameters are formed based on the context of the first display device. An enhanced playback session is generated using the synchronized first and second content data in response to determining that the first sensor data meet the playback parameters. The enhanced playback session is communicated to the first display device.
    Type: Application
    Filed: December 17, 2018
    Publication date: April 25, 2019
    Inventor: Brian Mullins
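The synchronization step this abstract relies on can be illustrated by pairing frames from the two content streams whose sensor timestamps fall within a tolerance. This nearest-neighbour pairing is a deliberately simple stand-in; the patent does not specify the matching algorithm.

```python
def synchronize(first_stream, second_stream, tolerance_s=0.05):
    """Pair (timestamp, frame) entries from two content streams when
    their sensor timestamps agree within a tolerance (simple sketch)."""
    pairs = []
    for t1, frame1 in first_stream:
        # Nearest entry in the second stream by timestamp.
        t2, frame2 = min(second_stream, key=lambda s: abs(s[0] - t1))
        if abs(t2 - t1) <= tolerance_s:
            pairs.append((frame1, frame2))
    return pairs

first = [(0.00, "a0"), (1.00, "a1")]
second = [(0.01, "b0"), (2.50, "b1")]
```

Frames without a close counterpart are dropped rather than paired, so the playback session only contains genuinely synchronized content.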
  • Publication number: 20190068529
    Abstract: A head-mounted device (HMD) of a first user has a transparent display. The HMD determines location information of a second user relative to the HMD of the first user. The second user is located within a predefined distance of the HMD. The location information identifies a distance and a direction of the second user relative to the HMD. The HMD receives audio content from the second user, generates augmented reality (AR) content based on the audio content, and displays the AR content in the transparent display based on the location information of the second user. The AR content appears coupled to the second user.
    Type: Application
    Filed: August 31, 2017
    Publication date: February 28, 2019
    Inventor: Brian Mullins
  • Patent number: 10217209
Abstract: A mobile device identifies a user task provided by an augmented reality application at a mobile device. The mobile device identifies a first physical tool valid for performing the user task from a tool compliance library based on the user task. The mobile device detects and identifies a second physical tool present at the mobile device. The mobile device determines whether the second physical tool matches the first physical tool. The mobile device displays augmented reality content that identifies at least one of a missing physical tool, an unmatched physical tool, or a matched physical tool based on whether the second physical tool matches the first physical tool.
    Type: Grant
    Filed: October 23, 2017
    Date of Patent: February 26, 2019
    Assignee: DAQRI, LLC
    Inventor: Brian Mullins
  • Patent number: 10210663
    Abstract: A contextual local image recognition module of a device retrieves a primary content dataset from a server and then generates and updates a contextual content dataset based on an image captured with the device. The device stores the primary content dataset and the contextual content dataset. The primary content dataset comprises a first set of images and corresponding virtual object models. The contextual content dataset comprises a second set of images and corresponding virtual object models retrieved from the server.
    Type: Grant
    Filed: February 27, 2017
    Date of Patent: February 19, 2019
    Assignee: DAQRI, LLC
    Inventor: Brian Mullins
  • Patent number: 10198869
Abstract: A remote expert application identifies a manipulation of virtual objects displayed in a first wearable device. The virtual objects are rendered based on a physical object viewed with a second wearable device. A manipulation of the virtual objects is received from the first wearable device. A visualization of the manipulation of the virtual objects is generated for a display of the second wearable device. The visualization of the manipulation of the virtual objects is communicated to the second wearable device.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: February 5, 2019
    Assignee: DAQRI, LLC
    Inventors: Brian Mullins, Matthew Kammerait, Christopher Broaddus
  • Publication number: 20190025757
Abstract: A device (200, 300) forms steerable plasma (222, 310) using a laser source (110) and an LCOS-SLM (Liquid Crystal on Silicon Spatial Light Modulator) (112). The device generates a laser control signal and an LCOS-SLM control signal. The laser source generates a plurality of incident laser beams based on the laser control signal. The LCOS-SLM receives the plurality of incident laser beams and modulates the plurality of incident laser beams based on the LCOS-SLM control signal to form a plurality of holographic wavefronts. Each holographic wavefront forms at least one corresponding focal point. The LCOS-SLM forms plasma at interference points of the focal points of the plurality of holographic wavefronts.
    Type: Application
    Filed: December 22, 2016
    Publication date: January 24, 2019
    Inventor: Brian Mullins
  • Publication number: 20190025583
Abstract: A display device accesses holographic data corresponding to an image. A laser source generates a laser light. An SLM (Spatial Light Modulator) receives the laser light and modulates the laser light based on the holographic data of the image. A partially transparent reflective lens receives the modulated laser light and reflects the modulated laser light towards an eye of a user of the display device. A holographic image is formed as a reconstructed wavefront based on the modulated laser light. The partially transparent reflective lens is disposed adjacent to the eye of the user.
    Type: Application
    Filed: December 22, 2016
    Publication date: January 24, 2019
    Inventor: Brian Mullins
  • Patent number: 10187686
    Abstract: A server receives, from a first display device of a first user, first content data, first sensor data, and a request for assistance identifying a context of the first display device. The server identifies a second display device of a second user based on the context of the first display device. The server receives second content data and second sensor data from the second display device. The first content data is synchronized with the second content data based on the first and second sensor data. Playback parameters are formed based on the context of the first display device. An enhanced playback session is generated using the synchronized first and second content data in response to determining that the first sensor data meet the playback parameters. The enhanced playback session is communicated to the first display device.
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: January 22, 2019
    Assignee: DAQRI, LLC
    Inventor: Brian Mullins
  • Publication number: 20190004476
Abstract: A printing device (106) includes a laser source (110) and an LCOS-SLM (Liquid Crystal on Silicon Spatial Light Modulator) (112). The printing device generates a laser control signal and an LCOS-SLM control signal. The laser source generates a plurality of incident laser beams based on the laser control signal. The LCOS-SLM receives the plurality of incident laser beams, modulates the plurality of incident laser beams based on the LCOS-SLM control signal, and generates a plurality of holographic wavefronts (214, 216). Each holographic wavefront forms at least one focal point. The printing device cures a surface layer of a target material (206) at interference points of focal points of the plurality of holographic wavefronts. The cured surface layer of the target material forms two-dimensional printed content.
    Type: Application
    Filed: December 22, 2016
    Publication date: January 3, 2019
    Applicant: Dualitas Ltd
    Inventors: Brian Mullins, Jamieson Christmas
  • Patent number: D850444
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: June 4, 2019
    Assignee: DAQRI, LLC
    Inventors: Brian Mullins, Roy Lawrence Ashok Inigo, David Hayes, Ryan Ries, Douglas Rieck, Arash Kalantari, Siamak Sepahram, Cassie Li