Patents by Inventor Brian Mullins

Brian Mullins has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10300377
    Abstract: A method of providing virtual items based on location-based action is disclosed. An indication is received of a performance of a location-based action by a player of a computer-implemented game. A virtual item is provided based on the receiving of the indication of the performance of the location-based action by the player of the computer-implemented game. The virtual item may be usable within the computer-implemented game.
    Type: Grant
    Filed: October 25, 2017
    Date of Patent: May 28, 2019
    Assignee: Zynga Inc.
    Inventors: Kathleen Auterio, Deniz Ersever, Nathan Arthur Etter, Hardik Kheskani, Serena Lam, Amitt Mahajan, Christopher Joseph Makarsky, Jay Monahan, Donald C. Mosites, Benjamin Mullin, Matthew Adam Ocko, Brian Reynolds, Shantanu Talapatra, Justin Waldron, Ian Wang, Jackson Wang
  • Publication number: 20190147663
    Abstract: A contextual local image recognition module of a device retrieves a primary content dataset from a server and then generates and updates a contextual content dataset based on an image captured with the device. The device stores the primary content dataset and the contextual content dataset. The primary content dataset comprises a first set of images and corresponding virtual object models. The contextual content dataset comprises a second set of images and corresponding virtual object models retrieved from the server.
    Type: Application
    Filed: January 16, 2019
    Publication date: May 16, 2019
    Inventor: Brian Mullins
  • Publication number: 20190147587
Abstract: A mobile device identifies a user task provided by an augmented reality application at a mobile device. The mobile device identifies a first physical tool valid for performing the user task from a tool compliance library based on the user task. The mobile device detects and identifies a second physical tool present at the mobile device. The mobile device determines whether the second physical tool matches the first physical tool. The mobile device displays augmented reality content that identifies at least one of a missing physical tool, an unmatched physical tool, or a matched physical tool based on whether the second physical tool matches the first physical tool.
    Type: Application
    Filed: January 14, 2019
    Publication date: May 16, 2019
    Inventor: Brian Mullins
  • Publication number: 20190124391
    Abstract: A server receives, from a first display device of a first user, first content data, first sensor data, and a request for assistance identifying a context of the first display device. The server identifies a second display device of a second user based on the context of the first display device. The server receives second content data and second sensor data from the second display device. The first content data is synchronized with the second content data based on the first and second sensor data. Playback parameters are formed based on the context of the first display device. An enhanced playback session is generated using the synchronized first and second content data in response to determining that the first sensor data meet the playback parameters. The enhanced playback session is communicated to the first display device.
    Type: Application
    Filed: December 17, 2018
    Publication date: April 25, 2019
    Inventor: Brian Mullins
  • Publication number: 20190068529
    Abstract: A head-mounted device (HMD) of a first user has a transparent display. The HMD determines location information of a second user relative to the HMD of the first user. The second user is located within a predefined distance of the HMD. The location information identifies a distance and a direction of the second user relative to the HMD. The HMD receives audio content from the second user, generates augmented reality (AR) content based on the audio content, and displays the AR content in the transparent display based on the location information of the second user. The AR content appears coupled to the second user.
    Type: Application
    Filed: August 31, 2017
    Publication date: February 28, 2019
    Inventor: Brian Mullins
  • Patent number: 10217209
Abstract: A mobile device identifies a user task provided by an augmented reality application at a mobile device. The mobile device identifies a first physical tool valid for performing the user task from a tool compliance library based on the user task. The mobile device detects and identifies a second physical tool present at the mobile device. The mobile device determines whether the second physical tool matches the first physical tool. The mobile device displays augmented reality content that identifies at least one of a missing physical tool, an unmatched physical tool, or a matched physical tool based on whether the second physical tool matches the first physical tool.
    Type: Grant
    Filed: October 23, 2017
    Date of Patent: February 26, 2019
    Assignee: DAQRI, LLC
    Inventor: Brian Mullins
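The compliance check described in this abstract can be sketched in a few lines of Python. The task names, library contents, and function names below are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch of the tool-compliance check described above.
# The task names and library contents are hypothetical examples.
TOOL_COMPLIANCE_LIBRARY = {
    "replace_filter": ["torque_wrench"],
    "inspect_panel": ["borescope"],
}

def check_tools(user_task, detected_tools):
    """Compare tools detected by the device against the tools the
    compliance library requires for the task, and report matched,
    missing, and unmatched tools."""
    required = set(TOOL_COMPLIANCE_LIBRARY.get(user_task, []))
    detected = set(detected_tools)
    return {
        "matched": sorted(required & detected),
        "missing": sorted(required - detected),
        "unmatched": sorted(detected - required),
    }

result = check_tools("replace_filter", ["screwdriver", "torque_wrench"])
# matched: torque_wrench; missing: none; unmatched: screwdriver
```

The three result categories correspond to the three kinds of augmented reality content the abstract says the device can display.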
  • Patent number: 10210663
    Abstract: A contextual local image recognition module of a device retrieves a primary content dataset from a server and then generates and updates a contextual content dataset based on an image captured with the device. The device stores the primary content dataset and the contextual content dataset. The primary content dataset comprises a first set of images and corresponding virtual object models. The contextual content dataset comprises a second set of images and corresponding virtual object models retrieved from the server.
    Type: Grant
    Filed: February 27, 2017
    Date of Patent: February 19, 2019
    Assignee: DAQRI, LLC
    Inventor: Brian Mullins
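The two-tier dataset scheme in this abstract resembles a simple cache: a primary dataset fetched once from the server, plus a contextual dataset grown from images the device actually captures. The class and method names below are illustrative assumptions.

```python
# Illustrative sketch of the primary/contextual content datasets
# described above; the server interface is a hypothetical stand-in.
class ContextualRecognizer:
    def __init__(self, server):
        self.server = server
        # First set of images and corresponding virtual object models.
        self.primary = server.fetch_primary_dataset()
        # Second set, retrieved on demand for captured images.
        self.contextual = {}

    def recognize(self, image_key):
        if image_key in self.primary:
            return self.primary[image_key]
        if image_key not in self.contextual:
            # Cache the server's model for this locally captured image.
            self.contextual[image_key] = self.server.fetch_model(image_key)
        return self.contextual[image_key]

class FakeServer:
    def fetch_primary_dataset(self):
        return {"poster": "model_poster"}
    def fetch_model(self, key):
        return f"model_{key}"

device = ContextualRecognizer(FakeServer())
device.recognize("mural")  # fetched from the server, then cached
```

Keeping both datasets on the device lets recognition of previously seen images proceed without a server round trip.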
  • Patent number: 10198869
Abstract: A remote expert application identifies a manipulation of virtual objects displayed in a first wearable device. The virtual objects are rendered based on a physical object viewed with a second wearable device. A manipulation of the virtual objects is received from the first wearable device. A visualization of the manipulation of the virtual objects is generated for a display of the second wearable device. The visualization of the manipulation of the virtual objects is communicated to the second wearable device.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: February 5, 2019
    Assignee: DAQRI, LLC
    Inventors: Brian Mullins, Matthew Kammerait, Christopher Broaddus
  • Publication number: 20190025757
Abstract: A device (200, 300) forms steerable plasma (222, 310) using a laser source (110) and an LCOS-SLM (Liquid Crystal on Silicon Spatial Light Modulator) (112). The device generates a laser control signal and an LCOS-SLM control signal. The laser source generates a plurality of incident laser beams based on the laser control signal. The LCOS-SLM receives the plurality of incident laser beams and modulates them based on the LCOS-SLM control signal to form a plurality of holographic wavefronts. Each holographic wavefront forms at least one corresponding focal point. The LCOS-SLM forms plasma at interference points of the focal points of the plurality of holographic wavefronts.
    Type: Application
    Filed: December 22, 2016
    Publication date: January 24, 2019
    Inventor: Brian Mullins
  • Publication number: 20190025583
Abstract: A display device accesses holographic data corresponding to an image. A laser source generates a laser light. An SLM (Spatial Light Modulator) receives the laser light and modulates the laser light based on the holographic data of the image. A partially transparent reflective lens receives the modulated laser light and reflects the modulated laser light towards an eye of a user of the display device. A holographic image is formed in the form of a reconstructed wavefront based on the modulated laser light. The partially transparent reflective lens is disposed adjacent to the eye of the user.
    Type: Application
    Filed: December 22, 2016
    Publication date: January 24, 2019
    Inventor: Brian Mullins
  • Patent number: 10187686
    Abstract: A server receives, from a first display device of a first user, first content data, first sensor data, and a request for assistance identifying a context of the first display device. The server identifies a second display device of a second user based on the context of the first display device. The server receives second content data and second sensor data from the second display device. The first content data is synchronized with the second content data based on the first and second sensor data. Playback parameters are formed based on the context of the first display device. An enhanced playback session is generated using the synchronized first and second content data in response to determining that the first sensor data meet the playback parameters. The enhanced playback session is communicated to the first display device.
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: January 22, 2019
    Assignee: DAQRI, LLC
    Inventor: Brian Mullins
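The synchronization step in this abstract can be sketched as aligning two content streams by their sensor timestamps. The field names and the alignment rule below are hypothetical assumptions, not taken from the patent.

```python
# Illustrative sketch of the timestamp-based synchronization of first
# and second content data described above; field names are assumed.
def synchronize(first_content, second_content):
    """Align two content streams by starting both at the later of the
    two sensor timestamps, so the frames cover the same time span."""
    start = max(first_content["start_ts"], second_content["start_ts"])
    return {
        "start_ts": start,
        "first": [f for f in first_content["frames"] if f["ts"] >= start],
        "second": [f for f in second_content["frames"] if f["ts"] >= start],
    }

a = {"start_ts": 100, "frames": [{"ts": 100}, {"ts": 105}, {"ts": 110}]}
b = {"start_ts": 105, "frames": [{"ts": 105}, {"ts": 110}]}
session = synchronize(a, b)  # both streams begin at ts 105
```

The synchronized pair would then feed the enhanced playback session the abstract describes.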
  • Publication number: 20190004476
Abstract: A printing device (106) includes a laser source (110) and an LCOS-SLM (Liquid Crystal on Silicon Spatial Light Modulator) (112). The printing device generates a laser control signal and an LCOS-SLM control signal. The laser source generates a plurality of incident laser beams based on the laser control signal. The LCOS-SLM receives the plurality of incident laser beams, modulates the plurality of incident laser beams based on the LCOS-SLM control signal, and generates a plurality of holographic wavefronts (214, 216). Each holographic wavefront forms at least one focal point. The printing device cures a surface layer of a target material (206) at interference points of focal points of the plurality of holographic wavefronts. The cured surface layer of the target material forms two-dimensional printed content.
    Type: Application
    Filed: December 22, 2016
    Publication date: January 3, 2019
    Applicant: Dualitas Ltd
    Inventors: Brian Mullins, Jamieson Christmas
  • Publication number: 20180365893
    Abstract: A first augmented-reality (AR) device comprises an optical sensor, a geographic location sensor, an orientation sensor, and a display. The first AR device accesses a first geographic location of the first AR device and an orientation of the first AR device and generates a picture taken at the first geographic location of the first AR device and associated with the orientation of the first AR device. The first AR device retrieves, from a server, transportation information from a second AR device in a vehicle. The server assigns the second AR device to the first AR device. The first AR device forms transportation AR content based on the transportation information and displays the transportation AR content in the display based on the first geographic location and orientation of the first AR device, the transportation information, and the picture generated at the first geographic location of the first AR device.
    Type: Application
    Filed: June 16, 2017
    Publication date: December 20, 2018
    Inventor: Brian Mullins
  • Publication number: 20180354509
    Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for presenting an AR visualization of an ADAS. A viewing device integrated into a vehicle gathers sensor data from one or more sensors. The sensor data describes a speed and trajectory of the vehicle and a geographic location of a physical object in relation to the vehicle. The viewing device determines, based on the sensor data, that a threat level of the vehicle colliding with the physical object meets or exceeds a threshold threat level. In response to determining that the threat level of the vehicle colliding with the physical object meets or exceeds the threshold threat level, the viewing device determines an automated movement of the vehicle to avoid colliding with the physical object. The viewing device then presents, on a heads up display of the vehicle, virtual content depicting the automated movement, and executes the automated movement.
    Type: Application
    Filed: June 8, 2017
    Publication date: December 13, 2018
    Inventor: Brian Mullins
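The threshold test in this abstract reduces to a simple comparison. The threat score formula, threshold value, and action names below are hypothetical stand-ins for whatever the ADAS actually computes.

```python
# Minimal sketch of the collision-threat threshold check described
# above; the score formula and threshold are illustrative assumptions.
def threat_level(speed_mps, distance_m):
    """Crude inverse time-to-collision proxy: higher when the object
    is close and the vehicle is fast."""
    if distance_m <= 0:
        return float("inf")
    return speed_mps / distance_m

def plan_avoidance(speed_mps, distance_m, threshold=0.5):
    """Return an automated movement when the threat level meets or
    exceeds the threshold, otherwise None."""
    if threat_level(speed_mps, distance_m) >= threshold:
        return {"action": "brake", "display": "show_ar_preview"}
    return None

plan_avoidance(20.0, 30.0)   # threat 0.67 >= 0.5: avoidance planned
plan_avoidance(10.0, 100.0)  # threat 0.1 < 0.5: no action
```

Per the abstract, the planned movement would be previewed as virtual content on the heads-up display before being executed.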
  • Patent number: 10147239
    Abstract: A server for content creation is described. A content creation tool of the server receives, from a first device, a content identifier of a physical object, a virtual object content, and a selection of a template corresponding to an interactive feature for the virtual object content. The content creation tool generates a content dataset based on the content identifier of the physical object, the virtual object content, and the selected template. The content creation tool provides the content dataset to a second device, the second device configured to display the interactive feature corresponding to the selected template.
    Type: Grant
    Filed: June 8, 2017
    Date of Patent: December 4, 2018
    Assignee: DAQRI, LLC
    Inventor: Brian Mullins
  • Publication number: 20180332335
    Abstract: An augmented-reality device comprises an optical sensor and a transparent display. The augmented-reality device detects, using the optical sensor, a first physical display located within a display distance of the augmented-reality device. The first physical display is connected to a computer. The augmented-reality device generates a virtual display configured to operate as a second physical display. The computer controls the second physical display. The augmented-reality device displays the virtual display in the transparent display. The virtual display appears adjacent to the first physical display.
    Type: Application
    Filed: May 15, 2017
    Publication date: November 15, 2018
    Inventor: Brian Mullins
  • Publication number: 20180330531
    Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for adjusting depth of AR content on HUD. A viewing device identifies, based on sensor data, a physical object visible through a transparent display of the vehicle. The sensor data indicates an initial distance of the physical object from the vehicle. The viewing device gathers virtual content corresponding to the physical object and generates an initial presentation of the virtual content based on the initial distance. The viewing device presents the initial presentation of the virtual content on the transparent display at a position on the transparent display corresponding to the physical object. The viewing device determines, based on updated sensor data, an updated distance of the physical object and generates an updated presentation of the virtual content based on the updated distance. The viewing device presents the updated presentation of the virtual content on the transparent display of the vehicle.
    Type: Application
    Filed: May 15, 2017
    Publication date: November 15, 2018
    Inventor: Brian Mullins
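The depth-adjustment loop in this abstract amounts to regenerating the presentation whenever the sensed distance changes. The scale formula, clamp range, and reference distance below are illustrative assumptions.

```python
# Illustrative sketch of distance-based presentation updates described
# above; the scaling rule is a hypothetical stand-in.
def presentation_scale(distance_m, reference_m=10.0):
    """Scale virtual content inversely with distance so nearer objects
    get larger overlays, clamped to a sane range."""
    scale = reference_m / max(distance_m, 0.1)
    return max(0.25, min(scale, 4.0))

def render_overlay(label, distance_m):
    """Generate a presentation of the virtual content for the current
    sensed distance to the physical object."""
    return {"label": label, "scale": presentation_scale(distance_m)}

initial = render_overlay("exit_sign", 40.0)  # initial distance: small
updated = render_overlay("exit_sign", 5.0)   # updated distance: larger
```

Calling `render_overlay` again with each updated distance mirrors the initial-then-updated presentation flow the abstract describes.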
  • Publication number: 20180332266
    Abstract: Disclosed are systems, methods, and non-transitory computer-readable media for presenting spatially translated dimensions of an unseen object. A viewing device captures, with one or more sensors affixed to a vehicle, sensor data describing a distance between the vehicle and a physical object. The viewing device generates, based on the sensor data, a virtual model representing a position of at least a portion of the vehicle in relation to the physical object. The viewing device presents the virtual model on a display in the vehicle. The viewing device captures, with the one or more sensors affixed to the vehicle, updated sensor data describing an updated distance between the vehicle and the physical object, and updates the virtual model presented on the display based on the updated sensor data.
    Type: Application
    Filed: May 15, 2017
    Publication date: November 15, 2018
    Inventor: Brian Mullins
  • Patent number: 10110883
    Abstract: A device can determine a distance to an object. The device can use the determined distance to vary a focal length of a first adjustable element so that the first adjustable element directs light from the object into a first waveguide and onto a detector, and forms an image of the object at the detector. The device can produce an image, such as augmented content, on a panel. The device can direct light from the panel into a second waveguide. The device can use the determined distance to vary a focal length of a second adjustable element so that the second adjustable element directs light out of the second waveguide and forms a virtual image of the panel in a plane coincident with the object. The device can operate as an augmented reality headset. The adjustable elements can be phase modulators, or acoustically responsive material with surface acoustic wave transducers.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: October 23, 2018
    Assignee: DAQRI, LLC
    Inventors: Brian Mullins, Matthew Kammerait
  • Patent number: D850444
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: June 4, 2019
    Assignee: DAQRI, LLC
    Inventors: Brian Mullins, Roy Lawrence Ashok Inigo, David Hayes, Ryan Ries, Douglas Rieck, Arash Kalantari, Siamak Sepahram, Cassie Li