Patents by Inventor Brian Mullins

Brian Mullins has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9830395
    Abstract: A system and method for spatial data processing are described. Path bundle data packages from a viewing device are accessed and processed. The path bundle data packages identify a user interaction of the viewing device with augmented reality content relative to, and based on, a physical object captured by the viewing device. The path bundle data packages are generated from sensor data produced by a plurality of sensors of the viewing device, using a data model comprising a data header and a data payload. The data header comprises a contextual header having data identifying the viewing device and a user of the viewing device, a path header having data identifying the path of the interaction with the augmented reality content, and a sensor header having data identifying the plurality of sensors. The data payload comprises dynamically sized sampling data from the sensor data. The path bundle data packages are normalized and aggregated, and analytics computation is performed on the normalized and aggregated path bundle data packages.
    Type: Grant
    Filed: August 15, 2014
    Date of Patent: November 28, 2017
    Assignee: DAQRI, LLC
    Inventors: Brian Mullins, Matthew Kammerait, Frank Chester Irving, Jr.
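    A minimal sketch of the path bundle data model described in this entry, with hypothetical field names (the patent describes the header/payload structure but does not publish a schema):

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class ContextualHeader:
          device_id: str        # identifies the viewing device
          user_id: str          # identifies the user of the viewing device

      @dataclass
      class PathHeader:
          interaction_id: str   # identifies the path of the interaction with the AR content

      @dataclass
      class SensorHeader:
          sensor_ids: List[str]  # identifies the plurality of sensors

      @dataclass
      class PathBundle:
          contextual: ContextualHeader
          path: PathHeader
          sensors: SensorHeader
          payload: List[float] = field(default_factory=list)  # dynamically sized sampling data

    Normalization, aggregation, and the analytics computation would then operate over collections of such bundles.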
  • Patent number: 9824437
    Abstract: A server receives datasets from mobile devices. Each dataset identifies a task selected in an augmented reality application of a corresponding mobile device and an identification of a tool detected at the corresponding mobile device. The server identifies tools present and absent at a dedicated tool board and compares an identification of the tools present and absent at the dedicated tool board with the tools detected at the mobile devices and the tasks identified at the mobile devices to generate a tool inventory and a tool compliance. The server generates an augmented reality content dataset for each mobile device to identify at least one of a missing tool, an incorrect tool, and a valid tool based on the tool compliance.
    Type: Grant
    Filed: December 11, 2015
    Date of Patent: November 21, 2017
    Assignee: DAQRI, LLC
    Inventor: Brian Mullins
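    The comparison step in this entry reduces to set arithmetic over tool identifiers. A rough sketch under an assumed interpretation (the abstract does not define how the required tools for a task are obtained):

      def classify_tools(tools_on_board: set, tools_detected: set, tools_required: set) -> dict:
          """Classify detected tools against a task's requirements and the tool board.

          Assumed reading of the abstract: a tool is missing if required but neither
          detected at the device nor present on the board, incorrect if detected but
          not required, and valid if detected and required.
          """
          return {
              "missing": tools_required - tools_detected - tools_on_board,
              "incorrect": tools_detected - tools_required,
              "valid": tools_detected & tools_required,
          }

      # Example: a wrench is required but a screwdriver was detected instead.
      print(classify_tools({"hammer"}, {"screwdriver"}, {"wrench"}))
      # {'missing': {'wrench'}, 'incorrect': {'screwdriver'}, 'valid': set()}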
  • Publication number: 20170323486
    Abstract: A server for content creation is described. A content creation tool of the server receives, from a first device, a content identifier of a physical object, a virtual object content, and a selection of a template corresponding to an interactive feature for the virtual object content. The content creation tool generates a content dataset based on the content identifier of the physical object, the virtual object content, and the selected template.
    Type: Application
    Filed: June 8, 2017
    Publication date: November 9, 2017
    Inventor: Brian Mullins
  • Patent number: 9799142
    Abstract: A system and method for spatial data collection are described. Sensor data related to a position and an orientation of a device are generated over time using sensors of the device. Augmented reality content is generated based on a physical object captured by the device. A path bundle data package identifying a user interaction of the device with the augmented reality content relative to the physical object is generated. The user interaction identifies a spatial path of an interaction with the augmented reality content. The path bundle data package is generated based on the sensor data using a data model comprising a data header and a data payload. The data header comprises a contextual header having data identifying the device and a user of the device. A path header includes data identifying the path of the interaction with the augmented reality content. A sensor header includes data identifying the sensors. The data payload comprises dynamically sized sampling data from the sensor data.
    Type: Grant
    Filed: August 15, 2014
    Date of Patent: October 24, 2017
    Assignee: DAQRI, LLC
    Inventors: Brian Mullins, Matthew Kammerait, Frank Chester Irving, Jr.
  • Patent number: 9799143
    Abstract: A system and method for spatial data visualization are described. An analytics computation of users' interactions with augmented reality content is performed based on a physical object captured by a viewing device. The analytics computation comprises a computation of geometric paths of the users' interactions with the augmented reality content. A visualization of the analytics computation is displayed based on the computation of the geometric paths of the users' interactions with the augmented reality content.
    Type: Grant
    Filed: August 15, 2014
    Date of Patent: October 24, 2017
    Assignee: DAQRI, LLC
    Inventors: Brian Mullins, Matthew Kammerait, Frank Chester Irving, Jr.
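    One simple analytics computation over the geometric interaction paths mentioned above is aggregate path length; a sketch assuming each path arrives as a list of 3-D points:

      import math
      from typing import List, Sequence

      Point = Sequence[float]  # (x, y, z)

      def path_length(path: List[Point]) -> float:
          """Sum of Euclidean segment lengths along one interaction path."""
          return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

      def mean_path_length(paths: List[List[Point]]) -> float:
          """Aggregate statistic that a visualization could display per AR content item."""
          return sum(path_length(p) for p in paths) / len(paths) if paths else 0.0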
  • Publication number: 20170280188
    Abstract: A server receives, from a first display device of a first user, first content data, first sensor data, and a request for assistance identifying a context of the first display device. The server identifies a second display device of a second user based on the context of the first display device. The server receives second content data and second sensor data from the second display device. The first content data is synchronized with the second content data based on the first and second sensor data. Playback parameters are formed based on the context of the first display device. An enhanced playback session is generated using the synchronized first and second content data in response to determining that the first sensor data meet the playback parameters. The enhanced playback session is communicated to the first display device.
    Type: Application
    Filed: March 24, 2017
    Publication date: September 28, 2017
    Inventor: Brian Mullins
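    Synchronizing the first and second content data "based on the first and second sensor data" suggests aligning samples by timestamp; a minimal sketch using hypothetical (timestamp, frame) pairs:

      from bisect import bisect_left
      from typing import List, Tuple

      Frame = Tuple[float, object]  # (sensor timestamp in seconds, content frame)

      def synchronize(first: List[Frame], second: List[Frame]) -> List[Tuple[object, object]]:
          """Pair each frame of the first stream with the nearest-in-time frame of the second.

          Assumes both lists are sorted by timestamp and the second list is non-empty.
          """
          times = [t for t, _ in second]
          pairs = []
          for t, frame in first:
              i = bisect_left(times, t)
              # step back when the left neighbour is at least as close as the right one
              if i > 0 and (i == len(times) or abs(times[i - 1] - t) <= abs(times[i] - t)):
                  i -= 1
              pairs.append((frame, second[i][1]))
          return pairs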
  • Publication number: 20170277559
    Abstract: A system and method for classifying tasks are described. A server receives first sensor data from an augmented reality (AR) device and second sensor data from a first machine detected by the AR device. The first and second sensor data are related to a user of the AR device operating the first machine. The server identifies a task of the user based on the first and second sensor data. A task result of the user of the AR device operating the first machine is determined. The server accesses task patterns and corresponding levels of difficulty related to the first machine and determines a difficulty of the task based on a comparison of the task, the task result, and the task patterns and corresponding levels of difficulty. An AR application generates AR content based on the task and the difficulty of the task.
    Type: Application
    Filed: March 25, 2016
    Publication date: September 28, 2017
    Inventor: Brian Mullins
  • Publication number: 20170277259
    Abstract: A head mounted device includes a transparent display, an infrared light source, and an infrared light receiver. The transparent display has a transparent waveguide and a first and second optical component connected to the transparent waveguide. The infrared light source generates an infrared light directed at the second optical component. The transparent waveguide receives the infrared light from the second optical component and transmits the infrared light to the first optical component. The first optical component directs the infrared light to an eye of a user of the head mounted device and receives a reflection of the infrared light off the eye of the user. The infrared light receiver receives the reflection of the infrared light via the second optical component and generates reflection data based on the received reflection of the infrared light. A position of the eye of the user is identified based on the reflection data.
    Type: Application
    Filed: March 24, 2017
    Publication date: September 28, 2017
    Inventors: Brian Mullins, Ryan Ries
  • Patent number: 9773349
    Abstract: Techniques of active parallax correction are disclosed. In some embodiments, a first gaze direction of at least one eye of a user is determined. A determination about virtual content can then be made based on the first gaze direction, and the virtual content can be caused to be presented to the user based on the determination. In some embodiments, making the determination comprises determining a first location on a display surface at which to display the virtual content. In some embodiments, the virtual content can be caused to be displayed on the display surface at the first location.
    Type: Grant
    Filed: February 19, 2014
    Date of Patent: September 26, 2017
    Assignee: DAQRI, LLC
    Inventor: Brian Mullins
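    Determining "a first location on a display surface at which to display the virtual content" from a gaze direction can be modelled as a ray-plane intersection; a sketch assuming a planar display surface:

      import numpy as np

      def gaze_to_display_point(eye_pos, gaze_dir, plane_point, plane_normal):
          """Intersect the gaze ray (eye_pos + t * gaze_dir) with the display plane.

          All arguments are 3-vectors; returns the intersection point, or None
          when the gaze direction is parallel to the plane.
          """
          eye_pos, gaze_dir = np.asarray(eye_pos, float), np.asarray(gaze_dir, float)
          plane_point, plane_normal = np.asarray(plane_point, float), np.asarray(plane_normal, float)
          denom = gaze_dir @ plane_normal
          if abs(denom) < 1e-9:
              return None
          t = ((plane_point - eye_pos) @ plane_normal) / denom
          return eye_pos + t * gaze_dir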
  • Patent number: 9760777
    Abstract: A server for campaign optimization is described. The server generates analytics data from users' interactions with a first virtual object displayed on a plurality of devices and with a first set of user interactive features of the first virtual object from a first content dataset. Based on the analytics data, the server generates and provides a second content dataset to a device; the second content dataset includes a second virtual object with a second set of user interactive features. The device recognizes an identifier from the second content dataset and, in response to identifying the identifier, displays the second virtual object and the second set of user interactive features of the second virtual object.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: September 12, 2017
    Assignee: DAQRI, LLC
    Inventor: Brian Mullins
  • Publication number: 20170255450
    Abstract: A system and method for a mixed reality, spatial, cooperative programming language are described. A sensor of a device detects a first physical object and a second physical object. An augmented reality application identifies the first and second physical objects and a physical state of the first and second physical objects, generates a programming logic associated with the identification and physical state of the first and second physical objects, generates augmented or virtual reality information related to the programming logic, and displays the augmented or virtual reality information in a display of the device.
    Type: Application
    Filed: March 4, 2016
    Publication date: September 7, 2017
    Inventors: Brian Mullins, Matthew Kammerait
  • Publication number: 20170249774
    Abstract: A system and method for offloading augmented reality processing is described. A first sensor of a server generates a first set of sensor data corresponding to a location and an orientation of a display device. The server receives a request from the display device to offload a combination of at least one of a tracking process and a rendering process from the display device. The server generates offloaded processed data based on a combination of at least one of the first set of sensor data and a second set of sensor data. The second set of sensor data is generated by a second sensor at the display device. The server streams the offloaded processed data to the display device.
    Type: Application
    Filed: May 11, 2017
    Publication date: August 31, 2017
    Inventor: Brian Mullins
  • Publication number: 20170228869
    Abstract: A system and method for multi-spectrum segmentation for computer vision is described. A first sensor captures an image within a first spectrum range and generates first sensor data. A second sensor captures an image within a second spectrum range different than the first spectrum range and generates second sensor data. A multi-spectrum segmentation module identifies a segmented portion of the image within the second spectrum range based on: the second sensor data, a subset of the first sensor data corresponding to the segmented portion of the image within the second spectrum range, and a segmented portion of the image at the second spectrum range corresponding to the subset of the first sensor data. The multi-spectrum segmentation module identifies a physical object in the segmented portion of the image within the second spectrum range, and a device generates augmented reality content based on the identified physical object.
    Type: Application
    Filed: February 9, 2016
    Publication date: August 10, 2017
    Applicant: DAQRI, LLC
    Inventors: Brian Mullins, Nalin Senthamil, Eric Douglas Lundquist
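    Using a second spectrum (for example thermal) to segment the visible image can be sketched as thresholding the second-sensor data into a mask and applying it to the registered first-sensor image; this assumes both images are already aligned to the same resolution:

      import numpy as np

      def segment_by_second_spectrum(rgb: np.ndarray, second: np.ndarray, threshold: float):
          """Return the first-sensor (RGB) pixels inside the segmented second-spectrum region.

          rgb:    H x W x 3 image from the first sensor
          second: H x W image from the second sensor, registered to the RGB frame (assumed)
          """
          mask = second > threshold        # segmented portion in the second spectrum range
          segmented = np.zeros_like(rgb)
          segmented[mask] = rgb[mask]      # corresponding subset of the first sensor data
          return segmented, mask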
  • Patent number: 9727977
    Abstract: A system and method for sampling-based color extraction for augmented reality are described. A viewing device includes an optical sensor to capture an image of a real-world object. Color extraction software divides the captured image into multiple regions, or recognizes pre-defined regions, and identifies a color value for each region. A color-based augmented reality effect module retrieves virtual content based on the color values for the regions and delivers the virtual content to the viewing device.
    Type: Grant
    Filed: December 29, 2014
    Date of Patent: August 8, 2017
    Assignee: DAQRI, LLC
    Inventors: Brian Mullins, Matthew Kammerait, Mark Sararu, Andrew Krage
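    The region-based color extraction described above amounts to averaging pixel values per region and keying content off the result; a sketch assuming a fixed grid of regions:

      import numpy as np

      def extract_region_colors(image: np.ndarray, rows: int = 2, cols: int = 2):
          """Divide an H x W x 3 image into a grid and return the mean RGB value of each region."""
          h, w = image.shape[:2]
          colors = []
          for r in range(rows):
              for c in range(cols):
                  region = image[r * h // rows:(r + 1) * h // rows,
                                 c * w // cols:(c + 1) * w // cols]
                  colors.append(tuple(region.reshape(-1, 3).mean(axis=0)))
          return colors

    A content table keyed by quantized region colors would then select the virtual content to deliver to the viewing device.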
  • Publication number: 20170221271
    Abstract: A remote expert application identifies a manipulation of virtual objects displayed in a first wearable device. The virtual objects are rendered based on a physical object viewed with a second wearable device. A manipulation of the virtual objects is received from the first wearable device. A visualization of the manipulation of the virtual objects is generated for a display of the second wearable device. The visualization of the manipulation of the virtual objects is communicated to the second wearable device.
    Type: Application
    Filed: April 17, 2017
    Publication date: August 3, 2017
    Inventors: Brian Mullins, Matthew Kammerait, Christopher Broaddus
  • Publication number: 20170206419
    Abstract: A system and method for visualization of physical characteristics are described. A server identifies a physical object detected by a device and accesses a measurement of a first physical characteristic of the physical object from a first sensor associated with the physical object. The first sensor is coupled to the physical object. The server generates a first virtual content that represents the measurement of the first physical characteristic of the physical object. The first virtual content is communicated to the device to display the first virtual content at a first position corresponding to the first physical characteristic of the physical object in a display of the device.
    Type: Application
    Filed: March 31, 2017
    Publication date: July 20, 2017
    Inventor: Brian Mullins
  • Publication number: 20170193302
    Abstract: A wearable computing device is provided. The wearable computing device includes at least one processor, a display element configured to display augmented reality (AR) content to a wearer, a location sensor providing location information, and a task management engine executed by the at least one processor. The task management engine is configured to receive a task event identifying a task to be performed, identify a location associated with the task event, display a first AR content item to the wearer, the first AR content item being a navigational aid associated with the location, detect that the wearable computing device is proximate to the location, determine a task object associated with the task event, and display a second AR content item to the wearer using the display element, the second AR content item identifying the task object to the wearer in a field of view of the display element.
    Type: Application
    Filed: January 5, 2017
    Publication date: July 6, 2017
    Inventor: Brian Mullins
  • Publication number: 20170193686
    Abstract: A server receives video data and location data from mobile devices. Each mobile device records a video of a target. The location data identifies a position of the corresponding mobile device relative to the target and a distance between the corresponding mobile device and the target. The location data is associated with a corresponding video frame from the video data. The server identifies video frames from the video data captured from the mobile devices. The server scales parts of the identified video frames based on the position and distance of the corresponding mobile devices to the target. The server extracts the scaled parts of the identified video frames and generates a three-dimensional model of the target based on the extracted scaled parts of the identified video frames from the plurality of mobile devices.
    Type: Application
    Filed: December 29, 2016
    Publication date: July 6, 2017
    Inventor: Brian Mullins
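    Scaling "parts of the identified video frames based on the position and distance" before reconstruction can be approximated by resizing each extracted part in proportion to its capture distance relative to a reference distance; a sketch using OpenCV (any image-resize routine would do):

      import cv2

      def scale_to_reference(part, distance_m: float, reference_m: float = 1.0):
          """Resize an extracted frame part so the target appears at the size it would
          have if captured at the reference distance.

          Pinhole approximation: apparent size scales with 1 / distance, so the
          scale factor is distance_m / reference_m.
          """
          s = distance_m / reference_m
          return cv2.resize(part, None, fx=s, fy=s, interpolation=cv2.INTER_LINEAR)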
  • Patent number: D800727
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: October 24, 2017
    Assignee: DAQRI, LLC
    Inventors: Brian Mullins, Roy Lawrence Ashok Inigo, Ryan Ries, Kyle Cherry, Kyle Florek, Cassie Li, Timotheos Leahy, Christopher Michaels Garcia, Lucas Kazansky
  • Patent number: D801587
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: October 31, 2017
    Assignee: DAQRI, LLC
    Inventors: Douglas Rieck, Brian Mullins, Ryan Ries, Hien Nguyen, Arash Kalantari, Timotheos Leahy