Patents by Inventor Daniel Aden
Daniel Aden has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20190170617
Abstract: A method for preparing a sample of organic material for laser induced breakdown spectroscopy (LIBS) may include obtaining granular organic material, forming a portion of the granular organic material into a sample pellet, and searing the organic material. The searing may include searing only an exposed end surface of the sample pellet on which LIBS analysis is to be performed. The method may include pressing the seared sample pellet to consolidate the material comprising the seared end surface.
Type: Application
Filed: September 8, 2016
Publication date: June 6, 2019
Applicant: FOSS Analytical A/S
Inventors: Thomas NIKOLAJSEN, Daniel ADEN
-
Patent number: 10270528
Abstract: Systems are provided to emit, into an environment of interest, information in the form of modulated optical signals. These optical signals can be provided as illumination from a lighting fixture, display, or other source of environmental illumination. The optical signals can include codes or other information to facilitate location-specific operations of a device that is able to receive the optical signals. This can include receiving information about the location of a light emitter, security credentials or encryption keys, information about services that are available from building automation and/or conferencing systems, or other location-related information.
Type: Grant
Filed: June 30, 2016
Date of Patent: April 23, 2019
Assignee: Google LLC
Inventors: Matthew Amacker, Arshan Poursohi, Daniel Aden
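The abstract does not specify a modulation scheme; one common choice for carrying a code on visible light without perceptible flicker is Manchester encoding, sketched below with hypothetical names. Each bit becomes an on/off transition, so the average brightness stays constant while a photodiode can still recover the bits.

```python
# A minimal sketch (all names hypothetical): Manchester-encode a short code
# onto light states. 1 -> light on then off; 0 -> light off then on. The
# 50% duty cycle keeps the light looking steady to the human eye.

def manchester_encode(bits):
    """Map each bit to a pair of light states."""
    states = []
    for b in bits:
        states.extend([1, 0] if b else [0, 1])
    return states

def manchester_decode(states):
    """Recover bits from the first half of each two-state symbol."""
    return [first for first, _second in zip(states[::2], states[1::2])]
```

A receiver sampling the light level twice per symbol period can decode the emitter's code and look up location-specific services from it.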
-
Patent number: 10215858
Abstract: Examples relating to the detection of rigid shaped objects are described herein. An example method may involve a computing system determining a first point cloud representation of an environment at a first time using a depth sensor positioned within the environment. The computing system may also determine a second point cloud representation of the environment at a second time using the depth sensor. This way, the computing system may detect a change in position of a rigid shape between a first position in the first point cloud representation and a second position in the second point cloud representation. Based on the detected change in position of the rigid shape, the computing system may determine that the rigid shape is representative of an object in the environment and store information corresponding to the object.
Type: Grant
Filed: June 30, 2016
Date of Patent: February 26, 2019
Assignee: Google LLC
Inventors: Greg Joseph Klein, Arshan Poursohi, Sumit Jain, Daniel Aden
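The core comparison the abstract describes can be sketched in a few lines, with hypothetical names and 3-D points as (x, y, z) tuples: a cluster of points that translates coherently between two snapshots is treated as a rigid object rather than static background.

```python
# A minimal sketch of detecting that a rigid cluster of points moved
# between two point-cloud snapshots of the same environment.

def centroid(points):
    """Mean position of a cluster of (x, y, z) points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def rigid_shift(cloud_t0, cloud_t1):
    """Translation of the cluster centroid between the two snapshots."""
    c0, c1 = centroid(cloud_t0), centroid(cloud_t1)
    return tuple(b - a for a, b in zip(c0, c1))

def looks_like_object(cloud_t0, cloud_t1, min_shift=0.1):
    """Flag the cluster as a movable object if its position changed."""
    shift = rigid_shift(cloud_t0, cloud_t1)
    return sum(s * s for s in shift) ** 0.5 > min_shift
```

A production system would also need to segment clusters and verify shape rigidity (e.g., that inter-point distances are preserved); this sketch covers only the change-in-position test.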
-
Publication number: 20190014310
Abstract: This disclosure is directed to a hardware system for inverse graphics capture. An inverse graphics capture system (IGCS) captures data regarding a physical space that can be used to generate a photorealistic graphical model of that physical space. In certain approaches, the system includes hardware and accompanying software used to create a photorealistic six degree of freedom (6DOF) graphical model of the physical space. In certain approaches, the system includes hardware and accompanying software used for projection mapping onto the physical space. In certain approaches, the model produced by the IGCS is built using data regarding the geometry, lighting, surfaces, and environment of the physical space. In certain approaches, the model produced by the IGCS is both photorealistic and fully modifiable.
Type: Application
Filed: August 6, 2018
Publication date: January 10, 2019
Applicant: Arraiy, Inc.
Inventors: Gary Bradski, Moshe Benezra, Daniel A. Aden, Ethan Rublee
-
Publication number: 20180348023
Abstract: Implementations disclosed herein may relate to sensor calibration based on environmental factors. An example method may involve a computing system receiving an indication of a current environment state of an area from environment state sensors. While the environment sensors indicate that the area is in a particular environment state, the system may receive data corresponding to an aspect of the area from a first sensor as well as data from additional sensors. Using the received data, the system may compare the data from the first sensor with a compilation of the data from the additional sensors to determine an accuracy metric that represents an accuracy of the first sensor when the first sensor operates in the area during the particular environment state. The system may repeat the process to determine accuracy metrics to calibrate sensors in the area depending on the environment state of the area.
Type: Application
Filed: June 9, 2015
Publication date: December 6, 2018
Inventors: Greg Joseph Klein, Daniel Aden, Arshan Poursohi
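The accuracy metric described above can be sketched as follows, with hypothetical names and the per-sample average of the additional sensors standing in for the "compilation" (the abstract does not fix how readings are compiled).

```python
# A minimal sketch: while the area is in one environment state, compare
# readings from the sensor under test against the consensus of the
# additional sensors observing the same aspect of the area.

def accuracy_metric(test_readings, other_sensors):
    """Mean absolute deviation of the test sensor from the per-sample
    average of the additional sensors (lower means more accurate)."""
    consensus = [sum(vals) / len(vals) for vals in zip(*other_sensors)]
    errors = [abs(t - c) for t, c in zip(test_readings, consensus)]
    return sum(errors) / len(errors)
```

Recomputing this metric separately for each environment state (e.g., lights on vs. off, HVAC running vs. idle) yields the state-dependent calibration the abstract describes.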
-
Publication number: 20180299268
Abstract: Implementations disclosed herein may include a system and method for engaging in a technique for calibrating and/or boosting the accuracy of an arrangement of sensors positioned about a physical space. In one implementation, a method includes receiving from a primary sensor an indication of a particular location of a subject and receiving a first feature from a plurality of secondary sensors. The first feature may comprise a set of estimated locations of the subject. The method may further include resolving the first feature as being indicative of the particular location, receiving a second feature from the plurality of secondary sensors, identifying a match between the second feature and the first feature, and based on the identifying, determining a location of the new subject to be the particular location.
Type: Application
Filed: February 20, 2015
Publication date: October 18, 2018
Inventors: Greg Klein, Daniel Aden, Arshan Poursohi
-
Patent number: 10044922
Abstract: This disclosure is directed to a hardware system for inverse graphics capture. An inverse graphics capture system (IGCS) captures data regarding a physical space that can be used to generate a photorealistic graphical model of that physical space. In certain approaches, the system includes hardware and accompanying software used to create a photorealistic six degree of freedom (6DOF) graphical model of the physical space. In certain approaches, the system includes hardware and accompanying software used for projection mapping onto the physical space. In certain approaches, the model produced by the IGCS is built using data regarding the geometry, lighting, surfaces, and environment of the physical space. In certain approaches, the model produced by the IGCS is both photorealistic and fully modifiable.
Type: Grant
Filed: July 6, 2017
Date of Patent: August 7, 2018
Assignee: Arraiy, Inc.
Inventors: Gary Bradski, Moshe Benezra, Daniel A. Aden, Ethan Rublee
-
Patent number: 10025308
Abstract: Example systems and methods are disclosed for associating detected attributes with an actor. An example method may include receiving point cloud data for a first actor at a first location within the environment. The method may include associating sensor data from an additional sensor with the first actor based on the sensor data being representative of the first location. The method may include identifying one or more attributes of the first actor based on the sensor data. The method may include subsequently receiving a second point cloud representative of a second actor at a second location within the environment. The method may include determining, based on additional sensor data from the additional sensor, that the second actor has the one or more attributes. The method may include providing a signal indicating that the first actor is the second actor based on the second actor having the one or more attributes.
Type: Grant
Filed: February 19, 2016
Date of Patent: July 17, 2018
Assignee: Google LLC
Inventors: Arshan Poursohi, Greg Klein, Daniel Aden, Matthew Amacker
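The re-identification step described above can be sketched as a simple attribute lookup, with hypothetical names: attributes observed for a tracked actor are stored, and a later detection carrying the same attributes is resolved to that actor.

```python
# A minimal sketch of associating detected attributes with an actor and
# later re-identifying the actor by those attributes.

def identify(known_actors, observed):
    """Return the id of a known actor all of whose stored attributes
    match the newly observed ones, or None if nothing matches."""
    for actor_id, attrs in known_actors.items():
        if all(observed.get(key) == value for key, value in attrs.items()):
            return actor_id
    return None
```

In practice the attributes would be derived from sensor data localized to the actor's point-cloud position (e.g., clothing color from a camera covering that spot), and matching would tolerate noise rather than require exact equality.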
-
Patent number: 10026189
Abstract: Example systems and methods are disclosed for determining the direction of an actor based on image data and sensors in an environment. The method may include receiving point cloud data for an actor at a location within the environment. The method may also include receiving image data of the location. The received image data corresponds to the point cloud data received from the same location. The method may also include identifying a part of the received image data that is representative of the face of the actor. The method may further include determining a direction of the face of the actor based on the identified part of the received image data. The method may further include determining a direction of the actor based on the direction of the face of the actor. The method may also include providing information indicating the determined direction of the actor.
Type: Grant
Filed: May 22, 2017
Date of Patent: July 17, 2018
Assignee: Google LLC
Inventors: Paul Vincent Byrne, Daniel Aden
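The final step, from face direction to actor direction, can be sketched as below; the yaw angle, the axis convention, and the function name are all hypothetical, since the abstract does not specify how the face direction is parameterized.

```python
import math

# A minimal sketch: convert an estimated face yaw (recovered from the
# image region matched to the actor's point-cloud location) into a
# ground-plane heading vector for the actor.

def actor_direction(face_yaw_deg):
    """Unit vector in the ground plane for the direction the actor faces;
    yaw is measured counter-clockwise from the +x axis, in degrees."""
    yaw = math.radians(face_yaw_deg)
    return (round(math.cos(yaw), 9), round(math.sin(yaw), 9))
```

The face yaw itself would come from a face-landmark or head-pose estimator run on the image region that projects to the actor's 3-D location.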
-
Patent number: 9787908
Abstract: Systems are provided to facilitate imaging of a person or other target in an environment by providing modulated illumination to the target and to other aspects of the environment. Modulated illumination is provided to the target such that the target receives more illumination when a camera is capturing images of the target than during other periods of time. Modulated illumination is provided to background objects or other portions of the environment of the target such that the background or other non-target elements of the environment receive less illumination when the camera is capturing images than during other periods of time. In this way, imaging of a target can be improved by increasing effective illumination of the target while decreasing glare and other effects of illumination of background objects. The illumination can be modulated at a sufficiently high frequency that the illumination appears, to the human eye, to be substantially constant.
Type: Grant
Filed: July 1, 2016
Date of Patent: October 10, 2017
Assignee: Google Inc.
Inventors: Arshan Poursohi, Daniel Aden, Matthew Amacker
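The modulation scheme can be sketched as a timing rule, with hypothetical names: lights aimed at the target are on while the camera shutter is open, background lights only between exposures, so both appear steadily lit to the eye at a high enough switching frequency.

```python
# A minimal sketch of anti-phase light modulation keyed to the camera's
# exposure windows, given as (start, end) time pairs.

def light_on(t, exposure_windows, is_target):
    """Whether a light should be on at time t: target lights are on
    during exposures, background lights only between exposures."""
    exposing = any(start <= t < end for start, end in exposure_windows)
    return exposing if is_target else not exposing
```

Averaged over many cycles, each light's duty cycle (and hence apparent brightness) can be held constant while the camera sees the target bright and the background dim.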
-
Publication number: 20170263002
Abstract: Example systems and methods are disclosed for determining the direction of an actor based on image data and sensors in an environment. The method may include receiving point cloud data for an actor at a location within the environment. The method may also include receiving image data of the location. The received image data corresponds to the point cloud data received from the same location. The method may also include identifying a part of the received image data that is representative of the face of the actor. The method may further include determining a direction of the face of the actor based on the identified part of the received image data. The method may further include determining a direction of the actor based on the direction of the face of the actor. The method may also include providing information indicating the determined direction of the actor.
Type: Application
Filed: May 22, 2017
Publication date: September 14, 2017
Inventors: Paul Vincent Byrne, Daniel Aden
-
Patent number: 9691153
Abstract: Example systems and methods are disclosed for determining the direction of an actor based on image data and sensors in an environment. The method may include receiving point cloud data for an actor at a location within the environment. The method may also include receiving image data of the location. The received image data corresponds to the point cloud data received from the same location. The method may also include identifying a part of the received image data that is representative of the face of the actor. The method may further include determining a direction of the face of the actor based on the identified part of the received image data. The method may further include determining a direction of the actor based on the direction of the face of the actor. The method may also include providing information indicating the determined direction of the actor.
Type: Grant
Filed: October 21, 2015
Date of Patent: June 27, 2017
Assignee: Google Inc.
Inventors: Paul Vincent Byrne, Daniel Aden
-
Publication number: 20170148217
Abstract: Example implementations may relate to methods and systems for detecting an event in a physical region within a physical space. Accordingly, a computing system may receive from a subscriber device an indication of a virtual region within a virtual representation of the physical space such that the virtual region corresponds to the physical region. The system may also receive from the subscriber a trigger condition associated with the virtual region, where the trigger condition corresponds to a particular physical change in the physical region. The system may also receive sensor data from sensors in the physical space and a portion of the sensor data may be associated with the physical region. Based on the sensor data, the system may detect an event in the physical region that satisfies the trigger condition and may responsively provide to the subscriber a notification that indicates that the trigger condition has been satisfied.
Type: Application
Filed: July 19, 2016
Publication date: May 25, 2017
Inventors: Arshan Poursohi, Daniel Aden, Matthew Amacker, Charles Robert Barker, Paul Du Bois, Paul Vincent Byrne, Greg Joseph Klein, Steve Scott Tompkins
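The subscription model described above can be sketched as follows, with hypothetical names and the virtual region reduced to a 2-D bounding box: a subscriber registers a region plus a trigger condition, and sensor events inside that region that satisfy the condition produce notifications.

```python
# A minimal sketch of region-scoped trigger conditions over sensor events.
# Each event is a dict with a "pos" (x, y) and arbitrary other fields; the
# region is ((xmin, ymin), (xmax, ymax)); the condition is a predicate.

def in_region(point, region):
    """Axis-aligned bounding-box containment test."""
    (xmin, ymin), (xmax, ymax) = region
    x, y = point
    return xmin <= x <= xmax and ymin <= y <= ymax

def check_triggers(events, region, condition):
    """Return the events that should generate a subscriber notification."""
    return [e for e in events
            if in_region(e["pos"], region) and condition(e)]
```

A full system would map the subscriber's virtual region back onto physical sensor coverage and push notifications asynchronously; this sketch covers only the filtering step.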