Patents by Inventor Douglas Haanpaa
Douglas Haanpaa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20190137284
Abstract: An in-vehicle system for generating precise, lane-level road map data includes a GPS receiver operative to acquire positional information associated with a track along a road path. An inertial sensor provides time-local measurements of acceleration and turn rate along the track, and a camera acquires image data of the road path along the track. A processor is operative to receive the local measurements from the inertial sensor and image data from the camera over time in conjunction with multiple tracks along the road path, and to improve the accuracy of the GPS receiver through curve fitting. One or all of the GPS receiver, inertial sensor and camera are disposed in a smartphone. The road map data may be uploaded to a central data repository for post-processing when the vehicle passes through a WiFi cloud to generate the precise road map data, which may include data collected from multiple drivers.
Type: Application
Filed: November 6, 2017
Publication date: May 9, 2019
Inventors: Charles J. Jacobus, Glenn J. Beach, Douglas Haanpaa
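The abstract above does not disclose the specific curve-fitting method. As a minimal illustration of how combining multiple noisy passes over the same road can sharpen a map, here is a hypothetical sketch that fuses several GPS tracks of the same path by point-wise averaging (averaging N independent passes cuts positional noise by roughly the square root of N); the function name and track format are assumptions, not from the patent.

```python
def fuse_tracks(tracks):
    """Fuse multiple GPS tracks of the same road path, point by point.

    Each track is a list of (lat, lon) samples taken at matching waypoints.
    Averaging corresponding samples across tracks is one simple stand-in for
    the 'curve fitting over multiple tracks' idea in the abstract.
    """
    n = len(tracks)
    return [
        (sum(t[i][0] for t in tracks) / n, sum(t[i][1] for t in tracks) / n)
        for i in range(len(tracks[0]))
    ]

# Two passes with symmetric lateral GPS error cancel to the true centerline.
passes = [[(42.28, -83.74 + 0.0002)], [(42.28, -83.74 - 0.0002)]]
centerline = fuse_tracks(passes)
```

In a fuller system the tracks would first be aligned (the samples rarely line up index-for-index) and a spline or polynomial would be fit through the fused points; the averaging step shown is only the noise-reduction core.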
-
Publication number: 20190088148
Abstract: Autonomous and manually operated vehicles are integrated into a cohesive, interactive environment, with communications to each other and to their surroundings, to improve traffic flow while reducing accidents and other incidents. All vehicles send/receive messages to/from each other, and from infrastructure devices, enabling the vehicles to determine their status, traffic conditions and infrastructure. The vehicles store and operate in accordance with a common set of rules based upon the messages received and other inputs from sensors, databases, and so forth, to avoid obstacles and collisions based upon current and, in some cases, future or predicted behavior. Shared vehicle control interfaces enable the AVs to conform to driving activities that are legal, safe, and allowable on roadways. Such activities enable each AV to drive within safety margins, speed limits, on allowed or legal driving lanes and through allowed turns, intersections, merges, lane changes, stops/starts, and so forth.
Type: Application
Filed: November 5, 2018
Publication date: March 21, 2019
Inventors: Charles J. Jacobus, Douglas Haanpaa, Eugene Foulk, Pritpaul Mahal, Steve Rowe, Charles J. Cohen, Glenn J. Beach
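To make the "common set of rules driven by broadcast messages" idea concrete, here is a hypothetical sketch of one such rule: a vehicle receives state messages from its neighbors and checks a minimum-headway rule before proceeding. The message fields, the 1-D lane simplification, and the 30 m gap are all illustrative assumptions, not values from the patent.

```python
from dataclasses import dataclass

@dataclass
class StateMsg:
    """State broadcast by each vehicle (illustrative fields only)."""
    vehicle_id: str
    position_m: float  # position along a shared lane, 1-D simplification
    speed_mps: float

def safe_to_proceed(own, others, min_gap_m=30.0):
    """One rule from a shared rule set: keep at least min_gap_m of
    headway to every vehicle ahead, judged from received messages."""
    ahead = [o for o in others if o.position_m > own.position_m]
    return all(o.position_m - own.position_m >= min_gap_m for o in ahead)

ego = StateMsg("ego", 0.0, 12.0)
neighbors = [StateMsg("lead", 18.0, 12.0)]
ok = safe_to_proceed(ego, neighbors)  # headway rule violated here
```

A real rule set would cover the full list in the abstract (lanes, turns, intersections, merges), with each rule evaluated against the same message stream.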
-
Publication number: 20180253099
Abstract: Autonomously driven vehicles operate in rain, snow and other adverse weather conditions. An on-board vehicle sensor has a beam with a diameter that is only intermittently blocked by rain, snow, dust or other obscurant particles. This allows an obstacle detection processor to tell the difference between obstacles, terrain variations and obscurant particles, thereby enabling the vehicle driving control unit to disregard the presence of obscurant particles along the route taken by the vehicle. The sensor may form part of a LADAR or RADAR system or a video camera. The obstacle detection processor may receive time-spaced frames divided into cells or pixels, whereby groups of connected cells or pixels and/or cells or pixels that persist over longer periods of time are interpreted to be obstacles or terrain variations. The system may further include an input for receiving weather-specific configuration parameters to adjust the operation of the obstacle detection processor.
Type: Application
Filed: May 4, 2018
Publication date: September 6, 2018
Inventors: Charles J. Jacobus, Douglas Haanpaa
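The filtering idea in this abstract (keep connected groups of cells that persist across time-spaced frames; discard isolated, short-lived returns) can be sketched directly. This is a minimal, hypothetical implementation of that persistence-plus-connectivity test; the function name, thresholds, and the use of 4-connectivity are assumptions, not from the patent.

```python
from collections import Counter

def filter_obscurants(frames, min_persistence=2, min_cluster=2):
    """Separate obstacles/terrain from transient obscurant returns.

    frames: list of sets of (row, col) cells where the sensor beam was
    blocked in each time-spaced frame.  A cell counts as an obstacle only
    if it recurs in >= min_persistence frames AND belongs to a 4-connected
    group of at least min_cluster such persistent cells; rain, snow and
    dust tend to appear as isolated, single-frame cells and are dropped.
    """
    counts = Counter(cell for frame in frames for cell in frame)
    persistent = {c for c, n in counts.items() if n >= min_persistence}

    obstacles, seen = set(), set()
    for start in persistent:
        if start in seen:
            continue
        component, stack = set(), [start]
        while stack:  # flood-fill one 4-connected component
            r, c = stack.pop()
            if (r, c) in component or (r, c) not in persistent:
                continue
            component.add((r, c))
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
        seen |= component
        if len(component) >= min_cluster:
            obstacles |= component
    return obstacles

# A 2-cell wall persists across frames; single-frame specks (snow) vanish.
frames = [{(0, 0), (0, 1), (5, 5)}, {(0, 0), (0, 1), (3, 2)}, {(0, 0), (0, 1)}]
detected = filter_obscurants(frames)
```

The weather-specific configuration input mentioned in the abstract maps naturally onto the two thresholds: heavier precipitation argues for a higher persistence requirement.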
-
Patent number: 9989967
Abstract: Autonomously driven vehicles operate in rain, snow and other adverse weather conditions. An on-board vehicle sensor has a beam with a diameter that is only intermittently blocked by rain, snow, dust or other obscurant particles. This allows an obstacle detection processor to tell the difference between obstacles, terrain variations and obscurant particles, thereby enabling the vehicle driving control unit to disregard the presence of obscurant particles along the route taken by the vehicle. The sensor may form part of a LADAR or RADAR system or a video camera. The obstacle detection processor may receive time-spaced frames divided into cells or pixels, whereby groups of connected cells or pixels and/or cells or pixels that persist over longer periods of time are interpreted to be obstacles or terrain variations. The system may further include an input for receiving weather-specific configuration parameters to adjust the operation of the obstacle detection processor.
Type: Grant
Filed: March 4, 2014
Date of Patent: June 5, 2018
Inventors: Charles J. Jacobus, Douglas Haanpaa
-
Publication number: 20180129220
Abstract: Autonomous vehicles are capable of executing missions that abide by on-street rules or regulations, while also being able to seamlessly transition to and from "zones," including off-street zones, with their own set(s) of rules or regulations. An on-board memory stores roadgraph information. An on-board computer is operative to execute commanded driving missions using the roadgraph information, including missions with one or more zones, each zone being defined by a sub-roadgraph with its own set of zone-specific driving rules and parameters. A mission may be coordinated with one or more payload operations, including zones with "free drive paths" as in a warehouse facility with loading and unloading zones to pick up payloads and place them down, or zone staging or entry points to one or more points of payload acquisition or placement. The vehicle may be a warehousing vehicle such as a forklift.
Type: Application
Filed: November 8, 2017
Publication date: May 10, 2018
Inventors: Glenn Beach, Douglas Haanpaa, Charles J. Jacobus, Steven Rowe
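The core data-structure idea here (a roadgraph whose segments may fall inside zones, each zone carrying its own driving rules that override the on-street defaults) can be sketched as a simple overlay lookup. All class names, rule keys, and segment identifiers below are illustrative assumptions, not from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Zone:
    """A sub-roadgraph region with zone-specific driving rules."""
    name: str
    rules: dict

@dataclass
class Roadgraph:
    default_rules: dict                       # on-street rules
    zones: list = field(default_factory=list)
    zone_of: dict = field(default_factory=dict)  # segment id -> zone name

    def rules_for(self, segment):
        """Rules in force on a segment: the enclosing zone's rules
        overlaid on the on-street defaults."""
        name = self.zone_of.get(segment)
        for zone in self.zones:
            if zone.name == name:
                return {**self.default_rules, **zone.rules}
        return dict(self.default_rules)

# A warehouse zone slows the vehicle and enables free drive paths.
graph = Roadgraph(
    default_rules={"speed_limit_mph": 25, "free_drive": False},
    zones=[Zone("warehouse", {"speed_limit_mph": 5, "free_drive": True})],
    zone_of={"dock-1": "warehouse"},
)
```

The dictionary-merge overlay means a zone only has to state the rules it changes; everything else transitions seamlessly from the on-street defaults, matching the behavior described in the abstract.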
-
Publication number: 20170154459
Abstract: A system for performing object identification combines pose determination, EO/IR sensor data, and novel computer graphics rendering techniques. A first module extracts the orientation and distance of a target in a truth chip given that the target type is known. A second module identifies the vehicle within a truth chip given the known distance and elevation angle from camera to target. Image matching is based on synthetic image and truth chip image comparison, where the synthetic image is rotated and moved through a 3-dimensional space. To limit the search space, it is assumed that the object is positioned on relatively flat ground and that the camera roll angle stays near zero. This leaves three dimensions of motion (distance, heading, and pitch angle) to define the space in which the synthetic target is moved. A graphical user interface (GUI) front end allows the user to manually adjust the orientation of the target within the synthetic images.
Type: Application
Filed: November 1, 2016
Publication date: June 1, 2017
Inventors: Douglas Haanpaa, Charles J. Cohen, Glenn J. Beach, Charles J. Jacobus
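The search described in this abstract reduces to three free dimensions (distance, heading, pitch), so the matching step can be sketched as a brute-force search that renders a synthetic image at each candidate pose and scores it against the truth chip. The `render` callback, the sum-of-absolute-differences score, and all names below are assumptions for illustration; the patent does not specify the comparison metric.

```python
def best_pose(chip, render, distances, headings, pitches):
    """Search the 3-D pose space (distance, heading, pitch) for the
    synthetic rendering that best matches the truth-chip image.

    chip:    truth-chip image as a flat list of pixel intensities.
    render:  callable (distance, heading, pitch) -> image of same length,
             standing in for the synthetic-image renderer.
    """
    def sad(a, b):  # sum of absolute differences between two images
        return sum(abs(x - y) for x, y in zip(a, b))

    return min(
        ((d, h, p) for d in distances for h in headings for p in pitches),
        key=lambda pose: sad(chip, render(*pose)),
    )
```

With coarse grids this is cheap (the flat-ground and zero-roll assumptions are exactly what keeps the space three-dimensional); a finer local search, or the GUI adjustment the abstract mentions, would then refine the winning pose.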
-
Patent number: 9483867
Abstract: A system for performing object identification combines pose determination, EO/IR sensor data, and novel computer graphics rendering techniques. A first module extracts the orientation and distance of a target in a truth chip given that the target type is known. A second module identifies the vehicle within a truth chip given the known distance and elevation angle from camera to target. Image matching is based on synthetic image and truth chip image comparison, where the synthetic image is rotated and moved through a 3-dimensional space. To limit the search space, it is assumed that the object is positioned on relatively flat ground and that the camera roll angle stays near zero. This leaves three dimensions of motion (distance, heading, and pitch angle) to define the space in which the synthetic target is moved. A graphical user interface (GUI) front end allows the user to manually adjust the orientation of the target within the synthetic images.
Type: Grant
Filed: July 14, 2015
Date of Patent: November 1, 2016
Assignee: Cybernet Systems Corporation
Inventors: Douglas Haanpaa, Charles J. Cohen, Glenn J. Beach, Charles J. Jacobus
-
Publication number: 20160093097
Abstract: A system for performing object identification combines pose determination, EO/IR sensor data, and novel computer graphics rendering techniques. A first module extracts the orientation and distance of a target in a truth chip given that the target type is known. A second module identifies the vehicle within a truth chip given the known distance and elevation angle from camera to target. Image matching is based on synthetic image and truth chip image comparison, where the synthetic image is rotated and moved through a 3-dimensional space. To limit the search space, it is assumed that the object is positioned on relatively flat ground and that the camera roll angle stays near zero. This leaves three dimensions of motion (distance, heading, and pitch angle) to define the space in which the synthetic target is moved. A graphical user interface (GUI) front end allows the user to manually adjust the orientation of the target within the synthetic images.
Type: Application
Filed: July 14, 2015
Publication date: March 31, 2016
Inventors: Douglas Haanpaa, Charles J. Cohen, Glenn J. Beach, Charles J. Jacobus
-
Publication number: 20160077671
Abstract: A system and method, including software to aid in the generation of panels and control instruments, rapidly generates a station that can support a variety of control interfaces. Rapid-Prototyped Panels, or RP-Panels, replicate existing systems (for simulation, training, gaming, etc.) or realize new designs (for human factors testing, as functional product, etc.). The controls have tactile and visual characteristics similar or identical to their functional component counterparts such as buttons, knobs, switches, pedals, joysticks, steering wheels, and touch panels, but are modular and use alternative data transfer modes (potentiometers, fiber optics, RFID, machine vision, etc.) to track and analyze the response of the controls. The response is then transmitted to the host programs. With this method a user can design and fabricate a reconfigurable interface to interact with virtual environments for various applications such as simulation, training, virtual instrumentation, gaming, human factors testing, etc.
Type: Application
Filed: November 23, 2015
Publication date: March 17, 2016
Inventors: Christopher R. Wagner, Amanda Christiana, Douglas Haanpaa, Charles J. Jacobus
-
Publication number: 20150339355
Abstract: An application coherency manager (ACM) implements and manages the interdependencies of simulation, data, and platform information to simplify the task of organizing and executing large simulations composed of numerous models and data files. One or more file systems or repositories store raw data in the form of files, data, or models, and a graphical user interface (GUI) enables a user to enter, and receive results from, queries involving the files, data, or models. One or more coherency checking modules (CCMs) are operative to determine the types and versions of, and compatibility between, the files, data, or models. A database stores processed information about the file systems or repositories and the results of previous queries, and a data aggregator and manager (DAM) manages the flow of information between the file system or repository, the GUI, the CCMs, and the database.
Type: Application
Filed: June 8, 2015
Publication date: November 26, 2015
Inventors: Douglas Haanpaa, Glenn J. Beach, Charles J. Jacobus, Devvan Stokes
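One compatibility check a coherency checking module (CCM) might run is a type-and-version comparison between a model and a data file. The metadata fields and the major-version rule below are illustrative assumptions; the patent does not define a specific compatibility policy.

```python
def compatible(model_meta, data_meta):
    """Minimal CCM-style check: a model and a data file are treated as
    compatible when their format types match and their version strings
    agree on the major version (illustrative policy, not the patented one).

    Both arguments are dicts with (assumed) keys "type" and "version".
    """
    same_type = model_meta["type"] == data_meta["type"]
    same_major = (model_meta["version"].split(".")[0]
                  == data_meta["version"].split(".")[0])
    return same_type and same_major

# A terrain model at 2.1 can consume terrain data at 2.4, but not 3.0.
ok = compatible({"type": "terrain", "version": "2.1"},
                {"type": "terrain", "version": "2.4"})
```

The results of such checks are exactly what the abstract's database would cache, so repeated queries over the same files need not re-scan the repositories.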
-
Patent number: 9195886
Abstract: A system and method, including software to aid in the generation of panels and control instruments, rapidly generates a station that can support a variety of control interfaces. Rapid-Prototyped Panels, or RP-Panels, replicate existing systems (for simulation, training, gaming, etc.) or realize new designs (for human factors testing, as functional product, etc.). The controls have tactile and visual characteristics similar or identical to their functional component counterparts such as buttons, knobs, switches, pedals, joysticks, steering wheels, and touch panels, but are modular and use alternative data transfer modes (potentiometers, fiber optics, RFID, machine vision, etc.) to track and analyze the response of the controls. The response is then transmitted to the host programs. With this method a user can design and fabricate a reconfigurable interface to interact with virtual environments for various applications such as simulation, training, virtual instrumentation, gaming, human factors testing, etc.
Type: Grant
Filed: January 9, 2009
Date of Patent: November 24, 2015
Assignee: Cybernet Systems Corporation
Inventors: Christopher R. Wagner, Amanda Christiana, Douglas Haanpaa, Charles J. Jacobus
-
Publication number: 20150253775
Abstract: Autonomously driven vehicles operate in rain, snow and other adverse weather conditions. An on-board vehicle sensor has a beam with a diameter that is only intermittently blocked by rain, snow, dust or other obscurant particles. This allows an obstacle detection processor to tell the difference between obstacles, terrain variations and obscurant particles, thereby enabling the vehicle driving control unit to disregard the presence of obscurant particles along the route taken by the vehicle. The sensor may form part of a LADAR or RADAR system or a video camera. The obstacle detection processor may receive time-spaced frames divided into cells or pixels, whereby groups of connected cells or pixels and/or cells or pixels that persist over longer periods of time are interpreted to be obstacles or terrain variations. The system may further include an input for receiving weather-specific configuration parameters to adjust the operation of the obstacle detection processor.
Type: Application
Filed: March 4, 2014
Publication date: September 10, 2015
Applicant: Cybernet Systems Corporation
Inventors: Charles J. Jacobus, Douglas Haanpaa
-
Patent number: 9082219
Abstract: A system for performing object identification combines pose determination, EO/IR sensor data, and novel computer graphics rendering techniques. A first module extracts the orientation and distance of a target in a truth chip given that the target type is known. A second module identifies the vehicle within a truth chip given the known distance and elevation angle from camera to target. Image matching is based on synthetic image and truth chip image comparison, where the synthetic image is rotated and moved through a 3-dimensional space. To limit the search space, it is assumed that the object is positioned on relatively flat ground and that the camera roll angle stays near zero. This leaves three dimensions of motion (distance, heading, and pitch angle) to define the space in which the synthetic target is moved. A graphical user interface (GUI) front end allows the user to manually adjust the orientation of the target within the synthetic images.
Type: Grant
Filed: April 1, 2014
Date of Patent: July 14, 2015
Assignee: Cybernet Systems Corporation
Inventors: Douglas Haanpaa, Charles J. Cohen, Glenn J. Beach, Charles J. Jacobus
-
Publication number: 20140320486
Abstract: A system for performing object identification combines pose determination, EO/IR sensor data, and novel computer graphics rendering techniques. A first module extracts the orientation and distance of a target in a truth chip given that the target type is known. A second module identifies the vehicle within a truth chip given the known distance and elevation angle from camera to target. Image matching is based on synthetic image and truth chip image comparison, where the synthetic image is rotated and moved through a 3-dimensional space. To limit the search space, it is assumed that the object is positioned on relatively flat ground and that the camera roll angle stays near zero. This leaves three dimensions of motion (distance, heading, and pitch angle) to define the space in which the synthetic target is moved. A graphical user interface (GUI) front end allows the user to manually adjust the orientation of the target within the synthetic images.
Type: Application
Filed: April 1, 2014
Publication date: October 30, 2014
Applicant: Cybernet Systems Corporation
Inventors: Douglas Haanpaa, Charles J. Cohen, Glenn J. Beach, Charles J. Jacobus
-
Patent number: 8687849
Abstract: A system for performing object identification combines pose determination, EO/IR sensor data, and novel computer graphics rendering techniques. A first module extracts the orientation and distance of a target in a truth chip given that the target type is known. A second module identifies the vehicle within a truth chip given the known distance and elevation angle from camera to target. Image matching is based on synthetic image and truth chip image comparison, where the synthetic image is rotated and moved through a 3-dimensional space. It is assumed that the object is positioned on relatively flat ground and that the camera roll angle stays near zero. This leaves three dimensions of motion (distance, heading, and pitch angle) to define the space in which the synthetic target is moved. A graphical user interface (GUI) front end allows the user to manually adjust the orientation of the target within the synthetic images.
Type: Grant
Filed: April 3, 2012
Date of Patent: April 1, 2014
Assignee: Cybernet Systems Corporation
Inventors: Douglas Haanpaa, Charles J. Cohen, Glenn J. Beach, Charles J. Jacobus
-
Publication number: 20120263348
Abstract: A system for performing object identification combines pose determination, EO/IR sensor data, and novel computer graphics rendering techniques. A first module extracts the orientation and distance of a target in a truth chip given that the target type is known. A second module identifies the vehicle within a truth chip given the known distance and elevation angle from camera to target. Image matching is based on synthetic image and truth chip image comparison, where the synthetic image is rotated and moved through a 3-dimensional space. It is assumed that the object is positioned on relatively flat ground and that the camera roll angle stays near zero. This leaves three dimensions of motion (distance, heading, and pitch angle) to define the space in which the synthetic target is moved. A graphical user interface (GUI) front end allows the user to manually adjust the orientation of the target within the synthetic images.
Type: Application
Filed: April 3, 2012
Publication date: October 18, 2012
Applicant: Cybernet Systems Corporation
Inventors: Douglas Haanpaa, Charles J. Cohen, Glenn J. Beach, Charles J. Jacobus
-
Patent number: 8150101
Abstract: A system for performing object identification combines pose determination, EO/IR sensor data, and novel computer graphics rendering techniques. A first module extracts the orientation and distance of a target in a truth chip given that the target type is known. A second module identifies the vehicle within a truth chip given the known distance and elevation angle from camera to target. Image matching is based on synthetic image and truth chip image comparison, where the synthetic image is rotated and moved through a 3-dimensional space. To limit the search space, it is assumed that the object is positioned on relatively flat ground and that the camera roll angle stays near zero. This leaves three dimensions of motion (distance, heading, and pitch angle) to define the space in which the synthetic target is moved. A graphical user interface (GUI) front end allows the user to manually adjust the orientation of the target within the synthetic images.
Type: Grant
Filed: November 12, 2007
Date of Patent: April 3, 2012
Assignee: Cybernet Systems Corporation
Inventors: Douglas Haanpaa, Charles J. Cohen, Glenn J. Beach, Charles J. Jacobus
-
Publication number: 20110102419
Abstract: A system for performing object identification combines pose determination, EO/IR sensor data, and novel computer graphics rendering techniques. A first module extracts the orientation and distance of a target in a truth chip given that the target type is known. A second module identifies the vehicle within a truth chip given the known distance and elevation angle from camera to target. Image matching is based on synthetic image and truth chip image comparison, where the synthetic image is rotated and moved through a 3-dimensional space. To limit the search space, it is assumed that the object is positioned on relatively flat ground and that the camera roll angle stays near zero. This leaves three dimensions of motion (distance, heading, and pitch angle) to define the space in which the synthetic target is moved. A graphical user interface (GUI) front end allows the user to manually adjust the orientation of the target within the synthetic images.
Type: Application
Filed: November 12, 2007
Publication date: May 5, 2011
Applicant: Cybernet Systems Corporation
Inventors: Douglas Haanpaa, Charles J. Cohen, Glenn J. Beach, Charles J. Jacobus
-
Patent number: 7765182
Abstract: Methods are presented for authoring geometrical databases which incorporate touch or haptic feedback. In particular, a database of geometrical elements incorporates attributes necessary to support haptic interactions such as stiffness, hardness, friction, and so forth. Users may instantiate objects designed through CAD/CAM environments or attach haptic or touch attributes to subcomponents such as surfaces or solid sub-objects. The resulting haptic/visual databases or world-describing models can then be viewed and touched using a haptic browser or other appropriate user interface.
Type: Grant
Filed: January 29, 2007
Date of Patent: July 27, 2010
Assignee: Immersion Corporation
Inventors: Thomas M. Peurach, Todd Yocum, Douglas Haanpaa
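The abstract above describes attaching haptic attributes (stiffness, hardness, friction) to geometric sub-objects so a haptic browser can render touch. A minimal sketch of that data model, with an illustrative penalty-based contact force a browser might compute, follows; all names, default values, and the linear force law are assumptions, not from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class HapticAttributes:
    """Touch attributes attached to a geometric element (assumed units)."""
    stiffness: float = 1.0  # contact spring constant, N/mm
    hardness: float = 1.0
    friction: float = 0.0   # Coulomb friction coefficient

@dataclass
class Surface:
    """A geometric sub-object carrying its own haptic attributes."""
    name: str
    haptics: HapticAttributes = field(default_factory=HapticAttributes)

def contact_force(surface, penetration_mm):
    """Penalty-based normal force a haptic browser might render when the
    probe penetrates `surface` by penetration_mm (zero when not in contact)."""
    return surface.haptics.stiffness * max(0.0, penetration_mm)

# A stiff tabletop pushes back harder than a default surface would.
tabletop = Surface("tabletop", HapticAttributes(stiffness=2.0, friction=0.3))
force_n = contact_force(tabletop, 3.0)
```

Storing the attributes on sub-objects rather than whole models is what lets one CAD part mix materials, e.g. a rubber grip on a metal handle, as the abstract's per-surface attachment implies.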
-
Publication number: 20100045701
Abstract: Systems and methods expedite and improve the process of configuring an augmented reality environment. A method of pose determination according to the invention includes the step of placing at least one synthetic fiducial in a real environment to be augmented. A camera, which may include apparatus for obtaining directly measured camera location and orientation (DLMO) information, is used to acquire an image of the environment. The natural and synthetic fiducials are detected, and the pose of the camera is determined using a combination of the natural fiducials, the synthetic fiducial if visible in the image, and the DLMO information if determined to be reliable or necessary. The invention is not limited to architectural environments, and may be used with instrumented persons, animals, vehicles, and any other augmented or mixed reality applications.
Type: Application
Filed: August 24, 2009
Publication date: February 25, 2010
Applicant: Cybernet Systems Corporation
Inventors: Katherine Scott, Douglas Haanpaa, Charles J. Jacobus
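The selection logic in this abstract (always use natural fiducials, add the synthetic fiducial when visible, add DLMO only when reliable) can be sketched as a small fusion function. The (x, y, heading) pose tuple and the equal-weight averaging are illustrative assumptions; the patent does not specify how the sources are combined.

```python
def fuse_pose(natural_est, synthetic_est=None, dlmo_est=None,
              dlmo_reliable=False):
    """Combine camera-pose estimates following the source-selection rule in
    the abstract: natural fiducials always contribute, the synthetic
    fiducial contributes when visible (non-None), and the directly measured
    DLMO pose contributes only when flagged reliable.

    Poses are (x, y, heading) tuples; equal-weight averaging is an
    illustrative choice, not the patented method.
    """
    estimates = [natural_est]
    if synthetic_est is not None:      # synthetic fiducial visible
        estimates.append(synthetic_est)
    if dlmo_est is not None and dlmo_reliable:
        estimates.append(dlmo_est)
    n = len(estimates)
    return tuple(sum(e[i] for e in estimates) / n for i in range(3))

# An unreliable DLMO reading is ignored; the fiducial estimates average.
pose = fuse_pose((0.0, 0.0, 0.0), synthetic_est=(2.0, 2.0, 2.0),
                 dlmo_est=(100.0, 100.0, 100.0), dlmo_reliable=False)
```

A production system would weight each source by its estimated uncertainty (e.g. a Kalman-style update) rather than averaging, but the gating of sources is the part the abstract actually claims.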