POINT CLOUD DATA HIERARCHY
ABSTRACT

One method embodiment comprises storing data on a storage system that is representative of a point cloud comprising a very large number of associated points; organizing the data into an octree hierarchy of data sectors, each of which is representative of one or more of the points at a given octree mesh resolution; receiving a command from a user of a user interface to present an image based at least in part upon a selected viewing perspective origin and vector; and assembling the image based at least in part upon the selected origin and vector, the image comprising a plurality of data sectors pulled from the octree hierarchy, the plurality of data sectors being assembled such that sectors representative of points closer to the selected viewing origin have a higher octree mesh resolution than that of sectors representative of points farther away from the selected viewing origin.
The present application is a continuation of U.S. patent application Ser. No. 16/014,266, filed on Jun. 21, 2018, which is a continuation of U.S. patent application Ser. No. 15/813,890, filed on Nov. 15, 2017 now abandoned, which is a continuation of U.S. patent application Ser. No. 15/486,172, filed on Apr. 12, 2017 now abandoned, which is a continuation of U.S. patent application Ser. No. 15/239,663, filed on Aug. 17, 2016 now abandoned, which is a continuation of U.S. patent application Ser. No. 14/718,594 filed on May 21, 2015 now abandoned, which is a continuation of U.S. patent application Ser. No. 13/789,554 filed on Mar. 7, 2013 now abandoned, which claims the benefit under 35 U.S.C. § 119 to U.S. Provisional Application Ser. No. 61/607,833 filed Mar. 7, 2012. The foregoing applications are hereby incorporated by reference into the present application in their entirety.
FIELD OF THE INVENTION

The present invention relates generally to point cloud processing, storage, and image construction systems and techniques, and more particularly to configurations for efficiently presenting images to an operator using one or more point subsets taken from a point cloud comprising a very large number of points.
BACKGROUND

The collection of very large point clouds has become somewhat conventional given modern scanning hardware, such as the high-definition LIDAR systems available from Velodyne Corporation of Morgan Hill, Calif., under the tradename HDL-64E™. Such systems may be coupled to vehicles such as automobiles or airplanes to create very large point datasets (i.e., in the range of billions of points or more) that can become quite unruly to process, even with modern computing equipment, due to limitations in componentry such as main computer memory. Indeed, notwithstanding current efforts to gather point cloud data to, for example, create a detailed national topography database, the processing and sharing of such data remains a challenge due to the sheer size and file structure of the point clouds. For example, if the U.S. government creates a detailed point cloud over a particular county in one state using fly-over LIDAR, and a researcher or agency desires to analyze this data using conventional techniques to determine how many stop signs are on roads within the county, such analysis will present not only a data collaboration problem, but also a storage and processing challenge, even if a clear algorithm is identified for detecting a stop sign automatically based upon a particular portion of the subject point cloud. One solution to at least some of the data sharing challenges remains to ship a hard drive from one party to another if the data fits on a hard drive, but this is obviously suboptimal relative to what the users would do with two connected client systems if they had the ability to share the dataset as if it were a much smaller dataset. Another challenge, of course, is in the processing of what likely is a relatively large point cloud with conventionally-available computing power (i.e., such as that typically available to a consumer or engineer).
There is a need for streamlined solutions for storing, processing, and collaborating using very large point cloud datasets.
SUMMARY

One embodiment is directed to a method for presenting multi-resolution views of a very large point data set, comprising: storing data on a storage system that is representative of a point cloud comprising a very large number of associated points; organizing the data into an octree hierarchy of data sectors, each of which is representative of one or more of the points at a given octree mesh resolution; receiving a command from a user of a user interface to present an image based at least in part upon a selected viewing perspective origin and vector; and assembling the image based at least in part upon the selected origin and vector, the image comprising a plurality of data sectors pulled from the octree hierarchy, the plurality of data sectors being assembled such that sectors representative of points closer to the selected viewing origin have a higher octree mesh resolution than that of sectors representative of points farther away from the selected viewing origin. Storing may comprise accessing a storage cluster. The method further may comprise using a network to intercouple the storage system, controller, and user interface. At least one portion of the network may be accessible to the internet. The method further may comprise generating the user interface with a computing system that houses the controller. The method further may comprise presenting the user interface to the user within a web browser. The user interface may be configured such that the user may adjust the selected origin and vector using an input device, causing the controller to assemble a new image based at least in part upon the adjusted origin and vector. The very large number of associated points may be greater than 1 billion points. The point cloud may have a uniform point pitch. The point cloud may have a point pitch that is less than about one meter. The point cloud may have a point pitch that is less than about 1 centimeter.
The point cloud may represent data that has been collected based upon distance measurement scans of objects. The point cloud may be representative of at least one LIDAR scan. The octree hierarchy of data sectors may be configured such that an N level sector represents a centroid of points at the N+1 level below. Each point may be weighted equally in determining the centroid. The points comprising the point cloud may not all be weighted equally in determining the centroid. The method further may comprise using the controller to store data sectors of similar octree mesh resolution in similar accessibility configurations on the storage system. The controller may be configured to store data sectors of similar octree mesh resolution on a common storage device. The controller may be configured to store data sectors of similar octree mesh resolution such that they have similar retrieval latencies from the storage system. The image may comprise a gradient of octree mesh resolution data sectors in a direction outward from the selected viewing origin.
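The N-level/N+1-level centroid relationship described above can be sketched as a recursive octree build. The following is a minimal Python illustration, not the patented implementation; it assumes equal point weighting and a cubic root cell, and all names (`OctreeNode`, `build_octree`) are hypothetical:

```python
import numpy as np

class OctreeNode:
    """One 'data sector': a cubic region of space at a given octree level."""
    def __init__(self, center, half_size, level):
        self.center = np.asarray(center, dtype=float)
        self.half_size = half_size
        self.level = level
        self.children = {}      # octant index (0-7) -> OctreeNode
        self.centroid = None    # representative point for this sector

def build_octree(points, center, half_size, level=0, max_level=4):
    """Bucket points recursively; each level-N node stores the centroid
    of all points beneath it, so a level-N sector is a coarse stand-in
    for the level-(N+1) sectors below it."""
    pts = np.asarray(points, dtype=float)
    node = OctreeNode(center, half_size, level)
    node.centroid = pts.mean(axis=0)   # equal weighting per point
    if level == max_level or len(pts) <= 1:
        return node
    # 3-bit octant index: one bit per axis, set when the point lies
    # above the cell center on that axis
    octant = ((pts > node.center).astype(int) * np.array([1, 2, 4])).sum(axis=1)
    for idx in np.unique(octant):
        sign = np.array([idx & 1, (idx >> 1) & 1, (idx >> 2) & 1]) * 2 - 1
        node.children[idx] = build_octree(
            pts[octant == idx], node.center + sign * half_size / 2,
            half_size / 2, level + 1, max_level)
    return node
```

Unequal point weighting, as also contemplated above, would simply replace the plain mean with `np.average(pts, axis=0, weights=w)` for per-point weights `w`.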
One of the important ingredients to facilitating efficient storing, processing, and collaborating using very large point cloud datasets is some kind of organizational data structure configuration, because handling all of the data in the global data set at maximum resolution would likely overburden available computing resources.
To produce a view or composite image from the point cloud for a user, the user typically first must provide some information regarding the data “frustum” of interest, or the data that he intends to be within the simulated field of view, which may be defined by a point origin within or outside of the point cloud, a vector originating at the point origin and having a three-dimensional vector orientation, and a field capture width (somewhat akin to an illumination beam width when a flashlight is shined into the dark: the field capture width is like the beam width in that it defines what may be seen by the operator in the images; it may have a cross-sectional shape, or “field capture shape,” that is substantially circular, oval, binocular, rectangular, etc.). In one embodiment, significant speed of retrieval and processing efficiencies may be obtained by producing hybrid-resolution, or multi-resolution, images or views for a user that comprise assemblies of portions of the data cloud at resolutions that increase as the sectors get closer to the origin defined for the particular view being assembled. For example, in one embodiment, if a user has a point cloud that is representative of a deep forest of many trees, and the user selects an origin, vector, and field capture width and shape to provide him with a certain view of the forest, it generally is much more efficient to provide the sectors most immediate to the selected viewpoint at a higher resolution (i.e., down the data hierarchy) than the sectors farthest away from the selected viewpoint. In other words, if the trees in the front of the view are going to block the trees to the extreme back anyway, why bring in the maximum resolution representation of the trees in the back, only to have visibility of them blocked? A lower resolution representation of these trees to the extreme back may be assembled instead.
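The distance-dependent assembly described above can be sketched as a tree walk that stops descending once a sector's octree level meets the resolution budget for its distance from the viewing origin. The following is a minimal Python illustration, not the claimed implementation; the `Sector` type, `near`/`far` bounds, and linear falloff are all assumptions for the sketch:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Sector:
    center: np.ndarray       # sector cube center
    level: int               # octree depth (0 = coarsest)
    centroid: np.ndarray     # representative point for this sector
    children: list = field(default_factory=list)

def target_level(distance, max_level, near=1.0, far=100.0):
    """Finest level for sectors at or inside `near`, coarsest at or
    beyond `far`, with a linear falloff in between (assumed bounds)."""
    t = min(max((distance - near) / (far - near), 0.0), 1.0)
    return round(max_level * (1.0 - t))

def assemble_view(sector, view_origin, max_level, out):
    """Depth-first walk: emit a sector's centroid once its level meets
    the resolution chosen for its distance, so nearby regions contribute
    fine sectors and distant regions contribute coarse ones."""
    d = float(np.linalg.norm(sector.center - np.asarray(view_origin, float)))
    if sector.level >= target_level(d, max_level) or not sector.children:
        out.append(sector.centroid)
        return
    for child in sector.children:
        assemble_view(child, view_origin, max_level, out)
```

A fuller sketch would also cull sectors that fall outside the field capture shape before descending, so only the frustum of interest is retrieved from storage.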
In one embodiment a resolution gradient may be selected to tune the difference in resolution between elements in the extreme back of the view and elements closest to the viewing origin; further, the gradient may be tuned to have linear change in resolution from back to front, nonlinear change, stepwise change at certain distance thresholds, and the like. In one embodiment the gradient variables may be tunable by an operator depending upon computing and bandwidth resources as well.
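The tunable gradient could, for instance, be expressed as interchangeable falloff functions over a normalized distance t in [0, 1]. The following is an illustrative sketch only; the function names, thresholds, and default parameters are hypothetical:

```python
def linear_falloff(t):
    """Resolution decreases uniformly from front (t=0) to back (t=1)."""
    return 1.0 - t

def power_falloff(t, gamma=2.0):
    """Nonlinear: resolution drops faster with distance when gamma > 1."""
    return (1.0 - t) ** gamma

def stepwise_falloff(t, thresholds=(0.25, 0.5, 0.75)):
    """Discrete resolution drops at fixed distance thresholds."""
    steps_past = sum(t >= th for th in thresholds)
    return 1.0 - steps_past / len(thresholds)

def level_for(distance, near, far, max_level, falloff=linear_falloff):
    """Octree level for a sector: normalize distance into [0, 1],
    then scale the chosen falloff curve to the available tree depth."""
    t = min(max((distance - near) / (far - near), 0.0), 1.0)
    return round(max_level * falloff(t))
```

An operator-facing control could then simply swap the `falloff` argument, or adjust `gamma` and the step thresholds, to trade image fidelity against computing and bandwidth resources as the passage describes.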
Various exemplary embodiments of the invention are described herein. Reference is made to these examples in a non-limiting sense. They are provided to illustrate more broadly applicable aspects of the invention. Various changes may be made to the invention described and equivalents may be substituted without departing from the true spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation, material, composition of matter, process, process act(s) or step(s) to the objective(s), spirit or scope of the present invention. Further, as will be appreciated by those with skill in the art, each of the individual variations described and illustrated herein has discrete components and features which may be readily separated from or combined with the features of any of the other several embodiments without departing from the scope or spirit of the present inventions. All such modifications are intended to be within the scope of claims associated with this disclosure.
Any of the devices described for carrying out the subject diagnostic or interventional procedures may be provided in packaged combination for use in executing such interventions. These supply “kits” may further include instructions for use and be packaged in containers as commonly employed for such purposes.
The invention includes methods that may be performed using the subject devices. The methods may comprise the act of providing such a suitable device. Such provision may be performed by the end user. In other words, the “providing” act merely requires the end user obtain, access, approach, position, set-up, activate, power-up or otherwise act to provide the requisite device in the subject method. Methods recited herein may be carried out in any order of the recited events which is logically possible, as well as in the recited order of events.
Exemplary aspects of the invention, together with details regarding material selection and manufacture have been set forth above. As for other details of the present invention, these may be appreciated in connection with the above-referenced patents and publications as well as generally known or appreciated by those with skill in the art. The same may hold true with respect to method-based aspects of the invention in terms of additional acts as commonly or logically employed.
In addition, though the invention has been described in reference to several examples optionally incorporating various features, the invention is not to be limited to that which is described or indicated as contemplated with respect to each variation of the invention. Various changes may be made to the invention described and equivalents (whether recited herein or not included for the sake of some brevity) may be substituted without departing from the true spirit and scope of the invention. In addition, where a range of values is provided, it is understood that every intervening value, between the upper and lower limit of that range and any other stated or intervening value in that stated range, is encompassed within the invention.
Also, it is contemplated that any optional feature of the inventive variations described may be set forth and claimed independently, or in combination with any one or more of the features described herein. Reference to a singular item includes the possibility that there are plural of the same items present. More specifically, as used herein and in claims associated hereto, the singular forms “a,” “an,” “said,” and “the” include plural referents unless specifically stated otherwise. In other words, use of the articles allows for “at least one” of the subject item in the description above as well as claims associated with this disclosure. It is further noted that such claims may be drafted to exclude any optional element. As such, this statement is intended to serve as antecedent basis for use of such exclusive terminology as “solely,” “only” and the like in connection with the recitation of claim elements, or use of a “negative” limitation.
Without the use of such exclusive terminology, the term “comprising” in claims associated with this disclosure shall allow for the inclusion of any additional element—irrespective of whether a given number of elements are enumerated in such claims, or the addition of a feature could be regarded as transforming the nature of an element set forth in such claims. Except as specifically defined herein, all technical and scientific terms used herein are to be given as broad a commonly understood meaning as possible while maintaining claim validity.
The breadth of the present invention is not to be limited to the examples provided and/or the subject specification, but rather only by the scope of claim language associated with this disclosure.
Claims
1. A method for presenting multi-resolution views of a very large point data set, comprising:
- a. storing data on a storage system that is representative of a point cloud comprising a very large number of associated points;
- b. organizing the data into an octree hierarchy of data sectors, each of which is representative of one or more of the points at a given octree mesh resolution;
- c. receiving a command from a user of a user interface to present an image based at least in part upon a selected viewing perspective origin and vector; and
- d. assembling the image based at least in part upon the selected origin and vector, the image comprising a plurality of data sectors pulled from the octree hierarchy, the plurality of data sectors being assembled such that sectors representative of points closer to the selected viewing origin have a higher octree mesh resolution than that of sectors representative of points farther away from the selected viewing origin.
2. The method of claim 1, wherein storing comprises accessing a storage cluster.
3. The method of claim 1, further comprising using a network to intercouple the storage system, controller, and user interface.
4. The method of claim 3, wherein at least one portion of the network is accessible to the internet.
5. The method of claim 1, further comprising generating the user interface with a computing system that houses the controller.
6. The method of claim 1, further comprising presenting the user interface to the user within a web browser.
7. The method of claim 1, wherein the user interface is configured such that the user may adjust the selected origin and vector using an input device, causing the controller to assemble a new image based at least in part upon the adjusted origin and vector.
8. The method of claim 1, wherein the very large number of associated points is greater than 1 billion points.
9. The method of claim 1, wherein the point cloud has a uniform point pitch.
10. The method of claim 1, wherein the point cloud has a point pitch that is less than about one meter.
11. The method of claim 10, wherein the point cloud has a point pitch that is less than about 1 centimeter.
12. The method of claim 1, wherein the point cloud represents data that has been collected based upon distance measurement scans of objects.
13. The method of claim 12, wherein the point cloud represents at least one LIDAR scan.
14. The method of claim 1, wherein the octree hierarchy of data sectors is configured such that an N level sector represents a centroid of points at the N+1 level below.
15. The method of claim 14, wherein each point is weighted equally in determining the centroid.
16. The method of claim 14, wherein the points comprising the point cloud are not all weighted equally in determining the centroid.
17. The method of claim 1, further comprising using the controller to store data sectors of similar octree mesh resolution in similar accessibility configurations on the storage system.
18. The method of claim 17, wherein the controller is configured to store data sectors of similar octree mesh resolution on a common storage device.
19. The method of claim 17, wherein the controller is configured to store data sectors of similar octree mesh resolution such that they have similar retrieval latencies from the storage system.
20. The method of claim 1, wherein the image comprises a gradient of octree mesh resolution data sectors in a direction outward from the selected viewing origin.
Type: Application
Filed: Jan 30, 2019
Publication Date: Jun 6, 2019
Applicant: Willow Garage, Inc. (Menlo Park, CA)
Inventors: Eitan Marder-Eppstein (San Francisco, CA), Stuart Glaser (San Francisco, CA), Wim Meeussen (Redwood City, CA)
Application Number: 16/262,710