3D DATA TO 2D AND ISOMETRIC VIEWS FOR LAYOUT AND CREATION OF DOCUMENTS

- KNOCKOUT CONCEPTS, LLC

This application relates to methods for generating two-dimensional images from three-dimensional model data. A process according to the application may begin with providing a set of three-dimensional model data of a subject, and determining a set of boundaries between intersecting surfaces of the set of three-dimensional model data. A user or an algorithm may select a view of the three-dimensional model data to convert to a two-dimensional image. The process may further include determining an outline of the three-dimensional model corresponding to the selected view, and projecting the outline of the three-dimensional model and a visible portion of the set of boundaries onto a two-dimensional image plane.

Description
I. BACKGROUND OF THE INVENTION

A. Field of Invention

Embodiments generally relate to creating technical drawings from 3D model data.

B. Description of the Related Art

A variety of methods are known in the art for generating 2D images from 3D models. For instance, it is known to generate a collage of 2D renderings that represent a 3D model. It is further known to identify vertices and edges of objects in images. The prior art also includes methods for flattening 3D surfaces to 2D quadrilateral line drawings in a 2D image plane. However, the art is deficient in a number of regards. For instance, the prior art does not teach or suggest fitting a 3D point cloud to a set of continuous simple surfaces, determining the boundaries and vertices of those surfaces, and projecting them onto an image plane.

Some embodiments of the present invention may provide one or more benefits or advantages over the prior art.

II. SUMMARY OF THE INVENTION

Some embodiments may relate to a method for generating two-dimensional images, comprising the steps of: providing a set of three-dimensional model data of a subject; determining a set of boundaries between intersecting surfaces of the set of three-dimensional model data; selecting a view of the three-dimensional model data to convert to a two-dimensional image; determining an outline of the three-dimensional model data corresponding to the selected view of the three-dimensional model data; determining the portion of the set of boundaries that would be invisible in the selected view due to opacity of the subject; and projecting the outline of the three-dimensional model data and the visible portion of the set of boundaries onto a two-dimensional image plane.

Embodiments may further comprise projecting the invisible boundaries on the two-dimensional image plane in a form visually distinguishable from the visible boundaries.

According to some embodiments the form visually distinguishable from the visible boundaries comprises dashed, dotted, or broken lines.

According to some embodiments the three-dimensional model data comprises a point cloud.

Embodiments may further comprise the step of converting the point cloud to a set of continuous simple surfaces using a fitting method selected from one or more of a random sample consensus (RANSAC) method, an iterative closest point method, a least squares method, a Newtonian method, a quasi-Newtonian method, or an expectation-maximization method.

According to some embodiments a simple surface comprises a planar surface, a cylindrical surface, a spherical surface, a sinusoidal surface, or a conic surface.

According to some embodiments the step of selecting a view comprises orienting a three-dimensional model defined by the three-dimensional model data so that the planar bounded region with the largest convex hull is visible.

According to some embodiments the step of determining a set of boundaries comprises a Kreveld method, a Dey Wang method, or an iterative simple surface intersection method.

According to some embodiments the three-dimensional model data comprises a mesh.

According to some embodiments the step of determining a set of boundaries comprises finding sharp angles between intersecting simple surfaces according to a dihedral angle calculation.

Other benefits and advantages will become apparent to those skilled in the art to which it pertains upon reading and understanding of the following detailed specification.

III. BRIEF DESCRIPTION OF THE DRAWINGS

The invention may take physical form in certain parts and arrangement of parts, embodiments of which will be described in detail in this specification and illustrated in the accompanying drawings which form a part hereof and wherein:

FIG. 1 is a flowchart showing an image conversion process according to an embodiment of the invention;

FIG. 2 is a schematic view of a user capturing 3D model data with a 3D scanning device;

FIG. 3 is a drawing of a point cloud being converted into an isometric drawing;

FIG. 4 is a drawing showing the use of a set of simple surfaces for generating 2D drawings;

FIG. 5 is a drawing of a device according to an embodiment of the invention; and

FIG. 6 is an illustrative printout according to an embodiment of the invention.

IV. DETAILED DESCRIPTION OF THE INVENTION

A method for generating two-dimensional images includes determining a set of boundaries between intersecting surfaces of three-dimensional model data corresponding to an object. A specific view of the three-dimensional model data, for which the two-dimensional images are required, is selected. Upon selection of the specific view, the outline of the three-dimensional model data corresponding to the selected view is determined, and the portion of the boundaries that would be invisible in that view, due to the opacity of the object, is identified. The outline of the three-dimensional model data and the visible portion of the boundaries so determined are projected onto a two-dimensional image plane.

Referring now to the drawings, wherein the showings are for purposes of illustrating embodiments of the invention only and not for purposes of limiting the same, FIG. 1 depicts a flow diagram 100 of an illustrative embodiment wherein three-dimensional data 110 is provided for the purpose of generating corresponding two-dimensional images. The three-dimensional data may be in the form of a point cloud or a mesh representation of a three-dimensional subject. Furthermore, any other form of three-dimensional data representation, now known or developed in the future, that is capable of being converted to point cloud or mesh form may be used.

The point cloud or mesh may be further converted to a set or sets of continuous simple surfaces by using a fitting method including but not limited to a random sample consensus (RANSAC) method, an iterative closest point method, a least squares method, a Newtonian method, a quasi-Newtonian method, or an expectation-maximization method. All these methods are well understood in the art and their methodologies are incorporated by reference herein. Any simple geometric surface including but not limited to a planar surface, cylindrical surface, spherical surface, sinusoidal surface, or a conic surface may be used to represent the point cloud as the set of simple continuous surfaces.
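
By way of non-limiting illustration, the following Python sketch shows one way a single planar surface might be fit to a point cloud using the RANSAC method named above. The sketch is not part of the original application; the function name, parameters, and tolerance are illustrative assumptions only.

    import numpy as np

    def ransac_plane(points, n_iters=500, tol=0.01, seed=None):
        """Fit one plane n.x + d = 0 to an Nx3 point cloud with RANSAC,
        returning the model with the largest inlier set under `tol`."""
        rng = np.random.default_rng(seed)
        best_mask, best_model = None, None
        for _ in range(n_iters):
            # Sample three distinct points and form a candidate plane.
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(n)
            if norm < 1e-12:                 # skip degenerate (collinear) samples
                continue
            n /= norm
            d = -n.dot(p0)
            # Inliers are points within `tol` of the candidate plane.
            mask = np.abs(points @ n + d) < tol
            if best_mask is None or mask.sum() > best_mask.sum():
                best_mask, best_model = mask, (n, d)
        return best_model, best_mask

In practice such a fit would typically be run repeatedly, removing each surface's inliers before fitting the next, until the cloud is decomposed into a set of simple surfaces.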

A set of boundaries between intersecting surfaces of the three-dimensional model data is determined 112. In an illustrative embodiment this determination of a set of boundaries may be achieved by using a Kreveld method, a Dey Wang method, or an iterative simple surface intersection method. All these methods are well understood in the art and their methodologies are incorporated by reference herein. In an alternate embodiment wherein the three-dimensional model data is represented as a mesh, the set of boundaries may be determined by finding sharp angles between intersecting simple surfaces according to a dihedral angle calculation.
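
For the mesh case, the dihedral-angle test admits a compact implementation. The following Python sketch (illustrative only, not drawn from the application) flags mesh edges whose adjacent face normals differ by more than a threshold angle, which is one plausible reading of finding sharp angles according to a dihedral angle calculation.

    import numpy as np

    def sharp_edges(vertices, faces, angle_deg=30.0):
        """Return mesh edges whose dihedral angle deviates from flat
        by more than `angle_deg`; these approximate surface boundaries."""
        vertices, faces = np.asarray(vertices), np.asarray(faces)
        # Unit normal of each triangular face.
        tri = vertices[faces]                                  # (F, 3, 3)
        n = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
        n /= np.linalg.norm(n, axis=1, keepdims=True)
        # Map each undirected edge to the faces sharing it.
        edge_faces = {}
        for fi, (a, b, c) in enumerate(faces):
            for e in ((a, b), (b, c), (c, a)):
                edge_faces.setdefault(tuple(sorted(e)), []).append(fi)
        thresh = np.cos(np.radians(angle_deg))
        return [e for e, fs in edge_faces.items()
                if len(fs) == 2 and n[fs[0]].dot(n[fs[1]]) < thresh]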

Once the set of boundaries between intersecting surfaces of the three-dimensional model data is determined, a view of the image data for which two-dimensional images are required is selected 114. In one embodiment, the view may be selected by orienting a three-dimensional model defined by the three-dimensional model data so that the planar bounded region with the largest convex hull is visible. Based on the selected view, an outline of the image data corresponding to the view is determined 116. In one embodiment, the outline determination may be based upon selecting the portion of the image data extending from one visible edge to the other in the selected view. Also, the portion of the set of boundaries that would be invisible in the selected view due to opacity of the subject is determined 118. In another embodiment, the visible portion of the set of boundaries in the selected view is determined directly, thereby excluding the invisible boundaries. The determined outline and the visible portion of the set of boundaries are projected onto a two-dimensional image plane 120.
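
The projection step 120 amounts to an orthographic mapping of 3D boundary points onto the chosen image plane. A minimal sketch follows; it assumes an orthographic (not perspective) camera, and the convex-hull-area helper shown for the view-selection heuristic uses SciPy. Neither function is drawn from the application.

    import numpy as np
    from scipy.spatial import ConvexHull

    def project_orthographic(points3d, view_dir):
        """Project 3D points onto the plane orthogonal to `view_dir`."""
        w = np.asarray(view_dir, float)
        w /= np.linalg.norm(w)
        # Build two in-plane axes orthogonal to the view direction.
        helper = np.array([0.0, 1.0, 0.0]) if abs(w[0]) > 0.9 else np.array([1.0, 0.0, 0.0])
        u = np.cross(helper, w)
        u /= np.linalg.norm(u)
        v = np.cross(w, u)
        return points3d @ np.stack([u, v], axis=1)   # (N, 2) image coordinates

    def hull_area(points2d):
        """Convex hull area of projected points; a view-selection
        heuristic may keep the planar region maximizing this value."""
        return ConvexHull(points2d).volume  # .volume is the area for 2D input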

In another embodiment, the invisible portion of the boundaries may also be depicted on a 2D image plane in a manner that distinguishes the invisible boundaries from the visible boundaries. One illustrative mechanism of distinguishing invisible boundaries from visible ones may involve use of dashed, dotted, or broken lines.
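
A drafting-convention rendering of the two line classes might look like the following matplotlib sketch (illustrative only; the segment lists are assumed to come from steps 118 and 120):

    import matplotlib.pyplot as plt

    def draw_segments(visible, hidden, path="view.png"):
        """Draw visible edges solid and hidden edges dashed.
        Each segment is ((x0, y0), (x1, y1)) in image-plane coordinates."""
        fig, ax = plt.subplots()
        for (x0, y0), (x1, y1) in visible:
            ax.plot([x0, x1], [y0, y1], color="black", linestyle="-")
        for (x0, y0), (x1, y1) in hidden:
            ax.plot([x0, x1], [y0, y1], color="black", linestyle="--")
        ax.set_aspect("equal")
        ax.axis("off")
        fig.savefig(path, dpi=300)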

FIG. 2 depicts an illustrative embodiment 200 wherein a three-dimensional scanner 210 is used to scan and obtain three-dimensional model data 216 of a real-world subject 212. The three-dimensional model data 216 is obtained by scanning the subject 212 from various directions and orientations 214. The scanner 210 may be any known or future-developed 3D scanner, including but not limited to mobile devices, smart phones, or tablets configured to scan and obtain three-dimensional model data.

FIG. 3 depicts an illustrative embodiment 300 wherein the three-dimensional model data of the real world subject is represented in the form of a point cloud 310. This point cloud representation may be further converted to a set or sets of continuous simple surfaces 312. As discussed previously herein, this conversion may be achieved by using a fitting method including but not limited to a random sample consensus (RANSAC) method, an iterative closest point method, a least squares method, a Newtonian method, a quasi-Newtonian method, or an expectation-maximization method. The simple surfaces used to represent the point cloud may be any simple geometric surfaces (polygonal and cylindrical surfaces in the illustrated case), including but not limited to a planar surface, cylindrical surface, spherical surface, sinusoidal surface, or conic surface. In one embodiment, a set of boundaries between the intersecting simple surfaces is determined using various methods known in the art, including but not limited to a Kreveld method, a Dey Wang method, or an iterative simple surface intersection method. In another embodiment, where a mesh model is used instead of a point cloud, the boundaries may also be determined by finding sharp angles between intersecting simple surfaces according to a dihedral angle calculation.

FIG. 4 depicts an illustrative embodiment 400 wherein the three-dimensional model data, represented as a set of continuous simple surfaces 312, is used for 2D image generation. A view of the set of continuous simple surfaces 312 is chosen, and the determined outline and the visible portion of the set of boundaries corresponding to the chosen view are projected on a two-dimensional image plane. For example, the top view 412, the front view 416, or the side view 414 may be chosen and projected. Optionally, the invisible boundaries 418 may be depicted using dashed, dotted, or broken lines. Furthermore, because of the nature of the image data collected and reconstructed, it is possible to produce drawings having precise dimensions, such as the ones shown in FIG. 4, elements 412 and 414.
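
The standard three-view layout can be produced by axis-aligned orthographic projections, each of which simply drops the coordinate along its view direction. The following sketch lays the three views out on one sheet; the axis conventions (z up, y as depth) are assumptions, not taken from the application.

    import numpy as np
    import matplotlib.pyplot as plt

    def three_view_sheet(points, path="views.png"):
        """Place top, front, and side orthographic views on one sheet."""
        views = {"top": points[:, [0, 1]],      # looking down the z axis
                 "front": points[:, [0, 2]],    # looking along the y axis
                 "side": points[:, [1, 2]]}     # looking along the x axis
        fig, axes = plt.subplots(1, 3, figsize=(9, 3))
        for ax, (name, pts) in zip(axes, views.items()):
            ax.plot(pts[:, 0], pts[:, 1], ".", markersize=1, color="black")
            ax.set_title(name)
            ax.set_aspect("equal")
            ax.axis("off")
        fig.savefig(path, dpi=300)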

It is also contemplated to include a dimensional standard in the collected 3D model data so that drawings can be made to scale, e.g. at a 1:1 scale with measurements identical to those of the real-world object being modeled. For instance, in some embodiments the scanning device may be equipped with features for measuring its distance from the object being scanned, and may therefore be capable of accurately determining dimensions. Embodiments may also include the ability to manipulate scale, so that a drawing of a very large object can be rendered at a more manageable scale such as 1:10. It may further be advantageous to include dimensions on the 3D or 2D drawings produced according to embodiments of the invention in the form of annotations similar to those shown in FIG. 4, elements 412 and 414.
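
Scale manipulation itself is simple arithmetic: the drawn length is the true length divided by the scale denominator, while the dimension annotation keeps the true value, as is conventional in drafting. A hypothetical helper (not from the application):

    def to_scale(length_mm, denominator=10):
        """Return (drawn length, annotation text) for a 1:denominator drawing;
        the annotation retains the real-world measurement."""
        return length_mm / denominator, f"{length_mm:.1f} mm"

    # A 2540 mm edge at 1:10 occupies 254 mm on the sheet,
    # yet is still annotated "2540.0 mm".
    drawn_mm, label = to_scale(2540.0)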

FIG. 5 depicts an embodiment 500 illustrating a user device 510 with a capacitive touch screen 512 and interface. The device 510 may be configured either to carry out the method provided herein or to receive the 2D images and other related data generated using the method provided herein. The device 510 may be any device with computing and processing capabilities, including but not limited to mobile phones, tablets, smart phones, and the like. The device 510 may be adapted to display the point cloud 310 of the scanned subject and the corresponding set of continuous simple surfaces 312. The various views, such as the top view 412, side view 414, and front view 416, may also be displayed on the screen 512 of the device 510. The device 510 may connect to a printing device 520 to enable physical printing of the 2D images and other related information. It will be understood that images may be stored in the form of digital documents as well, and that the invention is not limited to printed documents. The device 510 may be connected to the printing device 520 through a wired connection 518 or wirelessly 516. The wireless connection 516 with the printing device 520 may include Wi-Fi, Bluetooth, or any other now known or future-developed method of wireless connectivity. Contextual touch screen buttons 514 on the screen 512 of the device 510 may be configured to carry out various actions, such as executing a print command, zooming in or out, or selecting different views of the set of continuous simple surfaces 312.

FIG. 6 depicts an illustrative embodiment 600 of a physical print or digital document 610 of the 2D images obtained using the methods described herein. A two-dimensional representation of the set of continuous simple surfaces 312 and various 2D images, such as the top view 412, side view 414, and front view 416, may be depicted in the document 610. The document 610 may also contain additional information in the form of notes 612 or annotations with respect to the 2D images, as well as header 614 and footer 616 sections. For instance, embodiments of the invention may include the ability to precisely measure the actual dimensions of an object being scanned; therefore, notes and annotations may include, without limitation, the volume of the object, the object's dimensions, its texture and color, its location as determined by an onboard GPS, the time and date that the scan was taken, the operator's name, or any other data that may be convenient to store with the scan data. If the average density of the object is known, even the weight of the object can be determined and displayed in the notes.
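
The volume mentioned above can be computed directly from a closed mesh by summing signed tetrahedron volumes (an application of the divergence theorem); weight then follows from an assumed average density. The sketch below is illustrative and not part of the application; the density value is an assumption for the example.

    import numpy as np

    def mesh_volume(vertices, faces):
        """Volume of a closed triangular mesh: (1/6) |sum of v0 . (v1 x v2)|
        over all faces, i.e. signed tetrahedra against the origin."""
        tri = np.asarray(vertices)[np.asarray(faces)]          # (F, 3, 3)
        signed = np.einsum("ij,ij->i", tri[:, 0],
                           np.cross(tri[:, 1], tri[:, 2]))
        return abs(signed.sum()) / 6.0

    # Example: weight from an assumed average density (aluminum, kg/m^3).
    # weight_kg = mesh_volume(verts, tris) * 2700.0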

It will be apparent to those skilled in the art that the above methods and apparatuses may be changed or modified without departing from the general scope of the invention. The invention is intended to include all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Having thus described the invention, it is now claimed:

Claims

1. A method for generating two-dimensional images, comprising the steps of:

providing a set of three-dimensional model data of a subject;
determining a set of boundaries between intersecting surfaces of the set of three-dimensional model data;
selecting a view of the three-dimensional model data to convert to a two-dimensional image;
determining an outline of the three-dimensional model data corresponding to the selected view of the three-dimensional model data; and
projecting the outline of the three-dimensional model data and a visible portion of the set of boundaries onto a two-dimensional image plane.

2. The method of claim 1, further comprising the step of determining the portion of the set of boundaries that would be invisible in the selected view due to opacity of the subject.

3. The method of claim 2, further comprising the step of projecting the invisible boundaries on the two-dimensional image plane in a form visually distinguishable from the visible boundaries.

4. The method of claim 3, wherein the form visually distinguishable from the visible boundaries comprises dashed, dotted, or broken lines.

5. The method of claim 1, wherein the three-dimensional model data comprises a point cloud.

6. The method of claim 5, further comprising the step of converting the point cloud to a set of continuous simple surfaces using a fitting method selected from one or more of a random sample consensus (RANSAC) method, an iterative closest point method, a least squares method, a Newtonian method, a quasi-Newtonian method, or an expectation-maximization method.

7. The method of claim 6, wherein a simple surface comprises a planar surface, a cylindrical surface, a spherical surface, a sinusoidal surface, or a conic surface.

8. The method of claim 1, wherein the step of selecting a view comprises orienting a three-dimensional model defined by the three-dimensional model data so that the planar bounded region with the largest convex hull is visible.

9. The method of claim 5, wherein the step of determining a set of boundaries comprises a Kreveld method, a Dey Wang method, or an iterative simple surface intersection method.

10. The method of claim 1, wherein the three-dimensional model data comprises a mesh.

11. The method of claim 10, wherein the step of determining a set of boundaries comprises finding sharp angles between intersecting simple surfaces according to a dihedral angle calculation.

12. A method for generating two-dimensional images, comprising the steps of:

providing a set of three-dimensional model data of a subject, wherein the three-dimensional model data comprises a point cloud;
converting the point cloud to a set of continuous simple surfaces using a fitting method selected from one or more of a random sample consensus (RANSAC) method, an iterative closest point method, a least squares method, a Newtonian method, a quasi-Newtonian method, or an expectation-maximization method, wherein a simple surface comprises a planar surface, a cylindrical surface, a spherical surface, a sinusoidal surface, or a conic surface;
determining a set of boundaries between the intersecting simple surfaces, wherein the step of determining a set of boundaries comprises a Kreveld method, a Dey Wang method, or an iterative simple surface intersection method;
selecting a view of the three-dimensional model data to convert to a two-dimensional image, wherein the step of selecting a view comprises orienting a three-dimensional model defined by the three-dimensional model data so that the planar bounded region with the largest convex hull is visible;
determining an outline of the three-dimensional model data corresponding to the selected view of the three-dimensional model data;
determining the portion of the set of boundaries that would be invisible in the selected view due to opacity of the subject; and
projecting the outline of the three-dimensional model data and the visible portion of the set of boundaries onto a two-dimensional image plane.

13. The method of claim 12, further comprising projecting the invisible boundaries on the two-dimensional image plane in a form visually distinguishable from the visible boundaries.

14. The method of claim 13, wherein the form visually distinguishable from the visible boundaries comprises dashed, dotted, or broken lines.

15. A method for generating two-dimensional images, comprising the steps of:

providing a set of three-dimensional model data of a subject, wherein the three-dimensional model data comprises a mesh;
determining a set of boundaries between intersecting surfaces of the set of three-dimensional model data, wherein the step of determining a set of boundaries comprises finding sharp angles between intersecting simple surfaces according to a dihedral angle calculation;
selecting a view of the three-dimensional model data to convert to a two-dimensional image;
determining an outline of the three-dimensional model data corresponding to the selected view of the three-dimensional model data;
determining the portion of the set of boundaries that would be invisible in the selected view due to opacity of the subject; and
projecting the outline of the three-dimensional model data and the visible portion of the set of boundaries onto a two-dimensional image plane.

16. The method of claim 15, further comprising projecting the invisible boundaries on the two-dimensional image plane in a form visually distinguishable from the visible boundaries.

17. The method of claim 16, wherein the form visually distinguishable from the visible boundaries comprises dashed, dotted, or broken lines.

Patent History
Publication number: 20150279087
Type: Application
Filed: Mar 27, 2015
Publication Date: Oct 1, 2015
Applicant: KNOCKOUT CONCEPTS, LLC (Columbus, OH)
Inventors: Stephen Brooks Myers (Shreve, OH), Jacob Abraham Kuttothara (Loudonville, OH), Steven Donald Paddock (Richfield, OH), John Moore Wathen (Akron, OH), Andrew Slatton (Columbus, OH)
Application Number: 14/671,420
Classifications
International Classification: G06T 15/20 (20060101); G06K 9/46 (20060101); G06T 17/10 (20060101);