Method for processing three-dimensional point cloud data
A method for processing three-dimensional point cloud data includes a data creation step, a layering step, a gridding step, a data processing step and a two-dimensional image generation step, so that the three-dimensional point cloud data can be converted into a two-dimensional image, and the two-dimensional image can correspond to, identify and store the axial depth and information point features of the point cloud data in three axes.
The present invention relates to three-dimensional point cloud data, and more particularly to a method for processing three-dimensional point cloud data.
BACKGROUND OF THE INVENTION

In recent years, more and more research on self-driving cars has addressed the identification of pedestrians and vehicles. In general, self-driving cars use Light Detection and Ranging (LiDAR) sensors to obtain information about their surroundings, so three-dimensional object recognition has become a main research topic for self-driving cars. LiDAR is used to obtain the outline of an object. The obtained contour is the contour of the object as viewed by the LiDAR, not a complete object model. If data processing is performed for only a certain dimension, the three-dimensional information of the point cloud data may be reduced, further reducing the accuracy of model training and testing. To maintain the basic information of a three-dimensional object, the contour characteristics captured by the LiDAR should be considered so that the characteristics of the three-dimensional object are preserved through three-dimensional calculation.

Neural network technology for image recognition is approaching maturity. However, three-dimensional object recognition converts three-dimensional data into two-dimensional data for identification, and depth information may be lost in the conversion, causing inaccurate identification results. For example, the Convolutional Neural Network (CNN) has excellent performance in identifying images; however, according to the architecture of the convolutional neural network, the input must be two-dimensional data. Three-dimensional point cloud data cannot be directly input into the architecture of the convolutional neural network for identification and must first be compressed into two-dimensional data. When the three-dimensional point cloud data is compressed and converted into two-dimensional data, however, information points and depth may be lost. This is the main problem encountered in three-dimensional point cloud identification.
Accordingly, the inventor of the present invention, drawing on many years of practical experience, has devoted himself to solving these problems.
SUMMARY OF THE INVENTION

The primary object of the present invention is to provide a method for processing three-dimensional point cloud data. When the three-dimensional point cloud data is converted into two-dimensional data, the axial depth of the point cloud data in three axes can be identified.
In order to achieve the aforesaid object, the present invention provides a method for processing three-dimensional point cloud data, comprising a data creation step, creating a three-dimensional coordinate and corresponding point cloud data to the three-dimensional coordinate, the three-dimensional coordinate having three axes, the point cloud data having a plurality of information points, the information points including a plurality of general information points, a space where the information points are located forming a data block at the three-dimensional coordinate; a layering step, dividing the data block into a plurality of data layers arranged in order along at least one of the axes, through the layering step, axial depths of the point cloud data and the data layers being identifiable.
The method for processing three-dimensional point cloud data provided by the present invention can identify the axial depth of the point cloud data in three axes through the data creation step and the layering step.
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings.
In the data creation step S1, a three-dimensional coordinate is created and the point cloud data 20 is corresponded to the three-dimensional coordinate. The three-dimensional coordinate has three axes, defined as an X-axis, a Y-axis and a Z-axis. The point cloud data 20 has a plurality of information points 21, and the information points 21 include a plurality of general information points. A space where the information points 21 are located forms a data block 22 at the three-dimensional coordinate.
In the layering step S2, the data block 22 is divided into a plurality of data layers 23 arranged in order along at least one of the axes. Through the layering step S2, the axial depths of the point cloud data 20 and the data layers 23 are identifiable.
In the gridding step S3, each data layer 23 is gridded into a plurality of grids 231.
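By way of example only, the layering step S2 and the gridding step S3 together amount to slicing the data block 22 along one axis and counting the information points 21 that fall into each grid 231. The following Python sketch illustrates one possible implementation; the function name layer_and_grid, the default layer count and grid shape, and the use of NumPy are choices made for this illustration and are not part of the disclosed method.

import numpy as np

def layer_and_grid(points, axis=1, num_layers=8, grid_shape=(32, 32)):
    # Slice a point cloud into data layers along one axis, then bin each
    # layer's information points into a two-dimensional grid of counts.
    #   points: (N, 3) array of XYZ coordinates (the information points)
    #   axis: axis along which the data block is sliced (0=X, 1=Y, 2=Z)
    # Returns an integer array of shape (num_layers, *grid_shape).

    # The data block is taken as the axis-aligned bounding box of the points.
    lo, hi = points.min(axis=0), points.max(axis=0)

    # Assign every information point to one data layer along the chosen axis.
    edges = np.linspace(lo[axis], hi[axis], num_layers + 1)
    layer_idx = np.clip(np.digitize(points[:, axis], edges) - 1,
                        0, num_layers - 1)

    # The two remaining axes span each layer's grid.
    u, v = [a for a in range(3) if a != axis]
    counts = np.zeros((num_layers,) + grid_shape, dtype=int)
    for i in range(num_layers):
        pts = points[layer_idx == i]
        if pts.size == 0:
            continue
        hist, _, _ = np.histogram2d(
            pts[:, u], pts[:, v], bins=list(grid_shape),
            range=[[lo[u], hi[u]], [lo[v], hi[v]]])
        counts[i] = hist
    return counts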
In the data processing step S4, the number of information points 21 included in each grid 231 is calculated, and the number of information points 21 in each grid 231 is converted into a grid value according to a conversion rule. In the first embodiment of the present invention, the grid value is in the range of 0 to 255, and the conversion rule scales the number of information points 21 in each grid 231, from its minimum value to its maximum value, to a grid value in the range of 0 to 255. When the grid value of a grid 231 is calculated, that grid 231 is defined as a target grid 27, the grid value of the target grid 27 is defined as a target grid value, and the data layer 23 where the target grid 27 is located is defined as a target data layer 28. The conversion rule is represented by the following Formula (1):

V_target = (N_target / N_max) × 255   (Formula 1)

wherein V_target is the target grid value, N_target is the number of information points 21 in the target grid 27, and N_max is the maximum value of the number of information points 21 in all of the grids 231 of the target data layer 28. The target grid value is rounded to the nearest integer.
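By way of example only, the conversion rule of Formula (1) may be sketched as follows. Note that N_max is taken over the grids 231 of the target data layer 28, so normalization is performed per data layer; the function name counts_to_grid_values is chosen for this illustration.

import numpy as np

def counts_to_grid_values(layer_counts):
    # Apply Formula (1) to one target data layer:
    #   V_target = round((N_target / N_max) * 255),
    # where N_max is the largest information-point count among all
    # grids of that layer.
    n_max = layer_counts.max()
    if n_max == 0:  # an empty layer yields grid values of 0 throughout
        return np.zeros_like(layer_counts, dtype=np.uint8)
    return np.rint(layer_counts / n_max * 255).astype(np.uint8)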
In the two-dimensional image generation step S5, a two-dimensional pixel layer is set. The two-dimensional pixel layer is composed of a plurality of pixels, and each pixel has a pixel value. The pixels and the pixel values of the two-dimensional pixel layer correspond to the grids 231 and the grid values of each data layer 23 of the at least one axis, so that the two-dimensional pixel layer generates a two-dimensional image.
In the first embodiment of the present invention, for a clear explanation, this paragraph only takes the Y-axis data layers 25 of the data block 22 of the point cloud data 20 as an example. Because the process is the same, the illustration and description of the X-axis data layers and the Z-axis data layers are omitted. The data block 22 is divided into the Y-axis data layers 25 along the Y-axis, and each Y-axis data layer 25 is gridded into a plurality of grids 251.
In the first embodiment, the two-dimensional pixel layer is a two-dimensional grayscale layer 42, the pixels 4221 are grayscale pixels, and the pixel values are grayscale values. For a clear explanation, this paragraph takes the two-dimensional grayscale layer 42 provided with one Y-axis pixel area 422 corresponding to a single Y-axis data layer 25 as an example. Because the process is the same, the illustration and description of the X-axis data layer and the Z-axis data layer are omitted. The pixels 4221 and the pixel values of the Y-axis pixel area 422 correspond to the grids 251 and the grid values of the Y-axis data layer 25, as shown in the drawings.
For a clear explanation, in this paragraph, the two-dimensional grayscale layer 42 is provided with the Y-axis pixel areas 422 corresponding to the respective Y-axis data layers 25, as an example. Because the process is the same, the illustration and description of the X-axis data layers and the Z-axis data layers are omitted. When the two-dimensional grayscale layer 42 is provided with a Y-axis pixel area 422 corresponding to each Y-axis data layer 25, the pixels 4221 and the pixel values of the Y-axis pixel areas 422 correspond to the grids 251 and the grid values of the Y-axis data layers 25, and the two-dimensional grayscale layer 42 corresponds to the axial depth information of the Y-axis data layers 25 of the point cloud data 20 and the information point features of the grids 251, as shown in the drawings.
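By way of example only, arranging a pixel area for each Y-axis data layer 25 side by side yields one two-dimensional grayscale image. The following sketch composes such an image from the grid values produced above; the horizontal tiling arrangement and the name assemble_grayscale_layer are assumptions of this illustration, as the disclosure does not fix how the pixel areas are laid out within the two-dimensional grayscale layer 42.

import numpy as np

def assemble_grayscale_layer(grid_values):
    # grid_values: (num_layers, H, W) uint8 array, one slice per data layer.
    # Each data layer occupies its own pixel area; here the areas are tiled
    # horizontally into an (H, num_layers * W) grayscale image.
    return np.hstack(list(grid_values))

# Usage with the sketches above, on a hypothetical random point cloud:
# points = np.random.rand(10000, 3)
# counts = layer_and_grid(points, axis=1, num_layers=8, grid_shape=(32, 32))
# values = np.stack([counts_to_grid_values(c) for c in counts])
# image = assemble_grayscale_layer(values)  # shape (32, 256), dtype uint8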
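By way of example only, the two-dimensional RGB pixel layer described in claim 4 below may be composed by carrying the grid values of the X-axis, Y-axis and Z-axis data layers in the R, G and B values of the pixels, respectively. The sketch assumes the three axes share the same number of data layers and the same grid shape; assemble_rgb_layer is an illustrative name.

import numpy as np

def assemble_rgb_layer(x_values, y_values, z_values):
    # Each input is a (num_layers, H, W) uint8 array of grid values for one
    # axis. The R channel carries the X-axis data layers, the G channel the
    # Y-axis data layers, and the B channel the Z-axis data layers.
    r = np.hstack(list(x_values))
    g = np.hstack(list(y_values))
    b = np.hstack(list(z_values))
    return np.dstack([r, g, b])  # shape (H, num_layers * W, 3)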
According to the method for processing the three-dimensional point cloud data 20 of the present invention, through the data creation step S1, the layering step S2, the gridding step S3, the data processing step S4 and the two-dimensional image generation step S5, the three-dimensional point cloud data 20 can be converted into a two-dimensional image to correspond to, identify and store the axial depth and information point features of the point cloud data 20 in the three axes for identification by the architecture of the convolutional neural network.
Although particular embodiments of the present invention have been described in detail for purposes of illustration, various modifications and enhancements may be made without departing from the spirit and scope of the present invention. Accordingly, the present invention is not to be limited except as by the appended claims.
Claims
1. A method for processing three-dimensional point cloud data, comprising:
- a data creation step, creating a three-dimensional coordinate and corresponding point cloud data to the three-dimensional coordinate, the three-dimensional coordinate having three axes, the point cloud data having a plurality of information points, the information points including a plurality of general information points, a space where the information points are located forming a data block at the three-dimensional coordinate;
- a layering step, dividing the data block into a plurality of data layers arranged in order along at least one of the axes, through the layering step, axial depths of the point cloud data and the data layers being identifiable; and
- a gridding step, each data layer being gridded into a plurality of grids;
- a data processing step, calculating the number of information points included in each grid and converting the number of information points in each grid into a grid value according to a conversion rule;
- a two-dimensional image generation step, setting a two-dimensional pixel layer, the two-dimensional pixel layer being composed of a plurality of pixels, each pixel having a pixel value, the pixels and the pixel values of the pixels of the two-dimensional pixel layer corresponding to the grids and the grid values of the grids of each data layer of the at least one axis for the two-dimensional pixel layer to generate a two-dimensional image.
2. The method as claimed in claim 1, wherein the three axes are defined as an X-axis, a Y-axis and a Z-axis, the data block is divided into the plurality of data layers along the X-axis, the Y-axis and the Z-axis respectively, the data layers of the data block, divided along the X-axis, are defined as X-axis data layers, the data layers of the data block, divided along the Y-axis, are defined as Y-axis data layers, and the data layers of the data block, divided along the Z-axis, are defined as Z-axis data layers; and the two-dimensional pixel layer and the two-dimensional image are identified by a convolutional neural network.
3. The method as claimed in claim 2, wherein the two-dimensional pixel layer is a two-dimensional grayscale layer, the pixels of the two-dimensional grayscale layer are grayscale pixels, the pixel values of the pixels of the two-dimensional grayscale layer are grayscale values, the two-dimensional grayscale layer is provided with X-axis pixel areas corresponding to the respective X-axis data layers, the pixels and the pixel values of the X-axis pixel areas correspond to the grids and the grid values of the X-axis data layers, the two-dimensional grayscale layer is provided with Y-axis pixel areas corresponding to the respective Y-axis data layers, the pixels and the pixel values of the Y-axis pixel areas correspond to the grids and the grid values of the Y-axis data layers, the two-dimensional grayscale layer is provided with Z-axis pixel areas corresponding to the respective Z-axis data layers, the pixels and the pixel values of the Z-axis pixel areas correspond to the grids and the grid values of the Z-axis data layers.
4. The method as claimed in claim 2, wherein the two-dimensional pixel layer is a two-dimensional RGB pixel layer, the pixels of the two-dimensional RGB pixel layer are RGB pixels, the pixels of the two-dimensional RGB pixel layer each have an R value, a G value and a B value, the two-dimensional RGB pixel layer has a plurality of RGB pixel areas, the RGB pixel areas correspond to the X-axis data layers, the pixels and the R values of the pixels of the RGB pixel areas correspond to the grids and the grid values of the X-axis data layers, the RGB pixel areas correspond to the Y-axis data layers, the pixels and the G values of the pixels of the RGB pixel areas correspond to the grids and the grid values of the Y-axis data layers, the RGB pixel areas correspond to the Z-axis data layers, the pixels and the B values of the pixels of the RGB pixel areas correspond to the grids and the grid values of the Z-axis data layers.
5. The method as claimed in claim 1, wherein the grid value is in the range of 0 to 255, the conversion rule scales a minimum value to a maximum value of the number of information points in each grid to the grid value in the range of 0 to 255.
6. The method as claimed in claim 5, wherein when the grid value of the grid is calculated, the grid is defined as a target grid, the grid value of the target grid is defined as a target grid value, the data layer where the target grid is located is defined as a target data layer, and the conversion rule is represented by the following Formula (1): V_target = (N_target / N_max) × 255, wherein V_target is the target grid value, N_target is the number of information points of the target grid, and N_max is the maximum value of the number of information points in all of the grids of the target data layer, and the target grid value is rounded.
7. The method as claimed in claim 1, wherein the information points further include a plurality of supplementary information points, and each of the supplementary information points is interposed between every adjacent two of the general information points.
8. The method as claimed in claim 1, wherein the point cloud data is rotatable and displaceable at the three-dimensional coordinate.
Type: Grant
Filed: Apr 21, 2020
Date of Patent: May 31, 2022
Patent Publication Number: 20210272301
Assignee: NATIONAL YUNLIN UNIVERSITY OF SCIENCE AND TECHNOLOGY (Yunlin)
Inventors: Chien-Chou Lin (Kaohsiung), Kuan-Chi Lin (Taipei)
Primary Examiner: Xiao M Wu
Assistant Examiner: Scott E Sonners
Application Number: 16/854,744