DEVICE FOR DETECTING THE THREE-DIMENSIONAL GEOMETRY OF OBJECTS AND METHOD FOR THE OPERATION THEREOF

- A.TRON3D GMBH

A device for detecting the three-dimensional geometry of objects (9), in particular teeth, includes a handpiece (1) which is provided with at least one position sensor (12) for detecting the change in the spatial position of the handpiece (1), and an optical device (2) having at least one camera (5, 6) for capturing images and at least one light source (3) for at least one projector (4). The position sensor (12) in the handpiece (1) first determines the magnitude of the change in the spatial position of the device. From this, it is determined how many images the camera (5, 6) can take in a defined time unit.

Description

The invention relates to a device for detecting the three-dimensional geometry of objects, in particular teeth, comprising a handpiece that has an optical device with at least one camera and at least one light source.

Furthermore, the invention relates to a method for the operation of a device for detecting the three-dimensional geometry of objects, in particular teeth, comprising a handpiece that has at least one position sensor for detecting the change in the spatial position of the handpiece and an optical device with at least one camera for capturing images and at least one light source for a projector.

A device of the type mentioned at the outset is, for example, known from AT 508 563 B. The scope of the invention extends in this case to recording digital tooth and jaw impressions, assistance during diagnosis, supervision of dental treatment, and reliable monitoring of deployed implants. In addition to further applications in the fields of medical and industrial technology, such as in endoscopy, objects that are difficult to access can also be stereometrically measured.

The use of a position sensor is known from, for example, U.S. Pat. No. 5,661,519 A.

The object of the invention is to improve such devices so that they can be operated with the smallest possible power supply; a value of, for example, 500 mA or 900 mA is sought in this case.

With a device of the type mentioned at the outset, this object is accomplished in that the optical device has exclusively rigidly fixed components and that a means for generating light from the light source is provided in the handpiece.

With a method of the type mentioned at the outset, this object is accomplished in that the position sensor in the handpiece determines the magnitude of the change in the spatial position of the device, and it is determined therefrom how many images the camera can take in a defined time unit.

By arranging the means of generating light directly in the handpiece, long optical paths (via fiber optic cable or multiple deflection mirrors, for example) are avoided. A distinction is made here between the light source, meaning anything that can emit light (the end of a fiber optic cable, for example), and the means of generating light (a laser or the semiconductor of an LED, for example).

By eliminating long optical paths, a means of generating light with lower power can be used in order to sufficiently illuminate the object—which results in a noteworthy conservation of energy.

The rigid assembly of all elements of the optical device means that the optics of the camera cannot be focused. All calibrations of the optical device therefore take place beforehand. It is particularly important here to achieve an optimal adjustment of the aperture: a smaller aperture yields a greater depth of field, while a larger aperture requires less illumination for a sufficiently good image.
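This trade-off can be illustrated with the standard thin-lens depth-of-field formulas. The sketch below is illustrative only; the lens parameters (focal length, f-numbers, circle of confusion, working distance) are hypothetical values chosen to show the direction of the effect, not figures from the patent.

```python
# Illustrative only: standard thin-lens depth-of-field estimate.
def depth_of_field(f_mm, n, c_mm, s_mm):
    """Near and far limits of acceptable sharpness for focal length f_mm,
    f-number n, circle of confusion c_mm, and subject distance s_mm."""
    h = f_mm**2 / (n * c_mm) + f_mm                  # hyperfocal distance
    near = s_mm * (h - f_mm) / (h + s_mm - 2 * f_mm)
    far = s_mm * (h - f_mm) / (h - s_mm) if s_mm < h else float("inf")
    return near, far

# A smaller aperture (larger f-number) widens the sharp zone:
print(depth_of_field(8.0, 8.0, 0.005, 20.0))   # f/8   -> roughly 19.85..20.15 mm
print(depth_of_field(8.0, 2.8, 0.005, 20.0))   # f/2.8 -> roughly 19.95..20.05 mm
```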

In the case of blurred areas in the 2D images, the issue is dealt with in two different ways. On the one hand, areas in which the 2D images are blurry provide information about distance: depth information can be obtained from the degree of blurriness based on previously ascertained information about the surface curvature. The blurry areas can therefore be utilized as a further source of information. On the other hand, blurry points, surfaces, lines, or the like can be sharpened and thus become part of the regular (stereometric, for example) process of extracting three-dimensional data.

For this purpose the scanner is, in the course of calibration, arranged, for example, at various known distances over a flat plane. Distances that change in steps of 50 μm each have proven to be particularly suitable for this purpose. Other distances may also be used for calibration. In general, one skilled in the art can be guided in choosing the distances, or the changes between them, by the resolution of the means used to capture the two-dimensional images: the better changes in the captured two-dimensional image can be recognized, the smaller the changes in distance between the scanner and the plane that are still meaningful during calibration.

Actual detection of the three-dimensional geometry of objects is therefore preferably preceded by taking calibration images of a preferably flat surface at various known distances from the scanner, the distances preferably varying in steps of 50 μm. Furthermore, while the calibration images are taken, the central axes of the field angle of the camera are preferably aligned essentially normal to the flat surface.

For each distance, a mean brightness profile from the lightest to the darkest areas of the points, surfaces, lines, and the like is saved. During later processing of blurry two-dimensional images, it is no longer necessary to start from a statistical brightness profile in order to sharpen the images; rather, probable edges can be read from an empirical table created during calibration. The edges selected during sharpening therefore have a much higher accuracy than edges chosen by conventional processes. For a brightness profile found in a two-dimensional image, the most similar brightness profile is chosen from the table. This also makes it possible, prior to the actual analysis of the two-dimensional images, to estimate how far the area in question is from the camera, since a distance was recorded in the table for each brightness profile during calibration.
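A minimal sketch of this table idea follows, assuming grayscale profiles sampled across a projected edge; the function names, the data layout, and the L2 similarity measure are illustrative assumptions, not the patent's method.

```python
# Record a mean brightness profile at each known calibration distance, then,
# at scan time, estimate distance by finding the most similar stored profile.
import numpy as np

STEP_UM = 50  # calibration distance step suggested in the text

def build_table(profiles_by_distance):
    """profiles_by_distance: {distance_um: list of 1-D brightness profiles}."""
    return {d: np.mean(np.stack(p), axis=0) for d, p in profiles_by_distance.items()}

def estimate_distance(table, observed_profile):
    """Return the calibrated distance whose mean profile matches best (L2 norm)."""
    obs = np.asarray(observed_profile, dtype=float)
    return min(table, key=lambda d: np.linalg.norm(table[d] - obs))
```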

Calibration images in which the central axes of the field angle of the camera are tilted at certain angles to the surface are also conceivable.

In an especially preferred embodiment, the device has a facility for synchronizing the power supply of the light source and the camera. In this way, the camera and light source are operated synchronously, commensurate with a preferred implementation of the method. By using pulses of light, high momentary outputs can be achieved with comparatively low energy input. In this embodiment of the invention, the energy supply is also interrupted on the imaging end; in this way, unlit images are avoided and additional energy is saved.
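The following sketch illustrates the synchronization against a hypothetical hardware driver (Led and Camera are stand-ins, not a real API): the LED is powered only while the sensor actually integrates light, so no energy is spent on illumination between frames and no unlit frames are captured.

```python
import time

def capture_synchronized(led, camera, exposure_s, n_frames):
    frames = []
    for _ in range(n_frames):
        led.on()                      # light pulse begins with the exposure
        camera.start_exposure()
        time.sleep(exposure_s)        # both stay powered only for this window
        frames.append(camera.stop_exposure())
        led.off()                     # no illumination between frames
    return frames
```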

In another preferred embodiment, the handpiece has at least one position sensor, in particular an acceleration sensor, a magnetic field sensor, and/or an inclination sensor. Using this unit or these units, the magnitude of the change in the spatial position of the device is determined according to the method; from this, it is determined how many images should be made by the camera in a defined time unit. In this way, it is possible to avoid taking more images of the same place upon slight movement than are necessary for optimally capturing the geometry.

In this sense, the frame rate of the captured images can be changed in a preferred implementation; preferably, the frame rate is between 1 and 30 images per second. Additionally or alternatively, the frame rate can, according to a preferred implementation of the method, also be adjusted depending on whether a larger or smaller power supply is available: in the case of a larger power supply, more light pulses can be emitted and received than in the case of a smaller power supply.
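A sketch of this frame-rate logic is given below. The gain factor and the cap for the smaller supply are illustrative assumptions; the 1 to 30 fps range and the 500/900 mA supply levels come from the text.

```python
def choose_frame_rate(motion_mm_per_s, supply_ma):
    """Scale frame rate with handpiece motion, capped by the available current."""
    fps_cap = 30 if supply_ma >= 900 else 15   # assumed cap for the smaller supply
    fps = 1 + motion_mm_per_s * 2.0            # assumed gain: faster motion -> more frames
    return max(1, min(int(fps), fps_cap))

print(choose_frame_rate(0.5, 500))   # slow motion, small supply -> low rate
print(choose_frame_rate(20.0, 900))  # fast motion, large supply -> capped at 30
```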

In a potential embodiment of the invention, it can additionally be determined how many images of a defined area are recorded. Using this value, a quality can be assigned to a captured area of the object. This quality can optionally be reproduced in the 3D representation of the geometry of the object, so that the user can react to it. Areas from which only a small amount of data was captured, and which thus have a greater potential for deviations from the geometry of the object, can, for example, be displayed in red. Areas in which the number of images is already sufficient for the desired quality can, for example, be displayed in green. Additional colors are likewise conceivable for intermediate stages and for areas in which an optimal value has already been reached, meaning further images would not substantially improve the recorded data. Naturally, it is also possible to color only the areas that have a lower quality.

In the interest of energy conservation, it may be determined, according to an additional or alternative process step, how many images of a defined area have already been made. Upon reaching a defined number of images, no further images of this area are taken. Furthermore, this measure is suitable for optimizing the necessary processing steps in a processing unit that processes the recorded data, and for conserving computing power.
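The two preceding paragraphs combine naturally into one coverage bookkeeping step, sketched below under assumptions: the thresholds, the intermediate yellow stage, and the keying of surface areas by an identifier are all illustrative, not specified by the patent.

```python
from collections import defaultdict

TARGET = 10          # assumed number of images after which an area is "done"
counts = defaultdict(int)

def record_image(area_id):
    """Count a capture; return False once the area needs no further images."""
    if counts[area_id] >= TARGET:
        return False             # skip: saves light pulses and processing
    counts[area_id] += 1
    return True

def quality_color(area_id):
    """Map coverage to a display color as suggested in the text."""
    n = counts[area_id]
    if n >= TARGET:
        return "green"           # sufficient data
    return "red" if n < TARGET // 2 else "yellow"   # yellow: assumed intermediate stage
```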

In a preferred embodiment, the optical device has at least one projector for projecting patterns. Projecting patterns improves the possibilities of detecting the three-dimensional geometry.

In a furthermore preferred embodiment, the field angle of the camera and the field angle of the projector overlap by at least 50%, preferably at least 80%, and especially preferably at least 90%. The field angle is the conical area in which the projection or recording takes place. With an overlap that is as large as possible, the largest possible portion of the energy expended is utilized.
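As a worked example of what such an overlap figure means, the camera and projector footprints at the working distance can be approximated as circles and their fractional overlap computed; the geometry below (half-angles, baseline, distance) is hypothetical.

```python
import math

def circle_overlap(r1, r2, d):
    """Intersection area of two circles with radii r1, r2 and center distance d."""
    if d >= r1 + r2:
        return 0.0
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2
    a1 = r1**2 * math.acos((d**2 + r1**2 - r2**2) / (2 * d * r1))
    a2 = r2**2 * math.acos((d**2 + r2**2 - r1**2) / (2 * d * r2))
    k = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - k

dist, half_angle, baseline = 15.0, math.radians(25), 4.0   # mm, assumed values
r = dist * math.tan(half_angle)                            # footprint radius
overlap = circle_overlap(r, r, baseline) / (math.pi * r**2)
print(f"overlap: {overlap:.0%}")   # fraction of the camera footprint also lit
```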

In a preferred embodiment the device optionally has a rechargeable electrical energy storage system. This energy storage system can, according to the invention, fulfill multiple functions.

On the one hand, the storage system can, in a preferred embodiment, serve as the sole energy source of the device. In this case it is sensible for the device to additionally have a data storage system or a way of providing for wireless data transfer. The device can thus, without cables, be moved completely freely. In an embodiment in which the data is saved, it is appropriate to combine the subsequent transfer of data (via a USB connection, for example) with the charging of the energy storage system.

Alternatively, the energy storage system can, according to the invention, be an auxiliary power source of the device that is activated when necessary. For this purpose, it is initially determined, according to a preferred method, how much current is available to the device. In the embodiment example, it is particularly provided that it be determined whether 500 mA or 900 mA is available to the device, that is, whether the device is connected to a USB 2.0 port or a USB 3.0 port. Should one wish to operate the device in a mode that requires a 900 mA power supply while only a 500 mA power supply is available, the energy storage system is, according to the method, provided as an additional energy source. A power supply of, for example, 500 mA or 900 mA can be provided analogously when the device is connected to a low-power USB port, which typically supplies only 100 mA.

Alternatively or additionally, it can, in a further preferred embodiment of the invention, be determined from the ascertained value of the available power supply whether the device should be operated with two, three, or more cameras. In this way, different modes of operation are created for different power supply outputs. Preferably, for example, two cameras are operated in a mode of operation for 500 mA and three or more cameras in a mode of operation for 900 mA.
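A sketch of this mode selection follows; port detection and the Battery object are stand-ins (not a real API), while the mapping of two cameras to 500 mA and three cameras to 900 mA comes from the text.

```python
MODE_CURRENT_MA = {2: 500, 3: 900}   # cameras -> required current, per the text

def configure(port_ma, want_cameras, battery):
    """Pick camera count and how much current the battery must add."""
    need = MODE_CURRENT_MA[want_cameras]
    if port_ma >= need:
        return want_cameras, 0                      # port alone suffices
    if battery.available_ma() >= need - port_ma:
        return want_cameras, need - port_ma         # battery tops up the difference
    return 2, max(0, MODE_CURRENT_MA[2] - port_ma)  # fall back to the 500 mA mode
```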

In an especially preferred implementation of the method, the data gathered by the camera is forwarded, without further processing or conditioning, to a processing unit or a storage medium. In this way, the energy input that would otherwise be required for a processor or chip performing this processing or conditioning can be eliminated completely. Further processing in the processing unit can take place at least partially in the CPU; however, it has been found to be useful, especially with regard to data processing speed, to process part of the data gathered for detecting or calculating the three-dimensional geometry in the GPU. In this way, the data, especially the two-dimensional images taken by the cameras, can be converted without appreciable delay directly into a three-dimensional representation on a display or into a file (a 3D file in STL format, for example) on a storage medium.
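The pass-through architecture can be sketched as follows, assuming a simple producer/consumer split: the handpiece side only enqueues raw frames, the worker thread stands in for the external processing unit, and gpu_reconstruct is a placeholder for the portion of the computation that the text suggests moving to the GPU.

```python
import queue, threading

frames = queue.Queue()

def handpiece_forward(raw_frame):
    frames.put(raw_frame)            # no on-device processing: no extra chip needed

def gpu_reconstruct(frame):
    return frame                     # placeholder for GPU-side 3-D reconstruction

def host_worker(stop):
    while not stop.is_set():
        try:
            frame = frames.get(timeout=0.1)
        except queue.Empty:
            continue
        gpu_reconstruct(frame)       # CPU hands the heavy lifting to the GPU

stop = threading.Event()
threading.Thread(target=host_worker, args=(stop,), daemon=True).start()
```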

The device can, according to a preferred embodiment, have a thermovoltaic element. Using this element, electric energy can, according to a preferred implementation of the method, be obtained from the heat that is produced during operation. On the one hand, this energy can then be directly used for operating the device; on the other hand, an energy storage system can be supplied with the energy obtained, especially during device cool-down.

Additional preferred embodiments and implementations of the invention are the subject matter of the remaining dependent claims.

The invention will be subsequently further explained with reference to the drawings.

FIG. 1 shows a schematized representation of an embodiment of the invention and

FIG. 2 shows a schematic view of the underside of an embodiment of the invention.

FIG. 1 shows an example embodiment of the device comprising a handpiece 1, in which there is an optical device 2, which comprises a light source 3, a projector 4, a first camera 5, a second camera 6, and a mirror 7. In front of the mirror, there is a recess in the housing 15 of the handpiece 1. This recess is provided with a transparent cover 13 for hygienic reasons and to protect the sensitive components in the handpiece 1.

In this embodiment the light source 3 is an LED. A means for generating the light (not shown in the drawing) is located, in this embodiment example, directly in the light source 3 in the form of a semiconductor. The subsequent pathway of the light inside and outside of the device is depicted by an example light beam 8.

This beam initially passes through the projector 4. The projector 4 serves to project patterns onto the object. Depending on how the geometry of the object is captured, these may be regular patterns, such as stripes, or irregular patterns, such as irregular dot patterns.

After the projector 4, the light beam 8 encounters the mirror 7 and is deflected by it onto the object 9 whose geometry is to be captured. In the embodiment example depicted, the object 9 is a tooth. In an embodiment not shown in the drawing, in which the light source 3 and the projector 4 are already aligned in the direction of the object, the mirror 7 is unnecessary.

The cameras 5, 6 record the pattern that is projected onto the tooth 9, from which pattern the geometry of the tooth 9 will later be calculated. According to a preferred implementation, all corresponding calculations take place in a processing unit outside of the handpiece 1, whereby the power consumption of internal chipsets or processors is minimized. The device may be connected to this processing unit either physically, by a cable 14, or wirelessly. In the embodiment example, a wireless connection (Bluetooth or WLAN, for example) is provided. For this purpose there is a wireless data transfer means 10 in the handpiece, in particular a transmitter and optionally a receiver.

Furthermore, an energy storage system 11 (optionally rechargeable) is provided in the handpiece 1. In the depicted embodiment example, this serves as an auxiliary power supply of the device. The cable connected to the handpiece 1 may, however, also be completely eliminated; this offers optimal freedom of movement.

Furthermore, the drawing shows a position sensor 12. Using this sensor, the magnitude of the spatial movement of the handpiece 1 can be determined. For this purpose, the position sensor 12 can, for example, be an acceleration sensor, a terrestrial magnetic field sensor, or an inclination sensor. Combinations of different sensor types increase the precision with which the change in the spatial position, or the movement, of the handpiece 1 is determined.

FIG. 2 shows a schematic view of the underside of an embodiment of the invention. Two areas 16, 17 in which a thermovoltaic element could be placed are shown.

In the first area 16, the thermovoltaic element is arranged directly on the underside (meaning the side on which the cover 13 is located) in proximity to the optical device 2. This is advantageous because the optical device 2, especially the projector 4, produces the most heat during operation. In this way, this heat can be utilized with as little loss as possible.

Placing the thermovoltaic element in the second area 17 is advantageous because the element can be sized larger; in this case, however, a heat conductor that directs the heat from the optical device 2 to the thermovoltaic element is necessary. Even when the thermovoltaic element is positioned in the second area 17, attachment to the underside of the handpiece 1 makes sense: in this way, the outward-facing, heat-emitting side of the thermovoltaic element (according to a preferred embodiment of the invention) is not covered by the hand of the user.

Claims

1-30. (canceled)

31. Method for operating a device for detecting the three-dimensional geometry of objects, in particular teeth, with a scanner with a handpiece (1) with at least one optical device (2) with at least one camera (5, 6) for capturing images and at least one light source (3), characterized in that, before the three-dimensional geometry of objects is detected with the scanner, calibration images of a preferably flat surface are taken at different known distances.

32. Method according to claim 31, characterized in that the central axes of the field angle of the cameras are oriented essentially normal to the flat surface when taking the calibration images.

33. Method according to claim 31, characterized in that the central axes of the field angle of the camera are tilted at known angles to the surface when taking the calibration images.

34. Method according to claim 31, characterized in that brightness profiles determined when taking the calibration images (and dependent on the distances) are recorded in a table along with empirical values of the edges of the pattern.

35. Method according to claim 34, characterized in that the table is used when sharpening two-dimensional images of the camera (5, 6) in the course of detecting the three-dimensional geometry.

36. Method according to claim 31, characterized in that a pattern is projected onto the object with a projector (4).

37. Method according to claim 32, characterized in that brightness profiles determined when taking the calibration images (and dependent on the distances) are recorded in a table along with empirical values of the edges of the pattern.

38. Method according to claim 33, characterized in that brightness profiles determined when taking the calibration images (and dependent on the distances) are recorded in a table along with empirical values of the edges of the pattern.

39. Method according to claim 32, characterized in that a pattern is projected onto the object with a projector (4).

40. Method according to claim 33, characterized in that a pattern is projected onto the object with a projector (4).

41. Method according to claim 34, characterized in that a pattern is projected onto the object with a projector (4).

42. Method according to claim 35, characterized in that a pattern is projected onto the object with a projector (4).

Patent History
Publication number: 20150002649
Type: Application
Filed: Feb 2, 2013
Publication Date: Jan 1, 2015
Applicant: A.TRON3D GMBH (Klagenfurt am Wörthersee)
Inventors: Christoph Nowak (Wien), Horst Koinig (Klagenfurt), Jürgen Jesenko (Finkenstein)
Application Number: 14/376,187
Classifications
Current U.S. Class: Human Body Observation (348/77)
International Classification: G01B 11/02 (20060101); H04N 7/18 (20060101); G06T 7/00 (20060101);