Dental Image Feature Detection
A system includes a computing device that includes a memory configured to store instructions. The system also includes a processor to execute the instructions to perform operations that include receiving data representing one or more images of dental information associated with a patient. Operations include adjusting the data representing the one or more images of dental information into a predefined format, wherein adjusting the data includes adjusting one or more visual parameters associated with the one or more images of dental information. Operations include using a machine learning system to determine a confidence score for one or more portions of the one or more images of dental information, and producing a representation of the determined confidence scores to identify one or more detected features present in the one or more images of dental information.
This description relates to using machine learning methods to analyze and detect features, e.g., dental pathologies, in dental radiographs.
Dental radiographs are a primary diagnostic tool in dentistry. Dentists may have limited training in reading radiographs and little support from, e.g., a dedicated radiology department to assist them in their diagnoses. Due to the large volume of radiograph data and limited analysis time, false negative and false positive errors may occur and could potentially lead to health risks and increased health costs due to missed detections or false treatment.
SUMMARY

The described systems and techniques can aid dental clinicians in their ability to interpret dental images, including but not limited to intra-oral radiographic imaging (e.g., bitewing and periapical radiographs), extra-oral radiographic imaging (e.g., panoramic x-rays), computed tomography scans (CT scans) coming from a CT scanner, positron emission tomography scans (PET scans) coming from a positron emission tomography-computed tomography scanner, and magnetic resonance imaging (MRI) scans coming from an MRI scanner, to correctly identify pathological lesions. By highlighting the potential features of interest, including but not limited to potential suspicious radiolucent lesions, potential carious lesions (also called cavities), and other pathological areas, the viewer of the radiograph can quickly recognize these detected features, reducing the number of missed lesions (false negatives) and wrongly identified lesions (false positives). By employing machine learning techniques and systems to analyze radiographs, which are presentable on displays, electronic or printed reports, etc., an evaluation of the patient's health condition can be efficiently provided, thereby allowing the dental professional to make an informed decision about treatment. While many methodologies can be employed for pathology detection in dentistry, artificial intelligence techniques, such as deep learning algorithms, can exploit such radiographs and their image information for training and evaluation in an effective way. By developing such techniques, diagnostic errors in dentistry can be reduced, pathologies can be detected earlier, and the health of patients can be improved.
In one aspect, a computing device implemented method includes receiving data representing one or more images of dental information associated with a patient. The method also includes adjusting the data representing the one or more images of dental information into a predefined format. Adjusting the data includes adjusting one or more visual parameters associated with the one or more images of dental information. The method also includes using a machine learning system to determine a confidence score for one or more portions of the one or more images of dental information, and producing a representation of the determined confidence scores to identify one or more detected features present in the one or more images of dental information.
Implementations may include one or more of the following features. The method may further include transferring data representative of the one or more images of dental information associated with the patient to one or more networked computing devices for statistical analysis. The machine learning system may employ a convolutional neural network. The machine learning system may be trained with dental imagery and associated annotations. One or more annotations may be produced for each of the images of dental information. The one or more detected features may include a radiolucent lesion or an opaque lesion. The produced representation may include a graphical representation that is presentable on a user interface of the computing device. The produced representation may be used for a diagnosis and treatment plan. An alert or recommendation may be produced by using the produced representation for the diagnosis and treatment plan.
In another aspect, a system includes a computing device that includes a memory configured to store instructions. The system also includes a processor to execute the instructions to perform operations that include receiving data representing one or more images of dental information associated with a patient. Operations also include adjusting the data representing the one or more images of dental information into a predefined format. Adjusting the data includes adjusting one or more visual parameters associated with the one or more images of dental information. Operations also include using a machine learning system to determine a confidence score for one or more portions of the one or more images of dental information, and producing a representation of the determined confidence scores to identify one or more detected features present in the one or more images of dental information.
Implementations may include one or more of the following features. Operations may further include transferring data representative of the one or more images of dental information associated with the patient to one or more networked computing devices for statistical analysis. The machine learning system may employ a convolutional neural network. The machine learning system may be trained with dental imagery and associated annotations. One or more annotations may be produced for each of the images of dental information. The one or more detected features may include a radiolucent lesion or an opaque lesion. The produced representation may include a graphical representation that is presentable on a user interface of the computing device. The produced representation may be used for a diagnosis and treatment plan. An alert or recommendation may be produced by using the produced representation for the diagnosis and treatment plan.
In another aspect, one or more computer readable media storing instructions that are executable by a processing device, and upon such execution cause the processing device to perform operations that include receiving data representing one or more images of dental information associated with a patient. Operations also include adjusting the data representing the one or more images of dental information into a predefined format. Adjusting the data includes adjusting one or more visual parameters associated with the one or more images of dental information. Operations also include using a machine learning system to determine a confidence score for one or more portions of the one or more images of dental information, and producing a representation of the determined confidence scores to identify one or more detected features present in the one or more images of dental information.
Implementations may include one or more of the following features. Operations may further include transferring data representative of the one or more images of dental information associated with the patient to one or more networked computing devices for statistical analysis. The machine learning system may employ a convolutional neural network. The machine learning system may be trained with dental imagery and associated annotations. One or more annotations may be produced for each of the images of dental information. The one or more detected features may include a radiolucent lesion or an opaque lesion. The produced representation may include a graphical representation that is presentable on a user interface of the computing device. The produced representation may be used for a diagnosis and treatment plan. An alert or recommendation may be produced by using the produced representation for the diagnosis and treatment plan.
These and other aspects, features, and various combinations may be expressed as methods, apparatus, systems, means for performing functions, program products, etc.
Other features and advantages will be apparent from the description and the claims.
Referring to
In one implementation, the dental analysis system can be used not only prospectively but also retrospectively, e.g., by analyzing historical data such as patient records of a dental practice or hospital and matching them with the analyzed diagnoses and treatment recommendations of the record (e.g., in the practice management system or the electronic health record) to estimate the quality of the dental practice and to analyze whether a recall of patients is necessary because dental features, e.g., carious lesions or other pathologies, have been missed.
The dental analysis system can also provide information such as transactional information to a payor, e.g., a health insurer, when a claim is submitted. By algorithmically detecting features on the dental image and associated dental image information, the system may provide a probability factor that the diagnosis and recommended treatment of the dentist is accurate and thereby help the payor to detect various types of events (e.g., potential fraud) and conduct any additional analysis.
Upon one or more features being detected from a representation of the analyzed data, the detected features can assist in the execution of several functions, such as 1) as an assistive tool for the user, e.g., the dentist, to support his or her diagnosis and reduce false positive and false negative errors, 2) as a second opinion for a patient regarding his or her health condition and to provide transparency in the diagnosis for the user, the dentist, the patient, etc., or 3) as an education tool for the continuing education of dental professionals, dental students, etc.
The imaging machine 102, which emits x-ray beams 104 toward an x-ray sensor 106, can be part of an intra-oral radiographic imaging machine (e.g., one that produces bitewing and periapical radiographs), an extra-oral radiographic imaging machine (e.g., one that produces panoramic x-rays), a dental cone beam computed tomography machine (also called a CBCT scanner) that produces CT scans, or a machine that is not based on x-ray radiography, such as a positron emission tomography-computed tomography scanner that produces positron emission tomography scans (PET scans), a magnetic resonance imaging (MRI) scanner that produces MRI scans, etc.
Referring to
Referring to
Referring to
Referring to
In the production phase, the Input 516 is typically an image (or a set of images) without any annotation. This image is usually processed with the same Image Preprocessing Module 504 that is used in the Machine Learning Trainer 408. Then, without any further processing, the image is fed to the Trained Model 514, and the model predicts the target output (e.g., a bounding box, a heatmap, or a binary mask) for any detected feature that is present. These intermediate outputs are combined and superimposed on the original input image in Postprocessing 518, resulting in the Output 520, which can be rendered on the user's workstation.
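The production-phase flow just described (preprocess, run the trained model, postprocess, superimpose) can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual system: `trained_model` is a hypothetical stand-in for the Trained Model 514, and preprocessing is reduced here to intensity normalization.

```python
import numpy as np

def preprocess(image):
    """Mirror the training-time Image Preprocessing Module: rescale raw
    detector values (e.g., 12-bit x-ray intensities) to the range [0, 1]."""
    image = image.astype(np.float32)
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo + 1e-8)

def postprocess(image, confidence_map, threshold=0.5):
    """Superimpose detected regions on the original image for display."""
    mask = confidence_map >= threshold
    overlay = np.stack([image, image, image], axis=-1)  # grayscale -> RGB
    overlay[mask] = [1.0, 0.0, 0.0]                     # mark detections in red
    return overlay

def trained_model(x):
    """Hypothetical stand-in: any callable mapping an image to a per-pixel
    confidence map of the same height and width would fit here."""
    return np.where(x > 0.8, 0.9, 0.1)

raw = np.random.default_rng(0).integers(0, 4096, size=(64, 64))
img = preprocess(raw)
out = postprocess(img, trained_model(img))  # ready to render on a workstation
```

The same preprocessing function is deliberately shared between training and production, since a mismatch between the two would degrade the model's predictions.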
To train the machine learning trainer 408 and implement algorithms into the machine learning inference 412, one or more machine learning techniques may be employed. For example, supervised learning techniques may be implemented in which training is based on a desired output that is known for an input. Supervised learning can be considered an attempt to learn a nonlinear function that maps inputs to outputs and then to estimate outputs for previously unseen inputs (a newly introduced input). Depending on the desired output, these supervised learning methods learn different nonlinear functions and perform different tasks. The output can be simply a text or alarm that signals the presence or absence of a lesion, or some other feature of interest such as the number of teeth; this task is performed by classification methods. If the output is a continuous value, such as the size of a cavity, regression methods are used instead. On the other hand, the output can be a visual feature, such as the delineation of a tooth or a lesion, or simply a box that encloses that tooth or lesion. When the exact delineation of a feature of interest is used as the output, segmentation algorithms can be employed to perform the supervised learning task. When boxes that are superimposed on the input images, called bounding boxes, are used as the desired output, object detection algorithms are employed. Unsupervised learning techniques may also be employed, in which training is provided from known inputs but unknown outputs. Dimensionality reduction methods are examples of such techniques; they try to find patterns in the data and can create a more compact representation of the image. This compact representation can then be correlated to certain features of interest. Reinforcement learning techniques may also be used, in which the system can be considered as learning from the consequences of actions taken (e.g., input values are known).
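To make the distinction between these task types concrete, their outputs differ in shape even for the same input radiograph. The values below are hypothetical placeholders, not predictions from any model described here.

```python
import numpy as np

H, W = 256, 256  # hypothetical radiograph size

# Classification: one score per image (e.g., lesion present vs. absent).
classification_out = np.array(0.87)

# Regression: a continuous value, e.g., an estimated cavity size in mm.
regression_out = np.array(2.4)

# Segmentation: one probability per pixel, delineating the feature exactly.
segmentation_out = np.zeros((H, W), dtype=np.float32)

# Object detection: bounding boxes plus a confidence per box, here encoded
# as rows of (x, y, width, height, confidence).
detection_out = np.array([[40.0, 60.0, 32.0, 24.0, 0.91]])
```

The choice among these output forms drives the rest of the design: the loss function, the annotation format, and the postprocessing all follow from it.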
This can mainly be used for dental treatment planning, such as orthodontic treatment, to learn the optimal treatment strategy. In some arrangements, the implemented technique may employ two or more of these methodologies. In some arrangements, neural network techniques may be implemented using the data representing the images (e.g., a matrix of numerical values that represent visual elements such as pixels of an image, etc.) to invoke training algorithms for automatically learning the images and related information. Such neural networks typically employ a number of layers. Once the layers and the number of units for each layer are defined, the weights and thresholds of the neural network are typically set to minimize the prediction error through training of the network. Such techniques for minimizing error can be considered as fitting a model (represented by the network) to the training data. By using the image data (e.g., attribute vectors), a function may be defined that quantifies error (e.g., a squared error function as used in regression techniques). By minimizing error, a neural network may be developed that is capable of determining attributes for an input image. One or more techniques may be employed by the machine learning system (the machine learning trainer 408 and machine learning system 412); for example, backpropagation techniques can be used to calculate the error contribution of each neuron after a batch of images is processed. Stochastic gradient descent, also known as incremental gradient descent, can be used by the machine learning system as a stochastic approximation of gradient descent optimization, an iterative method to minimize a loss function. Other factors may also be accounted for during neural network development. For example, a model may attempt to fit the data too closely (e.g., fitting a curve to the extent that the modeling of the overall function is degraded).
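A minimal sketch of this training idea, with a toy linear model standing in for a full network: a squared-error function quantifies prediction error, and mini-batch stochastic gradient descent iteratively reduces it. The data and model here are synthetic illustrations, not anything from the described system.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: attribute vectors X mapped to targets y by an unknown function.
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

w = np.zeros(3)        # model weights, initialized before training
lr, batch = 0.05, 16   # learning rate and mini-batch size

def loss(w):
    """Squared-error function quantifying the prediction error."""
    return float(np.mean((X @ w - y) ** 2))

initial = loss(w)
for epoch in range(50):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch):      # one mini-batch per step
        b = idx[start:start + batch]
        # Gradient of the mean squared error over the mini-batch.
        grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
        w -= lr * grad                         # stochastic gradient descent
final = loss(w)
```

In a real network the gradient is computed by backpropagation rather than by this closed form, but the update rule is the same.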
Such overfitting of a neural network may occur during model training, and one or more techniques may be implemented to reduce its effects. Other types of artificial intelligence techniques may be employed by the identifier 314 (shown in
Other forms of artificial intelligence techniques may be used by the machine learning trainer 408 and machine learning inference 412. For example, to process information (e.g., images, image representations, etc.) to identify detected features of the x-ray image, such as potential cavities and periapical radiolucencies, the architecture may employ decision tree learning, which uses one or more decision trees (as a predictive model) to progress from observations about an item (represented in the branches) to conclusions about the item's target (represented in the leaves). In some arrangements, random forests or random decision forests are used, which can be considered an ensemble learning method for classification, regression, and other tasks. Such techniques generally operate by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees. Support vector machines (SVMs) can be used, which are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Ensemble learning systems may also be used for detecting features in dental images, in which multiple system members independently arrive at a result. The ensemble typically comprises not only algorithms with diverse architectures, but also algorithms trained on multiple independent data sets. In one arrangement, a convolutional neural network architecture based on U-Net can be used to perform image segmentation to identify detected features, e.g., radiolucent lesions and carious lesions on the dental x-ray images. This implementation of the network uses batch normalization after each convolutional layer and has a tunable depth.
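The two design choices named above for this U-Net variant can be sketched as follows. This is a simplified illustration: real batch normalization also carries learnable scale and shift parameters and running statistics for inference, both omitted here, and the doubling channel widths are a common convention rather than a detail from the description.

```python
import numpy as np

def batch_norm(feature_maps, eps=1e-5):
    """Normalize each channel across a batch of feature maps, as applied
    after each convolutional layer (learnable scale/shift omitted).

    feature_maps: array of shape (batch, channels, height, width).
    """
    mean = feature_maps.mean(axis=(0, 2, 3), keepdims=True)
    var = feature_maps.var(axis=(0, 2, 3), keepdims=True)
    return (feature_maps - mean) / np.sqrt(var + eps)

def unet_channel_widths(depth, base=64):
    """The tunable depth sets the number of encoder levels; the channel
    count conventionally doubles at each successive level."""
    return [base * 2 ** i for i in range(depth)]
```

Normalizing after each convolution stabilizes training, while the depth parameter trades model capacity against memory and inference time.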
The network parameters (weights) are trained using the Jaccard index metric as a loss function, where true positive, false positive, and false negative counts are measured across all images in a batch/mini-batch. The algorithm assigns to each pixel in the x-ray image a probability (e.g., a number ranging from 0 to 1, where a larger value is associated with greater confidence) that a pathology exists, which can be post-processed into various non-graphical or graphical forms (e.g., see 208). The algorithm is trained using data augmentation of the images and ground truth regions, for example one or more of rotations, scaling, random crops, translations, image flips, and elastic transformations; the amount of augmentation for each transformation is tuned to optimize performance of the algorithm on the available data. System members can be of the same type (e.g., each is a decision tree learning machine, etc.) or members can be of different types (e.g., one deep CNN system, one SVM system, one decision tree system, etc.). Upon each system member determining a result, a majority vote among the system members (or another type of voting technique) is used to determine an overall prediction result. In some arrangements, one or more knowledge-based systems such as an expert system may be employed. In general, such expert systems are designed to solve relatively complex problems by using reasoning techniques that may employ conditional statements (e.g., if-then rules). In some arrangements such expert systems may use multiple systems, such as a two sub-system design, in which one system component stores structured and/or unstructured information (e.g., a knowledge base) and a second system component applies rules, etc. to the stored information (e.g., an inference engine) to determine results of interest (e.g., select images likely to be presented).
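A batch-wise ("soft") Jaccard loss consistent with the description above can be sketched as follows; the exact formulation used by the trained network may differ.

```python
import numpy as np

def soft_jaccard_loss(pred, target, eps=1e-7):
    """Jaccard (intersection-over-union) loss measured across a batch.

    pred:   per-pixel probabilities in [0, 1], shape (batch, H, W)
    target: binary ground-truth masks, same shape

    Intersection and union are accumulated over every pixel of every image
    in the batch, which corresponds to counting true positives, false
    positives, and false negatives across the whole batch/mini-batch.
    """
    intersection = np.sum(pred * target)                   # ~ true positives
    union = np.sum(pred) + np.sum(target) - intersection   # adds FP and FN
    return 1.0 - intersection / (union + eps)
```

A perfect prediction drives the loss toward 0, and a completely wrong one toward 1; because the formulation uses probabilities rather than hard masks, it stays differentiable for gradient-based training.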
System variations may also include different hardware implementations and different uses of the system hardware. For example, multiple instances of the machine learning system identifier 314 may be executed through the use of a single graphics processing unit (GPU). In such an implementation, multiple system clients (each operating with one machine learning system) may be served by a single GPU. In other arrangements, multiple GPUs may be used. Similarly, under some conditions, a single instance of the machine learning system may be capable of serving multiple clients. Based upon changing conditions, multiple instances of a machine learning system may be employed to handle an increased workload from multiple clients. For example, environmental conditions (e.g., system throughput), client-based conditions (e.g., number of requests received per client), hardware conditions (e.g., GPU usage, memory use, etc.) can trigger multiple instances of the system to be employed, increase the number of GPUs being used, etc. Similar to taking steps to react to an increase in processing demand, adjustments can be made when less processing is needed. For example, the number of instances of a machine learning system being used may be decreased along with the number of GPUs needed to service the clients. Other types of processors may be used in place of the GPUs or in concert with them (e.g., combinations of different types of processors). For example, central processing units (CPUs), processors developed for machine learning use (e.g., an application-specific integrated circuit (ASIC) developed for machine learning and known as a tensor processing unit (TPU)), etc. may be employed. Similar to GPUs, one or more models may be provided by these other types of processors, either independently or in concert with other processors.
Referring to
Referring to
Operations of the identifier include receiving 1002 data representing one or more images of dental information associated with a patient. For example, one or multiple radiographic images may be received that contain dental information about a patient or multiple patients (e.g., jaw and teeth images). Operations also include adjusting 1004 the data representing the one or more images of dental information into a predefined format. For example, raw imagery may be processed into a DICOM format or other type of image format. Adjusting the data includes adjusting one or more visual parameters associated with the one or more images of dental information. For example, imagery, information associated with the images, etc. may be filtered or processed in other manners. Operations also include using 1006 a machine learning system to determine a confidence score for one or more portions of the one or more images of dental information. For example, a confidence score (e.g., having a numerical value from 0 to 1) can be assigned to each pixel associated with a dental image that reflects the presence of a feature (e.g., carious lesions and periapical lucencies). Operations also include producing 1008 a representation of the determined confidence scores to identify one or more detected features present in the one or more images of dental information. For example, a graphical representation (e.g., colored bounding boxes) may be presented on a graphical interface to represent the confidence scores and alert the viewer to the detected features.
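Operations 1006 and 1008 can be sketched together as follows: a per-pixel confidence map is thresholded, and neighboring high-confidence pixels are grouped into bounding boxes. The connected-component grouping is an assumption about how pixels would be merged into boxes, not a detail stated in the description.

```python
import numpy as np
from collections import deque

def confidence_to_boxes(conf, threshold=0.5):
    """Group pixels whose confidence score meets the threshold into
    connected regions and return one box per region as
    (row_min, col_min, row_max, col_max, score)."""
    mask = conf >= threshold
    seen = np.zeros_like(mask, dtype=bool)
    H, W = mask.shape
    boxes = []
    for r in range(H):
        for c in range(W):
            if not mask[r, c] or seen[r, c]:
                continue
            # Breadth-first search over 4-connected neighbors.
            queue, rows, cols = deque([(r, c)]), [r], [c]
            seen[r, c] = True
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        rows.append(ny)
                        cols.append(nx)
                        queue.append((ny, nx))
            score = float(conf[rows, cols].max())  # the region's peak confidence
            boxes.append((min(rows), min(cols), max(rows), max(cols), score))
    return boxes
```

Each resulting box carries the peak confidence of its region, which a display layer could map to, e.g., the color of the box drawn over the radiograph.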
Computing device 1100 includes processor 1102, memory 1104, storage device 1106, high-speed interface 1108 connecting to memory 1104 and high-speed expansion ports 1110, and low speed interface 1112 connecting to low speed bus 1114 and storage device 1106. Each of components 1102, 1104, 1106, 1108, 1110, and 1112 is interconnected using various busses, and can be mounted on a common motherboard or in other manners as appropriate. Processor 1102 can process instructions for execution within computing device 1100, including instructions stored in memory 1104 or on storage device 1106 to display graphical data for a GUI on an external input/output device, including, e.g., display 1116 coupled to high speed interface 1108. In other implementations, multiple processors and/or multiple busses can be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 1100 can be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
Memory 1104 stores data within computing device 1100. In one implementation, memory 1104 is a volatile memory unit or units. In another implementation, memory 1104 is a non-volatile memory unit or units. Memory 1104 also can be another form of computer-readable medium (e.g., a magnetic or optical disk). Memory 1104 may be non-transitory.
Storage device 1106 is capable of providing mass storage for computing device 1100. In one implementation, storage device 1106 can be or contain a computer-readable medium (e.g., a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, such as devices in a storage area network or other configurations.) A computer program product can be tangibly embodied in a data carrier. The computer program product also can contain instructions that, when executed, perform one or more methods (e.g., those described above.) The data carrier is a computer- or machine-readable medium, (e.g., memory 1104, storage device 1106, memory on processor 1102, and the like.)
High-speed controller 1108 manages bandwidth-intensive operations for computing device 1100, while low speed controller 1112 manages lower bandwidth-intensive operations. Such allocation of functions is an example only. In one implementation, high-speed controller 1108 is coupled to memory 1104, display 1116 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 1110, which can accept various expansion cards (not shown). In the implementation, low-speed controller 1112 is coupled to storage device 1106 and low-speed expansion port 1114. The low-speed expansion port, which can include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), can be coupled to one or more input/output devices, (e.g., a keyboard, a pointing device, a scanner, or a networking device including a switch or router, e.g., through a network adapter.)
Computing device 1100 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as standard server 1120, or multiple times in a group of such servers. It also can be implemented as part of rack server system 1124. In addition or as an alternative, it can be implemented in a personal computer (e.g., laptop computer 1122.) In some examples, components from computing device 1100 can be combined with other components in a mobile device (not shown), e.g., device 1150. Each of such devices can contain one or more of computing device 1100, 1150, and an entire system can be made up of multiple computing devices 1100, 1150 communicating with each other.
Computing device 1150 includes processor 1152, memory 1164, an input/output device (e.g., display 1154), communication interface 1166, and transceiver 1168, among other components. Device 1150 also can be provided with a storage device, (e.g., a microdrive or other device) to provide additional storage. Each of components 1152, 1164, 1154, 1166, and 1168 is interconnected using various buses, and several of the components can be mounted on a common motherboard or in other manners as appropriate.
Processor 1152 can execute instructions within computing device 1150, including instructions stored in memory 1164. The processor can be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor can provide, for example, for coordination of the other components of device 1150, e.g., control of user interfaces, applications run by device 1150, and wireless communication by device 1150.
Processor 1152 can communicate with a user through control interface 1158 and display interface 1156 coupled to display 1154. Display 1154 can be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. Display interface 1156 can comprise appropriate circuitry for driving display 1154 to present graphical and other data to a user. Control interface 1158 can receive commands from a user and convert them for submission to processor 1152. In addition, external interface 1162 can communicate with processor 1152, so as to enable near area communication of device 1150 with other devices. External interface 1162 can provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces also can be used.
Memory 1164 stores data within computing device 1150. Memory 1164 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 1174 also can be provided and connected to device 1150 through expansion interface 1172, which can include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 1174 can provide extra storage space for device 1150, or also can store applications or other data for device 1150. Specifically, expansion memory 1174 can include instructions to carry out or supplement the processes described above, and can include secure data also. Thus, for example, expansion memory 1174 can be provided as a security module for device 1150, and can be programmed with instructions that permit secure use of device 1150. In addition, secure applications can be provided through the SIMM cards, along with additional data, (e.g., placing identifying data on the SIMM card in a non-hackable manner.)
The memory can include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in a data carrier. The computer program product contains instructions that, when executed, perform one or more methods, e.g., those described above. The data carrier is a computer- or machine-readable medium (e.g., memory 1164, expansion memory 1174, and/or memory on processor 1152), which can be received, for example, over transceiver 1168 or external interface 1162.
Device 1150 can communicate wirelessly through communication interface 1166, which can include digital signal processing circuitry where necessary. Communication interface 1166 can provide for communications under various modes or protocols (e.g., GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others.) Such communication can occur, for example, through radio-frequency transceiver 1168. In addition, short-range communication can occur, e.g., using a Bluetooth®, WiFi, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module 1170 can provide additional navigation- and location-related wireless data to device 1150, which can be used as appropriate by applications running on device 1150. Sensors and modules such as cameras, microphones, compasses, accelerometers (for orientation sensing), etc. may be included in the device.
Device 1150 also can communicate audibly using audio codec 1160, which can receive spoken data from a user and convert it to usable digital data. Audio codec 1160 can likewise generate audible sound for a user, (e.g., through a speaker in a handset of device 1150.) Such sound can include sound from voice telephone calls, can include recorded sound (e.g., voice messages, music files, and the like) and also can include sound generated by applications operating on device 1150.
Computing device 1150 can be implemented in a number of different forms, as shown in the figure. For example, it can be implemented as cellular telephone 1180. It also can be implemented as part of smartphone 1182, a personal digital assistant, or other similar mobile device.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor. The programmable processor can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to a computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a device for displaying data to the user (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor), and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a backend component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a frontend component (e.g., a client computer having a user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or a combination of such backend, middleware, or frontend components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
In some implementations, the engines described herein can be separated, combined or incorporated into a single or combined engine. The engines depicted in the figures are not intended to limit the systems described here to the software architectures shown in the figures.
A number of embodiments have been described. Nevertheless, it will be understood that various modifications can be made without departing from the spirit and scope of the processes and techniques described herein. In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps can be provided, or steps can be eliminated, from the described flows, and other components can be added to, or removed from, the described systems.
Accordingly, other embodiments are within the scope of the following claims.
Claims
1. A computing device implemented method comprising:
- receiving data representing one or more images of dental information associated with a patient;
- adjusting the data representing the one or more images of dental information into a predefined format, wherein adjusting the data includes adjusting one or more visual parameters associated with the one or more images of dental information;
- using a machine learning system to determine a confidence score for one or more portions of the one or more images of dental information; and
- producing a representation of the determined confidence scores to identify one or more detected features present in the one or more images of dental information.
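For illustration only, and without limiting the claim language, the recited operations can be sketched in code. All helper names, the per-pixel scoring stand-in for the machine learning system, and the 0.5 detection threshold below are hypothetical assumptions, not part of the claims:

```python
# Hypothetical sketch of the recited operations: receive image data,
# adjust visual parameters into a predefined format, score portions
# with an ML system, and produce a representation of detected features.
from dataclasses import dataclass


@dataclass
class DentalImage:
    pixels: list   # grayscale intensities, e.g. in [0, 255]
    width: int
    height: int


def adjust_to_predefined_format(image, target_min=0.0, target_max=1.0):
    """Adjust a visual parameter (here: contrast-normalize intensities
    into a predefined [target_min, target_max] range)."""
    lo, hi = min(image.pixels), max(image.pixels)
    span = (hi - lo) or 1  # avoid division by zero on flat images
    scaled = [target_min + (p - lo) / span * (target_max - target_min)
              for p in image.pixels]
    return DentalImage(scaled, image.width, image.height)


def confidence_scores(image, model):
    """Determine a confidence score for each portion of the image.
    `model` stands in for the machine learning system; here it is
    any callable mapping a portion to a score in [0, 1]."""
    return [model(p) for p in image.pixels]


def detected_features(scores, threshold=0.5):
    """Produce a representation of the scores: the indices of portions
    whose confidence meets a (hypothetical) detection threshold."""
    return [i for i, s in enumerate(scores) if s >= threshold]


# Usage with a trivial identity stand-in for the trained model:
img = DentalImage([10, 200, 30, 250], width=2, height=2)
norm = adjust_to_predefined_format(img)
scores = confidence_scores(norm, model=lambda p: p)
print(detected_features(scores))  # → [1, 3]
```

In a real implementation the `model` callable would be a trained network (e.g., the convolutional neural network recited in claims 3, 12, and 21) operating on image regions rather than single pixels, and the produced representation could be the graphical overlay recited in claims 7, 16, and 25.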
2. The computing device implemented method of claim 1, further comprising: transferring data representative of the one or more images of dental information associated with the patient to one or more networked computing devices for statistical analysis.
3. The computing device implemented method of claim 1, wherein the machine learning system employs a convolutional neural network.
4. The computing device implemented method of claim 1, wherein the machine learning system is trained with dental imagery and associated annotations.
5. The computing device implemented method of claim 1, wherein one or more annotations are produced for each of the images of dental information.
6. The computing device implemented method of claim 1, wherein the one or more detected features include a radiolucent lesion or an opaque lesion.
7. The computing device implemented method of claim 1, wherein the produced representation includes a graphical representation that is presentable on a user interface of the computing device.
8. The computing device implemented method of claim 1, wherein the produced representation is used for a diagnosis and treatment plan.
9. The computing device implemented method of claim 8, wherein an alert or recommendation is produced by using the produced representation for the diagnosis and treatment plan.
10. A system comprising:
- a computing device comprising:
- a memory configured to store instructions; and
- a processor to execute the instructions to perform operations comprising:
- receiving data representing one or more images of dental information associated with a patient;
- adjusting the data representing the one or more images of dental information into a predefined format, wherein adjusting the data includes adjusting one or more visual parameters associated with the one or more images of dental information;
- using a machine learning system to determine a confidence score for one or more portions of the one or more images of dental information; and
- producing a representation of the determined confidence scores to identify one or more detected features present in the one or more images of dental information.
11. The system of claim 10, further comprising: transferring data representative of the one or more images of dental information associated with the patient to one or more networked computing devices for statistical analysis.
12. The system of claim 10, wherein the machine learning system employs a convolutional neural network.
13. The system of claim 10, wherein the machine learning system is trained with dental imagery and associated annotations.
14. The system of claim 10, wherein one or more annotations are produced for each of the images of dental information.
15. The system of claim 10, wherein the one or more detected features include a radiolucent lesion or an opaque lesion.
16. The system of claim 10, wherein the produced representation includes a graphical representation that is presentable on a user interface of the computing device.
17. The system of claim 10, wherein the produced representation is used for a diagnosis and treatment plan.
18. The system of claim 17, wherein an alert or recommendation is produced by using the produced representation for the diagnosis and treatment plan.
19. One or more computer readable media storing instructions that are executable by a processing device, and upon such execution cause the processing device to perform operations comprising:
- receiving data representing one or more images of dental information associated with a patient;
- adjusting the data representing the one or more images of dental information into a predefined format, wherein adjusting the data includes adjusting one or more visual parameters associated with the one or more images of dental information;
- using a machine learning system to determine a confidence score for one or more portions of the one or more images of dental information; and
- producing a representation of the determined confidence scores to identify one or more detected features present in the one or more images of dental information.
20. The computer readable media of claim 19, further comprising: transferring data representative of the one or more images of dental information associated with the patient to one or more networked computing devices for statistical analysis.
21. The computer readable media of claim 19, wherein the machine learning system employs a convolutional neural network.
22. The computer readable media of claim 19, wherein the machine learning system is trained with dental imagery and associated annotations.
23. The computer readable media of claim 19, wherein one or more annotations are produced for each of the images of dental information.
24. The computer readable media of claim 19, wherein the one or more detected features include a radiolucent lesion or an opaque lesion.
25. The computer readable media of claim 19, wherein the produced representation includes a graphical representation that is presentable on a user interface of the computing device.
26. The computer readable media of claim 19, wherein the produced representation is used for a diagnosis and treatment plan.
27. The computer readable media of claim 26, wherein an alert or recommendation is produced by using the produced representation for the diagnosis and treatment plan.
Type: Application
Filed: Apr 17, 2019
Publication Date: Oct 17, 2019
Inventor: Florian Hillen (Cambridge, MA)
Application Number: 16/387,388