SURGICAL TRAINING APPARATUS, METHODS AND SYSTEMS
Surgical training apparatus, methods and systems are disclosed which allow surgical trainees to practice surgical skills on anatomical models in a realistic manner with an augmented reality headset and delivery of targeted surgical coursework curriculum correlated to the actions of the trainee, as sensed by sensors in or adjacent to the model, to help the trainee develop proper surgical technique and decision making.
BACKGROUND OF THE INVENTION
The present invention relates to apparatus, methods and systems for surgical training. More particularly, the present invention relates to novel simulated surgical training apparatus, methods and systems which employ anatomy models with strategically placed sensors. The sensor-equipped models are used in conjunction with an augmented reality headset and computer software to deliver guided instruction and targeted curricular content to the trainee at ideal times during a simulated surgical training session.
Many surgical trainees currently practice on live patients in the operating room due to insufficient alternatives. This may lead to less than ideal patient outcomes and unnecessarily increased operation times. Studies have shown that longer operating times add to patient risk and increase the cost of care.
Training on surgical phantom models is known; however, there remains a need for improved surgical training apparatus, methods and systems. Specifically, surgical training requires both visual and verbal cues while the student is completing a motor task. Currently, the only means to provide this instruction is through a live training session with an experienced practitioner. Unfortunately, due to surgeon time constraints, in-person training is often feasible only during actual patient cases and therefore has the potential of causing harm or creating excessive costs due to surgical inefficiency.
SUMMARY OF THE INVENTION
The present invention addresses the above need by providing surgical training apparatus, methods and systems which allow surgical trainees to practice surgical skills on anatomical models in a realistic manner with an augmented reality headset and delivery of targeted surgical coursework curriculum correlated to the actions of the trainee, as sensed by sensors in or adjacent to the model, to help the trainee develop proper surgical technique and decision making.
The present invention further addresses the above need by providing in another aspect sensor-equipped surgical phantom models with integrated digital curricular content delivered through an augmented reality (AR) headset or other human-computer interface (HCI).
The present invention provides in yet another aspect information transfer between surgical phantoms, surgical tools, and computer software that allows a user to perform self-guided training.
In yet a further aspect, the present invention provides a surgical training phantom that emits signals in order to prompt delivery of curriculum content.
Signals can be generated from models using any suitable electronic components such as, for example, transducers, video images and/or strain gauges.
The generation of signals may be initiated in any one of or combination of ways, such as, for example:
- a) upon sensing a change in the model such as, for example, the marking of a proposed incision site, the making of an incision, the onset of simulated bleeding, the resection of a simulated tumor, etc.;
- b) upon sensing user movement such as user hand and/or head motions, for example;
- c) upon sensing the use of a surgical instrument such as body marking pens, suture needles, needle drivers, laparoscopic instruments, suction tips, etc.; and/or
- d) upon sensing a particular video field of view (“FOV”) within the surgical field, i.e., “what the surgeon sees” during the course of the procedure.
Signals from sensors (e.g., transducers, electromagnetic spectrum emissions including visible and non-visible frequencies) are delivered to a computer running a surgical training software program using any desired communication mode such as camera vision, Wi-Fi, Bluetooth, sound, light, wired connection, etc. Machine learning may be employed to parse data, learn from that data and make informed decisions based on what it has learned. Deep learning is a type of machine learning in which a model learns to perform classification tasks directly from images, text, or signals. Deep learning may be implemented using a neural network architecture which may be computed in real time by parallel computers. Machine learning and/or deep learning may be used to identify, process and classify objects using the signals and images from the AR headset camera, the 9-degree-of-freedom head/camera position tracker, and other signal outputs from the simulated organs and/or surgical instruments.
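By way of illustration only, such a deep-learning cue classifier might be sketched as follows (Python/PyTorch). The network size, input resolution, and class labels are illustrative assumptions, not part of the disclosure:

```python
import torch
import torch.nn as nn

# Hypothetical cue classes; a real system would define these per training module.
CUE_CLASSES = ["no_cue", "incision_site_marked", "incision_made", "bleeding_onset"]

class CueClassifier(nn.Module):
    """Tiny CNN mapping one camera frame to one of the cue classes."""
    def __init__(self, num_classes: int = len(CUE_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of RGB frames, shape (N, 3, H, W), values in [0, 1]
        return self.head(self.features(x).flatten(1))

model = CueClassifier().eval()
with torch.no_grad():
    frame = torch.rand(1, 3, 224, 224)        # stand-in for one headset frame
    probs = model(frame).softmax(dim=1)
    print("detected cue:", CUE_CLASSES[int(probs.argmax())])
```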
Signals are interpreted by the computer surgical training software program and may cause a state change in the surgical simulation training software.
The software may be programmed to deliver tutorial and “how-to” guides to the trainee that correspond to ongoing progress of the surgical phantom model training session.
In yet another aspect, the invention provides an AR platform that detects a surgical trainee's specific performance during a training procedure on a surgical model and responds to the detected performance by delivering to the trainee corresponding curricular content and/or other information. The AR headset and/or any video or camera feed including, e.g., video from a surgical instrument (laparoscope, endoscope, arthroscope, microscope, etc.), is able to detect one or more “Cues” which may be “Model Cues” and/or “Motion Cues” and/or “Still Image/Video Cues”.
“Model Cues” are discrete elements or physical conditions emanating from the model itself which are detectable by a “Cue Receiver” such as the AR headset. Examples of Model Cues include, but are not limited to, physical markings (e.g., bar codes or other symbols in visible or nonvisible inks), electronic and/or optical sensors and/or any other fiducials embedded within or applied to the outer surface of the surgical model.
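As one hedged example of how a fiducial Model Cue might be read by a Cue Receiver, the sketch below detects ArUco markers using the OpenCV 4.7+ aruco module; the marker IDs and their mapping to model landmarks are hypothetical:

```python
import cv2

# Hypothetical mapping from fiducial marker IDs to model landmarks.
LANDMARKS = {7: "port_site_18b", 11: "incision_line_26"}

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# Synthetic test frame: a generated marker padded with a white margin.
# In practice this would be a frame from the AR headset camera.
marker = cv2.aruco.generateImageMarker(dictionary, 7, 200)
frame = cv2.copyMakeBorder(marker, 40, 40, 40, 40,
                           cv2.BORDER_CONSTANT, value=255)

corners, ids, _ = detector.detectMarkers(frame)
if ids is not None:
    for marker_id, quad in zip(ids.flatten(), corners):
        center = quad[0].mean(axis=0)     # pixel center of the fiducial
        name = LANDMARKS.get(int(marker_id), "unknown")
        print(f"Model Cue {marker_id}: {name} at {center}")
```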
“Identification (ID) and/or Motion Cues” (hereinafter “ID-Motion Cues”) include detection of physical presence (static state) and/or motions by the trainee (e.g., eye, head, hand, arm movements) and/or a surgical instrument which are detectable by the AR headset. In this regard, the trainee's body parts and/or the surgical instruments (including auxiliary items which may be used in the surgery such as clips, sponges and gauze, for example) may be provided with applied (e.g., temporary stick-on) sensors and/or other fiducials that allow detection of the presence (ID) and/or motion thereof. The motion detection may or may not be made trackable through a computerized navigation system.
“Still Image/Video Cues” include image capture and video feeds from surgical cameras (e.g., laparoscopic, robotic, etc.). The AR headset may also have image capture and video feed functionality which creates the input to the surgical system training software program.
Detection of Model Cues and/or ID-Motion Cues and/or Still Image/Video Cues by the Cue Receiver generates a signal which the surgical training system software (to which the Cue Receiver is wired or wirelessly connected) is programmed to interpret as a specific action and/or anatomical reference point of the model within the context of the particular surgical training module or session.
The Model Cues are strategically positioned in or on the model in a manner which corresponds with the software programming for that particular model. More particularly, the software may be programmed for a particular surgical training module. The software may thus be programmed with an ordered (and, optionally, timed) sequence of surgical acts on the model which are indicative of a successful surgical procedure for that particular surgical training module. The types and placement of the one or more Model Cues in or on the model and/or the ID-Motion Cues and/or the Still Image/Video Cues are correlated to the programmed ordered sequence of surgical acts for the particular surgical session. Should the trainee perform surgical acts on the model that are not in agreement with the expected surgical performance as identified in the software program, the software will detect any such digressions and respond by informing the trainee of the digression from the expected surgical protocol.
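A minimal sketch of such sequence tracking follows (Python). The step names are hypothetical, and the logic illustrates the digression detection described above rather than the actual STSS implementation:

```python
# Hypothetical ordered sequence of expected surgical acts for one module.
EXPECTED_SEQUENCE = ["mark_incision_site", "make_incision",
                     "place_port_18b", "place_port_18c", "place_port_18d"]

class SessionTracker:
    def __init__(self, sequence):
        self.sequence = sequence
        self.step = 0                                 # index of expected act

    def on_cue(self, cue: str) -> str:
        if self.step >= len(self.sequence):
            return "session already complete"
        expected = self.sequence[self.step]
        if cue == expected:
            self.step += 1
            return f"OK: '{cue}' detected; advancing to act {self.step + 1}"
        return f"DIGRESSION: expected '{expected}' but detected '{cue}'"

tracker = SessionTracker(EXPECTED_SEQUENCE)
print(tracker.on_cue("mark_incision_site"))           # in order -> advance
print(tracker.on_cue("place_port_18c"))               # out of order -> flag
```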
Curriculum content and/or other information may be automatically generated and delivered to the trainee at the time of the detected digression and/or at the conclusion of the training module.
Besides being able to detect a change in the programmed ordered sequence and/or timing of Model Cue detections, the Model Cues and/or ID-Motion Cues and/or Still Image/Video Cues may provide signals to the software indicative of a surgical act being performed on the model that is not according to protocol for that training module or not meeting the surgical performance standard for that act (e.g., marking the wrong site for an incision on the model with the body marking pen, poorly executing an incision or resection, or improperly placing a surgical instrument or auxiliary item, such as leaving a sponge in the model).
The system may thus detect the current surgical training state based on a detected Model Cue and/or ID-Motion Cue and/or Still Image/Video Cue and respond by causing the corresponding curricular content and/or other information to be displayed or otherwise provided to the surgical trainee. The software may be programmed with direct visual detection algorithms including machine learning, deep learning, and/or reinforcement learning to develop the various Cue detection functions.
In another aspect, the invention provides computer software that is programmed to deliver curricular content timed appropriately to the trainee's progress on surgical training models. The software is based on an algorithmic decision tree that selects appropriate content for any given surgical training session or scenario. The software structure allows the system to time the delivery of content to the trainee in any desired manner, including immediately after a detected input, if desired. The system may be programmed to include optional playback by the trainee at any interval in the training session.
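For example, the content-selection logic might be sketched as a simple decision table keyed on the current step and the classified outcome; the asset names and delivery timings below are illustrative assumptions:

```python
# Hypothetical (step, outcome) -> (curricular asset, delivery timing) table.
CONTENT_TREE = {
    ("make_incision", "correct"):     ("video/next_step_intro.mp4", "immediate"),
    ("make_incision", "too_shallow"): ("video/incision_depth_tutorial.mp4", "immediate"),
    ("make_incision", "wrong_site"):  ("alert/wrong_site.txt", "immediate"),
}

def select_content(step: str, outcome: str):
    # Fall back to generic end-of-session guidance for unmapped branches.
    return CONTENT_TREE.get((step, outcome),
                            ("text/general_guidance.txt", "end_of_session"))

asset, timing = select_content("make_incision", "too_shallow")
print(f"deliver {asset} ({timing})")
```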
In another aspect, the invention provides computer software that summates the detected activities of the trainee and provides a performance score for individual steps taken by the trainee and/or for the entire procedure of the surgical training module. The output from the Cues described above may be summated and interpreted by machine learning based on performance differences between novices and experts, for example. The software may also be programmed to calculate a performance score or provide additional instruction to the trainee in order to improve future performance.
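A hedged sketch of such score summation follows; the step names and weights are illustrative assumptions:

```python
from collections import defaultdict

# Hypothetical per-step weights reflecting relative difficulty/importance.
STEP_WEIGHTS = {"make_incision": 2.0, "place_port_18b": 1.0}

def score_session(events):
    # events: list of (step_name, performed_correctly) tuples from the Cues.
    per_step = defaultdict(lambda: [0, 0])            # step -> [correct, total]
    for step, ok in events:
        per_step[step][0] += int(ok)
        per_step[step][1] += 1
    total = sum(STEP_WEIGHTS.get(s, 1.0) * c / t for s, (c, t) in per_step.items())
    best = sum(STEP_WEIGHTS.get(s, 1.0) for s in per_step)
    return {s: c / t for s, (c, t) in per_step.items()}, 100.0 * total / best

step_scores, overall = score_session([("make_incision", True),
                                      ("make_incision", False),
                                      ("place_port_18b", True)])
print(step_scores, f"overall score: {overall:.0f}/100")
```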
Additional objects, advantages and novel aspects of the present invention will be set forth in part in the description which follows, and will in part become apparent to those in the practice of the invention, when considered with the attached figures.
The above-mentioned and other features and advantages of this invention, and the manner of attaining them, will become apparent and be better understood by reference to the following description of the invention in conjunction with the accompanying drawings.
DETAILED DESCRIPTION
The surgical training system in its most basic form includes a surgical model, a computer (having the usual computer components including, for example but not necessarily limited to, a processor, memory, input-output interface, graphic user interface (GUI), etc.), one or more “Cue Receivers” for receiving data inputs in the form of Model Cues and/or ID-Motion Cues and/or Still Picture/Video Cues, and surgical training system software (“STSS”) running on the computer processor. The surgical model may be any organ and/or other anatomical component found in any animal or human type. The Cue Receiver may include any one or combination of AR headset, microphone, digital camera, digital video, electronic sensors, real-time clock, touchscreen, computer keyboard, joystick, mouse, trackball, image scanner, graphics tablet, or overlay keyboard, for example. More than one type of Cue Receiver may be provided on the same device (e.g., an AR headset). The Cue Receiver relays the received Cue to the STSS, which is programmed with one or more surgical training sessions or modules. The STSS is programmed to receive and respond to received Cues, generating appropriate output and teaching a surgical trainee to perform a surgical procedure on the surgical model.
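This basic data flow, with Cue Receivers relaying Cues to the STSS and the STSS generating output, might be sketched as follows (Python); the Cue structure and handler interface are illustrative assumptions, not the actual STSS interfaces:

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class Cue:
    kind: str      # "model" | "id_motion" | "still_video"
    payload: dict  # e.g. {"sensor_id": 56, "value": 0.8}

cue_queue = Queue()

def stss_loop(handlers):
    # Drain queued Cues and route each to its handler; in the real system
    # output would go to the AR headset, monitor 13 and/or speaker 15.
    while not cue_queue.empty():
        cue = cue_queue.get()
        output = handlers.get(cue.kind, lambda c: "unrecognized cue ignored")(cue)
        print(f"STSS output: {output}")

cue_queue.put(Cue("model", {"sensor_id": 56, "value": 0.8}))
stss_loop({"model": lambda c: f"sensor {c.payload['sensor_id']} activated"})
```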
Referring to the figures, the AR headset 12 may include, for example:
- 1080p DLP Projected Display Waveguide with See-Through Optics
- Wi-Fi & Bluetooth Connectivity
- 8 Megapixel Camera
- Quad Core ARM CPU
- Right Eye Monocular
- Haptic Feedback
- Voice Control
- Android 5 OS
- Noise Cancelling Microphone
- On Board Video Recording Media
The AR headset 12 may be wired or wirelessly connected to a computer having a graphic user interface (“GUI”) 17 which may be in the form of a smart phone running the STSS 19 as a downloaded software application (“app”), for example. The STSS may also be hosted remotely in the “cloud” 21 and provided to a trainee as Software as a Service (SaaS). Any other computer types may be used, such as tablets, laptops, desktops, virtual desktops, etc., whereon the STSS may be installed or accessed as SaaS. The STSS 19 may be programmed to present to the trainee a login screen on device 17, monitor 13 and/or AR headset 12 wherein the trainee may have a password-protected data file which will store the trainee's surgical training session data for later retrieval and/or playback. The STSS may connect to other servers and/or networks such as at 21b whereby the trainee's STSS file may be connected to the trainee's personal student data files hosted on, for example, the trainee's medical school server. As such, the trainee's time spent on simulated surgical training may be logged for the trainee's class credit or other purposes.
The STSS 19 may be programmed with one or more different surgical training sessions for selection by the trainee, which may be made by voice command or via the GUI on device 17, for example. The surgical training model 14 may include a bar code 14a or the like which may be scanned by a separate bar code scanner 23a connected to the computer 17 through a wired or wireless connection, or by a scanner 23b integral to the computer 17 (e.g., a scanner app of a smart phone 17) or the STSS app running thereon, which is operable to read the bar code on the model 14 and thereby identify the surgical model anatomy type the trainee wishes to train on. Each surgical model type programmed into the STSS may be associated with one or more surgical training sessions appropriate to the model type, which are displayed to the trainee. For example, the model type may be identified as a kidney and the matching surgical training session options may be presented in a list to the trainee (e.g., on media interface 17, monitor 13 and/or AR headset 12) as, e.g., (1) tumor resection; (2) kidney stone removal; (3) vessel rupture repair; etc. The trainee may select (input) the desired surgical training session (e.g., by manual input using the GUI and a touchscreen, keyboard, or mouse, and/or by visual (e.g., eye tracking) and/or voice command) and the STSS is programmed to respond to the input by launching the trainee's chosen surgical training session of the STSS program. Depending on the training session chosen, certain sensor features of the model may be automatically activated by the STSS (but not necessarily triggered), as discussed further below.
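A minimal sketch of this bar-code-to-session lookup follows; the code values and session lists are illustrative assumptions:

```python
# Hypothetical bar code values and per-model session lists.
MODEL_BY_BARCODE = {"0123456789": "kidney"}
SESSIONS_BY_MODEL = {
    "kidney": ["tumor resection", "kidney stone removal", "vessel rupture repair"],
}

def sessions_for(barcode: str):
    # Resolve the scanned code to a model type, then list matching sessions.
    model = MODEL_BY_BARCODE.get(barcode)
    if model is None:
        raise ValueError(f"unknown model bar code: {barcode}")
    return model, SESSIONS_BY_MODEL[model]

model, options = sessions_for("0123456789")
print(f"{model}: select one of {options}")
```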
As mentioned above, the computer input by a Cue Receiver such as the AR headset 12 during a surgical training session may include Cues in the form of any one or combination of Model Cues and/or ID-Motion Cues and/or Still Picture/Video Cues. The various Cue inputs are analyzed by the STSS as they are received via the Cue Receiver, with the STSS responding by generating output in the form of corresponding curricular content and/or other useful information (e.g., alerts of a surgical emergency being detected, an unsafe procedure being performed, or an amended or supplanted protocol to follow due to a deviation from protocol). The STSS output may be provided in any one or a combination of desired formats including audio output and/or display on the AR headset 12 and/or on a separate video monitor 13 and/or speaker 15 in the operating training room. The generated audio output may be provided to the trainee in the form of alarms and/or verbal instructions, for example. As such, in this embodiment the trainee receives the generated output for their consideration during the training (in real time) so that they may understand whether their performance is correct, in general need of improvement, and/or requires a change to the surgical protocol itself to rectify or address any identified issues. The form and content of the generated output may be programmed into the STSS for each specific surgical training session. The content may be in the form of educational curriculum stored in a database 11.
An example of a surgical training session is seen in the figures.
As stated above, if it is desired to show the trainee the proper port location, the STSS may either display an image of an abdomen with the port in the trainee's AR headset, or overlay an image of the port location onto the surgical model. The AR headset recognizes the correct location of port 18b by any desired Cue, such as a surface marking (e.g., by scanning barcode 22) or another fiducial such as navel 20. Again, this may be as subtle as a slight color change in the model, or the use of applied inks or other colorants outside the human visible spectrum, to prevent trainees from relying too heavily on markings which would not be present in actual surgery.
After second port 18b is correctly placed, as detected by the appropriate Cues which relay their received data as input to the STSS (see discussion above), the trainee is directed by the STSS via AR headset 12 to place a third port 18c at location 24. Location 24 may include a Cue in any desired form including, for example, a bar code on the model as described above, or a sensor which may be embedded in the model (and thus not visible to the human eye) and detectable by the AR headset 12. The sensor (e.g., a pressure sensor) may, upon activation, generate a signal which is detected by the AR headset 12 and informs the STSS that third port 18c has been properly placed; the STSS is programmed to respond by generating guidance to the trainee (e.g., text and/or verbal instructions via AR headset 12 and/or monitor 13 and/or speaker 15) to place fourth port 18d. After the laparoscopic training procedure is finished, as detected by a Cue and relayed to the STSS, the trainee may be instructed by the STSS to remove a specimen from the model (e.g., a simulated tumor) and to create an incision at 26.
Referring now to a further example, the trainee proceeds with the surgical simulation session by cutting an incision 34 in the model surface 32 using scalpel 30. The AR headset and STSS may be programmed to calculate the length and/or depth of the incision 34 based on Cues such as visual appearance and/or fiducial references. For example, the Cue may be provided in the form of a 1 cm square fiducial detected on the model “skin” surface, wherein the STSS may be programmed to calculate distance based on the visual detection of the incision relative to the fiducial. Alternatively, Model Cues in the form of electronic sensors could be spaced a certain distance apart, and the number of sensors detected in linear or curvilinear sequence can be used by the STSS to calculate distance (length of incision).
Depth (the distance from the model surface into the body of the model) can be provided by a Video Cue and/or Motion Cue based on the amount of the scalpel that has extended beneath the upper surface of the model or “disappeared” into the model. The scalpel blade in this case is the visual cue and is detected by the AR headset 12, which can detect and relay to the STSS what percentage of the blade has disappeared into the model. The STSS can be programmed to use this data to calculate incision depth and provide appropriate instruction to the trainee if it calculates that the incision 34 has not been correctly executed, e.g., if it does not meet the minimum programmed threshold for incision depth.
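Both measurements might be computed as in the sketch below; the pixel values and the assumed blade length are illustrative:

```python
def incision_length_cm(incision_px: float, fiducial_px: float) -> float:
    # fiducial_px: measured pixel width of the 1 cm square fiducial, which
    # gives the pixels-per-centimeter scale for the current view.
    return incision_px / fiducial_px

def incision_depth_cm(visible_blade_px: float, full_blade_px: float,
                      blade_length_cm: float = 4.0) -> float:
    # Depth ~ hidden fraction of the blade; blade_length_cm is an assumption.
    hidden_fraction = 1.0 - visible_blade_px / full_blade_px
    return max(0.0, hidden_fraction) * blade_length_cm

print(incision_length_cm(incision_px=240, fiducial_px=60))        # -> 4.0 cm
print(incision_depth_cm(visible_blade_px=90, full_blade_px=120))  # -> 1.0 cm
```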
The STSS programming may be made such that it selects information to provide to the trainee based on which sensors, how many sensors, and/or the order in which sensors are activated during a specific training session or any segment thereof.
If there are multiple organs present in the model, sensors within an adjacent organ may be provided to inform the trainee if he or she has damaged or wrongly entered the surrounding space of that adjacent organ.
The STSS programming may instruct the trainee to continue the training session at any time during the session. For example, the programming may provide the trainee instructions to use suction tube 52 and forceps 54 to retract tumor 42. As the trainee retracts the tumor with the use of suction tube 52 and/or forceps 54, a pressure sensor 56 embedded in tumor 42 may be pressed upon and thus activated. The STSS programming may include threshold pressure values indicative of correct retraction pressure. If insufficient retraction occurs based on low signal from tumor pressure sensor 56, the STSS may provide an alert to the trainee, e.g., to use suction tube 52 to perform more retraction.
This tumor resection training session may be programmed in the STSS to require ligation of blood vessel 44 as part of the procedure for tumor removal. The STSS programming will recognize ligation of the vessel when sensor 58 senses a threshold pressure. If a suture is placed around vessel 44 but is not sufficiently tight, the STSS programming can instruct the trainee to redo or tighten the suture to prevent further bleeding, for example.
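A sketch of the threshold logic for retraction sensor 56 and ligation sensor 58 follows; the normalized threshold values are illustrative assumptions:

```python
# Illustrative normalized pressure thresholds (assumptions, not disclosed values).
RETRACTION_MIN = 0.30   # tumor retraction sensor 56
LIGATION_MIN = 0.60     # vessel ligation sensor 58

def check_retraction(p56: float) -> str | None:
    if p56 < RETRACTION_MIN:
        return "Insufficient retraction: use suction tube 52 to retract more."
    return None

def check_ligation(p58: float) -> str | None:
    if p58 < LIGATION_MIN:
        return "Suture too loose: redo or tighten the suture around vessel 44."
    return None

for alert in (check_retraction(0.2), check_ligation(0.7)):
    if alert:
        print(alert)       # routed to AR headset 12, monitor 13 and/or speaker 15
```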
Simulated kidney tumor 64 and its border may be identified by the STSS programming by sensed color difference between the tumor 64 and the kidney model substrate 60 surrounding the tumor 64. The edge of the kidney model 60 (which in a real kidney is typically covered by fat) has unique markings that are detected and inform the STSS programming that this portion of the kidney has been exposed. The trainee is required to evaluate the entire kidney during surgery to ensure that there are no missed lesions or other abnormalities. The STSS will instruct the trainee to “uncover” or “expose” the kidney until marking 66 is detected by the STSS programming.
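As one way the color-difference detection of tumor 64 against the kidney substrate 60 might be realized, the sketch below thresholds a synthetic frame in HSV space with OpenCV; the colors and hue range are illustrative assumptions:

```python
import cv2
import numpy as np

# Synthetic stand-in for a camera frame: a green "kidney" substrate with a
# yellow "tumor" region (colors are illustrative assumptions).
frame = np.full((240, 320, 3), (60, 120, 60), np.uint8)
cv2.circle(frame, (160, 120), 40, (0, 200, 230), -1)

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
tumor_mask = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))   # assumed tumor hue

contours, _ = cv2.findContours(tumor_mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
if contours:
    border = max(contours, key=cv2.contourArea)   # tumor 64 border polygon
    print(f"tumor border found, area {cv2.contourArea(border):.0f} px^2")
```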
During resection of the tumor 64, the trainee must identify the renal artery 68a and renal vein 68b. The STSS programming provides instruction to the trainee to place a temporary clip 70 on only artery 68a. If incorrectly placed (as detected by any one or more of Model Cues and/or ID-Motion Cues and/or Still Image/Video Cues), the STSS programming may inform the trainee that the clip has been improperly placed and/or instruct the trainee to move clip 70 to the correct position. For example, should the trainee place the clip on vein 68b, this would be detected (e.g., by a sensor placed in or on vein 68b or by visual input through a camera) and the STSS programming would identify it as a medical emergency, as placement on the vein would cause the kidney to have blood flow in but not out, potentially causing the kidney to burst. Furthermore, the correct clip position is perpendicular to the vessel, and the tips of the clip should cross the edges of the vessel. Visual inspection (e.g., color difference between clip and vessel) may allow the STSS to assess any overlap and the positioning of the clip relative to the artery.
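A hedged sketch of such a perpendicularity-and-overlap check follows; the axes, spans and tolerance are illustrative assumptions:

```python
import math

def angle_between_deg(v1, v2):
    # Unsigned angle between two 2-D direction vectors, in degrees.
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def clip_placement_ok(clip_axis, vessel_axis, tip_span_px, vessel_width_px,
                      tolerance_deg=15.0):
    # Correct placement: clip roughly perpendicular to the vessel, with the
    # clip tips spanning past the vessel edges.
    perpendicular = abs(angle_between_deg(clip_axis, vessel_axis) - 90.0) <= tolerance_deg
    crosses_edges = tip_span_px >= vessel_width_px
    return perpendicular and crosses_edges

print(clip_placement_ok(clip_axis=(0, 1), vessel_axis=(1, 0),
                        tip_span_px=42, vessel_width_px=30))   # -> True
```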
While the apparatus, methods and systems of the invention have been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as described.
Claims
1. A surgical training system, comprising:
- a) a model of an anatomical part of a human or animal;
- b) one or more sensors attached to said model, said one or more sensors operable to emit a signal in response to receiving an activation input;
- c) an augmented reality headset having one or more electronic and/or optical input and output channels adapted to receive electronic signals from said one or more sensors;
- d) a computer processor operable to receive signals from said augmented reality headset and/or said one or more sensors;
- e) a computer database having surgical training curriculum having one or more individual surgical subject matter components stored therein; and
- f) a software program running on said computer processor and connected to said database, said software program operable to correlate said one or more sensor signals to said one or more individual surgical subject matter components, said software program being further operable to provide as an output the individual surgical subject matter component which is correlated to a received signal.
2. The surgical training system of claim 1, wherein a machine learning model receives visual and/or electronic signal cues from one or more sensors attached to the model or the augmented reality headset.
3. The surgical training system of claim 2, wherein the software program tracks the state of progress of steps of a surgical procedure from beginning to end.
4. The surgical training system of claim 3, wherein each step of the surgical procedure has one or more machine learning models having two or more machine learning classes which determine transition to a next step of the surgical procedure.
5. The surgical training system of claim 4, wherein the one or more machine learning models predict a class which corresponds to a correct action for a particular step of the surgical procedure, and an affirmative instruction is displayed to the student through a projected display waveguide with see through optics of the augmented reality headset for the particular step.
6. The surgical training system of claim 5, wherein the one or more machine learning models predict a class which corresponds to an incorrect action for a particular step of the surgical procedure, and a corrective instruction is displayed to the student through the projected display waveguide with see through optics of the augmented reality headset for the particular step.
7. The surgical training system of claim 6, wherein at the end of the surgical procedure, the software program is operable to tally all occurrences of correct actions and incorrect actions in order to produce a quantitative score which tracks the state of progress of each step of the surgical procedure.
8. The surgical training system of claim 7, wherein at the end of the surgical procedure, a recording of the surgical procedure is saved so that it may be reviewed by the student and/or instructor for instructional or proficiency scoring purposes.
9. A procedure training system comprising:
- (a) a physical model associated with performance of one or more procedures by a trainee, each of the one or more procedures having a finite number of defined steps, each step having one or more cues associated therewith;
- (b) one or more cue receivers operable to receive one or more cues during performance of the one or more procedures associated with the physical model, and generate a signal in response to receiving the one or more cues;
- (c) a computer processor operable to receive signals from said one or more cue receivers;
- (d) a computer database having training curriculum having one or more individual subject matter components stored therein for each of the one or more procedures; and
- (e) a software program running on said computer processor and connected to said database, said software program operable to correlate said received signals to said one or more individual subject matter components for a particular procedure, and provide as an output the individual subject matter component which is correlated to a received signal,
- wherein the one or more cue receivers includes an augmented reality (AR) headset having one or more electronic and/or optical input and output channels adapted to send and receive said signals corresponding to said received cues.
10. The procedure training system of claim 9, wherein the one or more cues include any one or combination of:
- model cues for detecting discrete elements or conditions associated with the physical model itself,
- identification and/or motion (ID-motion) cues for detecting physical presence or motions by the trainee or an instrument used by the trainee, and/or
- still picture/video cues including image capture and video feeds.
11. The procedure training system of claim 10, wherein the one or more cue receivers further include any one or combination of:
- one or more image scanners,
- one or more cameras operable to capture images and/or videos, and
- one or more sensors in, on, near, or associated with the physical model.
12. The procedure training system of claim 11, wherein the one or more cue receivers generate said signals based on any one or combination of a change in the physical model, a movement of the trainee, use of an instrument by the trainee, and/or a particular field of view during the course of the procedure.
13. The procedure training system of claim 10, wherein the software program is programmed with one or more machine learning (ML) models operable to identify, process, and classify the received signals for each step of the one or more procedures,
- wherein the one or more ML models have two or more classes which determine a transition to a next step from a current step of a particular procedure, and
- wherein the one or more ML models enable the software program to determine and communicate through the AR headset or a computer if actions performed by the trainee for each step of the particular procedure are correct or incorrect.
14. The procedure training system of claim 13, wherein the one or more ML models are operable to detect specific performance of the trainee during the course of the procedure associated with the physical model based on the received signals associated with one or more of the model cues, the ID-motion cues and/or the still picture/video cues, and the software program is operable to output corresponding training curriculum, alerts, and/or related instructions to the trainee based on the specific performance of the trainee.
15. The procedure training system of claim 14, wherein the software program is operable to interpret the received signals as a specific action or reference point of the physical model within the context of each step of the particular procedure using the one or more ML models, and cause a state change during the performance of the procedure by the trainee that will advance the procedure from the current step to the next step.
16. The procedure training system of claim 15, wherein the software program is programmed to deliver tutorials and guidance to the trainee that correspond to ongoing progress of the steps of the procedure.
17. The procedure training system of claim 15, wherein the software program is programmed with an ordered sequence of actions on the physical model which are indicative of a successful procedure, and wherein the model cues and/or the ID-motion cues and/or the still picture/video cues are correlated to the programmed ordered sequence of actions for the particular procedure.
18. The procedure training system of claim 17, wherein the software program is operable to identify whether the trainee performs any actions on the physical model that are not in agreement with an expected protocol or performance standard as identified in the software program based on the one or more ML models, and inform the trainee of a detected digression from the expected protocol or performance standard.
19. The procedure training system of claim 18, wherein the software program is operable to deliver corresponding training curriculum, an alert, and/or corrective instructions to the trainee at the time of the detected digression from the expected protocol or performance standard during the performance of the procedure and/or at the conclusion of the particular procedure.
20. The procedure training system of claim 15, wherein the software program is operable to analyze the received signals from the one or more cue receivers using the one or more ML models corresponding to the current step of the particular procedure, and determine whether the received signals indicate that the trainee performed the current step of the procedure properly.
21. The procedure training system of claim 20, wherein the software program is operable to:
- output corresponding training curriculum and/or affirmative instructions associated with the next step in the particular procedure in response to the received signals indicating that the trainee performed the current step of the procedure properly, or
- output corresponding training curriculum and/or corrective instructions associated with the current step in the particular procedure in response to the received signals indicating that the trainee did not perform the current step properly.
22. The procedure training system of claim 13, wherein the software program is operable to generate and output a performance score or figure of merit for individual steps taken by the trainee and/or for the entire procedure based on the number of correct actions and incorrect actions detected using the one or more ML models.
23. The procedure training system of claim 13, wherein the software program is operable to log, timestamp, and store the detected signals in the database based on recognized classifications using the one or more ML models, and provide the ability to playback recordings of the procedures for training debrief purposes after completion of the particular procedure.
24. The procedure training system of claim 10, wherein the software program is programmed with direct visual detection algorithms including machine learning, deep learning, and/or reinforcement learning to develop cue detection functions.
25. The procedure training system of claim 10, wherein training of the software program is an offline process performed by taking the cues of interest in identifying quality of performance of each step of the procedure and using machine learning software for image processing training.
26. The procedure training system of claim 10, wherein the software program is trained for machine and deep learning using neural networks for detection of user technique during performance of the steps of the procedure,
- wherein a neural network classifies patterns of images and/or signals based on a learned features database defining two or more classes for each step of the procedure, and outputs a performance score or figure of merit for each step of the procedure based on the classified patterns.
27. The procedure training system of claim 10, wherein the software program is operable to detect a current step of the particular procedure based on one or more detected model cues and/or ID-motion cues and/or still picture/video cues.
28. The procedure training system of claim 27, wherein training of the software program and/or machine learning model is an offline process performed by further segregating recorded detected model cues and/or ID-motion cues and/or still picture/video cues for a specific step of the particular procedure into machine learning classes utilizing unsupervised learning, by utilizing machine learning models that can accurately predict classes with high confidence which were trained by supervised learning as a method of segregating recorded information into machine learning model classes.
29. The procedure training system of claim 28, wherein machine learning models are created for each step of the procedure with a greater level of confidence than the machine learning models obtained by supervised learning, by utilizing unsupervised learning and utilizing class data obtained during the offline process.
Type: Application
Filed: Jan 24, 2022
Publication Date: May 12, 2022
Applicant: Simulated Inanimate Models, LLC (Pittsford, NY)
Inventors: Jonathan Stone (Rochester, NY), Steven Griffith (Honeoye Falls, NY), Nelson N. Stone (Vail, CO)
Application Number: 17/582,813