Abstract: A method for using a reading teaching aid assembly includes aligning a vowel of a vowel card with a starting cell of a starting block, the starting cell defining a starting consonant; pronouncing the starting consonant and the vowel together; sliding the vowel card along a vowel track, the starting block and an ending block extending along the vowel track; aligning the vowel with an ending cell of the ending block, the ending cell defining an ending consonant; and pronouncing the ending consonant.
Abstract: A method for providing task load-optimized computer-generated training experiences to a user of a training system that includes: a display, a training simulator, a prediction program (ML1), and a training optimization program (ML2). In response to receiving a predicted optimal task load, ML2 provides a first training experience recommendation related to the training content and/or training conditions that, if utilized in providing a training experience to the user, is predicted to result in the predicted actual task load of the user equaling the predicted optimal task load. In response to receiving biometric information or performance metric information, ML1 determines the predicted actual task load. If the predicted actual task load does not match the predicted optimal task load, ML2 provides a second training experience recommendation and a second training experience is provided where at least one of the training content or the training conditions is changed.
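The recommend/predict/revise loop described in this abstract can be sketched as follows. This is a minimal illustration only: the patent's ML1 and ML2 are trained models, while here they are stand-in stub functions (`ml1`, `ml2`), and the integer "difficulty" representation of a training experience is an assumption for demonstration.

```python
def optimize_training(optimal_load, predict_actual_load, recommend, max_rounds=5):
    """Iteratively adjust a training experience until the predicted actual
    task load matches the predicted optimal task load.
    `predict_actual_load` stands in for ML1 and `recommend` for ML2;
    both are caller-supplied stubs in this sketch."""
    experience = recommend(optimal_load, previous=None)
    for _ in range(max_rounds):
        actual = predict_actual_load(experience)
        if actual == optimal_load:
            return experience
        # Mismatch: ask the ML2 stand-in for a revised recommendation that
        # changes at least one of the training content or conditions.
        experience = recommend(optimal_load, previous=experience)
    return experience

# Toy stand-ins: predicted task load simply equals the difficulty level.
def ml1(experience):
    return experience["difficulty"]

def ml2(optimal, previous):
    if previous is None:
        return {"content": "scenario A", "difficulty": 3}
    # Nudge difficulty toward the optimal load.
    step = 1 if previous["difficulty"] < optimal else -1
    return {"content": previous["content"],
            "difficulty": previous["difficulty"] + step}

result = optimize_training(5, ml1, ml2)
print(result["difficulty"])
```

With these stubs, the loop converges when the stand-in prediction reaches the optimal load of 5.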
Abstract: An intersection blind-guiding system includes: a blind-guiding terminal, a plurality of blind road sensors, and a processor configured to be coupled to the blind road sensors. The blind-guiding terminal includes a blind-guiding terminal sensor configured to transmit a sensing signal. The blind road sensors include at least one first blind road sensor and at least one second blind road sensor. The blind road sensors are configured to separately receive the sensing signal transmitted by the blind-guiding terminal sensor, and upload corresponding sensing information. The processor is configured to receive the sensing information uploaded by the blind road sensors, locate a current position of a blind person carrying the blind-guiding terminal, determine a direction in which the blind person previously traveled, determine geographic distribution information of the current position of the blind person, and send a command carrying the geographic distribution information.
Abstract: A method for assessing learning comprehension regarding a topic includes modifying a fundamental illustrative model to illustrate a first set of assessment assets of a first learning object of learning objects to produce a first assessment illustrative model. The fundamental illustrative model is based on illustrative assets of a lesson that includes the learning objects. The method further includes obtaining a first assessment response for the first assessment illustrative model. When the first assessment response is favorable, the method further includes modifying the fundamental illustrative model to illustrate a second set of assessment assets of a second learning object of the learning objects to produce a second assessment illustrative model and obtaining a second assessment response for the second assessment illustrative model.
Type: Grant
Filed: March 18, 2020
Date of Patent: December 13, 2022
Assignee: Enduvo, Inc.
Inventors: Matthew Bramlet, Justin Douglas Drawz, Steven J. Garrou, Joseph Thomas Tieu, Gary W. Grube
Abstract: A method for creating an assessment within a multi-disciplined learning tool regarding a topic includes deriving a first set of knowledge test-points for a first learning object regarding the topic based on a first set of knowledge bullet-points, an illustrative asset, and a first descriptive asset of the first learning object. The method further includes deriving a second set of knowledge test-points for a second learning object regarding the topic based on a second set of knowledge bullet-points, the illustrative asset, and a second descriptive asset of the second learning object. The method further includes generating a knowledge assessment asset based on the first and second knowledge test-points.
Type: Grant
Filed: October 10, 2019
Date of Patent: December 13, 2022
Assignee: Enduvo, Inc.
Inventors: Matthew Bramlet, Justin Douglas Drawz, Steven J. Garrou, Joseph Thomas Tieu, Gary W. Grube
Abstract: A system to deliver to a user artificial flavor sensations equivalent to a selected desired real flavor, having a database in which is stored data for a number of real flavors broken down into components selected from taste, smell, feel, and appearance. A head-mounted display device is provided to deliver visual and audio cues of a flavor from the database, and a bite sensation component mounted in the mouth of a wearer of the head-mounted display controls the delivery of the taste, feel, and smell components of the flavor to the user.
Abstract: A method for determining a state of mind of a user may include receiving one or more strings of characters composed by the user, and determining, by a processing device, the state of mind of the user by processing the one or more strings of characters. The processing of the one or more strings of characters may include identifying similarities of the one or more strings of characters with other strings of characters indicative of the state of mind. The method may also include determining, based on the one or more strings of characters, a severity of the state of mind of the user.
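The similarity-based classification this abstract describes can be sketched with a simple string matcher. This is an illustrative assumption, not the patent's actual method: the exemplar phrases, labels, and the use of `difflib.SequenceMatcher` as the similarity measure are all hypothetical choices for demonstration.

```python
from difflib import SequenceMatcher

# Hypothetical exemplar phrases previously labeled with a state of mind;
# labels and phrases are illustrative only.
LABELED_EXEMPLARS = {
    "frustrated": ["this is hopeless", "nothing ever works"],
    "content":    ["that went really well", "great session today"],
}

def classify_state_of_mind(text):
    """Return the state-of-mind label whose exemplars are most similar
    to the user's text, along with the best similarity score (0..1)."""
    best_label, best_score = None, 0.0
    for label, exemplars in LABELED_EXEMPLARS.items():
        for exemplar in exemplars:
            score = SequenceMatcher(None, text.lower(), exemplar).ratio()
            if score > best_score:
                best_label, best_score = label, score
    return best_label, best_score

label, score = classify_state_of_mind("This is hopeless")
print(label)  # matches a "frustrated" exemplar
```

A production system would likely use learned text embeddings rather than character-level similarity, and the score could feed a separate severity estimate as the abstract suggests.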
Abstract: Systems are disclosed relating to a mobile device mounted to a welding helmet such that a wearer of the welding helmet can see a display of the mobile device when wearing the welding helmet. In some examples, the mobile device is mounted such that a camera of the mobile device is unobscured and positioned at approximately eye level, facing the same way the wearer's eyes are facing. In some examples, a simulated training environment may be presented to the user via the display screen of the mobile device, using images captured by the camera of the mobile device, when the mobile device is so mounted to the welding helmet.
Type: Grant
Filed: November 25, 2019
Date of Patent: December 6, 2022
Assignees: Illinois Tool Works Inc., Seabery North America Inc.
Inventors: Pedro Gerardo Marquinez Torrecilla, William Joshua Becker, Justin Monroe Blount, Mitchell James Muske, Jessica Marie Marhefke, Pavel Gunia
Abstract: A real-time virtual reality welding system including a programmable processor-based subsystem, a spatial tracker operatively connected to the programmable processor-based subsystem, at least one mock welding tool capable of being spatially tracked by the spatial tracker, and at least one display device operatively connected to the programmable processor-based subsystem. The system is capable of simulating, in virtual reality space, a weld puddle having real-time molten metal fluidity and heat dissipation characteristics. The system is further capable of importing data into the virtual reality welding system and analyzing the data to characterize a student welder's progress and to provide training.
Type: Grant
Filed: May 5, 2021
Date of Patent: December 6, 2022
Assignee: Lincoln Global, Inc.
Inventors: David Anthony Zboray, Matthew Bennett, Matthew Wayne Wallace, Jeremiah Hennessey, Yvette Christine Dudac, Zachary Steven Lenker, Andrew Lundell, Paul Dana, Eric A. Preisz
Abstract: A system and method for characterizing, selecting, ordering and rendering discrete elements of digitized video content to teach communications and pedagogic skills. Each of a plurality of observed or computer-generated instances of modeling of distinguishable teaching skills is recorded as a digitized asset. Microskills are identified and deconstructed in the abstract from one or more of the visual and audible recordings of teaching skills modeling moments. Identifiers of microskills are associated by a human editor with recorded modeling instances and/or portions thereof. Modeling presentations are dynamically generated by a user asserting one or more microskill identifiers and a network-enabled selection, ordering and rendering of portions of modeling instances that are associated with the asserted microskill identifiers.
Type: Grant
Filed: April 7, 2021
Date of Patent: November 29, 2022
Inventors: Regina Marie Firpo-Triplett, Tamara Jean Kuhn
Abstract: Systems and methods for providing driver training in a virtual reality environment are disclosed. According to some aspects, an appropriate virtual reality driving simulation may be determined based on one or more input parameters provided by a user. The virtual reality driving simulation may include: (i) an instructional lesson, to be rendered in virtual reality, for teaching driving-related rules and/or skills to a user, and (ii) a driving scenario, to be rendered in virtual reality, for the user to practice the driving-related rules and/or skills taught by the instructional lesson. While the virtual reality driving simulation is rendered, user performance data may be recorded. Based on an analysis of the user performance data, a driving competency score and/or user feedback may be determined.
Type: Grant
Filed: March 9, 2021
Date of Patent: November 15, 2022
Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
Inventors: Anna Marie Madison, Yixiang Zeng, Jeffrey Yin
Abstract: A method for creating personalized lesson recommendations for a user is provided. In some embodiments, the method includes assigning a skill profile to the user, where the skill profile includes a skill vector associated with one or more fine-grained skills of the user. The method also includes assigning a difficulty profile to a task to be practiced by the user, where the difficulty profile includes a task difficulty vector associated with one or more fine-grained skills of the task. Further, the method includes prioritizing the task within a list of recommended tasks offered to the user based on a comparison between the skill profile of the user and the difficulty profile of the task.
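The skill-vector/difficulty-vector comparison above can be sketched as a ranking over candidate tasks. The Euclidean distance used here as the comparison, and the names `prioritize_tasks` and the example skill dimensions, are illustrative assumptions rather than the patent's specified method.

```python
import math

def prioritize_tasks(skill_vector, tasks):
    """Rank candidate tasks by how closely each task's fine-grained
    difficulty vector matches the user's fine-grained skill vector.
    Smaller distance = closer match = higher priority."""
    def distance(difficulty_vector):
        return math.sqrt(sum((s - d) ** 2
                             for s, d in zip(skill_vector, difficulty_vector)))
    return sorted(tasks, key=lambda task: distance(task["difficulty"]))

# Hypothetical data: each vector scores the same three fine-grained
# skills (e.g., grammar, vocabulary, listening) on a 0..1 scale.
user_skills = [0.8, 0.5, 0.3]
tasks = [
    {"name": "advanced essay", "difficulty": [0.9, 0.9, 0.1]},
    {"name": "vocab drill",    "difficulty": [0.7, 0.5, 0.3]},
    {"name": "podcast quiz",   "difficulty": [0.2, 0.4, 0.9]},
]
recommended = prioritize_tasks(user_skills, tasks)
print([t["name"] for t in recommended])
```

Ranking by distance recommends tasks whose per-skill difficulty best matches the user's current per-skill ability; other comparisons (e.g., favoring tasks slightly above the skill vector) would be equally consistent with the abstract.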
Abstract: Innovative instrument holders for minimally invasive surgical simulation and training are disclosed, used in conjunction with a smartphone, tablet, or mini-tablet computer to enable visualization of the surgical field. The surgical field used with these instrument holders can include animal models, physical models, and both virtual and augmented reality models. Some embodiments can be used with applications that can be downloaded to the smartphone, tablet, or mini-tablet computer in order to enhance specific hand-eye coordination tasks. Some embodiments can be used as an adjunct surgical trainer for endoscopy, colonoscopy, and other minimally invasive gastrointestinal and gynecological surgical procedures using surgical instruments that incorporate fiber optics.
Abstract: Techniques are disclosed for capturing and monitoring object motion in an AR environment. A first movement sequence may be received. The movement sequence may be an assigned movement routine. Image data of at least one target may be captured and the image data can be augmented with a training object to generate augmented training data. The training object can be caused to perform the first movement sequence. The motion of the at least one target can be recorded relative to the training object using the image data and one or more sensors deployed to the at least one target. A progression of therapeutic routines may be shown to the user depending on how the user's therapy is progressing. A second movement sequence can be received and the training object can be caused to perform the second movement sequence.
Abstract: A device for providing a tactile feedback includes an imaging device configured to capture an image of a face of a subject, a tactile feedback device, and a controller communicatively coupled to the imaging device and the tactile feedback device. The controller comprises at least one processor and at least one memory storing computer-readable and executable instructions that, when executed by the processor, cause the controller to: process the image, determine a type of a facial expression based on the processed image, determine a level of a facial expression of the type based on the processed image, determine a tactile feedback intensity of the tactile feedback device based on the level of the facial expression, and control the tactile feedback device to provide a tactile feedback having the tactile feedback intensity.
Type: Grant
Filed: February 1, 2017
Date of Patent: October 25, 2022
Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
Abstract: An example weld training system includes: a weld training device configured to perform a simulated welding procedure on a simulated weld joint; a work surface comprising the simulated weld joint; a sensing device configured to track weld training device location information during the simulated welding procedure; a visual interface configured to display results of the simulated welding procedure based on the weld training device location information; and an enclosure having an interior volume configured to house the visual interface, the work surface, and the sensing device.
Abstract: A simulated eyeball for training in ophthalmic surgery includes a simulated sclera region that constitutes a simulated sclera, and a conductor layer that is formed on a side of the simulated sclera region that is on an interior of the simulated eyeball, the conductor layer forming a simulated choroid region.
Abstract: Welding training systems and methods utilize a welding simulator wherein multiple users can interact simultaneously with distinct simulated environments or within the same simulated environment.
Abstract: A method to determine the location of an instrument within a patient can be based upon measuring a characteristic within the patient and matching the currently measured characteristic with a previously measured characteristic. If the measurements of the characteristic match in an appropriate or selected manner, then a location match can be determined. The characteristic can be any appropriate characteristic and can be measured in any appropriate way.
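The match-against-prior-measurements step can be sketched as a nearest-value lookup with a tolerance. Everything concrete here, including the recorded locations, the scalar characteristic values, and the tolerance, is a hypothetical illustration; the patent leaves the characteristic and matching criterion open.

```python
def match_location(current_value, recorded, tolerance=0.5):
    """Match a currently measured characteristic (a scalar here, for
    simplicity) against previously measured values, each taken at a
    known location. Returns the location whose recorded value is
    closest, or None if no recording falls within the tolerance."""
    best_loc, best_diff = None, float("inf")
    for location, value in recorded.items():
        diff = abs(current_value - value)
        if diff < best_diff:
            best_loc, best_diff = location, diff
    return best_loc if best_diff <= tolerance else None

# Hypothetical prior measurements along an instrument path.
recorded = {"entry": 2.0, "mid-vessel": 4.1, "target": 6.3}
print(match_location(4.0, recorded))  # closest to the "mid-vessel" recording
print(match_location(9.0, recorded))  # nothing within tolerance
```

A real implementation would match richer signals (waveforms, images) with a correspondingly richer similarity measure, but the select-best-then-threshold structure carries over.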
Abstract: Interactive software applications designed to assess a combination of behavioral and neuro-physiological characteristics of a user to determine an effect a substance is currently having on the user. In some examples, the effect of the substance may be assessed to identify a cognitive impairment caused by a substance and determine the type of substance(s) likely causing the impairment. In some examples, the effect of the substance may be assessed to determine a recommended dosage and/or a standard impairing dose threshold for a particular substance.
Type: Grant
Filed: September 16, 2020
Date of Patent: October 11, 2022
Assignee: Driveability VT, LLC
Inventors: Christopher Lewis, Andy Kaplan, Ari Kirshenbaum, Gershon Parent, Jevan Fox