Spelling, Phonics, Word Recognition, Or Sentence Formation Patents (Class 434/167)
  • Patent number: 12251641
    Abstract: An information provision device in an information provision system includes, as its functions: a score control circuit that increases or decreases a provision timing score on the occurrence of a predetermined action; a timing determination circuit that determines that the time to provide quiz information has come when the provision timing score exceeds a predetermined threshold; a priority control circuit that determines or changes, based on the situation of, e.g., a sport game on each occurrence of an action, the priority associated with text information from which quiz information is generated; a quiz information generation circuit that generates appropriate quiz information based on the situation of, e.g., the sport game at the determined time; and a quiz information provision circuit that provides the generated quiz information to a user.
    Type: Grant
    Filed: July 28, 2023
    Date of Patent: March 18, 2025
    Assignee: JUNGLE X CORP.
    Inventor: Fumitada Naoe
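The abstract above describes a score-and-threshold timing mechanism. A minimal Python sketch of that idea follows; the action scores, threshold value, and all names (ACTION_SCORES, QuizProvider, on_action) are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field

ACTION_SCORES = {"strikeout": 3, "home_run": 5, "pitching_change": 2}  # hypothetical
THRESHOLD = 10  # hypothetical provision-timing threshold

@dataclass
class QuizProvider:
    score: int = 0
    priorities: dict = field(default_factory=dict)  # text id -> priority

    def on_action(self, action, affected_texts):
        """Update the timing score and text priorities; emit a quiz when due."""
        self.score += ACTION_SCORES.get(action, 1)
        for text_id in affected_texts:
            self.priorities[text_id] = self.priorities.get(text_id, 0) + 1
        if self.score >= THRESHOLD:
            self.score = 0  # reset after a quiz is provided
            best = max(self.priorities, key=self.priorities.get)
            return f"quiz generated from text '{best}'"
        return None

provider = QuizProvider()
print(provider.on_action("home_run", ["batter_bio"]))  # None (score still below threshold)
print(provider.on_action("home_run", ["batter_bio"]))  # quiz generated from text 'batter_bio'
```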
  • Patent number: 12254781
    Abstract: An educational multiplayer card game, system, and method of play that combine strategy and randomization are provided. More particularly, the present disclosure relates to an educational card game including cards having synonym and antonym attributes, wherein cards can be chained together according to a synonym attribute and cards can perform contrast strikes according to an antonym attribute. Further, the educational card game can include more than one language on each card.
    Type: Grant
    Filed: January 14, 2022
    Date of Patent: March 18, 2025
    Inventor: Michael Wayne Riffle
  • Patent number: 12248759
    Abstract: There are provided a method and system for automatic augmentation of gloss-based sign language translation data. A system for automatic augmentation of sign language translation training data according to an embodiment includes: a database configured to store a sequence of sign language glosses and a sequence of spoken-language words in pairs; and an augmentation module configured to augment the pairs stored in the database. Accordingly, high-quality gloss-based training data may be acquired by performing automatic augmentation of gloss-based training data for sign language translation in a manner that is efficient in terms of both time and cost, and, eventually, the accuracy of translation between sign language glosses and sentences may be enhanced.
    Type: Grant
    Filed: December 28, 2021
    Date of Patent: March 11, 2025
    Assignee: Korea Electronics Technology Institute
    Inventors: Jin Yea Jang, Han Mu Park, Yoon Young Jeong, Sa Im Shin
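Since the augmentation operates on stored (gloss sequence, sentence) pairs, a minimal Python sketch of one possible augmentation strategy is shown below; the synonym table and the substitution rule are invented for illustration and are not the patented method.

```python
import random

GLOSS_SYNONYMS = {"HOUSE": ["HOME"], "BIG": ["LARGE"]}  # hypothetical gloss synonyms

def augment(pair, n_variants=2, seed=0):
    """Produce extra (gloss sequence, sentence) pairs by gloss substitution."""
    glosses, sentence = pair
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        new_glosses = [rng.choice([g] + GLOSS_SYNONYMS.get(g, [])) for g in glosses]
        variants.append((new_glosses, sentence))
    return variants

pair = (["BIG", "HOUSE", "BUY"], "I bought a big house.")
for glosses, sentence in augment(pair):
    print(glosses, "->", sentence)
```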
  • Patent number: 12197877
    Abstract: Disclosed are various embodiments for a visual language processing modeling framework via an attention-on-attention mechanism, which may be employed for object identification, classification, and the like. In association with a display of a user interface, eye tracking via images captured by an imaging device is performed to programmatically detect eye movement and fixation relative to sub-regions of the user interface. Eye fixations on at least one of the sub-regions are determined from the eye tracking. Visual cues are extracted from the user interface based at least in part on the eye fixations, the visual cues being in a sequence of identification. A visual language sentence is generated based at least in part on the visual cues as extracted. The visual language sentence of the visual cues in the sequence of identification is correlated to at least one decision using a visual language understanding routine.
    Type: Grant
    Filed: November 2, 2022
    Date of Patent: January 14, 2025
    Assignee: VIRGINIA TECH INTELLECTUAL PROPERTIES, INC.
    Inventors: Ran Jin, Xiaoyu Chen
  • Patent number: 12159553
    Abstract: A cube of Chinese characters is provided, which includes a 3D dictionary. Initials are vertically arranged and finals are horizontally arranged on a front of the 3D dictionary. A character cell is provided at an intersection of each initial and each final. A character rod is provided in each character cell. A phonetic combination corresponding to the initial and final is provided on a front of the character rod, forming a list of onomatopoeic or homophonous characters. A number of sound cells are provided on the front of the 3D dictionary, and a sound rod is provided in each sound cell. A holding groove is provided on the front of the 3D dictionary, and a printer is placed in the holding groove. The movable-type monosyllabic Chinese characters are used to form the 3D dictionary by changing an existing linear arrangement in alphabetical order to a vertical-initial and horizontal-final arrangement.
    Type: Grant
    Filed: September 1, 2020
    Date of Patent: December 3, 2024
    Inventor: Jianhua Yin
  • Patent number: 12159026
    Abstract: A system includes a development tool for adding electronically-driven effects to a dynamic user-influenced media experience. The development tool is adapted to receive first user input and second user input. The first user input defines an audio trigger corresponding to one or more words or phrases appearing in a textual transcript of an audio content stream that includes audio data to be played as part of the dynamic user-influenced media experience. The second user input defines an event that is to be executed in temporal association with an audible occurrence of the audio trigger. The development tool generates metadata based on the first user input and the second user input, and an application engine interprets the metadata as an instruction to selectively trigger execution of the event in association with the audible occurrence of the audio trigger while presenting the dynamic user-influenced media experience.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: December 3, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: James Aaron Crowder
  • Patent number: 12106769
    Abstract: Provided are a program and a sound output device for maintaining the interest of the user. A sound output device 1 which outputs prescribed musical scales comprises: a level information acquisition unit 13 that acquires, as level information, a user's sense of pitch; a sound output unit 16 that outputs sound whose pitch has been changed under prescribed conditions based on the acquired level information; a response information acquisition unit 19 that acquires, as response information, a response to the change in a musical scale input by the user on the basis of the output musical scale; a correctness determination unit 20 that determines the correctness of the acquired response information; and a level determination unit 21 that determines the level of the user's sense of pitch on the basis of the determination result.
    Type: Grant
    Filed: February 2, 2021
    Date of Patent: October 1, 2024
    Assignee: NEUMO, INC.
    Inventors: Ryosei Wakabayashi, Masashi Yamada, Sho Hoshino
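A minimal Python sketch of the test loop implied above: a pitch-shifted tone is presented, the response is checked, and the level rises or falls accordingly. The shift sizes, the level-adjustment rule, and the simulated listener model are assumptions for illustration.

```python
import random

def run_trial(level, rng):
    """Present a tone shifted by fewer cents as the level rises; simulate the user's answer."""
    cents = max(5, 100 // level)              # smaller (harder) pitch shift at higher levels
    p_correct = min(0.95, 0.5 + cents / 200)  # assumed listener model, for simulation only
    return rng.random() < p_correct           # True if the response is judged correct

def determine_level(start_level=1, trials=20, seed=0):
    rng = random.Random(seed)
    level = start_level
    for _ in range(trials):
        level = level + 1 if run_trial(level, rng) else max(1, level - 1)
    return level

print("estimated pitch-sense level:", determine_level())
```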
  • Patent number: 12106844
    Abstract: A mnemonic device/system and a method of use thereof, the mnemonic device including a base, a set of modules, and a brain head. Each module has a trait inscribed on it, and the modules of selected traits are assembled for learning and character building. The mnemonic device provides for pragmatic memory encoding, retention, and retrieval, allowing new information to be remembered and encoded by the learner while simultaneously re-coding information previously coded by the teacher. The mnemonic device is based on a scientific sensory-system technique providing a unique and meaningful way to encode, retain, and retrieve larger pieces of information, especially character word traits in the form of lists such as characteristics, steps, and stages, whereby a beneficial use of the system is obtained.
    Type: Grant
    Filed: April 22, 2022
    Date of Patent: October 1, 2024
    Inventor: Diane Zissu
  • Patent number: 12106751
    Abstract: An automatic speech sensitivity adjustment feature is provided. The described sensitivity feature can enable an automatic system adjustment of a sensitivity level based on the number and type of determined speech errors. The sensitivity level determines how sensitive the sensitivity feature will be when indicating speech errors. The sensitivity feature can receive audio input comprising one or more spoken words and determine speech errors for the audio input using at least a sensitivity level. The sensitivity feature can determine whether an amount and type of the speech errors requires an adjustment to the sensitivity level. The sensitivity feature can adjust the sensitivity level to a second sensitivity level based on the amount and type of the speech errors, where the second sensitivity level is a different level than the sensitivity level. The sensitivity feature can re-determine the speech errors for the audio input using at least the second sensitivity level.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: October 1, 2024
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Michael Tholfsen, Paul Ronald Ray, Daniel Edward McAllister, Hernán David Maestre Piedrahita
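A minimal Python sketch of the re-detection loop described above, assuming two illustrative error types (repetitions and fillers) and an invented adjustment rule; none of this reflects the actual thresholds or error taxonomy.

```python
def detect_errors(words, sensitivity):
    """Stand-in detector: higher sensitivity also flags filler words, not just repetitions."""
    errors = []
    for i, word in enumerate(words):
        if i > 0 and word == words[i - 1]:
            errors.append(("repetition", word))
        if sensitivity >= 2 and word in {"um", "uh"}:
            errors.append(("filler", word))
    return errors

def detect_with_adjustment(words, sensitivity=1):
    errors = detect_errors(words, sensitivity)
    # Illustrative rule: several errors of a single type suggest the level is too low,
    # so raise the sensitivity and re-determine the errors.
    if len(errors) >= 3 and len({kind for kind, _ in errors}) == 1 and sensitivity < 3:
        sensitivity += 1
        errors = detect_errors(words, sensitivity)
    return sensitivity, errors

words = "um the the the the dog um ran".split()
print(detect_with_adjustment(words))  # sensitivity raised to 2; fillers now reported too
```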
  • Patent number: 12097424
    Abstract: One or more programmable electronic interfaces made up of a tile, a sensor, or multiple tiles and/or sensors in a network communicate with a computational device to allow users to build programs to be played interactively on the interfaces. The interfaces can react to a user applying pressure to or contacting the interface, or otherwise triggering a sensor (such as motion, thermal, touch, pressure, or other detected interactions). The computational device provides a macro-scale interface of different types. The system communicates (either wirelessly or via a wired connection) with the computing device in order to send and receive information. The electronic interfaces can be tiles with interactive lights and/or sensors that can be programmed into games, designs, animations, light shows, musical instruments, dance routine steps, etc.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: September 24, 2024
    Assignee: Unruly Studios, Inc.
    Inventors: Bryanne Leeming, Daniel Ozick, Amon Millner, David Kunitz
  • Patent number: 12053883
    Abstract: A method for toy robot programming, the toy robot including a set of sensors, the method including, at a user device remote from the toy robot: receiving sensor measurements from the toy robot during physical robot manipulation; in response to detecting a programming trigger event, automatically converting the sensor measurements into a series of puppeted programming inputs; and displaying graphical representations of the set of puppeted programming inputs on a programming interface application on the user device.
    Type: Grant
    Filed: March 4, 2021
    Date of Patent: August 6, 2024
    Assignee: Wonder Workshop, Inc.
    Inventors: Saurabh Gupta, Vikas Gupta
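A minimal Python sketch of converting raw manipulation measurements into discrete "puppeted" programming inputs as described above; the wheel-encoder input format, thresholds, and command names are assumptions made for illustration.

```python
def to_puppeted_inputs(samples, move_threshold=5):
    """samples: list of (left_wheel_delta, right_wheel_delta) per time step."""
    inputs = []
    for left, right in samples:
        if abs(left) < move_threshold and abs(right) < move_threshold:
            continue                      # ignore sensor noise
        if left > 0 and right > 0:
            inputs.append("forward")
        elif left < 0 and right < 0:
            inputs.append("backward")
        else:
            inputs.append("turn_left" if right > left else "turn_right")
    # collapse consecutive duplicates into single program blocks
    return [step for i, step in enumerate(inputs) if i == 0 or step != inputs[i - 1]]

print(to_puppeted_inputs([(10, 10), (12, 11), (8, -9), (10, 10)]))
# ['forward', 'turn_right', 'forward']
```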
  • Patent number: 12050874
    Abstract: A system and method that translate sentences of natural language text into sets of axioms of formal logic that are consistent with parses resulting from NLP and with acquired constraints as they accumulate. The system and method further present these axioms so as to facilitate further disambiguation of such sentences and produce axioms of formal logic suitable for processing by automated reasoning technologies, such as first-order or description logic suitable for processing by various reasoning algorithms, such as logic programs, inference engines, theorem provers, and rule-based systems.
    Type: Grant
    Filed: December 13, 2021
    Date of Patent: July 30, 2024
    Inventor: Paul V. Haley
  • Patent number: 12034996
    Abstract: This application provides a video playing method performed by a computer device. The method includes: playing a target video in a playing interface; when a video picture of the target video comprises a target object and there is a text associated with the target object, displaying the text in a display region associated with a face posture of the target object in a process of playing the target video; and adjusting a display posture of the text with the face posture when the face posture of the target object changes in a process of displaying the text.
    Type: Grant
    Filed: November 2, 2022
    Date of Patent: July 9, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Fasheng Chen
  • Patent number: 11925875
    Abstract: An interactive electronic toy and a method for providing a challenge to a user using the toy. The toy includes a housing and a plurality of buttons, each button associated with at least one illuminator and with at least one pressure sensor. The method includes receiving a selection of a specific challenge, said specific challenge being selected from a challenge repository associated with the interactive electronic toy, and providing to the user instructions for completing said specific challenge. The method further includes receiving a challenge response from the user, in which the user responds to said specific challenge by at least one of moving the housing of the interactive electronic toy and depressing at least one of the plurality of buttons, and providing feedback to the user based on said received challenge response.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: March 12, 2024
    Assignee: FLYCATCHER CORP LTD
    Inventors: Shay Chen, Shachar Limor
  • Patent number: 11925416
    Abstract: A method of performing an eye examination test for examining eyes of a user, said method using a computing device as well as an input tool, wherein said computing device comprises a screen and comprises a camera unit arranged for capturing images, said method comprising the steps of capturing, by said camera unit of said computing device, at least one image of a human face of said user facing said screen, detecting, by said computing device, in said at least one captured image, both pupils of said human face, determining, by said computing device, a distance of said user to said screen based on a predetermined distance between pupils of a user, a distance between said detected pupils in said at least one captured image, and a focal length of said camera unit corresponding to said at least one captured image, and performing, by said computing device in combination with said input tool, said eye examination test using said determined distance.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: March 12, 2024
    Assignee: EASEE HEALTH B.V.
    Inventors: Yves Franco Diano Maria Prevoo, Elly Onesmo Nkya
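The distance determination described above combines a known interpupillary distance, the measured pupil separation in pixels, and the camera focal length; under a standard pinhole-camera model (an assumption here, not a quotation from the patent) that reduces to distance ≈ focal_length_px × real_ipd / pupil_distance_px, sketched below.

```python
def estimate_distance(focal_length_px: float,
                      real_ipd_mm: float,
                      pupil_distance_px: float) -> float:
    """Return the estimated face-to-camera distance in millimetres (pinhole model)."""
    return focal_length_px * real_ipd_mm / pupil_distance_px

# Example: 1000 px focal length, assumed 63 mm interpupillary distance,
# pupils detected 120 px apart  ->  roughly 525 mm from the screen.
print(round(estimate_distance(1000, 63, 120)))  # 525
```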
  • Patent number: 11921914
    Abstract: The present disclosure relates to methods and systems for providing virtual environments that are responsively adaptable to users' characteristics. Embodiments provide for identifying a virtual action to be performed by a virtual representation of a patient within a virtual environment. The virtual action corresponds to a physical action by a physical limb of the patient in the real world. In embodiments, the virtual action is required to be performed to a first target area within the virtual environment. Embodiments determine that the patient has at least one limitation that limits the patient performing the physical action. A determination is made as to whether the patient has performed the physical action to at least an associated physical threshold. The virtual environment is caused to adapt in order to allow the virtual action to be performed in response to the determination that the patient has performed the physical action to the physical threshold.
    Type: Grant
    Filed: September 29, 2022
    Date of Patent: March 5, 2024
    Assignee: Neuromersive, Inc.
    Inventor: Veena Somareddy
  • Patent number: 11915612
    Abstract: The disclosed embodiments relate to improved learning methods and systems incorporating multisensory feedback. In some embodiments, virtual puzzle pieces represented by letters, numbers, or words, may animate in conjunction with phonetic sounds (e.g., as to letters or letter combinations) and pronunciation readings (e.g., as to words and numbers) when selected by a user. The animations and audio soundings may be coordinated to better inculcate the correspondence between graphical icons and corresponding vocalizations in the user.
    Type: Grant
    Filed: May 31, 2021
    Date of Patent: February 27, 2024
    Assignee: Originator Inc.
    Inventors: Emily Modde, Joshua Ruthnick, Joseph Ghazal, Rex Ishibashi
  • Patent number: 11842718
    Abstract: An unambiguous phonics system (UPS) is capable of presenting text in a format with unambiguous pronunciation. The system can translate input text written in a given language (e.g., English) into a UPS representation of the text written in a UPS alphabet. A unique UPS grapheme can be used to represent each unique grapheme-phoneme combination in the input text. Thus, each letter of the input text is represented in the UPS spelling, and each letter of the UPS spelling unambiguously indicates the phoneme used. For all the various grapheme-phoneme combinations for a given input grapheme, the corresponding UPS graphemes can be constructed to have visual similarity with the given input grapheme, thus easing an eventual transition from UPS spelling to traditional spelling. The UPS can include translation, complexity scoring, word/phoneme-grapheme searching, and other modules. The UPS can also include techniques to provide efficient, level-based training of the UPS alphabet.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: December 12, 2023
    Assignee: TINYIVY, INC.
    Inventor: Zachary Silverzweig
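A minimal Python sketch of the core mapping idea above: each grapheme-phoneme combination gets its own UPS grapheme chosen to resemble the original grapheme. The mapping table and function names below are invented for illustration; the patented alphabet is not reproduced.

```python
# Hypothetical (grapheme, phoneme) -> UPS grapheme table; diacritics stand in
# for "visually similar but unambiguous" characters.
UPS_MAP = {
    ("ea", "/i:/"): "ēa",   # as in "read" (present tense)
    ("ea", "/ɛ/"):  "ĕa",   # as in "read" (past tense)
    ("c",  "/k/"):  "c",
    ("c",  "/s/"):  "ç",
}

def to_ups(grapheme_phoneme_pairs):
    """Translate a word given as (grapheme, phoneme) pairs into UPS spelling."""
    return "".join(UPS_MAP.get(pair, pair[0]) for pair in grapheme_phoneme_pairs)

# "read" (past tense): r + ea(/ɛ/) + d
print(to_ups([("r", "/r/"), ("ea", "/ɛ/"), ("d", "/d/")]))  # rĕad
```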
  • Patent number: 11823589
    Abstract: Methods, computer program products, and systems are presented. The method, computer program products, and systems can include, for instance: providing to a student user prompting data, wherein the prompting data prompts the student user to enter into an electronic teaching device voice data defining a correct pronunciation for a certain alphabet letter of a language alphabet, and wherein the prompting data prompts the student user to electronically enter handwritten data into the electronic teaching device defining a correct drawing of the certain alphabet letter; and examining response data received from the student user in response to the prompting data.
    Type: Grant
    Filed: July 29, 2019
    Date of Patent: November 21, 2023
    Assignee: International Business Machines Corporation
    Inventor: Rajarshi Das
  • Patent number: 11749131
    Abstract: Reading comprehension of a user can be assessed by presenting, in a graphical user interface, sequential reading text comprising a plurality of passages. The graphical user interface can alternate between (i) automatically advancing through passages of the reading text and (ii) manually advancing through passages of the reading text in response to user-generated input received via the graphical user interface. An audio narration is provided during the automatic advancing of the reading text. An audio file is recorded during the manual advancing of the reading text, which is used to automatically determine an estimated level of reading comprehension of the user. Data characterizing the determined level of reading comprehension of the user can then be provided (e.g., displayed, loaded into memory, stored on a hard drive, transmitted to a remote computing system, etc.). Related apparatus, systems, techniques and articles are also described.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: September 5, 2023
    Assignee: Educational Testing Service
    Inventors: Beata Beigman Klebanov, Anastassia Loukina, Nitin Madnani, John Sabatini, Jennifer Lentini
  • Patent number: 11726556
    Abstract: The present disclosure relates to methods and systems for providing virtual environments that are responsively adaptable to users' characteristics. Embodiments provide for identifying a virtual action to be performed by a virtual representation of a patient within a virtual environment. The virtual action corresponds to a physical action by a physical limb of the patient in the real world. In embodiments, the virtual action is required to be performed to a first target area within the virtual environment. Embodiments determine that the patient has at least one limitation that limits the patient performing the physical action. A determination is made as to whether the patient has performed the physical action to at least an associated physical threshold. The virtual environment is caused to adapt in order to allow the virtual action to be performed in response to the determination that the patient has performed the physical action to the physical threshold.
    Type: Grant
    Filed: February 21, 2022
    Date of Patent: August 15, 2023
    Assignee: Neuromersive, Inc.
    Inventor: Veena Somareddy
  • Patent number: 11645939
    Abstract: A computer-based method and system for teaching educational sketching includes receiving a user-generated image created in response to a learning assignment and comparing the user-generated image to a solution image to identify one or more errors in the user-generated image relative to the solution image, where the errors may include additional image elements and missing image elements. Comparing is performed by providing a solution region corresponding to an acceptable variation from the solution image and identifying one or more errors based on a presence or absence of at least a portion of a corresponding element of the user-generated image within the solution region. If errors are identified and a non-passing status is determined, a hint is displayed to the user. The hint may be the correct elements of the user-generated image, a portion of the solution image, or a combination thereof.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: May 9, 2023
    Assignees: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA, NATIONAL SCIENCE FOUNDATION
    Inventor: Nathan Delson
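A minimal Python sketch of the comparison step described above: each solution element defines a tolerance region, user strokes are matched against those regions, and unmatched elements on either side are reported as missing or additional. The point-based image representation and tolerance value are assumptions for illustration.

```python
import math

def within_region(point, element, tolerance=10.0):
    """True if a user-drawn point falls inside the solution region of an element."""
    return math.dist(point, element) <= tolerance

def compare_sketch(user_points, solution_points, tolerance=10.0):
    missing = [s for s in solution_points
               if not any(within_region(p, s, tolerance) for p in user_points)]
    additional = [p for p in user_points
                  if not any(within_region(p, s, tolerance) for s in solution_points)]
    return {"missing": missing, "additional": additional,
            "passing": not missing and not additional}

solution = [(0, 0), (50, 0), (50, 50)]
user = [(2, 1), (48, 3), (90, 90)]          # one vertex missing, one extra stroke
print(compare_sketch(user, solution))
```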
  • Patent number: 11633668
    Abstract: A communication device, method, and computer program product prompt correct face/eye positioning to enable perceived eye-to-eye contact of a user of a video capturing device with a camera on the same device side as the viewable display device. A first communication device includes a first display device having a first graphical user interface (GUI). A first image capturing device of the first communication device has a field of view that captures a face of a first user viewing the first GUI. The first image capturing device generates a first image stream of the field of view. A controller of the communication device identifies a look target area of the first GUI proximate to the first image capturing device. The controller presents visual content on the first GUI within the look target area to prompt the first user viewing the first GUI to look towards the look target area.
    Type: Grant
    Filed: October 24, 2020
    Date of Patent: April 25, 2023
    Assignee: Motorola Mobility LLC
    Inventors: Chao Ma, Miao Song, Kevin Dao, Zhicheng Fu, Vivek Kumar Tyagi
  • Patent number: 11605309
    Abstract: A three-dimensional educational tool that demonstrates periodicity in science and mathematics, and a method of using same. The educational tool has a frame with a top template, such as the periodic table of the chemical elements, having a plurality of openings, and a plurality of members, representing chemical elements in an example embodiment, disposed in the openings. The educational tool provides a plurality of trend blocks that rapidly change a three-dimensional display of a plurality of members by selectively raising or lowering the members to demonstrate a particular characteristic of a member in relationship to another member. The trend blocks sit under the frame and the members slide up and down accordingly.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: March 14, 2023
    Inventor: Christina Wright
  • Patent number: 11574558
    Abstract: The present disclosure is technology for providing a foreign language learning application with game content through a user terminal (smartphone, tablet, PC, or the like), and provides a foreign language word together with its meaning in a preset language within the application. Therefore, it is possible to provide a learning effect in which the user becomes the subject and learns the foreign language word while actively participating in the game, increasing ‘entertainment’ and ‘participation’, rather than a passive method of simply and repeatedly writing and listening to foreign language alphabets. In addition, it is possible to provide an application that maximizes the learning effect by repeatedly presenting a word that has been encountered once, so that it is not forgotten, with repeated learning based on an Ebbinghaus forgetting curve.
    Type: Grant
    Filed: April 12, 2022
    Date of Patent: February 7, 2023
    Inventor: Allis Seungeun Nam
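A minimal Python sketch of repetition scheduling along an Ebbinghaus-style forgetting curve R = exp(-t/s): a word is shown again whenever its predicted retention falls to a cutoff. The stability values, cutoff, and doubling rule are illustrative assumptions, not the application's actual scheduler.

```python
import math

def retention(hours_since_review: float, stability_hours: float) -> float:
    """Predicted retention on an exponential forgetting curve."""
    return math.exp(-hours_since_review / stability_hours)

def next_review_hours(stability_hours: float, cutoff: float = 0.6) -> float:
    """Hours until retention is predicted to drop to the cutoff."""
    return -stability_hours * math.log(cutoff)

stability = 10.0                       # grows as the word is answered correctly
for review in range(1, 4):
    print(f"review {review}: in {next_review_hours(stability):.1f} h")
    stability *= 2.0                   # assume each successful review roughly doubles stability
```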
  • Patent number: 11568139
    Abstract: Implementations relate to determining a secondary language proficiency measure, for a user in a secondary language (i.e., a language other than a primary language specified for the user), where determining the secondary language proficiency measure is based on past interactions of the user that are related to the secondary language. Those implementations further relate to utilizing the determined secondary language proficiency measure to increase efficiency of user interaction(s), such as interaction(s) with a language learning application and/or an automated assistant. Some of those implementations utilize the secondary language proficiency measure in automatically setting value(s), biasing automatic speech recognition, and/or determining how to render natural language output.
    Type: Grant
    Filed: June 18, 2021
    Date of Patent: January 31, 2023
    Assignee: Google LLC
    Inventors: David Kogan, Wangqing Yuan, Guanglei Wang, Vincent Lacey, Wei Chen, Shaun Post
  • Patent number: 11562663
    Abstract: A machine-delivered aural cloze exercise makes use of natural, connected speech and allows for a portion of the audio to be selected and obfuscated during playback, creating an aural cloze portion. The aural cloze portion is extended beyond its natural length an effective amount to make the exercise clear to the user. If the audio is accompanied by video, the video is extended uniformly during the aural cloze portion, and optionally, can also be obfuscated during the aural cloze portion.
    Type: Grant
    Filed: October 5, 2020
    Date of Patent: January 24, 2023
    Inventor: John Nicholas DuQuette
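A minimal Python sketch of constructing an aural cloze as described above: the selected span is obfuscated (here, replaced with silence) and the gap is extended beyond its natural length. The 1.5× factor and the sample-list representation are assumptions for illustration.

```python
def make_cloze(samples, start, end, stretch=1.5):
    """samples: list of audio samples; [start, end) is the portion to obfuscate."""
    gap_len = int((end - start) * stretch)       # extend beyond the natural length
    silence = [0] * gap_len                      # obfuscate the selected portion
    return samples[:start] + silence + samples[end:]

audio = list(range(10))                          # stand-in for real audio samples
print(make_cloze(audio, start=4, end=6))
# [0, 1, 2, 3, 0, 0, 0, 6, 7, 8, 9]
```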
  • Patent number: 11556174
    Abstract: A monitoring system incorporates, and a method and computer program product provide, contextual gaze detection-based user selection for prompting caregiver actions by a person with ambulatory and communication deficits or limitations. A controller of the monitoring system receives at least one image stream from a camera system comprising at least one image capturing device. The camera system captures a first image stream that encompasses eyes of a person and a second image stream that at least partially encompasses surrounding object(s) and surface(s) viewable by the person. The controller determines an eye gaze direction of the person and a region of interest (ROI) aligned with the eye gaze direction. The controller identifies an object and associated caregiver action contained within the ROI. The controller communicates a notification to an output device, which presents to a second person an indication of interest by the person in the caregiver action.
    Type: Grant
    Filed: March 30, 2022
    Date of Patent: January 17, 2023
    Assignee: Motorola Mobility LLC
    Inventors: Amit Kumar Agrawal, Rahul Bharat Desai
  • Patent number: 11538354
    Abstract: A method for using a reading teaching aid assembly includes aligning a vowel of a vowel card with a starting cell of a starting block, the starting cell defining a starting consonant; pronouncing the starting consonant and the vowel together; sliding the vowel card along a vowel track, the starting block and an ending block extending along the vowel track; aligning the vowel with an ending cell of the ending block, the ending cell defining an ending consonant; and pronouncing the ending consonant.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: December 27, 2022
    Inventor: Kevin Scott Whitehead
  • Patent number: 11386805
    Abstract: Aspects of the present disclosure relate to enhancing reading retention of users reading electronic text. A set of user data associated with a user currently reading electronic text on a device is received, the set of user data indicative of a reading retention of the user. The set of user data is analyzed to determine whether a retention enhancement action should be issued. In response to a determination that a retention enhancement action should be issued, the retention enhancement action is issued at the device on which the user is currently reading electronic text.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: July 12, 2022
    Assignee: International Business Machines Corporation
    Inventors: Reinhold Geiselhart, Frank Küster, Vassil Radkov Dimov, Zalina Baysarova, Iliyana Ivanova
  • Patent number: 11380214
    Abstract: Aspects of the present disclosure relate to enhancing reading retention of users reading electronic text. A set of user data associated with a user currently reading electronic text on a device is received, the set of user data indicative of a reading retention of the user. The set of user data is analyzed to determine whether a retention enhancement action should be issued. In response to a determination that a retention enhancement action should be issued, the retention enhancement action is issued at the device on which the user is currently reading electronic text.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: July 5, 2022
    Assignee: International Business Machines Corporation
    Inventors: Reinhold Geiselhart, Frank Küster, Vassil Radkov Dimov, Zalina Baysarova, Iliyana Ivanova
  • Patent number: 11289114
    Abstract: A content reproducer according to the present disclosure includes a sound collector configured to collect a speech, and a controller configured to obtain speech input direction information about the speech and determine a content output direction based on the speech input direction information. Alternatively, a content reproducer according to the present disclosure includes a communicator configured to obtain speech input direction information, and a controller configured to determine a content output direction based on the speech input direction information.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: March 29, 2022
    Assignee: YAMAHA CORPORATION
    Inventor: Akihiko Suyama
  • Patent number: 11210964
    Abstract: A learning tool and method are disclosed. The method, when executed by a computing device comprising a display, an image capturing device, and a processor, causes the computing device to perform the steps of: generating an interactive first visual cue; displaying the visual cue in a visual cue area on the display; capturing real-time footage of a user using the image capturing device while the user interacts with the computing device, and displaying the real-time footage of the user in a video footage area on the display; generating interactive visual content associated with the first visual cue; and displaying the interactive visual content in a visual display area of the display.
    Type: Grant
    Filed: December 6, 2017
    Date of Patent: December 28, 2021
    Assignee: KINEPHONICS IP PTY LIMITED
    Inventor: Anna Gill
  • Patent number: 11139066
    Abstract: The present disclosure relates to methods and tools for enhancing cognition in an individual. The methods involve presenting to the individual multiple sets of stimuli. Each set of the multiple sets contains two or more stimuli, and at least one set of the multiple sets contains a target stimulus. The method then receives an input from the individual and informs the individual as to whether the input is a correct response. The methods encompass iterations of stimuli presentation, receiving of the input, and, lastly, generation of feedback to the individual until the individual learns and retains what the target stimulus is.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: October 5, 2021
    Assignee: The Regents of the University of California
    Inventors: Etienne de Villers-Sidani, Xiaoming Zhou, Jyoti Mishra-Ramanathan, Michael Merzenich, Adam Gazzaley
  • Patent number: 10984676
    Abstract: A method of facilitating learning of correspondence between one or more letters and one or more speech characteristics, is disclosed. The method may include receiving, using a processing device, at least one letter indicator corresponding to at least one letter. Further, the method may include analyzing, using the processing device, the at least one letter indicator. Further, the method may include identifying, using the processing device, at least one orientation indicator corresponding to the at least one letter indicator based on the analyzing. Further, the method may include displaying, using a display device, the at least one letter with the at least one orientation in relation to a reference axis based on the at least one orientation indicator. Further, the displaying of the at least one letter may be representative of at least one speech characteristic associated with the at least one letter.
    Type: Grant
    Filed: January 11, 2019
    Date of Patent: April 20, 2021
    Inventor: Yoonsung Cho
  • Patent number: 10928967
    Abstract: An information processing apparatus includes: a touch panel; a memory; a first processor coupled to the memory and the first processor configured to: acquire coordinates of touch input in an input surface of the touch panel; determine a direction of the touch input and a movement distance from a start point to an end point of the touch input; and determine an operation content with respect to the touch panel based on at least one of the direction and the movement distance of the touch input.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: February 23, 2021
    Assignee: FUJITSU COMPONENT LIMITED
    Inventor: Kenichi Fujita
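A minimal Python sketch of the determination described above: direction and movement distance are derived from the start and end coordinates of a touch input. The four-way direction rule is an illustrative assumption.

```python
import math

def classify_touch(start, end):
    """Return (direction, distance) for a touch from start (x, y) to end (x, y)."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    distance = math.hypot(dx, dy)
    if abs(dx) >= abs(dy):
        direction = "right" if dx >= 0 else "left"
    else:
        direction = "down" if dy >= 0 else "up"
    return direction, distance

print(classify_touch((100, 200), (160, 210)))  # ('right', ~60.8)
```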
  • Patent number: 10909868
    Abstract: This disclosure generally covers systems and methods that provide guidance to create an electronic survey. In some embodiments, the systems and methods identify and provide a suggested survey topic—with a corresponding option to create an electronic survey—based on user input. In some embodiments, the systems and methods identify and provide one or more suggested electronic survey questions based on user input. In such embodiments, the systems and methods provide, for example, components of suggested electronic survey questions, previously composed and benchmarking electronic survey questions, or suggested revisions to electronic survey questions. In addition, the systems and methods can provide entire premade electronic surveys based on receiving user input from a survey administrator.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: February 2, 2021
    Assignee: QUALTRICS, LLC
    Inventors: Jared Smith, Milind Kopikare, Daryl R Pinkal, Oliver M Hall, Daan Lindhout
  • Patent number: 10796602
    Abstract: A machine-delivered aural cloze exercise makes use of natural, connected speech and allows for a portion of the audio to be selected and obfuscated during playback, creating an aural cloze portion. The aural cloze portion is extended beyond its natural length an effective amount to make the exercise clear to the user. If the audio is accompanied by video, the video is extended uniformly during the aural cloze portion, and optionally, can also be obfuscated during the aural cloze portion.
    Type: Grant
    Filed: May 27, 2019
    Date of Patent: October 6, 2020
    Inventor: John Nicholas DuQuette
  • Patent number: 10786184
    Abstract: A method for determining hearing thresholds in the absence of pure-tone testing includes an adaptive speech recognition test which includes a calibrated item pool, each item having a difficulty value (di) and associated standard error (se). Test items are administered to a group of individuals having mild to moderately severe hearing loss to obtain responses. The responses are subjected to multiple regression statistical methods to develop an equation for converting an individual's test score to hearing threshold values at several frequencies. The adaptive speech recognition test can be administered to an individual and their score can be converted into hearing thresholds at each of the several frequencies by use of the equation.
    Type: Grant
    Filed: November 2, 2017
    Date of Patent: September 29, 2020
    Assignee: Rochester Institute of Technology
    Inventors: Joseph H. Bochner, Wayne M. Garrison
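A minimal Python sketch of the final conversion step: a per-frequency linear regression equation maps the adaptive test score to a hearing threshold. The coefficients below are invented for illustration and are not the published regression.

```python
COEFFS = {          # frequency (Hz): (slope, intercept), hypothetical values
    500:  (-0.50, 60.0),
    1000: (-0.55, 65.0),
    2000: (-0.60, 70.0),
    4000: (-0.65, 80.0),
}

def thresholds_from_score(score: float) -> dict:
    """Return estimated hearing thresholds (dB HL) at each frequency."""
    return {f: round(a * score + b, 1) for f, (a, b) in COEFFS.items()}

print(thresholds_from_score(40.0))
# {500: 40.0, 1000: 43.0, 2000: 46.0, 4000: 54.0}
```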
  • Patent number: 10783873
    Abstract: Systems and methods for identifying a person's native language are presented. A native language identification system, comprising a plurality of artificial neural networks, such as time delay deep neural networks, is provided. Respective artificial neural networks of the plurality of artificial neural networks are trained as universal background models, using separate native language and non-native language corpora. The artificial neural networks may be used to perform voice activity detection and to extract sufficient statistics from the respective language corpora. The artificial neural networks may use the sufficient statistics to estimate respective T-matrices, which may in turn be used to extract respective i-vectors. The artificial neural networks may use i-vectors to generate a multilayer perceptron model, which may be used to identify a person's native language, based on an utterance by the person in his or her non-native language.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: September 22, 2020
    Assignee: Educational Testing Service
    Inventors: Yao Qian, Keelan Evanini, Patrick Lange, Robert A. Pugh, Rutuja Ubale
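A minimal Python (NumPy) sketch of the last stage only: scoring an extracted i-vector against per-language models. Cosine similarity against language-mean i-vectors stands in here for the multilayer perceptron described in the abstract, and all vectors are invented.

```python
import numpy as np

LANGUAGE_MEANS = {                      # hypothetical 4-dimensional i-vector means
    "Mandarin": np.array([0.9, 0.1, 0.0, 0.2]),
    "Spanish":  np.array([0.1, 0.8, 0.3, 0.0]),
    "Hindi":    np.array([0.0, 0.2, 0.9, 0.1]),
}

def identify_native_language(ivector: np.ndarray) -> str:
    """Pick the language whose mean i-vector is closest in cosine similarity."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(LANGUAGE_MEANS, key=lambda lang: cosine(ivector, LANGUAGE_MEANS[lang]))

print(identify_native_language(np.array([0.2, 0.7, 0.4, 0.1])))  # Spanish
```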
  • Patent number: 10637981
    Abstract: Each user of a telecommunications system may speak and record their own name, in their own voice, and a recording of their spoken name may subsequently be accessed by or delivered to other users of the system, thereby facilitating communication between users by enabling users to better know how to pronounce the names of other users. A user may listen to the recorded spoken name of another user before placing a call to another user. When a user joins a conference call, their spoken name may be announced to other users (attendees) already in the call. A user joining a conference call may listen to the recorded spoken names of attendees in the call. A button on users' phones may invoke these features.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: April 28, 2020
    Assignee: UNIFY GMBH & CO. KG
    Inventor: Cansu Manav
  • Patent number: 10446056
    Abstract: An audiovisual cueing system includes a visual game focusing on the fifteen vowel sounds of American English. Players take spoken turns corresponding with a sound-based word pattern determined by cards in play. Each card includes a color border, image, and featured word. The stressed vowel sounds in the color and object guide players to use the same sound in the underlined part of the featured word despite different spelling patterns. Players compare colors on cards in hand with a discard pile card, and if there is a match, the matched card is discarded and six corresponding words spoken in succession (e.g., “blue moon soon, blue moon June”). The act of speaking these words in succession provides a moment of learning and practice that benefits the player, while the game objective (winning) compels the learner to persist. The first player to discard all cards in hand is awarded points or wins.
    Type: Grant
    Filed: March 4, 2016
    Date of Patent: October 15, 2019
    Assignee: English Language Training Solutions LLC
    Inventors: Karen Ann Taylor, Shirley Thompson, Laura McIndoo
  • Patent number: 10427295
    Abstract: A method for toy robot programming, the toy robot including a set of sensors, the method including, at a user device remote from the toy robot: receiving sensor measurements from the toy robot during physical robot manipulation; in response to detecting a programming trigger event, automatically converting the sensor measurements into a series of puppeted programming inputs; and displaying graphical representations of the set of puppeted programming inputs on a programming interface application on the user device.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: October 1, 2019
    Assignee: Play-i, Inc.
    Inventors: Saurabh Gupta, Vikas Gupta
  • Patent number: 10339899
    Abstract: A character string display method includes: acquiring character string data, the character string data being data corresponding to a to-be-displayed character string, and the character string including at least one character; analyzing the character string data to generate an analysis result, the analysis result including number of digit information related to the character string and digit sequence information of each character; and displaying, digit by digit according to the digit sequence information and the number of digit information, a picture set corresponding to each character, the picture set including at least one picture.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: July 2, 2019
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Yulong Wang
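A minimal Python sketch of the digit-by-digit display logic described above: the character string is analyzed for its digit count and per-character sequence, and a picture set is looked up for each character. The picture table and the print-based rendering stand-in are illustrative assumptions.

```python
PICTURE_SETS = {"1": ["one.png"], "2": ["two.png"], "5": ["five.png", "five_alt.png"]}  # hypothetical

def analyze(character_string):
    """Return the digit count and (position, character) sequence for the string."""
    return len(character_string), list(enumerate(character_string))

def display(character_string):
    count, sequence = analyze(character_string)
    print(f"{count} digit(s) to display")
    for position, character in sequence:
        pictures = PICTURE_SETS.get(character, [f"{character}.png"])
        print(f"digit {position}: show {pictures}")

display("125")
```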
  • Patent number: 10304354
    Abstract: A machine-delivered aural cloze exercise makes use of natural, connected speech and allows for a portion of the audio to be selected and obfuscated during playback, creating an aural cloze portion. The aural cloze portion is extended beyond its natural length an effective amount to make the exercise clear to the user. If the audio is accompanied by video, the video is extended uniformly during the aural cloze portion, and optionally, can also be obfuscated during the aural cloze portion.
    Type: Grant
    Filed: May 16, 2016
    Date of Patent: May 28, 2019
    Inventor: John Nicholas DuQuette
  • Patent number: 10276063
    Abstract: A multiplication wheel is disclosed herein. One selects a number on a first wheel, selects a number on a second wheel, and aligns a tab of a third wheel with that of the first wheel. The third, top wheel has a plurality of portals that are covered by numbers. One selects the cover bearing one of the numbers in the multiplication problem, leading to the product of the two selected and aligned numbers from the first and second wheels (one of which is redundantly uncovered on the top wheel), and thus finds the product situated beneath, showing through from the second wheel.
    Type: Grant
    Filed: November 16, 2015
    Date of Patent: April 30, 2019
    Inventor: Engracio Usi
  • Patent number: 10276059
    Abstract: A method of facilitating learning of speech sounds is disclosed. The method may include generating, using a processor, a first plurality of sound-letter cards corresponding to a first set of phonemes. Further, each sound-letter card may include a speech sound and a spelling pattern. Furthermore, the generating may be based on one or more of a first criterion and a second criterion. According to the first criterion, the spelling pattern corresponding to a speech sound of a letter may include letters corresponding to a phoneme associated with the letter followed by schwa sound. According to the second criterion, the spelling pattern may include a plurality of letters, located at an onset position, corresponding to a phoneme associated with the speech sound. The method may further include displaying, using a display device, one or more of the first plurality of sound-letter cards based on a predetermined lesson plan.
    Type: Grant
    Filed: October 18, 2016
    Date of Patent: April 30, 2019
    Inventor: Yoonsung Cho
  • Patent number: 10271006
    Abstract: A novel method of manually controlling concurrent audio data recording and playback machines using an electromechanical momentary switch, an electronic device capable of detecting changes in the state of said electromechanical momentary switch, an audio data recording device, and an audio data playback device, whereby said electronic device will, upon detecting each change of state in said electromechanical momentary switch, cause said audio data recording device to stop recording the current audio data sample and start recording a new audio data sample, and cause said audio data playback device to play back the audio data sample whose recording was just stopped.
    Type: Grant
    Filed: May 10, 2017
    Date of Patent: April 23, 2019
    Inventor: William Glenn Wardlow
  • Patent number: 10134301
    Abstract: A toy has one or more LED light sources positioned on the toy so the light source illuminates beyond the toy. A toy to be worn on the human finger comprises a body that includes an anchoring portion for receiving or locating a finger or fingers. A finger puppet toy serves as at least one of a reading tool and a light source, or as an enhancement of other toys or writings. An LED or other light source is operated on the toy to interact with photo-luminescent ink and other inks. A photo-luminescent ink and other inks are pre-printed or included in a decoration on the surface of another item such as a book or other toy. The toy is a reading tool or a light source or an enhancement of other toys or writings. The LED light sources include a black light and may include other LED lights of other colors. The lights are connected to a circuit board and an integrated power source, which are connected to a switch encased on the toy.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: November 20, 2018
    Inventor: Hannah Faith Silver
  • Patent number: 10109217
    Abstract: A speech assessment device and method for a multisyllabic-word learning machine, and a method for visualizing continuous audio are provided. By performing the step of starting the assessment mode, the step of selecting words to be assessed, the step of choosing to play or record, the step of recording, the step of visualization (including the step of picking out fundamental frequency, the step of defining analysis point, the step of transforming polygonal lines, and the step of simplifying the polygonal lines), the step of repeating, and the step of assessment, the speech assessment device and method for a multisyllabic-word learning machine are capable of providing assistance in oral language learning, and capable of rehabilitating patients with hearing impairment through visual aids.
    Type: Grant
    Filed: March 27, 2016
    Date of Patent: October 23, 2018
    Assignee: CAPICLOUD TECHNOLOGY LIMITED
    Inventor: Ya-Mei Tseng