Spelling, Phonics, Word Recognition, Or Sentence Formation Patents (Class 434/167)
-
Patent number: 11925875
Abstract: An interactive electronic toy and a method for providing a challenge to a user using the toy. The toy includes a housing and a plurality of buttons, each button associated with at least one illuminator and with at least one pressure sensor. The method includes receiving a selection of a specific challenge, said specific challenge being selected from a challenge repository associated with the interactive electronic toy, and providing to the user instructions for completing said specific challenge. The method further includes receiving a challenge response from the user, in which the user responds to said specific challenge by at least one of moving the housing of the interactive electronic toy and depressing at least one of the plurality of buttons, and providing feedback to the user based on said received challenge response.
Type: Grant
Filed: May 27, 2020
Date of Patent: March 12, 2024
Assignee: FLYCATCHER CORP LTD
Inventors: Shay Chen, Shachar Limor
-
Patent number: 11925416
Abstract: A method of performing an eye examination test for examining eyes of a user, said method using a computing device as well as an input tool, wherein said computing device comprises a screen and comprises a camera unit arranged for capturing images, said method comprising the steps of capturing, by said camera unit of said computing device, at least one image of a human face of said user facing said screen, detecting, by said computing device, in said at least one captured image, both pupils of said human face, determining, by said computing device, a distance of said user to said screen based on a predetermined distance between pupils of a user, a distance between said detected pupils in said at least one captured image, and a focal length of said camera unit corresponding to said at least one captured image, and performing, by said computing device in combination with said input tool, said eye examination test using said determined distance.
Type: Grant
Filed: July 20, 2018
Date of Patent: March 12, 2024
Assignee: EASEE HEALTH B.V.
Inventors: Yves Franco Diano Maria Prevoo, Elly Onesmo Nkya
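The distance determination described above follows from the pinhole-camera similar-triangles relation. A minimal sketch of that step (the function name and the 63 mm average interpupillary distance are illustrative assumptions, not taken from the patent):

```python
def estimate_viewing_distance_mm(focal_length_px, pupil_gap_px, real_ipd_mm=63.0):
    """Estimate user-to-screen distance from one captured image.

    Pinhole model: real_ipd / distance = pupil_gap_px / focal_length_px,
    so distance = focal_length_px * real_ipd_mm / pupil_gap_px.
    """
    return focal_length_px * real_ipd_mm / pupil_gap_px

# e.g. a 1000 px focal length and a 90 px detected pupil gap
# place the face about 700 mm from the screen
```

The same relation holds in any consistent unit, as long as the focal length and pupil gap are both expressed in pixels of the same image.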
-
Patent number: 11921914
Abstract: The present disclosure relates to methods and systems for providing virtual environments that are responsively adaptable to users' characteristics. Embodiments provide for identifying a virtual action to be performed by a virtual representation of a patient within a virtual environment. The virtual action corresponds to a physical action by a physical limb of the patient in the real world. In embodiments, the virtual action is required to be performed to a first target area within the virtual environment. Embodiments determine that the patient has at least one limitation that limits the patient performing the physical action. A determination is made of whether the patient has performed the physical action to at least an associated physical threshold. The virtual environment is caused to adapt in order to allow the virtual action to be performed in response to the determination that the patient has performed the physical action to the physical threshold.
Type: Grant
Filed: September 29, 2022
Date of Patent: March 5, 2024
Assignee: Neuromersive, Inc.
Inventor: Veena Somareddy
-
Patent number: 11915612
Abstract: The disclosed embodiments relate to improved learning methods and systems incorporating multisensory feedback. In some embodiments, virtual puzzle pieces represented by letters, numbers, or words, may animate in conjunction with phonetic sounds (e.g., as to letters or letter combinations) and pronunciation readings (e.g., as to words and numbers) when selected by a user. The animations and audio soundings may be coordinated to better inculcate the correspondence between graphical icons and corresponding vocalizations in the user.
Type: Grant
Filed: May 31, 2021
Date of Patent: February 27, 2024
Assignee: Originator Inc.
Inventors: Emily Modde, Joshua Ruthnick, Joseph Ghazal, Rex Ishibashi
-
Patent number: 11842718
Abstract: An unambiguous phonics system (UPS) is capable of presenting text in a format with unambiguous pronunciation. The system can translate input text written in a given language (e.g., English) into a UPS representation of the text written in a UPS alphabet. A unique UPS grapheme can be used to represent each unique grapheme-phoneme combination in the input text. Thus, each letter of the input text is represented in the UPS spelling and each letter of the UPS spelling unambiguously indicates the phoneme used. For all the various grapheme-phoneme combinations for a given input grapheme, the corresponding UPS graphemes can be constructed to have visual similarity with the given input grapheme, thus easing an eventual transition from UPS spelling to traditional spelling. The UPS can include translation, complexity scoring, word/phoneme-grapheme searching, and other modules. The UPS can also include techniques to provide efficient, level-based training of the UPS alphabet.
Type: Grant
Filed: December 10, 2020
Date of Patent: December 12, 2023
Assignee: TINYIVY, INC.
Inventor: Zachary Silverzweig
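The core idea above, one distinct output grapheme per grapheme-phoneme pair, reduces to a lookup table. A sketch with invented substitute characters (the patent's actual UPS alphabet is not reproduced here, and the phoneme labels are simplified):

```python
# Hypothetical mapping: (input grapheme, phoneme) -> UPS grapheme.
# Each variant stays visually close to its source letter to ease the
# later transition back to traditional spelling.
UPS_MAP = {
    ("c", "k"): "c",    # hard c, as in "cat"
    ("c", "s"): "ç",    # soft c, as in "cent"
    ("a", "æ"): "a",    # short a
    ("a", "eɪ"): "ā",   # long a
}

def to_ups(grapheme_phoneme_pairs):
    """Spell a word in UPS given its grapheme-phoneme analysis.

    Unmapped pairs pass the original grapheme through unchanged.
    """
    return "".join(UPS_MAP.get(pair, pair[0]) for pair in grapheme_phoneme_pairs)
```

With this table, "cat" keeps its traditional spelling while "cent" would surface the soft-c variant, so each UPS letter reads unambiguously.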
-
Patent number: 11823589
Abstract: Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: providing to a student user prompting data, wherein the prompting data prompts the student user to enter into an electronic teaching device voice data defining a correct pronunciation for a certain alphabet letter of a language alphabet, and wherein the prompting data prompts the student user to electronically enter handwritten data into the electronic teaching device defining a correct drawing of the certain alphabet letter; and examining response data received from the student user in response to the prompting data.
Type: Grant
Filed: July 29, 2019
Date of Patent: November 21, 2023
Assignee: International Business Machines Corporation
Inventor: Rajarshi Das
-
Patent number: 11749131
Abstract: Reading comprehension of a user can be assessed by presenting, in a graphical user interface, sequential reading text comprising a plurality of passages. The graphical user interface can alternate between (i) automatically advancing through passages of the reading text and (ii) manually advancing through passages of the reading text in response to user-generated input received via the graphical user interface. An audio narration is provided during the automatic advancing of the reading text. An audio file is recorded during the manual advancing of the reading text, which is used to automatically determine an estimated level of reading comprehension of the user. Data characterizing the determined level of reading comprehension of the user can then be provided (e.g., displayed, loaded into memory, stored on a hard drive, transmitted to a remote computing system, etc.). Related apparatus, systems, techniques and articles are also described.
Type: Grant
Filed: September 30, 2019
Date of Patent: September 5, 2023
Assignee: Educational Testing Service
Inventors: Beata Beigman Klebanov, Anastassia Loukina, Nitin Madnani, John Sabatini, Jennifer Lentini
-
Patent number: 11726556
Abstract: The present disclosure relates to methods and systems for providing virtual environments that are responsively adaptable to users' characteristics. Embodiments provide for identifying a virtual action to be performed by a virtual representation of a patient within a virtual environment. The virtual action corresponds to a physical action by a physical limb of the patient in the real world. In embodiments, the virtual action is required to be performed to a first target area within the virtual environment. Embodiments determine that the patient has at least one limitation that limits the patient performing the physical action. A determination is made of whether the patient has performed the physical action to at least an associated physical threshold. The virtual environment is caused to adapt in order to allow the virtual action to be performed in response to the determination that the patient has performed the physical action to the physical threshold.
Type: Grant
Filed: February 21, 2022
Date of Patent: August 15, 2023
Assignee: Neuromersive, Inc.
Inventor: Veena Somareddy
-
Patent number: 11645939
Abstract: A computer-based method and system for teaching educational sketching includes receiving a user-generated image created in response to a learning assignment and comparing the user-generated image to a solution image to identify one or more errors in the user-generated image relative to the solution image, where the errors may include additional image elements and missing image elements. Comparing is performed by providing a solution region corresponding to an acceptable variation from the solution image and identifying one or more errors based on a presence or absence of at least a portion of a corresponding element of the user-generated image within the solution region. If errors are identified and a non-passing status is determined, a hint is displayed to the user. The hint may be the correct elements of the user-generated image, a portion of the solution image, or a combination thereof.
Type: Grant
Filed: December 2, 2019
Date of Patent: May 9, 2023
Assignees: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA, NATIONAL SCIENCE FOUNDATION
Inventor: Nathan Delson
-
Patent number: 11633668
Abstract: A communication device, method, and computer program product prompt correct face/eye positioning to enable perceived eye-to-eye contact of a user of a video capturing device with a camera on the same device side as the viewable display device. A first communication device includes a first display device having a first graphical user interface (GUI). A first image capturing device of the first communication device has a field of view that captures a face of a first user viewing the first GUI. The first image capturing device generates a first image stream of the field of view. A controller of the communication device identifies a look target area of the first GUI proximate to the first image capturing device. The controller presents visual content on the first GUI within the look target area to prompt the first user viewing the first GUI to look towards the look target area.
Type: Grant
Filed: October 24, 2020
Date of Patent: April 25, 2023
Assignee: Motorola Mobility LLC
Inventors: Chao Ma, Miao Song, Kevin Dao, Zhicheng Fu, Vivek Kumar Tyagi
-
Patent number: 11605309
Abstract: A three-dimensional educational tool that demonstrates periodicity in science and mathematics, and a method of using same. The educational tool has a frame with a top template, such as the periodic table of the chemical elements, having a plurality of openings and a plurality of members, representing chemical elements in an example embodiment, disposed in the openings. The educational tool provides a plurality of trend blocks that rapidly change a three-dimensional display of a plurality of members by selectively raising or lowering the members to demonstrate a particular characteristic of a member in relationship to another member. The trend blocks sit under the frame and the members slide up and down accordingly.
Type: Grant
Filed: May 22, 2020
Date of Patent: March 14, 2023
Inventor: Christina Wright
-
Patent number: 11574558
Abstract: The present disclosure is technology providing a foreign language learning application with game content through a user terminal (smart phone, tablet, PC, or the like), and provides a foreign language word and its meaning together, based on a preset language in the application. Therefore, it is possible to provide a foreign language learning effect in which the user becomes a subject and learns the foreign language word while actively participating in the game, by increasing 'entertainment' and 'participation' rather than relying on a passive method of simply and repeatedly writing and listening to foreign language alphabets. In addition, it is possible to provide an application that maximizes the learning effect by repeatedly presenting a word that has been encountered once, with repetition scheduled according to an Ebbinghaus forgetting curve, so that it is not forgotten.
Type: Grant
Filed: April 12, 2022
Date of Patent: February 7, 2023
Inventor: Allis Seungeun Nam
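The repetition scheduling mentioned above can be derived from the Ebbinghaus curve R = e^(−t/S): schedule a review for the moment predicted retention falls to a chosen threshold, which has a closed form. A sketch under the usual exponential-decay assumption (the stability values and 0.8 threshold are illustrative, not from the patent):

```python
import math

def retention(hours_elapsed, stability_hours):
    """Ebbinghaus forgetting curve: R = exp(-t / S)."""
    return math.exp(-hours_elapsed / stability_hours)

def next_review_delay(stability_hours, target_retention=0.8):
    """Hours until predicted retention decays to the target:
    solve exp(-t / S) = target for t."""
    return -stability_hours * math.log(target_retention)

# Each successful review raises stability S, so review delays
# stretch out over time instead of staying fixed.
```

For example, a word with 24 hours of stability would come up for review after roughly 5.4 hours at the 0.8 threshold, and later again at longer intervals as its stability grows.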
-
Patent number: 11568139
Abstract: Implementations relate to determining a secondary language proficiency measure for a user in a secondary language (i.e., a language other than a primary language specified for the user), where determining the secondary language proficiency measure is based on past interactions of the user that are related to the secondary language. Those implementations further relate to utilizing the determined secondary language proficiency measure to increase efficiency of user interaction(s), such as interaction(s) with a language learning application and/or an automated assistant. Some of those implementations utilize the secondary language proficiency measure in automatically setting value(s), biasing automatic speech recognition, and/or determining how to render natural language output.
Type: Grant
Filed: June 18, 2021
Date of Patent: January 31, 2023
Assignee: Google LLC
Inventors: David Kogan, Wangqing Yuan, Guanglei Wang, Vincent Lacey, Wei Chen, Shaun Post
-
Patent number: 11562663
Abstract: A machine-delivered aural cloze exercise makes use of natural, connected speech and allows for a portion of the audio to be selected and obfuscated during playback, creating an aural cloze portion. The aural cloze portion is extended beyond its natural length an effective amount to make the exercise clear to the user. If the audio is accompanied by video, the video is extended uniformly during the aural cloze portion, and optionally, can also be obfuscated during the aural cloze portion.
Type: Grant
Filed: October 5, 2020
Date of Patent: January 24, 2023
Inventor: John Nicholas DuQuette
-
Monitoring system having contextual gaze-detection-based selection/triggering of caregiver functions
Patent number: 11556174
Abstract: A monitoring system incorporates, and a method and computer program product provide, contextual gaze-detection-based user selection for prompting caregiver actions by a person with ambulatory and communication deficits or limitations. A controller of the monitoring system receives at least one image stream from a camera system comprising at least one image capturing device. The camera system captures a first image stream that encompasses eyes of a person and a second image stream that at least partially encompasses surrounding object(s) and surface(s) viewable by the person. The controller determines an eye gaze direction of the person and a region of interest (ROI) aligned with the eye gaze direction. The controller identifies an object and associated caregiver action contained within the ROI. The controller communicates a notification to an output device, which presents to a second person an indication of interest by the person in the caregiver action.
Type: Grant
Filed: March 30, 2022
Date of Patent: January 17, 2023
Assignee: Motorola Mobility LLC
Inventors: Amit Kumar Agrawal, Rahul Bharat Desai
-
Patent number: 11538354
Abstract: A method for using a reading teaching aid assembly includes aligning a vowel of a vowel card with a starting cell of a starting block, the starting cell defining a starting consonant; pronouncing the starting consonant and the vowel together; sliding the vowel card along a vowel track, the starting block and an ending block extending along the vowel track; aligning the vowel with an ending cell of the ending block, the ending cell defining an ending consonant; and pronouncing the ending consonant.
Type: Grant
Filed: June 12, 2018
Date of Patent: December 27, 2022
Inventor: Kevin Scott Whitehead
-
Patent number: 11386805
Abstract: Aspects of the present disclosure relate to enhancing reading retention of users reading electronic text. A set of user data associated with a user currently reading electronic text on a device is received, the set of user data indicative of a reading retention of the user. The set of user data is analyzed to determine whether a retention enhancement action should be issued. In response to a determination that a retention enhancement action should be issued, the retention enhancement action is issued at the device on which the user is currently reading electronic text.
Type: Grant
Filed: July 1, 2019
Date of Patent: July 12, 2022
Assignee: International Business Machines Corporation
Inventors: Reinhold Geiselhart, Frank Küster, Vassil Radkov Dimov, Zalina Baysarova, Iliyana Ivanova
-
Patent number: 11380214
Abstract: Aspects of the present disclosure relate to enhancing reading retention of users reading electronic text. A set of user data associated with a user currently reading electronic text on a device is received, the set of user data indicative of a reading retention of the user. The set of user data is analyzed to determine whether a retention enhancement action should be issued. In response to a determination that a retention enhancement action should be issued, the retention enhancement action is issued at the device on which the user is currently reading electronic text.
Type: Grant
Filed: February 19, 2019
Date of Patent: July 5, 2022
Assignee: International Business Machines Corporation
Inventors: Reinhold Geiselhart, Frank Küster, Vassil Radkov Dimov, Zalina Baysarova, Iliyana Ivanova
-
Patent number: 11289114
Abstract: A content reproducer according to the present disclosure includes a sound collector configured to collect a speech, and a controller configured to obtain speech input direction information about the speech and determine a content output direction based on the speech input direction information. Alternatively, a content reproducer according to the present disclosure includes a communicator configured to obtain speech input direction information, and a controller configured to determine a content output direction based on the speech input direction information.
Type: Grant
Filed: May 30, 2019
Date of Patent: March 29, 2022
Assignee: YAMAHA CORPORATION
Inventor: Akihiko Suyama
-
Patent number: 11210964
Abstract: A learning tool and method are disclosed. The method, when executed by a computing device comprising a display, an image capturing device, and a processor, causes the computing device to perform the steps of: generating an interactive first visual cue; displaying the visual cue in a visual cue area on the display; capturing real time footage of a user using the image capturing device while the user interacts with the computing device, and displaying the real time footage of the user in a video footage area on the display; generating interactive visual content associated with the first visual cue; and displaying the interactive visual content in a visual display area of the display.
Type: Grant
Filed: December 6, 2017
Date of Patent: December 28, 2021
Assignee: KINEPHONICS IP PTY LIMITED
Inventor: Anna Gill
-
Patent number: 11139066
Abstract: The present disclosure relates to methods and tools for enhancing cognition in an individual. The methods involve presenting to the individual multiple sets of stimuli. Each set of the multiple sets contains two or more stimuli, and at least one set of the multiple sets contains a target stimulus. The method then receives an input from the individual, and informs the individual as to whether the input is a correct response. The methods encompass iterations of stimuli presentation, receiving of the input, and lastly, generation of feedback to the individual until the individual learns and retains what the target stimulus is.
Type: Grant
Filed: May 29, 2020
Date of Patent: October 5, 2021
Assignee: The Regents of the University of California
Inventors: Etienne de Villers-Sidani, Xiaoming Zhou, Jyoti Mishra-Ramanathan, Michael Merzenich, Adam Gazzaley
-
Patent number: 10984676
Abstract: A method of facilitating learning of correspondence between one or more letters and one or more speech characteristics is disclosed. The method may include receiving, using a processing device, at least one letter indicator corresponding to at least one letter. Further, the method may include analyzing, using the processing device, the at least one letter indicator. Further, the method may include identifying, using the processing device, at least one orientation indicator corresponding to the at least one letter indicator based on the analyzing. Further, the method may include displaying, using a display device, the at least one letter with the at least one orientation in relation to a reference axis based on the at least one orientation indicator. Further, the displaying of the at least one letter may be representative of at least one speech characteristic associated with the at least one letter.
Type: Grant
Filed: January 11, 2019
Date of Patent: April 20, 2021
Inventor: Yoonsung Cho
-
Patent number: 10928967
Abstract: An information processing apparatus includes: a touch panel; a memory; and a first processor coupled to the memory, the first processor configured to: acquire coordinates of touch input in an input surface of the touch panel; determine a direction of the touch input and a movement distance from a start point to an end point of the touch input; and determine an operation content with respect to the touch panel based on at least one of the direction and the movement distance of the touch input.
Type: Grant
Filed: November 12, 2019
Date of Patent: February 23, 2021
Assignee: FUJITSU COMPONENT LIMITED
Inventor: Kenichi Fujita
-
Patent number: 10909868
Abstract: This disclosure generally covers systems and methods that provide guidance to create an electronic survey. In some embodiments, the systems and methods identify and provide a suggested survey topic — with a corresponding option to create an electronic survey — based on user input. In some embodiments, the systems and methods identify and provide one or more suggested electronic survey questions based on user input. In such embodiments, the systems and methods provide, for example, components of suggested electronic survey questions, previously composed and benchmarking electronic survey questions, or suggested revisions to electronic survey questions. In addition, the systems and methods can provide entire premade electronic surveys based on receiving user input from a survey administrator.
Type: Grant
Filed: July 6, 2020
Date of Patent: February 2, 2021
Assignee: QUALTRICS, LLC
Inventors: Jared Smith, Milind Kopikare, Daryl R Pinkal, Oliver M Hall, Daan Lindhout
-
Patent number: 10796602
Abstract: A machine-delivered aural cloze exercise makes use of natural, connected speech and allows for a portion of the audio to be selected and obfuscated during playback, creating an aural cloze portion. The aural cloze portion is extended beyond its natural length an effective amount to make the exercise clear to the user. If the audio is accompanied by video, the video is extended uniformly during the aural cloze portion, and optionally, can also be obfuscated during the aural cloze portion.
Type: Grant
Filed: May 27, 2019
Date of Patent: October 6, 2020
Inventor: John Nicholas DuQuette
-
Patent number: 10786184
Abstract: A method for determining hearing thresholds in the absence of pure-tone testing includes an adaptive speech recognition test which includes a calibrated item pool, each item having a difficulty value (di) and associated standard error (se). Test items are administered to a group of individuals having mild to moderately severe hearing loss to obtain responses. The responses are subjected to multiple regression statistical methods to develop an equation for converting an individual's test score to hearing threshold values at several frequencies. The adaptive speech recognition test can be administered to an individual and their score can be converted into hearing thresholds at each of the several frequencies by use of the equation.
Type: Grant
Filed: November 2, 2017
Date of Patent: September 29, 2020
Assignee: Rochester Institute of Technology
Inventors: Joseph H. Bochner, Wayne M. Garrison
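The score-to-threshold conversion described above is a per-frequency regression equation. A sketch of that final step with made-up coefficients — the patent's fitted values are not given in the abstract, so every number below is a placeholder:

```python
# Hypothetical regression coefficients (intercept, slope) per frequency (Hz):
# threshold_dB = intercept + slope * test_score, fitted separately per band.
COEFFS = {
    500:  (95.0, -0.60),
    1000: (98.0, -0.65),
    2000: (100.0, -0.70),
    4000: (105.0, -0.75),
}

def thresholds_from_score(score):
    """Convert an adaptive speech-recognition test score into
    estimated hearing thresholds (dB HL) at several frequencies.

    Higher test scores map to lower (better) thresholds via the
    negative slopes above.
    """
    return {hz: round(b0 + b1 * score, 1) for hz, (b0, b1) in COEFFS.items()}
```

In the real method these coefficients would come from a multiple-regression fit over the calibration group's responses, not from hand-chosen values.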
-
Patent number: 10783873
Abstract: Systems and methods for identifying a person's native language are presented. A native language identification system, comprising a plurality of artificial neural networks, such as time delay deep neural networks, is provided. Respective artificial neural networks of the plurality of artificial neural networks are trained as universal background models, using separate native language and non-native language corpora. The artificial neural networks may be used to perform voice activity detection and to extract sufficient statistics from the respective language corpora. The artificial neural networks may use the sufficient statistics to estimate respective T-matrices, which may in turn be used to extract respective i-vectors. The artificial neural networks may use i-vectors to generate a multilayer perceptron model, which may be used to identify a person's native language, based on an utterance by the person in his or her non-native language.
Type: Grant
Filed: December 17, 2018
Date of Patent: September 22, 2020
Assignee: Educational Testing Service
Inventors: Yao Qian, Keelan Evanini, Patrick Lange, Robert A. Pugh, Rutuja Ubale
-
Patent number: 10637981
Abstract: Each user of a telecommunications system may speak and record their own name, in their own voice, and a recording of their spoken name may subsequently be accessed by or delivered to other users of the system, thereby facilitating communication between users by enabling users to better know how to pronounce the names of other users. A user may listen to the recorded spoken name of another user before placing a call to another user. When a user joins a conference call, their spoken name may be announced to other users (attendees) already in the call. A user joining a conference call may listen to the recorded spoken names of attendees in the call. A button on users' phones may invoke these features.
Type: Grant
Filed: November 15, 2018
Date of Patent: April 28, 2020
Assignee: UNIFY GMBH & CO. KG
Inventor: Cansu Manav
-
Patent number: 10446056
Abstract: An audiovisual cueing system includes a visual game focusing on the fifteen vowel sounds of American English. Players take spoken turns corresponding with a sound-based word pattern determined by cards in play. Each card includes a color border, image, and featured word. The stressed vowel sounds in the color and object guide players to use the same sound in the underlined part of the featured word despite different spelling patterns. Players compare colors on cards in hand with a discard pile card, and if there is a match, the matched card is discarded and six corresponding words spoken in succession (e.g., "blue moon soon, blue moon June"). The act of speaking these words in succession provides a moment of learning and practice that benefits the player, while the game objective (winning) compels the learner to persist. The first player to discard all cards in hand is awarded points or wins.
Type: Grant
Filed: March 4, 2016
Date of Patent: October 15, 2019
Assignee: English Language Training Solutions LLC
Inventors: Karen Ann Taylor, Shirley Thompson, Laura McIndoo
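The turn mechanic above — compare border colors in hand against the discard pile, discard on a match, win on an empty hand — reduces to a small state check. A sketch with invented card fields (the real cards also carry an image and underlined spelling not modeled here):

```python
from dataclasses import dataclass

@dataclass
class Card:
    color: str          # border color cueing the vowel sound, e.g. "blue"
    featured_word: str  # word sharing that stressed vowel, e.g. "moon"

def matching_cards(hand, discard_top):
    """Cards the player may discard this turn (same border color)."""
    return [card for card in hand if card.color == discard_top.color]

def has_won(hand):
    """The first player to empty their hand wins."""
    return len(hand) == 0
```

On a match the player would then speak the six-word pattern aloud ("blue moon soon, blue moon June"), which is the learning step the game engine cannot check for them.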
-
Patent number: 10427295
Abstract: A method for toy robot programming, the toy robot including a set of sensors, the method including, at a user device remote from the toy robot: receiving sensor measurements from the toy robot during physical robot manipulation; in response to detecting a programming trigger event, automatically converting the sensor measurements into a series of puppeted programming inputs; and displaying graphical representations of the set of puppeted programming inputs on a programming interface application on the user device.
Type: Grant
Filed: June 28, 2017
Date of Patent: October 1, 2019
Assignee: Play-i, Inc.
Inventors: Saurabh Gupta, Vikas Gupta
-
Patent number: 10339899
Abstract: A character string display method includes: acquiring character string data, the character string data being data corresponding to a to-be-displayed character string, and the character string including at least one character; analyzing the character string data to generate an analysis result, the analysis result including number-of-digit information related to the character string and digit sequence information of each character; and displaying, digit by digit according to the digit sequence information and the number-of-digit information, a picture set corresponding to each character, the picture set including at least one picture.
Type: Grant
Filed: September 28, 2016
Date of Patent: July 2, 2019
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventor: Yulong Wang
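The analyze-then-display flow above can be sketched as: count the characters, record each character's digit position, then look up a picture set per character. The picture-naming scheme below is an assumption for illustration:

```python
def analyze_string(s):
    """Produce the number-of-digit information (character count) and the
    digit sequence information (position, character) for a string."""
    return {"digit_count": len(s), "digits": list(enumerate(s))}

def picture_sets(s):
    """Map each character, digit by digit, to a (hypothetical) picture set.

    A real implementation would resolve each character to one or more
    stored images; here a single filename stands in for the set.
    """
    analysis = analyze_string(s)
    return [(pos, f"pic_{ch}.png") for pos, ch in analysis["digits"]]
```

The display step would then render each picture set in digit order, which is what makes the per-character position information necessary.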
-
Patent number: 10304354
Abstract: A machine-delivered aural cloze exercise makes use of natural, connected speech and allows for a portion of the audio to be selected and obfuscated during playback, creating an aural cloze portion. The aural cloze portion is extended beyond its natural length an effective amount to make the exercise clear to the user. If the audio is accompanied by video, the video is extended uniformly during the aural cloze portion, and optionally, can also be obfuscated during the aural cloze portion.
Type: Grant
Filed: May 16, 2016
Date of Patent: May 28, 2019
Inventor: John Nicholas DuQuette
-
Patent number: 10276063
Abstract: A multiplication wheel is disclosed herein. One selects a number on a first wheel, selects a number on a second wheel, and aligns a tab of a third wheel to that of the first wheel. The third and top wheel has a plurality of portals which are covered by numbers. One selects the cover bearing one of the numbers in the multiplication problem, leading to the product of the two selected and aligned numbers from the first and second wheels (one of which is redundantly uncovered on the top wheel), and thus finds the product situated beneath, showing through from the second wheel.
Type: Grant
Filed: November 16, 2015
Date of Patent: April 30, 2019
Inventor: Engracio Usi
-
Patent number: 10276059
Abstract: A method of facilitating learning of speech sounds is disclosed. The method may include generating, using a processor, a first plurality of sound-letter cards corresponding to a first set of phonemes. Further, each sound-letter card may include a speech sound and a spelling pattern. Furthermore, the generating may be based on one or more of a first criterion and a second criterion. According to the first criterion, the spelling pattern corresponding to a speech sound of a letter may include letters corresponding to a phoneme associated with the letter, followed by a schwa sound. According to the second criterion, the spelling pattern may include a plurality of letters, located at an onset position, corresponding to a phoneme associated with the speech sound. The method may further include displaying, using a display device, one or more of the first plurality of sound-letter cards based on a predetermined lesson plan.
Type: Grant
Filed: October 18, 2016
Date of Patent: April 30, 2019
Inventor: Yoonsung Cho
-
Patent number: 10271006
Abstract: A novel method of manually controlling concurrent audio data recording and playback machines using an electromechanical momentary switch, an electronic device capable of detecting changes in the state of said electromechanical momentary switch, an audio data recording device, and an audio data playback device, whereby said electronic device will, upon detecting each change of state in said electromechanical momentary switch, cause said audio data recording device to stop recording the current audio data sample and start recording a new audio data sample, and cause said audio data playback device to play back the audio data sample whose recording was just stopped.
Type: Grant
Filed: May 10, 2017
Date of Patent: April 23, 2019
Inventor: William Glenn Wardlow
-
Patent number: 10134301Abstract: A toy has one or more LED light sources positioned on the toy so the light source illuminates beyond the toy. A toy to be worn on the human finger comprises a body that includes an anchoring portion for receiving or locating a finger or fingers. A finger puppet toy serves as at least one of a reading tool, a light source, or an enhancement of other toys or writings. An LED or other light source is operated on the toy to interact with photo-luminescent ink and other inks. A photo-luminescent ink and other inks are pre-printed or included in a decoration on the surface of another item such as a book or other toy. The LED light sources include a black light and may include other LED lights of other colors. The lights are connected to a circuit board and an integrated power source, which are connected to a switch encased on the toy.Type: GrantFiled: January 15, 2016Date of Patent: November 20, 2018Inventor: Hannah Faith Silver
-
Patent number: 10109217Abstract: A speech assessment device and method for a multisyllabic-word learning machine, and a method for visualizing continuous audio are provided. By performing the step of starting the assessment mode, the step of selecting words to be assessed, the step of choosing to play or record, the step of recording, the step of visualization (including the step of picking out the fundamental frequency, the step of defining analysis points, the step of transforming polygonal lines, and the step of simplifying the polygonal lines), the step of repeating, and the step of assessment, the speech assessment device and method for a multisyllabic-word learning machine are capable of providing assistance in oral language learning, and of rehabilitating patients with hearing impairment through visual aids.Type: GrantFiled: March 27, 2016Date of Patent: October 23, 2018Assignee: CAPICLOUD TECHNOLOGY LIMITEDInventor: Ya-Mei Tseng
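The abstract names a "step of simplifying the polygonal lines" without specifying the algorithm. Ramer-Douglas-Peucker is one standard polyline-simplification method and is used here purely as an illustrative stand-in, applied to an invented pitch contour of (time, Hz) points:

```python
import math

def perpendicular_distance(pt, start, end):
    """Distance from pt to the line through start and end."""
    (x, y), (x1, y1), (x2, y2) = pt, start, end
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def simplify(points, epsilon):
    """Ramer-Douglas-Peucker: keep the farthest point if it deviates
    more than epsilon from the endpoint chord, and recurse."""
    if len(points) < 3:
        return points
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        left = simplify(points[:index + 1], epsilon)
        right = simplify(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

# A flat contour with one pitch peak: only the peak survives.
contour = [(0, 100), (1, 101), (2, 150), (3, 101), (4, 100)]
print(simplify(contour, 5.0))  # → [(0, 100), (2, 150), (4, 100)]
```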
-
Patent number: 10099132Abstract: Moving image information is stored in a plurality of types in accordance with different actions of the subject, and a first moving image is switched to a second moving image by inserting an interpolation image that connects a frame image at the end of the first moving image to a frame image at the beginning of the second moving image. According to the present disclosure, a wide variety of actions performed by the subject can be expressed while using live-action moving images.Type: GrantFiled: May 16, 2014Date of Patent: October 16, 2018Assignee: SEGA SAMMY CREATION INC.Inventors: Yoshihiro Yamakawa, Akihiro Suzuki, Kazuya Suzuki, Kazutomo Sambongi
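The patent does not disclose how the interpolation image connecting the first clip's last frame to the second clip's first frame is produced; a linear crossfade is the simplest plausible choice. This sketch treats frames as flat lists of pixel intensities, an assumption made only for illustration:

```python
def crossfade(frame_a, frame_b, steps):
    """Generate intermediate frames that blend frame_a into frame_b.

    Each step linearly mixes the two endpoint frames; with `steps`
    intermediate frames, the blend weight advances in equal
    increments strictly between 0 and 1.
    """
    frames = []
    for s in range(1, steps + 1):
        t = s / (steps + 1)
        frames.append([(1 - t) * a + t * b
                       for a, b in zip(frame_a, frame_b)])
    return frames

end_of_first = [0.0, 0.0]     # last frame of the first moving image
start_of_second = [1.0, 1.0]  # first frame of the second moving image
print(crossfade(end_of_first, start_of_second, 3))
```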
-
Patent number: 10071312Abstract: Moving image information is stored in a plurality of types in accordance with different actions of the subject, and a first moving image is switched to a second moving image by inserting an interpolation image that connects a frame image at the end of the first moving image to a frame image at the beginning of the second moving image. According to the present disclosure, a wide variety of actions performed by the subject can be expressed while using live-action moving images.Type: GrantFiled: May 16, 2014Date of Patent: September 11, 2018Assignee: SEGA SAMMY CREATION INC.Inventors: Yoshihiro Yamakawa, Akihiro Suzuki, Kazuya Suzuki, Kazutomo Sambongi
-
Patent number: 10074288Abstract: The disclosure describes a method of displaying text as a reading training aid to activate a reader's comprehension monitoring. In an embodiment, a user is presented with a title on a display, which disappears when text related to the title is shown to the user. After reading the text, the user must decide whether the title, which is no longer visible on the display, matched the content of the text. If a user answers incorrectly, they are presented with a waiting period before continuing. In other embodiments, the text of the paragraph is removed from the display before the user is asked to evaluate a summarization of the text.Type: GrantFiled: March 4, 2016Date of Patent: September 11, 2018Inventor: Jane Offutt
-
Patent number: 10046242Abstract: Memorization systems and methods are provided. The systems and methods include outputting, at a hardware platform, an image sequence that forms an environment to facilitate memorization and displaying a controllable object in the image sequence as continuously moving along a path, and controlling a position of the controllable object within each image of the image sequence, based on a user input. Moreover, item objects may be displayed in the image sequence as obstacles in the path to block the controllable object. The obstacles may contain information suitable for memorization. In addition, systems and methods may include outputting an image representing a negative indicator in response to the controllable object selecting an incorrect action, or outputting an image representing a positive indicator in response to the controllable object selecting a correct action.Type: GrantFiled: August 28, 2015Date of Patent: August 14, 2018Assignee: SYRIAN AMERICAN INTELLECTUAL PROPERTY (SAIP), LLCInventor: Mehyar Abbas
-
Patent number: 10010284Abstract: A system for assessing a mental health disorder in a human subject, the system comprising: a display configured to display a series of natural test images to the subject; an input by which the subject can input a response, following the display of each test image, as to whether or not the test image satisfies a predetermined categorization criterion; a control processor configured to control the display of the test images by the display, to measure the duration of time from when each test image is initially displayed to when the corresponding response is input by the subject, and to generate a set of response data including the response times in respect of each of the test images; and a data processor configured to process the set of response data and to compare the processed response data with reference data to assess whether or not the subject has, or is likely to develop, the mental health disorder.Type: GrantFiled: November 5, 2014Date of Patent: July 3, 2018Assignee: COGNETIVITY LTD.Inventors: Seyed-Mahdi Khaligh-Razavi, Sina Habibi
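The abstract leaves the comparison of processed response data with reference data unspecified. A z-score of the subject's mean response time against reference norms is one illustrative statistic; the 2.0 cutoff and every number below are invented for the example:

```python
import statistics

def assess(response_times_ms, reference_mean, reference_sd):
    """Compare a subject's mean categorization response time with
    reference data as a z-score (a hypothetical statistic; the
    patent does not disclose the actual comparison method)."""
    mean = statistics.mean(response_times_ms)
    z = (mean - reference_mean) / reference_sd
    # Flag subjects whose slowing exceeds two reference SDs.
    return {"mean_ms": mean, "z": z, "flagged": z > 2.0}

print(assess([620, 700, 680, 640], reference_mean=500, reference_sd=60))
```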
-
Patent number: 9934306Abstract: Technologies are described herein for identifying query intent from a raw query. A method for identifying intent may include repeatedly separating and merging terms of a natural language expression based on a set of rule-based transpositions of natural language terms into one or more defined terms based on predetermined naming conventions for at least one software function. Thereafter, a cluster of previous search terms related to the defined terms may be identified, and the natural language expression may be associated with the identified cluster to create intent-based cluster information.Type: GrantFiled: May 12, 2014Date of Patent: April 3, 2018Assignee: Microsoft Technology Licensing, LLCInventors: Meyyammai Subramanian, Sayed Hassan, Anneliese Creighton Wirth, Michelle Follosco Valeriano
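One pass of the separate/merge loop might look like the sketch below, which merges adjacent query words whenever their concatenation matches a defined term under a hypothetical CamelCase naming convention. The `PivotTable` example and the lookup scheme are assumptions for illustration, not the actual rules:

```python
def normalize_query(query, defined_terms):
    """Merge adjacent natural-language words into defined terms
    (e.g. 'pivot table' -> 'PivotTable') via a case-insensitive
    lookup keyed on the term's letters.

    This shows a single merge pass; the patent describes repeatedly
    separating and merging until the expression stabilizes.
    """
    lookup = {t.lower(): t for t in defined_terms}
    words = query.lower().split()
    out, i = [], 0
    while i < len(words):
        if i + 1 < len(words) and words[i] + words[i + 1] in lookup:
            out.append(lookup[words[i] + words[i + 1]])
            i += 2
        else:
            out.append(lookup.get(words[i], words[i]))
            i += 1
    return out

print(normalize_query("make a pivot table", ["PivotTable"]))
# → ['make', 'a', 'PivotTable']
```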
-
Patent number: 9916397Abstract: Embodiments relate to generating a retrieval condition for retrieving a target character string from texts by pattern matching. An aspect includes dividing a first text into words. Another aspect includes generating a converted character string by performing at least one of appending at least one character in at least one of the previous and subsequent positions of the target character string. Another aspect includes replacing at least one character of the target character string. Another aspect includes generating the retrieval condition for retrieval candidates in the words of the first text, the retrieval condition determining that a retrieval candidate matches the target character string and does not match the converted character string when the ratio of the part of the retrieval candidate that matches the converted character string and corresponds to the target character string is less than or equal to a reference frequency.Type: GrantFiled: November 9, 2016Date of Patent: March 13, 2018Assignee: International Business Machines CorporationInventors: Emiko Takeuchi, Daisuke Takuma, Hirobumi Toyoshima
-
Patent number: 9801570Abstract: Generally, a method performed by one or more processing devices includes generating a graphical user interface that when rendered on a display of the one or more processing devices renders a visual representation of an environment and a visual representation of an object in the environment; retrieving an auditory stimulus with one or more auditory attributes indicative of a location of a virtual target in the environment; receiving information specifying movement of the object in the environment; determining, based on the movement of the object, a proximity of the object to the virtual target; adjusting, based on the proximity, one or more values of the one or more auditory attributes of the auditory stimulus; and causing the one or more processing devices to play the auditory stimulus using the adjusted one or more values.Type: GrantFiled: June 22, 2012Date of Patent: October 31, 2017Assignee: Massachusetts Eye & Ear InfirmaryInventors: Daniel B. Polley, Kenneth E. Hancock
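As one concrete, purely illustrative reading of "adjusting one or more values of the one or more auditory attributes," the sketch below maps the object's proximity to the virtual target onto a volume level and the horizontal offset onto a stereo pan. The attribute names and the linear mappings are assumptions:

```python
import math

def auditory_cues(obj, target, max_dist=10.0):
    """Map object-target geometry to auditory attribute values.

    Volume rises as the object nears the target; pan points toward
    the target's horizontal offset, clamped to [-1, 1].
    """
    dist = math.hypot(target[0] - obj[0], target[1] - obj[1])
    proximity = max(0.0, 1.0 - dist / max_dist)
    pan = max(-1.0, min(1.0, (target[0] - obj[0]) / max_dist))
    return {"volume": proximity, "pan": pan}

# Object at the origin, virtual target 5 units away, up and right.
print(auditory_cues(obj=(0.0, 0.0), target=(3.0, 4.0)))
```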
-
Patent number: 9747272Abstract: A computing device is described that outputs for display at a presence-sensitive screen, a graphical keyboard having keys. The computing device receives an indication of a selection of one or more of the keys. Based on the selection the computing device determines a character string from which the computing device determines one or more candidate words. Based at least in part on the candidate words and a plurality of features, the computing device determines a spelling probability that the character string represents an incorrect spelling of at least one candidate word. The plurality of features includes a spatial model probability associated with at least one of the candidate words. If the spelling probability satisfies a threshold, the computing device outputs for display the at least one candidate word.Type: GrantFiled: March 6, 2014Date of Patent: August 29, 2017Assignee: Google Inc.Inventors: Yu Ouyang, Shumin Zhai
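A logistic combiner over per-feature scores is one plausible way to fold a spatial-model probability and other features into a single spelling probability checked against a threshold. The weights, feature names, and combiner below are illustrative assumptions, not the patented model:

```python
import math

def spelling_probability(features, weights):
    """Combine per-feature scores into a misspelling probability
    with a logistic model (a hypothetical combiner)."""
    z = sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def suggest(candidate, features, weights, threshold=0.5):
    """Output the candidate word only when the typed string is
    probably a misspelling of it (probability meets the threshold)."""
    p = spelling_probability(features, weights)
    return candidate if p >= threshold else None

weights = {"spatial_model": 3.0, "edit_distance": 2.0, "bias": -2.5}
strong = {"spatial_model": 0.9, "edit_distance": 0.4, "bias": 1.0}
print(suggest("hello", strong, weights))  # → hello
```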
-
Patent number: 9679496Abstract: Reverse Language Resonance methods are described for instructing a target language to a learner who speaks a native language. The methods may include providing to the learner a predetermined lesson comprising a lesson text of a plurality of lesson words that are exclusively in the target language. The methods may further include priming implicit memory of the learner. The methods may further include displaying the lesson text on a display and playing a recorded version of spoken words of the lesson text on an audio output while the lesson text is displayed. The methods may further include instructing the learner to perform Concurrent Triple Activity including simultaneously reading the lesson text on the display, listening to the spoken words from the audio output, and repeating the spoken words along with the recorded version into an audio input while the recorded version is playing.Type: GrantFiled: November 30, 2012Date of Patent: June 13, 2017Inventor: Arkady Zilberman
-
Patent number: 9671946Abstract: Content is displayed on a touchscreen display of a computing system such as an electronic book reader. The content is displayed according to a setting for a first attribute (e.g., level of brightness) and a setting for a second attribute (e.g., day mode or night mode). In response to sensing a motion proximate to the touchscreen, the setting for the first attribute is changed to a different value. In response to a value for the setting for the first attribute crossing a threshold value (e.g., while the motion is being performed), the setting for the second attribute is changed.Type: GrantFiled: February 6, 2014Date of Patent: June 6, 2017Assignee: RAKUTEN KOBO, INC.Inventors: Sneha Patel, Anthony O'Donoghue
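The described behavior, in which a continuous gesture changes one attribute and crossing a threshold flips a second, can be sketched as a small state holder. The attribute names and the 0.3 threshold are invented for the example:

```python
class ReaderDisplay:
    """Track a brightness setting (first attribute) and flip
    day/night mode (second attribute) when a gesture drags
    brightness across a threshold value."""

    def __init__(self, brightness=0.8, threshold=0.3):
        self.brightness = brightness
        self.threshold = threshold
        self.mode = "day" if brightness >= threshold else "night"

    def swipe_to(self, new_brightness):
        """A motion over the touchscreen continuously changes
        brightness; crossing the threshold also toggles the mode."""
        crossed = ((self.brightness >= self.threshold)
                   != (new_brightness >= self.threshold))
        self.brightness = new_brightness
        if crossed:
            self.mode = "night" if self.mode == "day" else "day"
        return self.mode

d = ReaderDisplay()
print(d.swipe_to(0.2))  # dimming past the threshold → night
```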
-
Patent number: 9582913Abstract: Various embodiments enable a computing device to perform tasks such as highlighting words in an augmented reality view that are important to a user. For example, word lists can be generated and the user, by pointing a camera of a computing device at a volume of text, can cause words from the word list within the volume of text to be highlighted in a live field of view of the camera displayed thereon. Accordingly, users can quickly identify textual information that is meaningful to them in an Augmented Reality view to aid the user in sifting through real-world text.Type: GrantFiled: September 25, 2013Date of Patent: February 28, 2017Assignee: A9.com, Inc.Inventors: Adam Wiggen Kraft, Arnab Sanat Kumar Dhua, Douglas Ryan Gray, Xiaofan Lin, Yu Lou, Sunil Ramesh, Colin Jon Taylor, David Creighton Mott
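Once a text-recognition stage has produced words with screen bounding boxes, the highlighting step reduces to set-membership filtering against the word list. The `(word, box)` token format below is a hypothetical stand-in, since the abstract does not describe the recognition pipeline:

```python
def highlight_boxes(ocr_tokens, word_list):
    """Return bounding boxes of recognized words that appear on the
    user's word list, for overlay in the live camera view.

    ocr_tokens: list of (word, box) pairs as a recognition stage
    might emit; matching is case-insensitive.
    """
    wanted = {w.lower() for w in word_list}
    return [box for word, box in ocr_tokens if word.lower() in wanted]

tokens = [("Gluten", (10, 10, 60, 24)),
          ("free", (64, 10, 90, 24)),
          ("flour", (10, 30, 55, 44))]
print(highlight_boxes(tokens, ["gluten", "sugar"]))
# → [(10, 10, 60, 24)]
```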
-
Patent number: 9472113Abstract: A computing device may provide a visual cue to items of content (for example, words in a book) synchronized with the playback of companion content (for example, audio content corresponding to the book). For example, embodiments of the present disclosure are directed to a content playback synchronization system for use with physical books (or other physical media). In an embodiment, the computing device may display a visual cue (for example, an underline, box, dot, cursor, or the like) to identify a current location in textual content of the physical book corresponding to a current output position of companion audio content. As the audio content is presented (i.e., as it “plays back”), the highlight and/or visual cue may be advanced to maintain synchronization between the output position within the audio content and a corresponding position in the physical textual content.Type: GrantFiled: February 5, 2013Date of Patent: October 18, 2016Assignee: AUDIBLE, INC.Inventors: Douglas Cho Hwang, Guy Ashley Story, Jr.
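Maintaining the visual cue amounts to mapping the current audio playback position onto a word index in the page's text. Given per-word start times (as forced alignment might supply; the timings below are invented), a binary search suffices:

```python
import bisect

def word_at(playback_time, word_start_times):
    """Return the index of the word being spoken at playback_time,
    given each word's start time (seconds) in the companion audio.
    Times before the first word map to index 0."""
    i = bisect.bisect_right(word_start_times, playback_time) - 1
    return max(i, 0)

# Start times for five words of the physical page's text.
starts = [0.0, 0.4, 0.9, 1.5, 2.2]
print(word_at(1.0, starts))  # → 2, so the cue underlines word 3
```

As playback advances, re-evaluating `word_at` on each timer tick and moving the underline or cursor to the returned index keeps the cue synchronized with the audio.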