Electrical Component Included In Teaching Means Patents (Class 434/169)
  • Patent number: 11972209
    Abstract: Systems and methods for the multifaceted analysis of a written work, such as an educational program admission essay, using machine learning, deep learning, and natural language processing. Language, relevance, structure, and flow are evaluated for an overall impactful essay. Essay content is checked to evaluate whether the author has covered the essay's essential aspects. The essay is also analyzed for an effective structure for presenting details according to the essay type. The disclosure includes data preparation for the task, the process of data tagging, feature engineering from the essay text, and a method for transfer learning and fine-tuning a language model to adapt it to the context of an essay. Finally, a process for building machine learning and deep learning models, and a technique for ensembling to use both models in combination, is disclosed. The system may provide user-adapted feedback based on a persona created from the user profile.
    Type: Grant
    Filed: November 3, 2021
    Date of Patent: April 30, 2024
    Inventors: Himanshu Maurya, Atul Verma, Ashish Shriram Tulsankar, Ashish Fernando
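    The abstract above describes ensembling a machine learning model and a deep learning model to score an essay on dimensions such as language, relevance, structure, and flow. A minimal Python sketch of that ensembling step, assuming per-dimension scores on a 0-1 scale and a fixed weighting (the patent does not fix either):

      # Minimal sketch (assumed interfaces and weights, not the patented system):
      # combine an ML score and a deep-learning score per essay dimension, then
      # average the dimensions into an overall score.
      from typing import Dict

      DIMENSIONS = ("language", "relevance", "structure", "flow")

      def ensemble_scores(ml_scores: Dict[str, float], dl_scores: Dict[str, float],
                          ml_weight: float = 0.4) -> Dict[str, float]:
          combined = {d: ml_weight * ml_scores[d] + (1 - ml_weight) * dl_scores[d]
                      for d in DIMENSIONS}
          combined["overall"] = sum(combined[d] for d in DIMENSIONS) / len(DIMENSIONS)
          return combined

      if __name__ == "__main__":
          ml = {"language": 0.72, "relevance": 0.80, "structure": 0.65, "flow": 0.70}
          dl = {"language": 0.78, "relevance": 0.74, "structure": 0.71, "flow": 0.69}
          print(ensemble_scores(ml, dl))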
  • Patent number: 11961413
    Abstract: The present invention relates to a method, system, and non-transitory computer-readable recording medium for assisting listening study. According to one aspect of the invention, there is provided a method for assisting listening study, comprising the steps of: determining a plurality of weak study sentences that a user who is provided with assessment study sentences by speech fails to understand, with reference to feedback from the user; determining at least some of the pronunciations or pronunciation combinations commonly included in the plurality of weak study sentences as weak pronunciations or pronunciation combinations that the user fails to understand; and determining a compensatory study course to be provided to the user, with reference to the determined weak pronunciations or pronunciation combinations.
    Type: Grant
    Filed: October 23, 2019
    Date of Patent: April 16, 2024
    Assignee: VITRUV INC.
    Inventors: Se Hoon Gihm, Myung Hoon Ahn, Tae Hyoung Oh, Du Seop Jung
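    The abstract above identifies weak pronunciations or pronunciation combinations as those shared by several sentences the user failed to understand. A short Python sketch, assuming the weak sentences are already available as phoneme sequences (a hypothetical representation; the patent does not prescribe one):

      # Sketch (assumed phoneme input, not the patented method): find pronunciation
      # combinations (here, phoneme bigrams) shared by many "weak" study sentences.
      from collections import Counter
      from typing import List, Tuple

      def weak_pronunciations(weak_sentences: List[List[str]],
                              min_sentence_count: int = 2) -> List[Tuple[str, str]]:
          counts = Counter()
          for phonemes in weak_sentences:
              bigrams = {tuple(phonemes[i:i + 2]) for i in range(len(phonemes) - 1)}
              counts.update(bigrams)                # count each bigram once per sentence
          return [bg for bg, n in counts.items() if n >= min_sentence_count]

      if __name__ == "__main__":
          weak = [
              ["DH", "AH", "W", "EH", "DH", "ER"],  # hypothetical phoneme strings
              ["W", "EH", "DH", "ER", "R", "IY"],
          ]
          print(weak_pronunciations(weak))          # bigrams shared by both sentences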
  • Patent number: 11527174
    Abstract: The present invention provides a system for determining a language proficiency of a user in an evaluated language. A machine learning engine may be trained using audio file variables from a plurality of audio files and human-generated scores for comprehensibility, accentedness, and intelligibility for each audio file. The system may receive an audio file from a user and determine a plurality of audio file variables from the audio file. The system may apply the audio file variables to the machine learning engine to determine a comprehensibility, an accentedness, and an intelligibility score for the user. The system may determine one or more projects and/or classes for the user based on the user's comprehensibility score, accentedness score, and/or intelligibility score.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: December 13, 2022
    Assignee: PEARSON EDUCATION, INC.
    Inventor: Masanori Suzuki
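    The final step described above maps the three predicted scores to projects or classes. A small Python sketch of that placement step alone, with hypothetical score ranges and thresholds (the trained machine learning engine itself is assumed to exist elsewhere):

      # Sketch (hypothetical thresholds; the trained ML scoring model is assumed
      # to exist elsewhere): place a user into a class from three speech scores.
      def place_user(comprehensibility: float, accentedness: float,
                     intelligibility: float) -> str:
          """Map 0-100 scores to an illustrative class placement."""
          average = (comprehensibility + accentedness + intelligibility) / 3
          if average >= 80:
              return "advanced-project"
          if average >= 60:
              return "intermediate-class"
          return "beginner-class"

      if __name__ == "__main__":
          print(place_user(85.0, 72.0, 78.0))   # -> "intermediate-class"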
  • Patent number: 11496544
    Abstract: Systems and methods for generating and managing stories and sub-stories presented to a user's client device are described. In one example embodiment, a server system communicates a portion of a first story to a first client device based on a first client device association with a user segment assigned to the first story. The server system receives a first selection communication associated with a first piece of content of the first story, accesses a second story based on the selection, and communicates a portion of the second story to the first client device.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: November 8, 2022
    Assignee: Snap Inc.
    Inventors: Maria Pavlovskaia, Evan Spiegel
  • Patent number: 11386803
    Abstract: A cognitive training method having a processor to store exercise category data which has a number of task data, each of the task data including levels of difficulty. There is included a visual display, an audio transducer, and a user interface actuated to accept user data in response to a selected task. The user data is sent from the user interface to the processor, and the processor stores and evaluates the user input data and, based upon evaluation of the user input data, adjusts a subsequent level of difficulty associated with a selected task. If user input data is above a correct data threshold, the level of difficulty may be adjusted responsive to an evaluation of the user input data.
    Type: Grant
    Filed: January 8, 2018
    Date of Patent: July 12, 2022
    Inventor: Sylvain Jean-Pierre Daniel Moreno
  • Patent number: 11343582
    Abstract: Systems and methods are described for providing subtitles based on a user's language proficiency. An illustrative method includes receiving a request to display subtitles, selecting a language for the subtitles, determining, from a user profile, a user's proficiency level in the selected language, selecting, based on the user's proficiency level in the selected language, a set of subtitles from a plurality of sets of subtitles in the selected language, wherein each respective set of subtitles corresponds to a different proficiency level in the selected language, and generating for display the selected set of subtitles.
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: May 24, 2022
    Assignee: Rovi Guides, Inc.
    Inventors: Susanto Sen, Amit Roy Choudhary
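    The selection step described above picks, from several subtitle sets in the chosen language, the one matching the user's proficiency level. A Python sketch, assuming subtitle sets keyed by (language, proficiency level) and a simple fallback to the beginner set (both assumptions, not Rovi's implementation):

      # Sketch (assumed data layout): pick the subtitle set whose proficiency
      # level matches the user's level in the chosen language.
      from typing import Dict, List, Tuple

      def select_subtitles(subtitle_sets: Dict[Tuple[str, str], List[str]],
                           user_profile: Dict[str, str],
                           language: str) -> List[str]:
          level = user_profile.get(language, "beginner")     # default if unknown
          return subtitle_sets.get((language, level),
                                   subtitle_sets[(language, "beginner")])

      if __name__ == "__main__":
          sets = {
              ("es", "beginner"): ["Hola. = Hello."],
              ("es", "advanced"): ["Hola."],
          }
          profile = {"es": "advanced"}
          print(select_subtitles(sets, profile, "es"))       # -> ["Hola."]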
  • Patent number: 11282515
    Abstract: Systems, methods, and devices of a voice-directed inspection system that supports multiple inspectors in the inspection of business assets are described. Inspection plans for large and complex business assets can involve several steps. It is advantageous to split large inspection plans into non-overlapping sections to allow multiple inspectors to perform concurrent inspections. Such sectionalizing is also useful in training new inspectors.
    Type: Grant
    Filed: August 31, 2015
    Date of Patent: March 22, 2022
    Assignee: HAND HELD PRODUCTS, INC.
    Inventors: Matthew Nichols, Alexander Nikolaus Mracna, Kurt Charles Miller, Russell Evans, Mark Koenig, Navaneetha Myaka, Bernard Kriley, Luke Sadecky, Brian L. Manuel, Lauren Meyer
  • Patent number: 11182030
    Abstract: A children's toy with capacitive touch interactivity. The children's toy generally includes a user input overlay panel and one or more capacitive touch sensors. The overlay panel may be formed from a capacitive touch conductive natural organic material such as wood. The toy can be shaped and ornamented to resemble a musical instrument, and configured to play music in response to user input applied to the user input overlay panel and sensed by the capacitive touch sensors.
    Type: Grant
    Filed: June 19, 2020
    Date of Patent: November 23, 2021
    Assignee: KIDS II HAPE JOINT VENTURE LIMITED
    Inventors: Bradford Reese, Henrik Johansson, Adam Shillito, Tsz Kin Ho, Neil Ni, Chun Chung Yeung, Qi He
  • Patent number: 11153426
    Abstract: A device and method for responding to a user's voice including an inquiry, by outputting a response to the user's voice through a speaker and providing a guide screen including a response to the user's voice.
    Type: Grant
    Filed: April 15, 2020
    Date of Patent: October 19, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Changhwan Choi, Beomseok Lee, Seoyoung Jo
  • Patent number: 11138896
    Abstract: According to one embodiment, there is provided an information display apparatus including a processor, the processor being configured to: designate at least one keyword in a text displayed in a display unit in accordance with a user operation; cause the display unit to display an image associated with the designated keyword; register the designated keyword and information of the image in a memory as data in which the designated keyword and the information of the image are associated with each other; and output a problem based on the registered keyword and the image corresponding to the registered information.
    Type: Grant
    Filed: March 19, 2018
    Date of Patent: October 5, 2021
    Assignee: CASIO COMPUTER CO., LTD.
    Inventor: Takashi Kojo
  • Patent number: 11114113
    Abstract: The present disclosure provides a system for predicting a disease state based on speech occurrences. A feature extraction module extracts a plurality of lingual features from a speech record of the speech occurrence. The lingual features are chosen based on a correlation between the lingual features and the disease state in at least a first language and a second language. The lingual features are consistent for transcripts in at least the first language and the second language. A prediction module including a trained classification model generates a prediction of the disease state for speech occurrences in at least the first language and the second language using the lingual features extracted from the speech records.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: September 7, 2021
    Assignee: LangAware, Inc.
    Inventors: Vasiliki Rentoumi, Georgios Paliouras
  • Patent number: 11030482
    Abstract: An annotation device, comprising: a display that performs sequential playback display of a plurality of images that may contain physical objects that are the subject of annotation, and a processor that acquires specific portions that have been designated within the images displayed on the display as annotation information, sets an operation time or data amount for designating the specific portions, and, at a point in time where designation of the specific portions has been completed for the operation time, a time based on the data amount, or the data amount that have been set, requests learning from an inference engine that creates an inference model by learning, using the annotation information that has been acquired up to the time of completion as training data representing a relationship between the physical object and the specific portions.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: June 8, 2021
    Assignee: OM DIGITAL SOLUTIONS
    Inventors: Toshikazu Hayashi, Zhen Li, Hisayuki Harada, Seiichiro Sakaguchi, Kazuhiko Osa, Osamu Nonaka
  • Patent number: 11004462
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing aphasia assessment. One of the methods includes receiving a recording, generating a text transcript of the recording, and generating speech quantifying and comprehension scores which can be used to determine an aphasia classification. Another method includes performing an aphasia assessment on a brain image to obtain an aphasia classification.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: May 11, 2021
    Assignee: Omniscient Neurotechnology Pty Limited
    Inventors: Michael Edward Sughrue, Stephane Philippe Doyen, Peter James Nicholas
  • Patent number: 10997367
    Abstract: In an embodiment, Applicant's method can automatically determine proficiency in a given language by tracking a user's gaze while they read a sample text. The language proficiency test includes reading sentences in a language (e.g., a language other than the user's native language). The user's, or learner's, gaze is recorded using an eye-tracking camera while they read the sample text. Applicant's method and corresponding system predict the language proficiency of the learner based on their gaze patterns. Applicant's method and corresponding system can also predict performance on specific standardized language proficiency tests such as the Michigan EPT (Michigan English Proficiency Test), TOEIC® (Test of English for International Communication®), and TOEFL® (Test of English as a Foreign Language®).
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: May 4, 2021
    Assignee: MASSACHUSETTS INSTITUTE OF TECHNOLOGY
    Inventors: Yevgeni Berzak, Boris Katz, Roger Levy
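    The abstract above predicts proficiency from gaze patterns recorded while reading. A toy Python sketch using two illustrative gaze features, mean fixation duration and regression rate, with hand-picked weights; the patented system learns this mapping from data rather than using fixed weights:

      # Toy sketch (illustrative features and weights, not the patented predictor):
      # derive a proficiency score from fixation durations and regressions.
      from typing import List, Tuple

      def gaze_proficiency(fixations: List[Tuple[int, float]]) -> float:
          """fixations: (word_index, duration_ms) in reading order; returns a 0-1 score."""
          durations = [d for _, d in fixations]
          mean_dur = sum(durations) / len(durations)
          regressions = sum(1 for (w0, _), (w1, _) in zip(fixations, fixations[1:])
                            if w1 < w0)
          regression_rate = regressions / max(len(fixations) - 1, 1)
          # Shorter fixations and fewer regressions -> higher (toy) proficiency.
          score = 1.0 - min(mean_dur / 600.0, 1.0) * 0.6 - regression_rate * 0.4
          return max(0.0, min(1.0, score))

      if __name__ == "__main__":
          print(gaze_proficiency([(0, 210), (1, 190), (2, 240), (1, 260), (3, 200)]))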
  • Patent number: 10971171
    Abstract: Arrangements involving portable devices (e.g., smartphones and tablet computers) are disclosed. One arrangement enables a content creator to select software with which that creator's content should be rendered—assuring continuity between artistic intention and delivery. Another utilizes a device camera to identify nearby subjects, and take actions based thereon. Others rely on near field chip (RFID) identification of objects, or on identification of audio streams (e.g., music, voice). Some technologies concern improvements to the user interfaces associated with such devices. For example, some arrangements enable discovery of both audio and visual content, without any user requirement to switch modes. Other technologies involve use of these devices in connection with shopping, text entry, and vision-based discovery. Still other improvements are architectural in nature, e.g., relating to evidence-based state machines, and blackboard systems. Yet other technologies concern computational photography.
    Type: Grant
    Filed: October 28, 2016
    Date of Patent: April 6, 2021
    Assignee: Digimarc Corporation
    Inventors: Bruce L. Davis, Edward B. Knudson, Geoffrey B. Rhoads, Tony F. Rodriguez, Colin P. Cornaby, Emma C. Sinclair, Eliot Rogers
  • Patent number: 10964222
    Abstract: A tool for cognitive assimilation and situational recognition training includes one or more modules for visualization and visual association, auditory association, textual association, subliminal imprinting, constructive repetition, summarization, and testing of content provided to learners. The tool enables material provided as training content to be visualized by a user, and applies subliminal imprinting techniques to reinforce the visualized training content. The visualized and subliminally imprinted training content is further reinforced with visual, auditory and textual associations to the training content, and by constructive repetition of training content for the user, together with the subliminal messaging and associations. The tool also provides material in a building block approach so that previously-learned material is additionally reinforced in subsequent introductions of additional content.
    Type: Grant
    Filed: January 15, 2018
    Date of Patent: March 30, 2021
    Inventor: Michael J. Laverty
  • Patent number: 10930167
    Abstract: An auditory projective test is provided that emphasizes a shift in focus from visual and verbal/linguistic stimuli to an examination of the phenomena of acoustic and sonic association. The design discovers a “canon” of sound stimuli that may provide psychological associations, with the aim to further inform and complement the findings of Jung's word association test. The design includes a computer software program that gathers and calculates data in Excel format. Jung's traditional Word Association test is presented alongside the sound association test. The design may include the use of digital video recording to help observe and demonstrate behavioral responses. Additionally, the design may include the addition of a digital interface that will reintroduce the measurement of certain physiological data originally used in Jung's association experiments.
    Type: Grant
    Filed: June 14, 2016
    Date of Patent: February 23, 2021
    Inventor: Jesse L. Upchurch, Jr.
  • Patent number: 10915819
    Abstract: A method is disclosed including presenting a concept to a user via one or more presentation devices and monitoring the user's response to the presentation of the concept by a sensing device. The sensing device may generate sensor data based on the monitored user's response. The method further includes determining based on the sensor data generated by the sensor that the user requires clarification of the presented concept. In response to determining that the user requires clarification of the presented concept, the method further includes identifying an analogy that is configured to clarify the presented concept and presenting the identified analogy to the user via one or more of the presentation devices.
    Type: Grant
    Filed: July 1, 2016
    Date of Patent: February 9, 2021
    Assignee: International Business Machines Corporation
    Inventors: Clifford A. Pickover, Robert J. Schloss, Komminist S. Weldemariam, Lin Zhou
  • Patent number: 10909986
    Abstract: Methods, computer program products, and systems are presented. The methods include, for instance: obtaining an input text for an output speech. The numbers of words and syllables are counted in each sentence, and a mean sentence length of the input text is calculated. Each sentence length is checked against the mean sentence length, and a variation for each sentence is calculated. For the input text, the consumability-readability score is produced as an average of the variations for all sentences in the input text. The consumability-readability score indicates the level of satisfaction for the listener of the output speech based on the input text.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: February 2, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Craig M. Trim, John M. Ganci, Jr., Anna Chaney, Stefan Van Der Stockt
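    The abstract above gives the scoring recipe almost directly: count words per sentence, compute the mean sentence length, compute each sentence's variation from the mean, and average the variations. A Python sketch, assuming word count as the sentence length and absolute deviation as the variation (the abstract leaves both open):

      # Sketch of the scoring recipe in the abstract (word count used as sentence
      # length; absolute deviation from the mean used as the "variation" - both
      # are assumptions, as the abstract does not fix them).
      import re

      def consumability_readability(text: str) -> float:
          sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
          lengths = [len(s.split()) for s in sentences]
          mean_len = sum(lengths) / len(lengths)
          variations = [abs(length - mean_len) for length in lengths]
          return sum(variations) / len(variations)   # average variation over sentences

      if __name__ == "__main__":
          sample = ("This is short. This sentence is quite a bit longer than the "
                    "first. Medium one here.")
          print(round(consumability_readability(sample), 2))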
  • Patent number: 10896765
    Abstract: A mathematical model may be trained to diagnose a medical condition of a person by processing acoustic features and language features of speech of the person. The performance of the mathematical model may be improved by appropriately selecting the features to be used with the mathematical model. Features may be selected by computing a feature selection score for each acoustic feature and each language feature, and then selecting features using the scores, such as by selecting features with the highest scores. In some implementations, stability determinations may be computed for each feature and features may be selected using both the feature selection scores and the stability determinations. A mathematical model may then be trained using the selected features and deployed. In some implementations, prompts may be selected using computed prompt selection scores, and the deployed mathematical model may be used with the selected prompts.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: January 19, 2021
    Assignee: CANARY SPEECH, LLC
    Inventors: Jangwon Kim, Namhee Kwon, Henry O'Connell, Phillip Walstad, Kevin Shengbin Yang
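    The abstract above ranks acoustic and language features by a feature selection score and, in some implementations, a stability determination. A Python sketch, assuming absolute Pearson correlation with the label as the score and bootstrap agreement as the stability determination (the patent leaves the exact computations open):

      # Sketch (correlation score and bootstrap stability are assumptions): pick
      # speech features to keep before model training.  Requires Python 3.10+.
      import random
      from statistics import StatisticsError, correlation
      from typing import Dict, List

      def select_features(features: Dict[str, List[float]], labels: List[float],
                          score_cutoff: float = 0.3, stability_cutoff: float = 0.7,
                          resamples: int = 50, seed: int = 0) -> List[str]:
          rng = random.Random(seed)
          n = len(labels)
          selected = []
          for name, values in features.items():
              if abs(correlation(values, labels)) < score_cutoff:
                  continue
              stable = 0
              for _ in range(resamples):
                  idx = [rng.randrange(n) for _ in range(n)]      # bootstrap sample
                  vs = [values[i] for i in idx]
                  ls = [labels[i] for i in idx]
                  try:
                      if abs(correlation(vs, ls)) >= score_cutoff:
                          stable += 1
                  except StatisticsError:                         # constant resample
                      pass
              if stable / resamples >= stability_cutoff:
                  selected.append(name)
          return selected

      if __name__ == "__main__":
          feats = {"pause_rate": [0.1, 0.4, 0.5, 0.9, 0.8, 0.7],
                   "pitch_var":  [0.5, 0.6, 0.5, 0.6, 0.5, 0.6]}
          print(select_features(feats, [0, 0, 1, 1, 1, 1]))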
  • Patent number: 10843074
    Abstract: A game apparatus includes an LCD, and a touch panel is provided in relation to the LCD. The LCD displays a game screen, and the player performs touch operations (sliding, clicking, etc.) on the touch panel with use of a stick to draw, correct, and decide at random a movement path of an object. When the movement path of the object is decided and some point on the movement path is clicked, the object moves to the clicked position according to the movement path.
    Type: Grant
    Filed: March 6, 2006
    Date of Patent: November 24, 2020
    Assignee: NINTENDO CO., LTD.
    Inventor: Takanori Hino
  • Patent number: 10839710
    Abstract: The present invention is directed to systems and related methods of teaching pre-keyboarding and keyboarding on a QWERTY-style keyboard wherein a color-coded row-based metaphorical and visual cuing system is used in a curriculum to make foundational keyboarding skills easy-to-teach and easy-to-learn, including: unilateral hand/finger skills, Home Row hand/finger positions, relational position of symbol location, and the essential keystroke spectrum of Home Row positioning-based finger movements of the left and right hand. The invention provides a dynamic virtual keyboard with colored rows which hexfurcates the QWERTY layout into left- and right-handed row sections, independently toggling the visibility of each in a developmental order to teach keyboarding skills in incremental steps rather than all at once. The invention further provides a dynamic cursor that uses visual indicators that mirror the visual cuing system to reinforce instruction with the dynamic virtual keyboard and aid keyboarding accuracy.
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: November 17, 2020
    Assignee: NO TEARS LEARNING, INC.
    Inventors: Janice Z. Olsen, Emily Knapton, Eric Olsen, Robert Walnock, Hank Isaac, Ralph Sklarew
  • Patent number: 10795944
    Abstract: User intent may be derived from a previous communication. For example, a text string for user input may be obtained. The text string may include a pronoun. Information from a communication received prior to receipt of the user input may be derived. The information may identify an individual. User intent may be derived from the text string and the information. This may include determining that the pronoun refers to the individual.
    Type: Grant
    Filed: January 23, 2017
    Date of Patent: October 6, 2020
    Assignee: Verint Americas Inc.
    Inventors: Fred Brown, Mark Zartler, Tanya M. Miller
  • Patent number: 10743079
    Abstract: Systems and methods are described for providing subtitles based on a user's language proficiency. An illustrative method includes receiving a request to display subtitles, selecting a language for the subtitles, determining, from a user profile, a user's proficiency level in the selected language, selecting, based on the user's proficiency level in the selected language, a set of subtitles from a plurality of sets of subtitles in the selected language, wherein each respective set of subtitles corresponds to a different proficiency level in the selected language, and generating for display the selected set of subtitles.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: August 11, 2020
    Assignee: Rovi Guides, Inc.
    Inventors: Susanto Sen, Amit Roy Choudhary
  • Patent number: 10706732
    Abstract: A system includes a brain activity sensor sensing electrical activity of a student's brain and a device that receives messages from the brain activity sensor while the student is receiving instructions with a first value for an attribute of instruction and that determines a first attention level from the received messages. The device receives additional messages from the brain activity sensor while the student is receiving instructions with a second value for the attribute of instruction and determines a second attention level from the additional received messages. The device then determines an attention variability for the attribute of instruction based on a change in attention level from the first attention level to the second attention level.
    Type: Grant
    Filed: June 16, 2017
    Date of Patent: July 7, 2020
    Assignee: Nervanix, LLC
    Inventor: Adam Leonard Hall
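    The abstract above computes an attention level under each value of an instruction attribute and reports the change as the attention variability. A Python sketch, taking the attention level as the mean of the sensor readings and the variability as the absolute change (both simplifying assumptions):

      # Sketch (mean of readings as the attention level and absolute change as
      # the variability are assumptions for illustration).
      from typing import List

      def attention_level(readings: List[float]) -> float:
          return sum(readings) / len(readings)

      def attention_variability(readings_attr_value_1: List[float],
                                readings_attr_value_2: List[float]) -> float:
          return abs(attention_level(readings_attr_value_2) -
                     attention_level(readings_attr_value_1))

      if __name__ == "__main__":
          video_lecture = [0.61, 0.58, 0.64]     # attribute value 1 (e.g., video)
          text_reading = [0.42, 0.47, 0.40]      # attribute value 2 (e.g., text)
          print(round(attention_variability(video_lecture, text_reading), 3))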
  • Patent number: 10706741
    Abstract: A language learning system that teaches an individualized set of vocabulary words to users through an interactive story. The interactive story is modeled through probabilistic rules in a semantic network having objects and relations. Dialog and narration is generated dynamically based on the state of the interactive story model using phrasal rewrite rules evaluated with a four-valued logic system in which truth values of the objects and relations are encoded as true, false, defined, and undefined in parallel memory structures.
    Type: Grant
    Filed: May 5, 2017
    Date of Patent: July 7, 2020
    Inventor: Roger Midmore
  • Patent number: 10695663
    Abstract: Systems, apparatus and methods may provide for audio processing of received user audio input from a microphone that may optionally be a tissue conducting microphone. Audio processing may be further conducted on received ambient audio from one or more additional microphones. A translator may translate the ambient audio into content to be output to a user. In an embodiment, ambient audio is translated into visual content to be displayed on a virtual reality device.
    Type: Grant
    Filed: December 22, 2015
    Date of Patent: June 30, 2020
    Assignee: Intel Corporation
    Inventors: Shamim Begum, Kofi C. Whitney
  • Patent number: 10657835
    Abstract: A method for completing a project using a content-generating device. The method includes receiving a task defining a content item to be generated, restricting operation of at least a first component of the content-generating device, operating at least a second component of the content-generating device to generate the content item, and making available the generated content item.
    Type: Grant
    Filed: February 23, 2015
    Date of Patent: May 19, 2020
    Inventors: Chantal Jandard, Matthew Campbell, Dana Marr, Rylan Cottrell, Joseph Wong, Claudia Peralta, Josh Cheung
  • Patent number: 10614703
    Abstract: A system for activating a plurality of Halloween props with a single remote controller is provided. The system includes a remote controller and a plurality of remote receivers. Each of the remote receivers is connected to the Halloween prop via an activation port of the Halloween prop, e.g., a try-me or step pad port. The remote controller has a plurality of pushbuttons for accepting a user selection to be received by a respective remote receiver. Each of the pushbuttons corresponds to a Halloween prop that is connected with the respective remote receiver via the activation port.
    Type: Grant
    Filed: November 9, 2018
    Date of Patent: April 7, 2020
    Assignee: Spencer Gifts LLC
    Inventor: Carl Joseph Franke
  • Patent number: 10489507
    Abstract: In one embodiment, a method includes identifying a plurality of dyslexic users on an online social network. The plurality of dyslexic users may be identified based on content objects posted by these users over a particular time period, where the content objects may include one or more of word-level errors or sentence-level errors. A machine-learning model may be trained for text correction using a corpus of social network data, which may include at least the content objects with one or more of word-level errors or sentence-level errors, and a corresponding set of corrected content objects. A text string including one or more errors may be received from a client system associated with a first user. The text string may be transformed into a vector representation using an encoder of the machine-learning model. A corrected text string may be generated from the vector representation using a decoder of the machine-learning model.
    Type: Grant
    Filed: January 2, 2018
    Date of Patent: November 26, 2019
    Assignee: Facebook, Inc.
    Inventors: Xian Li, Irina-Elena Veliche, Debnil Sur, Shaomei Wu, Amit Bahl, Juan Miguel Pino
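    The abstract above trains an encoder/decoder model on pairs of erroneous and corrected posts. A full sequence-to-sequence model is beyond a short sketch, so the Python stand-in below only illustrates the word-level correction interface using closest-match lookup against a vocabulary; it is not the patented model:

      # Stand-in sketch only: the patent trains an encoder/decoder on (erroneous,
      # corrected) pairs; here a simple closest-match lookup over a vocabulary
      # illustrates the correction interface, not the learned model.
      import difflib
      from typing import List

      def correct_text(text: str, vocabulary: List[str]) -> str:
          corrected = []
          vocab = set(vocabulary)
          for word in text.split():
              if word.lower() in vocab:
                  corrected.append(word)
                  continue
              match = difflib.get_close_matches(word.lower(), vocabulary,
                                                n=1, cutoff=0.75)
              corrected.append(match[0] if match else word)
          return " ".join(corrected)

      if __name__ == "__main__":
          vocab = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]
          print(correct_text("the qiuck brwon fox jumsp over the dog", vocab))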
  • Patent number: 10438509
    Abstract: Systems, methods, and products for language learning software that automatically extracts text from corpora using various natural-language processing product features, which can be combined with custom-designed learning activities to offer a needs-based, adaptive learning methodology. The system may receive a text, extract keywords pedagogically valuable to non-native language learning, assign a difficulty score to the text using various linguistic attributes of the text, generate a list of potential distractors for each keyword in the text to implement in learning activities, and topically tag the text against a taxonomy based on the content. This output is then used in conjunction with a series of learning activity types designed to meet learners' language skill needs and to create dynamic, adaptive activities for them.
    Type: Grant
    Filed: May 23, 2017
    Date of Patent: October 8, 2019
    Assignee: Voxy, Inc.
    Inventors: Katharine Nielson, Kasey Kirkham, Na'im Tyson, Andrew Breen
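    The abstract above assigns a difficulty score from linguistic attributes of a text and generates distractor candidates for each keyword. A Python sketch using two illustrative attributes (mean sentence length and mean word length) with assumed weights, and near-spelling words from the text itself as candidate distractors; the patented scoring and distractor selection are richer:

      # Sketch (illustrative attributes and weights; the distractor step just
      # reuses near-spelling words from the same text).
      import difflib
      import re
      from typing import List

      def difficulty_score(text: str) -> float:
          sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
          words = re.findall(r"[A-Za-z']+", text)
          mean_sent_len = len(words) / len(sentences)
          mean_word_len = sum(len(w) for w in words) / len(words)
          return 0.5 * mean_sent_len + 2.0 * mean_word_len      # assumed weights

      def distractors(keyword: str, text: str, n: int = 3) -> List[str]:
          vocab = sorted({w.lower() for w in re.findall(r"[A-Za-z']+", text)}
                         - {keyword.lower()})
          return difflib.get_close_matches(keyword.lower(), vocab, n=n, cutoff=0.0)

      if __name__ == "__main__":
          passage = ("The committee approved the proposal. The commitment to the "
                     "community was applauded by the commission.")
          print(round(difficulty_score(passage), 2))
          print(distractors("committee", passage))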
  • Patent number: 10417933
    Abstract: Techniques are provided for selectively and dynamically determining one or more words of an electronic book to present with comprehension guides. For instance, an electronic device rendering an electronic book may determine whether to display some, all, or no words of the book with comprehension guides for words within the electronic book based on word difficulty, contextual importance or aspects of the user. Techniques are also provided for determining the content of comprehension guides to be presented with the words.
    Type: Grant
    Filed: April 25, 2014
    Date of Patent: September 17, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Wainwright Gregory Siady Yu, Joon Hao Chuah, Gregory Nicholas Hullender, James Joseph Poulin, Mohammad Kanso, Manigandan Natarajan, Brandon LaBranche Watson, Robert Wayne Roth, Joseph King, Nikunj Aggarwal, Ramya Dass, Sridhar Sampath, Santosh Kumar Asokan
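    The abstract above decides which words of an electronic book to display with comprehension guides based on word difficulty and aspects of the user. A Python sketch, assuming a corpus frequency rank as the difficulty signal and a per-user rank cutoff (both assumptions for illustration):

      # Sketch (corpus frequency rank as the difficulty signal and a per-user
      # cutoff are assumptions): choose which words get comprehension guides.
      from typing import Dict, List

      def words_needing_guides(page_words: List[str],
                               frequency_rank: Dict[str, int],
                               user_rank_cutoff: int = 5000) -> List[str]:
          """Guide any word rarer than the user's cutoff (unknown words count as rare)."""
          flagged = []
          for word in page_words:
              rank = frequency_rank.get(word.lower(), 10**9)
              if rank > user_rank_cutoff and word.lower() not in (w.lower() for w in flagged):
                  flagged.append(word)
          return flagged

      if __name__ == "__main__":
          ranks = {"the": 1, "ship": 850, "perspicacious": 48000, "captain": 1200}
          page = ["The", "perspicacious", "captain", "steered", "the", "ship"]
          print(words_needing_guides(page, ranks, user_rank_cutoff=5000))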
  • Patent number: 10410539
    Abstract: Disclosed are systems, methods, and products for language learning that automatically extracts keywords from resources using various natural-language processing product features, which can be combined with custom-designed learning activities to offer a needs-based, adaptive learning methodology. The system may receive resources having text and then determine a text difficulty score that predicts how difficult the resource is for language learners based on any number of factors, including any number of semantic and syntactic features of the text. Training resources labeled with metadata may be used to train a statistical model for determining difficulty scores of newly received text. Resources may be grouped based on difficulty score, and groups of resources may correspond to language learners' proficiency levels.
    Type: Grant
    Filed: May 23, 2017
    Date of Patent: September 10, 2019
    Assignee: Voxy, Inc.
    Inventors: Katharine Nielson, Kasey Kirkham, Na'im Tyson, Andrew Breen
  • Patent number: 10319250
    Abstract: Speech synthesis chooses pronunciations of words with multiple acceptable pronunciations based on an indication of a personal, class-based, or global preference or an intended non-preferred pronunciation. A speaker's words can be parroted back on personal devices using preferred pronunciations for accent training. Degrees of pronunciation error are computed and indicated to the user in a visual transcription or audibly as word emphasis in parroted speech. Systems can use sets of phonemes extended beyond those generally recognized for a language. Speakers are classified in order to choose specific phonetic dictionaries or adapt global ones. User profiles maintain lists of which pronunciations are preferred among ones acceptable for words with multiple recognized pronunciations. Systems use multiple correlations of word preferences across users to predict use preferences of unlisted words.
    Type: Grant
    Filed: February 22, 2017
    Date of Patent: June 11, 2019
    Assignee: SOUNDHOUND, INC.
    Inventors: Kiran Garaga Lokeswarappa, Jonah Probell
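    The abstract above keeps per-user lists of preferred pronunciations and uses correlations of preferences across users to predict preferences for unlisted words. A Python sketch in which a nearest-neighbour vote over shared preferences stands in for those correlations:

      # Sketch (a nearest-neighbour vote over shared preferences stands in for
      # the "multiple correlations" the abstract mentions): predict a user's
      # preferred pronunciation of a word missing from their profile.
      from typing import Dict, Optional

      Profiles = Dict[str, Dict[str, str]]   # user -> {word: preferred pronunciation}

      def predict_pronunciation(profiles: Profiles, user: str, word: str) -> Optional[str]:
          target = profiles[user]
          best_user, best_overlap = None, 0
          for other, prefs in profiles.items():
              if other == user or word not in prefs:
                  continue
              overlap = sum(1 for w, p in target.items() if prefs.get(w) == p)
              if overlap > best_overlap:
                  best_user, best_overlap = other, overlap
          return profiles[best_user][word] if best_user else None

      if __name__ == "__main__":
          profiles = {
              "ana": {"tomato": "t ah m ey t ow", "route": "r uw t"},
              "ben": {"tomato": "t ah m ey t ow", "route": "r uw t", "either": "iy dh er"},
              "cem": {"tomato": "t ah m aa t ow", "either": "ay dh er"},
          }
          print(predict_pronunciation(profiles, "ana", "either"))   # -> "iy dh er"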
  • Patent number: 10255822
    Abstract: A computer program product is provided for improving comprehension in rapid serial visual presentation. The product includes a non-transitory computer readable storage medium having program instructions embodied therewith executable by a computer to cause the computer to perform a method. The method includes determining, by a cognitive load estimator, a cognitive load of a plurality of words included in a rapid serial visual presentation by using at least one metric. The cognitive load is determined on any of a word level and a word sequence level. The method includes calculating, by a word presentation rate calculator, a variable presentation rate for the words based on the cognitive load. The method includes controlling, by a rate controller, a displaying of the words on a display in accordance with the calculated variable presentation rate. The rate controller temporarily reduces the variable presentation rate responsive to the cognitive load being above a threshold.
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: April 9, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Andrew S. Aaron, Ellen E. Kislal, Jonathan Lenchner
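    The abstract above computes a per-word cognitive load, derives a variable presentation rate from it, and temporarily slows the rate when the load exceeds a threshold. A Python sketch, with word length standing in for the cognitive-load metric and an assumed rate mapping:

      # Sketch (word length stands in for the cognitive-load metric, and the
      # rate mapping is assumed): compute per-word display durations for RSVP.
      from typing import List, Tuple

      def rsvp_schedule(words: List[str], base_ms: float = 250.0,
                        load_threshold: float = 7.0,
                        slow_factor: float = 1.6) -> List[Tuple[str, float]]:
          schedule = []
          for word in words:
              load = float(len(word))                       # stand-in cognitive load
              duration = base_ms * (1.0 + 0.05 * load)      # longer words shown longer
              if load > load_threshold:                     # temporarily reduce the rate
                  duration *= slow_factor
              schedule.append((word, duration))
          return schedule

      if __name__ == "__main__":
          text = "Rapid serial visual presentation of incomprehensible terminology"
          for word, ms in rsvp_schedule(text.split()):
              print(f"{word:>18} {ms:6.1f} ms")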
  • Patent number: 10242595
    Abstract: A system for operating visually-impaired-accessible signage, the system comprising: an accessibility tag configured to attach to a signage or to a surface near the signage, store communication information, communicate with a mobile device, and prompt display of information by the mobile device, wherein the information is gained from an information file; and a database configured to store at least one information file. Additional embodiments of the system and methods for use thereof are described herein.
    Type: Grant
    Filed: August 31, 2017
    Date of Patent: March 26, 2019
    Inventors: Reuven Maman, Israel Maman
  • Patent number: 10243289
    Abstract: A communication module includes a plug connector provided with upper connection pins and lower connection pins. The upper connection pins and the lower connection pins include signal pins each connected to a signal line arranged in a module board, and ground pins each connected to a ground line arranged in the module board. An opposing interval between a terminal end of a rear end portion of each of the ground pins included in the upper connection pins, and a terminal end of a rear end portion of each of the ground pins included in the lower connection pins is longer than an opposing interval between a terminal end of a rear end portion of each of the signal pins included in the upper connection pins and a terminal end of a rear end portion of each of the signal pins included in the lower connection pins.
    Type: Grant
    Filed: August 31, 2017
    Date of Patent: March 26, 2019
    Assignee: Hitachi Metals, Ltd.
    Inventor: Yoshinori Sunaga
  • Patent number: 10223934
    Abstract: In some embodiments, a method is provided that includes capturing sound in a natural language environment using at least one sound capture device that is located in the natural language environment. The method also can include analyzing a sound signal from the sound captured by the at least one sound capture device to determine at least one characteristic of the sound signal. The method additionally can include reporting metrics that quantify the at least one characteristic of the sound signal. The metrics of the at least one characteristic can include a quantity of words spoken by one or more first persons in the natural language environment. Other embodiments are provided.
    Type: Grant
    Filed: May 30, 2016
    Date of Patent: March 5, 2019
    Assignee: Lena Foundation
    Inventor: Terrance D. Paul
  • Patent number: 10216825
    Abstract: A user device displays portions of an electronic publication for a user to read. The user device tracks the user's reading behavior of the portions of the electronic publication. The user device then suggests additional reading material for the user based on the user's reading behavior.
    Type: Grant
    Filed: July 3, 2014
    Date of Patent: February 26, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: George M. Ionkov, Dennis H. Harding, Aaron James Dykstra, Laura Ellen Grit, James C. Petts, Samuel A. Minter, Lindsey Christina Fowler, Yong Xi
  • Patent number: 10182758
    Abstract: A measuring device, including a device body, distance measuring units, optical measuring units, and a processing unit, is provided. The device body includes a sensing reference surface adapted for a person to be measured to stand thereupon. The optical measuring units are disposed corresponding to the respective distance measuring units. Each of the distance measuring units transmits distance measuring signals to body areas of the person, so as to obtain distance information between each of the distance measuring units and the body areas of the person. Each of the optical measuring units transmits measuring light to the person to be measured, and receives a measuring pattern formed through reflection of the measuring light from the person. The processing unit calculates and reconstructs a three-dimensional surface structure of the body areas of the person according to the respective distance information obtained from the distance measuring units and the corresponding measuring patterns.
    Type: Grant
    Filed: October 5, 2015
    Date of Patent: January 22, 2019
    Assignee: HTC Corporation
    Inventors: Chun-Yih Wu, Ta-Chun Pu, Yen-Liang Kuo
  • Patent number: 10152988
    Abstract: A mathematical model may be trained to diagnose a medical condition of a person by processing acoustic features and language features of speech of the person. The performance of the mathematical model may be improved by appropriately selecting the features to be used with the mathematical model. Features may be selected by computing a feature selection score for each acoustic feature and each language feature, and then selecting features using the scores, such as by selecting features with the highest scores. In some implementations, stability determinations may be computed for each feature and features may be selected using both the feature selection scores and the stability determinations. A mathematical model may then be trained using the selected features and deployed. In some implementations, prompts may be selected using computed prompt selection scores, and the deployed mathematical model may be used with the selected prompts.
    Type: Grant
    Filed: May 7, 2018
    Date of Patent: December 11, 2018
    Assignee: CANARY SPEECH, LLC
    Inventors: Jangwon Kim, Namhee Kwon, Henry O'Connell, Phillip Walstad, Kevin Shengbin Yang
  • Patent number: 10147414
    Abstract: A machine has a processor and a memory connected to the processor. The memory stores instructions executed by the processor to supply a name page in response to a request from an administrator machine. Name page updates are received from the administrator machine. The name page updates include participants and associated network contact information for the participants. A code is utilized to form a link to the name page. Prompts for textual name information and audio name information are supplied to a client machine that activates the link to the name page. Textual name information and audio name information are received from the client machine. The textual name information and audio name information are stored in association with the name page. Navigation tools are supplied to facilitate access to the textual name information and audio name information.
    Type: Grant
    Filed: May 15, 2015
    Date of Patent: December 4, 2018
    Assignee: Namecoach, Inc
    Inventor: Praveen Shanbhag
  • Patent number: 10147336
    Abstract: Disclosed are systems, methods, and products for language learning software that automatically extracts text from resources using various natural-language processing features, which can be combined with custom-designed learning activities to offer a needs-based, adaptive learning methodology. The system may receive a text, extract keywords pedagogically valuable to non-native language learning, assign a difficulty score to the text using various linguistic attributes of the text, generate a list of potential distractors for each keyword related to a resource to implement in learning activities. Distractors may be of various types, which are dynamically selected from a distractor store depending on a learning activity chosen to meet a learner's needs. Distractors may vary in difficulty, and may be dynamically selected based on a learner's overall proficiency or based on a learner's abilities in specific language skills.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: December 4, 2018
    Assignee: Voxy, Inc.
    Inventors: Katharine Nielson, Kasey Kirkham, Na'im Tyson, Andrew Breen
  • Patent number: 10135949
    Abstract: Systems and methods for generating and managing stories and sub-stories presented to a user's client device are described. In one example embodiment, a server system communicates a portion of a first story to a first client device based on a first client device association with a user segment assigned to the first story. The server system receives a first selection communication associated with a first piece of content of the first story, accesses a second story based on the selection, and communicates a portion of the second story to the first client device.
    Type: Grant
    Filed: May 5, 2015
    Date of Patent: November 20, 2018
    Assignee: Snap Inc.
    Inventors: Maria Pavlovskaia, Evan Spiegel
  • Patent number: 10129198
    Abstract: A method may include receiving, by a computing device associated with a user, a message from an origination source and receiving, by the computing device, an audio input. The method may also include determining, by the computing device and based at least in part on the audio input and contextual information, a probability that the user intends to send a response message to the origination source. The method may further include, responsive to determining that the probability the user intends to send the response message to the origination source satisfies a threshold probability, determining, by the computing device, that the user intends to send the response message to the origination source. The method may also include, responsive to determining that the user intends to send the response message to the origination source, generating, by the computing device and based on the audio input, the response message, and sending, by the computing device, the response message to the origination source.
    Type: Grant
    Filed: October 25, 2017
    Date of Patent: November 13, 2018
    Assignee: Google LLC
    Inventor: Evan Nicklas Wu Malahy
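    The abstract above estimates the probability that an audio input is intended as a reply to a recently received message and generates the response only when that probability meets a threshold. A Python sketch with assumed signals (message recency, an explicit send cue, a third-person pronoun) and assumed weights:

      # Sketch (signals and weights are assumptions, not Google's model): estimate
      # whether spoken audio is meant as a reply to the last received message.
      def reply_probability(seconds_since_message: float, transcript: str) -> float:
          p = 0.2
          if seconds_since_message < 60:
              p += 0.4                                       # recent message
          lowered = transcript.lower()
          if lowered.startswith(("tell", "reply", "say")):
              p += 0.3                                       # explicit send cue
          if set(lowered.split()) & {"them", "her", "him"}:
              p += 0.1                                       # third-person reference
          return min(p, 1.0)

      def should_send(seconds_since_message: float, transcript: str,
                      threshold: float = 0.7) -> bool:
          return reply_probability(seconds_since_message, transcript) >= threshold

      if __name__ == "__main__":
          print(should_send(20, "Reply that I'm on my way"))   # True
          print(should_send(600, "What's the weather"))        # False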
  • Patent number: 10095775
    Abstract: Embodiments of the present invention disclose a method, a computer program product, and a computer system for identifying information gaps in corpora. A computer receives a document and extracts keywords from the document while filtering trivial keywords. The computer identifies and extracts top keywords detailed by the document using a topic modelling approach before determining whether the extracted top keywords exceed a threshold use frequency. Based on determining that the top keywords exceed a threshold use frequency, determining whether the top keywords have a relation to other entities within the document and, if so, determining whether the top keywords are defined within the document. Based on determining that the top keywords are not defined in the document, adding the top keywords to a list and defining the top keywords.
    Type: Grant
    Filed: February 5, 2018
    Date of Patent: October 9, 2018
    Assignee: International Business Machines Corporation
    Inventors: Brendan C. Bull, Scott R. Carrier, Aysu Ezen Can, Dwi Sianto Mansjur
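    The abstract above extracts top keywords, checks whether they exceed a use-frequency threshold, and flags frequent keywords that are never defined in the document. A Python sketch, approximating "defined" as appearing in a sentence with a definitional cue such as " is " or " means " (the patent's entity-relation checks are omitted):

      # Sketch ("defined" is approximated by a definitional cue in a sentence
      # containing the keyword; the entity-relation step is omitted).
      import re
      from collections import Counter
      from typing import List

      STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "it", "this", "that"}

      def information_gaps(document: str, min_count: int = 2, top_n: int = 10) -> List[str]:
          words = [w.lower() for w in re.findall(r"[A-Za-z]+", document)]
          counts = Counter(w for w in words if w not in STOPWORDS)
          sentences = [s.lower() for s in re.split(r"[.!?]+", document) if s.strip()]
          gaps = []
          for keyword, count in counts.most_common(top_n):
              if count < min_count:
                  continue
              defined = any(keyword in s and (" is " in s or " means " in s)
                            for s in sentences)
              if not defined:
                  gaps.append(keyword)
          return gaps

      if __name__ == "__main__":
          doc = ("The resolver maps names to addresses. A resolver is a component "
                 "of the stack. The scheduler uses the resolver. The scheduler "
                 "also retries.")
          print(information_gaps(doc))   # -> ["scheduler"]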
  • Patent number: 9997085
    Abstract: Computer-implemented methods are provided. A method includes determining, by a processor, a cognitive load of a plurality of words included in a rapid serial visual presentation by using at least one metric. The cognitive load is determined on any of a word level and a word sequence level. The method further includes calculating, by the processor, a variable presentation rate for the plurality of words based on the cognitive load. The method also includes controlling, by the processor, a displaying of the plurality of words on a display device in accordance with the calculated variable presentation rate based on a threshold by temporarily reducing the variable presentation rate responsive to the cognitive load being above the threshold.
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: June 12, 2018
    Assignee: International Business Machines Corporation
    Inventors: Andrew S. Aaron, Ellen E. Kislal, Jonathan Lenchner
  • Patent number: 9911355
    Abstract: Methods and a system are provided. A method includes receiving a plurality of words comprised in a Rapid Serial Visual Presentation. The method further includes determining a cognitive load of the plurality of words by using at least one metric. The cognitive load is determined on any of a word level and a word sequence level. The method also includes calculating a variable presentation rate for the plurality of words based on the cognitive load. The variable presentation rate is capable of being varied on any of the word level and the word sequence level. The method additionally includes controlling a displaying of the plurality of words on a display device in accordance with the calculated variable presentation rate.
    Type: Grant
    Filed: June 18, 2015
    Date of Patent: March 6, 2018
    Assignee: International Business Machines Corporation
    Inventors: Andrew S. Aaron, Ellen E. Kislal, Jonathan Lenchner
  • Patent number: 9886870
    Abstract: Methods and a system are provided. A method includes receiving a plurality of words comprised in a Rapid Serial Visual Presentation. The method further includes determining a cognitive load of the plurality of words by using at least one metric. The cognitive load is determined on any of a word level and a word sequence level. The method also includes calculating a variable presentation rate for the plurality of words based on the cognitive load. The variable presentation rate is capable of being varied on any of the word level and the word sequence level. The method additionally includes controlling a displaying of the plurality of words on a display device in accordance with the calculated variable presentation rate.
    Type: Grant
    Filed: November 5, 2014
    Date of Patent: February 6, 2018
    Assignee: International Business Machines Corporation
    Inventors: Andrew S. Aaron, Ellen E. Kislal, Jonathan Lenchner
  • Patent number: 9875669
    Abstract: Disclosed are systems, methods, and products for language learning software that automatically extracts text from resources using various natural-language processing features, which can be combined with custom-designed learning activities to offer a needs-based, adaptive learning methodology. The system may receive a text, extract keywords pedagogically valuable to non-native language learning, assign a difficulty score to the text using various linguistic attributes of the text, generate a list of potential distractors for each keyword related to a resource to implement in learning activities. Distractors may be of various types, which are dynamically selected from a distractor store depending on a learning activity chosen to meet a learner's needs. Distractors may vary in difficulty, and may be dynamically selected based on a learner's overall proficiency or based on a learner's abilities in specific language skills.
    Type: Grant
    Filed: February 14, 2014
    Date of Patent: January 23, 2018
    Assignee: VOXY, Inc.
    Inventors: Katharine Nielson, Kasey Kirkham, Na'im Tyson, Andrew Breen