Foreign Patents (Class 434/157)
-
Patent number: 12131660
Abstract: The computer system displays a user interface that presents text using an improved phonics-based approach for teaching reading, especially for those with dyslexia or other neurological disorders. The interface enlarges a line of text to be read by a user and places the cursor/pointer under the first word in the line, in the direction of reading, on the computer screen. The user interacts with the interface via a cursor/pointer. The system calculates the cursor/pointer position and highlights the font and background of the traversed part of the text line/element under which the cursor is located. The elements can be letters, syllables, and words. In voiceover mode, the pointer/cursor moves automatically, the user follows it at the speed suggested by the system, and the system voices the word/syllable being read. In non-voiceover mode, the user drags the cursor along the text line.
Type: Grant
Filed: November 7, 2022
Date of Patent: October 29, 2024
Assignee: ROCKIDS COMPANY
Inventors: Olga Shemyakina, Vasilii Baev
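The cursor-tracking behavior this abstract describes can be sketched as a small routine. This is an illustrative sketch only, not the patented implementation: the function name is invented, and the per-element pixel widths stand in for real font metrics.

```python
def traversed_span(elements, widths, cursor_x):
    """Given a text line split into elements (letters, syllables, or words)
    and their display widths, return the index of the element under the
    cursor and the traversed prefix to highlight."""
    x = 0
    for i, (elem, w) in enumerate(zip(elements, widths)):
        if x <= cursor_x < x + w:
            return i, elements[:i + 1]
        x += w
    # Cursor is past the end of the line: the whole line has been traversed.
    return len(elements) - 1, list(elements)

# Cursor at x=45 sits over "cat", so all three elements are highlighted.
idx, highlighted = traversed_span(["The", " ", "cat"], [30, 10, 30], 45)
```

In a real interface the widths would come from the renderer's font metrics, and the highlight step would restyle the returned prefix.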
-
Patent number: 12073733
Abstract: A method and system to identify first content in a first language. Second content in a second language is identified that matches the first content in the first language. A machine learning process is employed to map a set of similarities between the first content in the first language and the second content in the second language. Based on the set of similarities and a schema, modified content is generated that includes a ratio of a first portion of the first content in the first language to a second portion of the second content in the second language. The modified content is provided to a user system, and one or more inputs associated with the modified content are received from the user system. Based at least in part on the one or more inputs, one or more exercises associated with the modified content are generated.
Type: Grant
Filed: August 25, 2022
Date of Patent: August 27, 2024
Assignee: Transcendent International, LLC
Inventor: William Z. Tan
-
Patent number: 11928952
Abstract: A safety system for supervision of a child in which a pre-trained first adult conveys responsibility for supervision of the child to a pre-trained second adult. The system uses a specific interactive scripted conversation between the first adult and the second adult to assign responsibility for supervision of the child to the second adult. Following this, the first adult transfers a wearable to the second adult to provide a visual and tactile transfer of the responsibility of supervision from the first adult, and acceptance of that responsibility by the second adult, corresponding to the interactive conversation between the two adults.
Type: Grant
Filed: September 28, 2021
Date of Patent: March 12, 2024
Inventors: Matthew Tyler Dunn, Caitlin Elizabeth Dunn
-
Patent number: 11908446
Abstract: The wearable audio-visual translation system is a device that includes a wearable camera and audio system. It can take photos of a signboard using the wearable camera, send the images wirelessly to the user's smartphone for translation, and return the translation to the user as an audio signal in very little time. To accomplish this, the device is mounted onto eyewear so that the camera system can capture visual signs instantaneously as the user looks at them. Further, the device comprises associated electrical and electronic circuitry mounted on the same eyewear that enables streaming of the photos taken by the camera system to a wirelessly connected smartphone. The smartphone performs image processing and recognition on the images with the help of a translator software application, and the translated signs are synthesized to audio signals and played out on an audio device.
Type: Grant
Filed: October 5, 2023
Date of Patent: February 20, 2024
Inventor: Eunice Jia Min Yong
-
Patent number: 11830500
Abstract: A system and method for the management of multiple digital assistants, enabling the collection and analysis of voice commands and the selective instruction of the digital assistant having the most appropriate capability and/or connectivity to respond to the analyzed voice command. In a preferred embodiment, the digital assistants are networked via a DAC. The controller serves as a central node programmed to recognize the voice commands received by networked digital assistants and determine which assistant or assistants are best qualified to respond. This determination is a function of the command type, as well as the location and capabilities of each digital assistant. In addition, the system and method enhance the utility of the digital assistants by enabling a voice command received by a particular digital assistant to be acted upon by one or more digital assistants located outside the audible range of the spoken voice command.
Type: Grant
Filed: March 30, 2021
Date of Patent: November 28, 2023
Assignee: ARRIS Enterprises LLC
Inventors: Christopher S. Del Sordo, Charles R. Hardt, Albert Fitzgerald Elcock
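The routing decision the controller makes — a function of command type, location, and capability — can be sketched as follows. The field names and the same-room-first preference are assumptions for illustration, not the patented logic.

```python
def route_command(command_type, location, assistants):
    """Pick the assistant best qualified to act on a recognized command:
    filter by capability, then prefer an assistant at the same location."""
    capable = [a for a in assistants if command_type in a["capabilities"]]
    if not capable:
        return None
    same_room = [a for a in capable if a["location"] == location]
    # Fall back to any capable assistant, even one out of earshot of the speaker.
    return (same_room or capable)[0]["name"]

assistants = [
    {"name": "kitchen", "location": "kitchen", "capabilities": {"music", "timer"}},
    {"name": "den", "location": "den", "capabilities": {"music", "tv"}},
]
```

A "tv" command spoken in the kitchen would be routed to the den assistant, matching the abstract's point that a command may be acted on by an assistant outside the speaker's audible range.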
-
Patent number: 11823592
Abstract: The disclosed system and method focus on automatically generating questions from input of written text and/or audio transcripts (e.g., learning materials) to aid in teaching people through testing their knowledge about information they have previously been presented with. These questions may be presented to an end user via a conversational system (e.g., virtual agent or chatbot). The user can iterate through each question, provide feedback for the question, attempt to answer the question, and/or get an answer score for each answer. The disclosed system and method can generate questions tailored to a particular subject by using teaching materials as input. The disclosed system and method can further curate the questions based on various conditions to ensure that the questions are automatically selected and arranged in an order that best suits the subject taught and the learner answering the questions.
Type: Grant
Filed: August 31, 2021
Date of Patent: November 21, 2023
Assignee: Accenture Global Solutions Limited
Inventors: Roshni Ramesh Ramnani, Shubhashis Sengupta, Saurabh Agrawal
-
Patent number: 11756443
Abstract: Provided is a learning support system which can enhance a learning effect by sharing a matter of interest among learners viewing a learning content. The learning support system includes: a learning content display unit configured to play and display a teaching material video; a learner information reporting unit configured to acquire biological information of a learner during play of the teaching material video and report the biological information as learner information; a region-of-attention identification unit configured to identify a region-of-attention of the learner based on the biological information; a learner information display creation unit configured to generate a screen in which the region-of-attention is superimposed on the teaching material video of another learner belonging to the same cluster as the learner; and a learner information transmission unit configured to transmit the screen to the other learner.
Type: Grant
Filed: April 28, 2020
Date of Patent: September 12, 2023
Assignees: HITACHI, LTD., KYOTO UNIVERSITY
Inventors: Takashi Numata, Ryuji Mine
-
Patent number: 11741845
Abstract: Exemplary embodiments described herein are directed to systems and methods of immersive language learning that employ positionable visual aids and an associated software application with which the visual aids can interact through scanning by the application. The visual aids may include a word to be learned. The word may be presented in one or more languages. The application is loaded onto a smart phone or other microprocessor-based device with imaging capability. Scanning of the visual aid using the application on the microprocessor-based device results in a presentation by the application that facilitates learning of the word in meaning and pronunciation.
Type: Grant
Filed: November 7, 2018
Date of Patent: August 29, 2023
Inventors: David Merwin, Carrie Chen, Susann Moeller, Xutao Shi
-
Patent number: 11545140
Abstract: Systems and methods are provided for language-based service hailing. Such a system may comprise one or more processors and a memory storing instructions that, when executed by the one or more processors, cause the computing system to: obtain a plurality of speech samples, each comprising one or more words spoken in a language; train a neural network model with the speech samples to obtain a trained model for determining the languages of speech; obtain a voice input; identify at least one language corresponding to the voice input based at least on applying the trained model to the voice input; and communicate a message in the identified language.
Type: Grant
Filed: July 31, 2017
Date of Patent: January 3, 2023
Assignee: Beijing DiDi Infinity Technology and Development Co., Ltd.
Inventors: Fengmin Gong, Xiulin Li
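The train-then-identify flow in this abstract can be sketched end to end. A trivial word-frequency scorer stands in here for the neural network model the patent describes; the function names and data shapes are assumptions for illustration.

```python
from collections import Counter

def train(samples):
    """Build a per-language word-frequency profile from (text, language)
    speech-sample transcripts. A stand-in for training a neural network."""
    profiles = {}
    for text, lang in samples:
        profiles.setdefault(lang, Counter()).update(text.lower().split())
    return profiles

def identify(profiles, voice_input_text):
    """Score the voice input against each language profile and return the
    best-matching language."""
    words = voice_input_text.lower().split()
    return max(profiles, key=lambda lang: sum(profiles[lang][w] for w in words))

samples = [("hello how are you", "en"), ("hola como estas", "es")]
profiles = train(samples)
```

A hailing service would then use the identified language to format its reply message, as the abstract's final step describes.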
-
Patent number: 11328113
Abstract: A text string is identified that has an associated localized text string. For example, an English text string may have an associated Chinese localized text string. A unique color is associated with the text string, the text string is modified with the associated unique color, and the text string with the associated unique color is displayed. A graphical image of the displayed text string is captured. The text string is then localized, based on the associated unique color in the captured graphical image, using the associated localized text string. In a second embodiment, modifying the text string with the unique color is based on an invisible character that is inserted into the text string.
Type: Grant
Filed: March 3, 2021
Date of Patent: May 10, 2022
Assignee: Micro Focus LLC
Inventors: Yi-Qun Ren, Kai Hu, Le Peng
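The color-keyed lookup this abstract describes can be sketched briefly. The color scheme (a running index in the blue channel) and both function names are hypothetical stand-ins, not the patented encoding.

```python
def assign_colors(strings):
    """Give each localizable string a unique RGB color. Hypothetical scheme:
    encode a running index in the blue channel."""
    return {(0, 0, i): s for i, s in enumerate(strings)}

def localize_captured(color_to_string, captured_color, translations):
    """Recover the source string from the color found in a captured screen
    image, then substitute its localized counterpart."""
    source = color_to_string[captured_color]
    return translations[source]

color_to_string = assign_colors(["Hello", "Save"])
translations = {"Hello": "Bonjour", "Save": "Enregistrer"}
```

The appeal of the scheme is that the captured pixels identify the string exactly, even after rendering, so no OCR of the displayed text is needed.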
-
Patent number: 11257293
Abstract: The invention discloses an augmented reality method and device, and relates to the field of computer technology. A specific implementation of the method includes: acquiring video information of a target, and acquiring real image information and real sound information of the target from the same; using the real image information to determine at least one image-based target state data, and using the real sound information to determine at least one sound-based target state data; fusing the image-based and sound-based target state data of the same type to obtain a target portrait data; and acquiring virtual information corresponding to the target portrait data, and superimposing the virtual information on the video information. This implementation can identify the current state of the target based on the image information and sound information of the target, and fuse the two identification results to obtain an accurate target portrait.
Type: Grant
Filed: November 6, 2018
Date of Patent: February 22, 2022
Assignees: BEIJING JINGDONG SHANGKE INFORMATION TECHNOLOGY CO., LTD., BEIJING JINGDONG CENTURY TRADING CO., LTD.
Inventors: Weihua Zhang, Jiangxu Wu, Fan Li, Ganglin Peng, Hongguang Zhang, Leifeng Kong
-
Patent number: 11126799
Abstract: The disclosed computer-implemented method may include accessing a string of text that includes characters written in a first language. The method may next include translating the text string into different languages using machine translation. The method may next include identifying, among the translated text strings, a shortest string and a longest string. The method may also include calculating a customized string length adjustment ratio for adjusting the length of the accessed text string based on the shortest translated string length and the longest translated string length. Furthermore, the method may include dynamically applying the calculated customized string length adjustment ratio to the accessed text string, so that the length of the accessed text string may be dynamically adjusted according to the customized string length adjustment ratio. The method may also include presenting the adjusted text string in the user interface.
Type: Grant
Filed: March 1, 2019
Date of Patent: September 21, 2021
Assignee: Netflix, Inc.
Inventors: Tim Brandall, Shawn Xu
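The ratio computation can be sketched as follows. Targeting the midpoint of the shortest and longest translated lengths is one plausible choice for illustration; the patent does not specify this formula, and the translated strings below are hand-picked examples.

```python
def length_adjustment_ratio(source, translations):
    """Compute a string length adjustment ratio from the shortest and
    longest machine-translated lengths of `source`. The midpoint target
    is an assumption, not the patented formula."""
    lengths = [len(t) for t in translations]
    shortest, longest = min(lengths), max(lengths)
    target = (shortest + longest) / 2
    return target / len(source)

# "Continuer" (9 chars), "Ga" (2), "Fortsetzen" (10): target is (2+10)/2 = 6.
ratio = length_adjustment_ratio("Continue", ["Continuer", "Ga", "Fortsetzen"])
adjusted_length = round(len("Continue") * ratio)
```

A UI layout engine could reserve `adjusted_length` characters of space for the string so that any of its translations fits without reflow.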
-
Patent number: 11056103
Abstract: A real-time utterance verification system according to the present invention includes a speech recognition unit configured to recognize an utterance of an utterer, a memory configured to store a program for verifying the utterance of the utterer in real time, and a processor configured to execute the program stored in the memory, wherein, upon executing the program, the processor generates and stores a list of the utterance of the utterer, performs a semantic analysis on each utterance included in the list, and generates, when the utterance is determined to be an inappropriate utterance for a listener as a result of the semantic analysis, utterance restricting information corresponding to the inappropriate utterance.
Type: Grant
Filed: November 28, 2018
Date of Patent: July 6, 2021
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Chang Hyun Kim, Young Kil Kim
-
Patent number: 11051096
Abstract: A switchable headphone assembly for facilitating a user to clearly hear ambient sounds while wearing a pair of headphones includes a pair of headphones that is wearable on a user to emit audible sound into each of the user's ears. Each of the headphones is pivotable between an on position and an off position. A respective one of the headphones emits audible sound when the respective headphone is in the on position, and ceases emitting audible sound when the respective headphone is in the off position. In this way the pair of headphones facilitates the user to clearly hear ambient sounds.
Type: Grant
Filed: February 11, 2020
Date of Patent: June 29, 2021
Inventor: Ali Pournik
-
Patent number: 11030408
Abstract: Disclosed herein is an NLP system that is able to extract meaning from a natural language message using improved parsing techniques. Such an NLP system can be used in concert with an NLG system to interactively interpret messages and generate response messages in an interactive conversational stream. The parsing can include (1) named entity recognition that contextualizes the meanings of words in a message with reference to a knowledge base of named entities understood by the NLP and NLG systems, (2) syntactically parsing the message to determine a grammatical hierarchy for the named entities within the message, (3) reduction of recognized named entities into aggregations of named entities using the determined grammatical hierarchy and reduction rules to further clarify the message's meaning, and (4) mapping the reduced aggregation of named entities to an intent or meaning, wherein this intent/meaning can be used as control instructions for an NLG process.
Type: Grant
Filed: February 15, 2019
Date of Patent: June 8, 2021
Assignee: NARRATIVE SCIENCE INC.
Inventors: Maia Lewis Meza, Clayton Nicholas Norris, Michael Justin Smathers, Daniel Joseph Platt, Nathan D. Nichols
-
Patent number: 10803050
Abstract: In one embodiment, a method includes accessing a number of records describing a number of entities generated based on data collected from a number of data sources, where the records are grouped by data source; deduping the number of records in each group; selecting a data source as a core source; identifying, for a record in the core group, a candidate set including records from the non-core groups of records that satisfy conditions to be in the candidate set for the record; generating a feature vector for each pair of records between a record in the core group and a record in the candidate set; computing a probability that the pair of records describe a common entity for each pair of records; and linking the record in the candidate set to a globally unique entity identifier identifying a unique entity if the probability exceeds a threshold.
Type: Grant
Filed: July 27, 2018
Date of Patent: October 13, 2020
Assignee: Facebook, Inc.
Inventor: Markku Salkola
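The feature-vector-and-threshold linking step can be sketched compactly. The feature choices, field names, and the stand-in probability model are all assumptions for illustration; in the patent, the probability would come from a trained model over richer features.

```python
def link_records(core_record, candidates, model, threshold=0.9):
    """For each candidate record, build a feature vector against the core
    record, ask `model` (any callable mapping features to a probability)
    for a match probability, and link the candidate to the core record's
    globally unique entity ID when the probability clears the threshold."""
    linked = []
    for cand in candidates:
        features = [
            core_record["name"] == cand["name"],
            core_record["city"] == cand["city"],
        ]
        if model(features) > threshold:
            linked.append((cand["id"], core_record["entity_id"]))
    return linked

model = lambda features: sum(features) / len(features)  # stand-in "model"
core = {"name": "Acme", "city": "Oslo", "entity_id": "E1"}
candidates = [
    {"id": "r1", "name": "Acme", "city": "Oslo"},
    {"id": "r2", "name": "Acme", "city": "Bergen"},
]
links = link_records(core, candidates, model)
```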
-
Patent number: 10672293
Abstract: Natural language learning in context is provided by generating combined text of a user's native tongue and language to be learned. The combined text is generated based on elements of code-switching including syntax and semantics. Combining text based on elements of code-switching maximizes the learnability or the likelihood of retaining certain text of a foreign language.
Type: Grant
Filed: March 14, 2014
Date of Patent: June 2, 2020
Assignee: Cornell University
Inventors: Igor Labutov, Hod Lipson
-
Patent number: 10594900
Abstract: An image forming apparatus includes a reading section and an image forming section. The reading section reads a plurality of images formed on a document. The image forming section forms the images on a plurality of sheets. The images include a first image having a first color, and one or more second images having a second color differing from the first color. The sheets include a first sheet and a second sheet differing from the first sheet. The image forming section forms the first image on the first sheet and the second images on the second sheet.
Type: Grant
Filed: February 26, 2018
Date of Patent: March 17, 2020
Assignee: KYOCERA Document Solutions Inc.
Inventors: Takuya Fukata, Koji Minakuchi
-
Patent number: 10546580
Abstract: Methods, systems, and vehicle components for providing a corrected pronunciation suggestion to a user are disclosed. A method includes receiving, by a microphone communicatively coupled to a processing device, a voice input from the user, the voice input including a particularly pronounced word. The method further includes comparing, by the processing device, the particularly pronounced word to one or more reference words in a reference table; determining, by the processing device, that the particularly pronounced word has been potentially mispronounced by the user based on the one or more reference words in the reference table; determining, by the processing device, a corrected pronunciation suggestion from the one or more reference words; and providing, via a user interface, the corrected pronunciation suggestion to the user.
Type: Grant
Filed: December 5, 2017
Date of Patent: January 28, 2020
Assignee: TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC.
Inventors: Scott A. Friedman, Prince R. Remegio, Tim Uwe Falkenmayer, Roger Akira Kyle, Ryoma Kakimi, Luke D. Heide, Nishikant Narayan Puranik
-
Patent number: 10490188
Abstract: A method and system for language selection and synchronization in a vehicle are provided. The method includes receiving an audio representative of sounds captured within a vehicle, recognizing a language category for propagating information to a user of the vehicle according to the received audio, selecting the language category of the vehicle system according to the recognized language category in response to receiving a user acknowledgment, synchronizing the language category among a plurality of vehicle systems, and propagating information to the user of the vehicle using the synchronized language category.
Type: Grant
Filed: September 12, 2017
Date of Patent: November 26, 2019
Assignee: TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC.
Inventors: Ming Michael Meng, Krishna Buddharaju
-
Patent number: 10460746
Abstract: A process for real-time language detection and language heat map data structure modification includes a computing device receiving, from a first electronic audio source, first audio content and identifying a first geographic location of the first audio content. The computing device then determines that the first audio content includes first speech audio and identifies a first language in which the first speech audio is spoken. A first association is created between the first geographic location and the first language, and a real-time language heat-map data structure is modified to include the created first association. A further action is then taken by the computing device as a function of the modified real-time language heat-map data structure.
Type: Grant
Filed: October 31, 2017
Date of Patent: October 29, 2019
Assignee: MOTOROLA SOLUTIONS, INC.
Inventors: Fabio M. Costa, Alejandro G. Blanco, Patrick D. Koskan, Adrian Ho Yin Ng, Boon Beng Lee
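The heat-map data structure at the center of this abstract can be sketched as a toy class: per-cell counts of detected languages. The class name, grid-cell keys, and `dominant_language` query are illustrative assumptions, not the patented structure.

```python
from collections import defaultdict

class LanguageHeatMap:
    """Toy language heat map: counts of detected spoken languages per
    geographic cell, updated as (location, language) associations arrive."""
    def __init__(self):
        self.cells = defaultdict(lambda: defaultdict(int))

    def add_association(self, location, language):
        self.cells[location][language] += 1

    def dominant_language(self, location):
        langs = self.cells[location]
        return max(langs, key=langs.get) if langs else None

heat_map = LanguageHeatMap()
for loc, lang in [("grid-7", "es"), ("grid-7", "es"), ("grid-7", "en")]:
    heat_map.add_association(loc, lang)
```

The "further action" in the abstract could then be driven by a query like `dominant_language`, e.g. dispatching a Spanish-speaking responder to cell grid-7.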
-
Patent number: 10380912
Abstract: A language learning system with automated user-created content to mimic native language acquisition processes. The system replaces less appealing learning content with a student's favorite content to increase motivation to learn, and by deemphasizing the goal of understanding the content, it allows students to fully use their natural ability to listen to and reproduce sounds, the most effective process for acquiring listening and speaking skills.
Type: Grant
Filed: September 22, 2016
Date of Patent: August 13, 2019
Inventor: Mongkol Thitithamasak
-
Patent number: 10380263
Abstract: Systems and methods for translating a source segment are disclosed. In embodiments, a computer-implemented method for translating a source segment comprises receiving, by a computing device, the source segment in a first language to be translated into a second language; identifying, by the computing device, linguistic markers within the source segment and associated noise values to produce a tagged source segment, wherein the linguistic markers are associated with one or more linguistic patterns likely to introduce noise into a translation channel; transforming, by the computing device, the tagged source segment into an amplified source segment; and sending, by the computing device, the amplified source segment to a machine translation module, wherein the machine translation module is configured to process the amplified source segment to produce a return amplified match in the second language.
Type: Grant
Filed: January 25, 2017
Date of Patent: August 13, 2019
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Alejandro Martinez Corria, Santiago Pont Nesta, Consuelo Rodríguez Magro, Francis X. Rojas, Linda F. Traudt, Saroj K. Vohra
-
Patent number: 10283013
Abstract: A computer-assisted system and method for foreign language instruction. A motion picture having audio dialog in a target language is stored, in computer-readable form, with chapter divisions, scene subdivisions, and dialog sequence sub-subdivisions. In an Engage Mode, the motion picture is played on a display screen sequence-by-sequence, scene-by-scene, and chapter-by-chapter for a student listening to the audio dialog on a speaker. Interlinear target and source language subtitles are provided with interactive capabilities accessed through cursor movement or other means. The interlinear subtitles may be semantically color-mapped. After selecting a scene to view, the student progresses through a series of modules that break down and dissect each dialog sequence of the scene. The student studies each dialog sequence before moving to the next scene. Likewise, all scenes in a chapter are studied before moving to the next chapter and ultimately completing the motion picture.
Type: Grant
Filed: May 13, 2014
Date of Patent: May 7, 2019
Assignee: Mango IP Holdings, LLC
Inventors: Kimberly Cortes, Jason Teshuba, Michael Teshuba, Ryan Whalen, Michael Goulas, Anthony Ciannamea, Lilia Mouma
-
Patent number: 10025770
Abstract: A content generation service is described that generates content for electronic documents in different languages based upon templates. The templates may include paragraph templates composed of sentence types including sentence templates. The sentence templates may further include variables having corresponding attributes. Each of the paragraph templates, sentence templates, and attributes may be hierarchically organized. The content generation service may obtain data describing an item of interest, such as a travel item. The obtained data may further specify a document language, section, and paragraph for which content is to be generated. Content is generated for variables in hierarchical order, with higher-ranked paragraphs considered first. Within the highest-ranked paragraph, a sentence type is selected and the variables within the highest-ranked sentence template of the sentence type are considered.
Type: Grant
Filed: June 3, 2013
Date of Patent: July 17, 2018
Assignee: Expedia, Inc.
Inventors: Rene Waksberg, Donny Hsu, Patrick Bradley
-
Patent number: 9984068
Abstract: Systems, apparatus, computer-readable media, and methods to provide filtering and/or search based at least in part on semantic representations of words in a document subject to the filtering and/or search are disclosed. Furthermore, key words for conducting the filtering and/or search, such as taboo words and/or search terms, may be semantically compared to the semantic representation of the words in the document. A common semantic vector space, such as a base language semantic vector space, may be used to compare the key word semantic vectors and the semantic vectors of the words of the document, regardless of the native language in which the document is written or the language in which the key words are provided.
Type: Grant
Filed: September 18, 2015
Date of Patent: May 29, 2018
Assignee: McAfee, LLC
Inventors: Edward Dixon, Marcin Dziduch, Craig Olinsky
-
Patent number: 9939922
Abstract: Embodiments of the present invention disclose an input method for Chinese pinyin. The method comprises: obtaining an operation position and an operation duration of inputting a character through a character input platform; determining a combination of pinyin letters corresponding to the operation position according to mapping relationships between operation positions of the character input platform and character information; and selecting a pinyin letter from the combination of pinyin letters corresponding to the operation position, or selecting the combination of pinyin letters corresponding to the operation position, as a character input through the character input platform, according to mapping relationships between operation durations, each corresponding to one of the operation positions of the character input platform, and character information. The embodiments of the present invention further disclose a terminal.
Type: Grant
Filed: January 13, 2014
Date of Patent: April 10, 2018
Assignee: DONGGUAN GOLDEX COMMUNICATION TECHNOLOGY CO., LTD.
Inventor: Ran Liu
-
Patent number: 9805028
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for translating terms using numeric representations. One of the methods includes obtaining data that associates each term in a vocabulary of terms in a first language with a respective high-dimensional representation of the term; obtaining data that associates each term in a vocabulary of terms in a second language with a respective high-dimensional representation of the term; receiving a first language term; and determining a translation into the second language of the first language term from the high-dimensional representation of the first language term and the high-dimensional representations of terms in the vocabulary of terms in the second language.
Type: Grant
Filed: September 17, 2015
Date of Patent: October 31, 2017
Assignee: Google Inc.
Inventors: Ilya Sutskever, Tomas Mikolov, Jeffrey Adgate Dean, Quoc V. Le
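The final step of this abstract — picking a second-language term from high-dimensional representations — can be sketched as a nearest-neighbor lookup by cosine similarity. This sketch assumes both vocabularies already share one vector space; the patented method additionally learns a mapping between per-language spaces, which is omitted here.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def translate(term, source_vecs, target_vecs):
    """Return the target-language term whose vector lies closest to the
    source term's vector."""
    query = source_vecs[term]
    return max(target_vecs, key=lambda t: cosine(query, target_vecs[t]))

# Tiny hand-made vectors for illustration only.
source_vecs = {"cat": [1.0, 0.0]}
target_vecs = {"gato": [0.9, 0.1], "perro": [0.0, 1.0]}
```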
-
Patent number: 9747271
Abstract: According to various embodiments of the disclosure, techniques for generating outgoing messages are disclosed. The technique includes receiving a request to generate an outgoing message for a recipient and retrieving one or more recipient preferences of the recipient from a recipient preferences database. The one or more recipient preferences relate to customization of messages that are to be delivered to the recipient. The technique further includes retrieving a message template from a plurality of message templates stored in a message template database based on the request and the one or more recipient preferences. The technique also includes generating the outgoing message based on the retrieved message template and the one or more recipient preferences, and providing the outgoing message to the recipient.
Type: Grant
Filed: January 28, 2016
Date of Patent: August 29, 2017
Assignee: GOOGLE INC.
Inventors: Kirill Buryak, Andrew Swerdlow, Luke Hiro Swartz, Cibu Chalissery Johny
-
Patent number: 9710463
Abstract: A two-way speech-to-speech (S2S) translation system actively detects a wide variety of common error types and resolves them through user-friendly dialog with the user(s). Examples include detecting out-of-vocabulary (OOV) named entities and terms, sensing ambiguities, homophones, idioms, and ill-formed input, as well as interactive strategies for recovering from such errors. In some examples, different error types are prioritized, and systems implementing the approach can include an extensible architecture for implementing these decisions.
Type: Grant
Filed: December 6, 2013
Date of Patent: July 18, 2017
Assignee: Raytheon BBN Technologies Corp.
Inventors: Rohit Prasad, Rohit Kumar, Sankaranarayanan Ananthakrishnan, Sanjika Hewavitharana, Matthew Roy, Frederick Choi
-
Patent number: 9652453
Abstract: A system and method for estimating parameters for features of a translation scoring function for scoring candidate translations in a target domain are provided. Given a source language corpus for a target domain, a similarity measure is computed between the source corpus and a target domain multi-model, which may be a phrase table derived from phrase tables of comparative domains, weighted as a function of similarity with the source corpus. The parameters of the log-linear function for these comparative domains are known. A mapping function is learned between similarity measure and parameters of the scoring function for the comparative domains. Given the mapping function and the target corpus similarity measure, the parameters of the translation scoring function for the target domain are estimated. For parameters where a mapping function with a threshold correlation is not found, another method for obtaining the target domain parameter can be used.
Type: Grant
Filed: April 14, 2014
Date of Patent: May 16, 2017
Assignee: XEROX CORPORATION
Inventors: Prashant Mathur, Sriram Venkatapathy, Nicola Cancedda
-
Patent number: 9646002
Abstract: There is provided a method that includes displaying, on a display, a viewing pane of available video contents including a first video content; receiving a selection of the first video content from the available video contents; transmitting a language selection and the selection of the first video content to a server; receiving a language content corresponding to the language selection and the selection of the first video content from the server; and displaying, on the display, the first video content in synchronization with playing the language content.
Type: Grant
Filed: December 15, 2014
Date of Patent: May 9, 2017
Assignee: Disney Enterprises, Inc.
Inventors: Artin Nazarian, Greg Head, Paul Marz
-
Patent number: 9471265
Abstract: An image processing system includes: a translating unit that translates text contained in translation target data to be subjected to translation processing and acquires translation data; and a generating unit that generates image data containing the translation target data and translation image data in which the text of the translation target data is replaced with the translation data.
Type: Grant
Filed: October 27, 2014
Date of Patent: October 18, 2016
Assignee: RICOH COMPANY, LTD.
Inventor: Takeshi Shimazaki
-
Patent number: 9418102
Abstract: A pair search word, obtained by pairing a preceding search word with a subsequent search word, is generated in accordance with the order of the search times, from search words associated with identical user-specifying information whose search times fall within a predetermined interval, with reference to a search word memory means which stores the search words. A first appearance count of a specific pair search word among the generated pair search words is calculated, and a second appearance count of the reverse-order pair search word, obtained by reversing the order of the specific pair search word, is calculated. When the magnitude relationship between the first appearance count and the second appearance count satisfies predetermined conditions, the preceding search word and the subsequent search word are stored as a thesaurus.
Type: Grant
Filed: August 10, 2012
Date of Patent: August 16, 2016
Assignee: Rakuten, Inc.
Inventors: Teiko Inoue, Taku Yasui, Kenji Sugiki
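The pair-counting idea can be sketched as follows: count consecutive search-word pairs per user session and compare each pair's count with its reverse. The acceptance condition (smaller count at least half the larger) is a hypothetical stand-in for the patent's unspecified "predetermined conditions".

```python
from collections import Counter

def build_thesaurus(sessions, ratio=0.5):
    """From per-user, time-ordered search words, count consecutive pairs.
    When a pair and its reverse both appear and their counts are balanced
    (hypothetical condition: min/max >= ratio), treat the two words as
    thesaurus entries for each other."""
    pairs = Counter()
    for words in sessions:  # each session: one user's searches, in time order
        pairs.update(zip(words, words[1:]))
    thesaurus = set()
    for (a, b), n in pairs.items():
        m = pairs[(b, a)]  # appearance count of the reverse-order pair
        if m and min(n, m) / max(n, m) >= ratio:
            thesaurus.add(frozenset((a, b)))
    return thesaurus

sessions = [["laptop", "notebook"], ["notebook", "laptop"], ["laptop", "charger"]]
thesaurus = build_thesaurus(sessions)
```

The intuition: users who search a word and then its synonym do so in both orders roughly equally, whereas ordered refinements like "laptop" then "charger" rarely occur reversed.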
-
Patent number: 9419938Abstract: A system for associating an action of a user with a message corresponding to the action, where the message is visually displayed on a series of sequentially disposed discrete mats. The message is uttered by the user and the message is verified using a speech analyzer. The system includes at least two mats, each mat having a display, a transmitter, a receiver, and a presence sensor indicating the presence of the user and configured to indicate the intention of the user to add an answer to a composite answer. The system also includes an audio receiver that receives an audio input from the user and an indicator device that indicates whether the mat is a head mat.Type: GrantFiled: September 3, 2013Date of Patent: August 16, 2016Inventor: Susan Jean Carulli
-
Patent number: 9400772Abstract: A method and device for selecting a word to be defined in a mobile communication terminal having an electronic dictionary function. The method includes selecting a word in a displayed text document in response to a first input, displaying the selected word in a search window, searching for the displayed word in response to a request to search for the displayed word, displaying information resulting from the search, and terminating display of the information and displaying the text document.Type: GrantFiled: April 15, 2014Date of Patent: July 26, 2016Assignee: Samsung Electronics Co., Ltd.Inventors: Seok-Gon Lee, Jae-Gon Son, Ki-Tae Kim, Yong-Hee Han
-
Patent number: 9361908Abstract: Systems and methods are provided for scoring non-native speech. Two or more speech samples are received, where each of the samples are of speech spoken by a non-native speaker, and where each of the samples are spoken in response to distinct prompts. The two or more samples are concatenated to generate a concatenated response for the non-native speaker, where the concatenated response is based on the two or more speech samples that were elicited using the distinct prompts. A concatenated speech proficiency metric is computed based on the concatenated response, and the concatenated speech proficiency metric is provided to a scoring model, where the scoring model generates a speaking score based on the concatenated speech metric.Type: GrantFiled: July 24, 2012Date of Patent: June 7, 2016Assignee: Educational Testing ServiceInventors: Klaus Zechner, Su-Youn Yoon, Lei Chen, Shasha Xie, Xiaoming Xi, Chaitanya Ramineni
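The concatenate-then-score flow described above can be sketched minimally, with word lists standing in for audio and toy stand-ins for the proficiency metric and scoring model (all names here are assumptions for illustration):

```python
def score_concatenated(samples, metric, scoring_model):
    """Join the per-prompt responses into one concatenated response,
    compute a single proficiency metric over it, and feed that metric
    to a scoring model to produce a speaking score."""
    concatenated = [token for sample in samples for token in sample]
    return scoring_model(metric(concatenated))

# Toy stand-ins: token count as the metric, a capped linear scoring model.
token_count = len
linear_model = lambda m: min(4.0, m / 2)
```

Usage: `score_concatenated([["i", "like", "tea"], ["tea", "is", "good", "indeed"]], token_count, linear_model)` returns `3.5` (7 tokens, halved and capped at 4.0).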
-
Patent number: 9323845Abstract: A portable communication device for extracting a user interest comprises a term vector generation unit for generating, based on types of text data stored in the portable communication device, a term vector representing each text data, a subject classification tree storage unit for storing a subject classification tree, which is a tree structure in which multiple nodes, each including at least one training data and representing a subject, are connected to one another, and a similarity calculation unit for calculating a similarity between the term vector and the training data for each node in the subject classification tree. The similarity calculation unit extracts a node name representing the user interest from the subject classification tree based on the similarity.Type: GrantFiled: January 31, 2011Date of Patent: April 26, 2016Assignee: KOREA UNIVERSITY RESEARCH AND BUSINESS FOUNDATIONInventors: Sang Keun Lee, Jong Woo Ha, Jung Hyun Lee
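The term-vector and similarity steps above can be sketched with a bag-of-words vector and cosine similarity; the abstract does not specify these exact formulas, and the flat dictionary below stands in for the subject classification tree, so treat everything here as an illustrative assumption:

```python
import math
from collections import Counter

def term_vector(texts):
    """Bag-of-words term vector over a device's stored text data."""
    vec = Counter()
    for text in texts:
        vec.update(text.lower().split())
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(count * v[term] for term, count in u.items())
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

def user_interest(texts, subject_tree):
    """Return the node name whose training data is most similar to the
    user's term vector (node name -> training texts)."""
    vec = term_vector(texts)
    return max(subject_tree,
               key=lambda name: cosine(vec, term_vector(subject_tree[name])))
```
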
-
Patent number: 9071916Abstract: A telephone handset (1), e.g. a mobile phone, is equipped with an add-on module (41). Thereby, the module (41) comprises a microphone unit (9a) which is operated instead of the microphone unit intrinsically provided in the handset (1). Further, the module (41) comprises a short-range wireless transmission unit (23a) which is operated instead of a speaker unit (7) which is intrinsically provided in the handset (1). The short-range wireless communication unit (23a) establishes communication with a respective short-range wireless communication unit (19a) in a hearing device (10) as soon as a distance between the telephone handset (1) and the hearing device (10) drops below a predetermined threshold value as detected by a distance detection unit (21).Type: GrantFiled: March 11, 2008Date of Patent: June 30, 2015Assignee: PHONAK AGInventors: Valentin Chapero-Rueda, Andi Vonlanthen, Lukas Florian Erni, Stefan Haenggi
-
Publication number: 20150125834Abstract: A real-time interactive collaboration system is provided, the system including at least: a collaboration board electronically connected between and amongst an instructor and one or more students, wherein the instructor and students are able to interact in real-time with previously uploaded objects that are part of a learning activity; means for the instructor and students to simultaneously share a predetermined view of the collaboration board; and means for the instructor and students to simultaneously share a portion of a predetermined view of the collaboration board at the discretion of the instructor. Methods of use for the foregoing system are also provided.Type: ApplicationFiled: June 11, 2014Publication date: May 7, 2015Applicant: BERLITZ INVESTMENT CORPORATIONInventor: Claudia Marcela Mendoza Tascon
-
Publication number: 20150118660Abstract: Novel systems/methods for generating language learning lessons are disclosed in the present application. A learner speaks a sentence in a foreign language aloud while an instructor types exactly how the sentence is spoken in a first column on an instruction sheet. The instructor then types the correct sentence, marked with symbols noting incorrect and missing words, in a second column parallel to the first. Unfamiliar words are placed in a third column and mispronounced words in a fourth column. The invention further includes features that allow learners to see their verbal mistakes, become conscious of them, and thereby use those words correctly much more efficiently.Type: ApplicationFiled: October 24, 2013Publication date: April 30, 2015Inventor: Vladimir Kovin
-
Publication number: 20150099247Abstract: Apparatus for aiding learning by a person comprises a cover or shield (1) for concealing from the person a part of the person's body, and a webcam (6) and a screen (10) for visually displaying to the person, during concealment of the concealed body part, images of a part of the person's body not in direct view of the person. The apparatus may be used in the learning of a skill, such as hand-writing. In another embodiment, the shield is a collar worn to conceal part of the wearer's body, and the webcam and screen display the concealed body part in real time to the wearer. This apparatus can be used in many applications, such as to learn sports activities or to correct body image, posture or movement.Type: ApplicationFiled: March 15, 2013Publication date: April 9, 2015Inventor: Jacklyn Bryant
-
Patent number: 8996352Abstract: Various embodiments described herein facilitate multi-lingual communications. The systems and methods of some embodiments enable multi-lingual communications through different modes of communication including, for example, Internet-based chat, e-mail, text-based mobile phone communications, postings to online forums, postings to online social media services, and the like. Certain embodiments implement communication systems and methods that translate text between two or more languages. Users of the systems and methods may be incentivized to submit corrections for inaccurate or erroneous translations, and may receive a reward for these submissions. Systems and methods for assessing the accuracy of translations are described.Type: GrantFiled: June 3, 2014Date of Patent: March 31, 2015Assignee: Machine Zone, Inc.Inventors: Francois Orsini, Nikhil Bojja, Arun Nedunchezhian
-
Publication number: 20150088573Abstract: An online learning platform for improving language conversation skills, with management tools to control timing and monetization of the session. The web platform includes tutor search, scheduling, social networking, a video viewer, and payment processes in a fully integrated package to create an open and safe environment for language students. Students use the platform's search process to find a tutor who matches their conversation needs. Tutors and students use the integrated scheduling process to coordinate conversation sessions, called Vee-sessions. Students and tutors can communicate via social networking tools within the platform as well as through links to other existing social networking platforms. A student and tutor conduct an online video conversation and use language-learning tools to help improve the student's oral communication abilities.Type: ApplicationFiled: September 19, 2014Publication date: March 26, 2015Inventor: Andres Abeyta
-
Patent number: 8990082Abstract: A method for scoring non-native speech includes receiving a speech sample spoken by a non-native speaker and performing automatic speech recognition and metric extraction on the speech sample to generate a transcript of the speech sample and a speech metric associated with the speech sample. The method further includes determining whether the speech sample is scorable or non-scorable based upon the transcript and speech metric, where the determination is based on an audio quality of the speech sample, an amount of speech of the speech sample, a degree to which the speech sample is off-topic, whether the speech sample includes speech from an incorrect language, or whether the speech sample includes plagiarized material. When the sample is determined to be non-scorable, an indication of non-scorability is associated with the speech sample. When the sample is determined to be scorable, the sample is provided to a scoring model for scoring.Type: GrantFiled: March 23, 2012Date of Patent: March 24, 2015Assignee: Educational Testing ServiceInventors: Su-Youn Yoon, Derrick Higgins, Klaus Zechner, Shasha Xie, Je Hun Jeon, Keelan Evanini
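The scorable/non-scorable decision above is a conjunction of filters over the transcript and speech metrics. A rule-based sketch follows; the feature names and thresholds are illustrative, not the patent's:

```python
def is_scorable(sample, min_quality=0.5, min_words=20,
                max_offtopic=0.8, expected_lang="en"):
    """Return True only if the response passes every filter the abstract
    lists: audio quality, amount of speech, topicality, language of the
    response, and absence of plagiarized material."""
    return (sample["audio_quality"] >= min_quality
            and sample["word_count"] >= min_words
            and sample["offtopic_score"] <= max_offtopic
            and sample["language"] == expected_lang
            and not sample["plagiarized"])
```

A response flagged non-scorable would carry an indication of non-scorability instead of being passed to the scoring model.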
-
Publication number: 20150079553Abstract: The present invention provides a method and system for teaching a language to a user. An index with expression records is accessed. Each expression record has at least one image search expression. One of the expression records in the index is examined to identify at least one image search expression for the examined record. A translated expression is acquired. At least one image associated with each identified image search expression is sought. The translated expression and at least one of the images are displayed to the user.Type: ApplicationFiled: September 16, 2013Publication date: March 19, 2015Inventor: Jeffrey L. Arnold
-
Publication number: 20150079554Abstract: Disclosed herein are a language learning system and a language learning method. A language learning system includes a user terminal configured to receive utterance information of a user as speech or text and to output learning data transferred through a network to the user as speech or text, and a main server which includes a learning processing unit configured to analyze the meaning of the user's utterance information, to generate at least one response utterance candidate corresponding to dialogue learning in a predetermined domain so as to induce a correct answer from the user, and to carry on a dialogue depending on the domain, and a storage unit linked with the learning processing unit and configured to store material data or a dialogue model for the dialogue learning.Type: ApplicationFiled: January 3, 2013Publication date: March 19, 2015Inventors: Gary Geunbae Lee, Hyungjong Noh, Kyusong Lee
-
Method And Apparatus For Teaching The Pronunciation of Alphabet Characters Of An Unfamiliar Language
Publication number: 20150064664Abstract: Method and apparatus for teaching a student the pronunciation of alphabet characters of an unfamiliar language include providing a group of distinct sequential narrative media segments, each narrative media segment including a partial transliterated narrative text comprising words of a base language known to the student formed by base characters of a base alphabet, and wherein a number of target characters comprising a subset of the target alphabet are substituted in the text for corresponding base characters of the base alphabet, the sequential media segments containing successively increasing numbers of target characters of the target alphabet substituted for corresponding base characters of the base alphabet; and displaying each narrative media segment to the student sequentially.Type: ApplicationFiled: August 27, 2013Publication date: March 5, 2015Applicant: Danimar Media LLCInventor: Marisa Gobuty
-
Publication number: 20150056580Abstract: The present invention provides a pronunciation correction method for assisting a foreign language learner in correcting the position of the tongue or the shape of the lips when pronouncing a foreign language. According to an implementation of this invention, the pronunciation correction method comprises receiving an audio signal constituting the user's pronunciation of a phonetic symbol selected as a target to be practiced, analyzing the audio signal, generating a tongue position image according to the audio signal based on the analysis results, and displaying the generated tongue position image.Type: ApplicationFiled: August 25, 2014Publication date: February 26, 2015Applicant: SELI INNOVATIONS INC.Inventors: Jin Ho KANG, Moon Kyoung CHO, Yong Min LEE
-
Publication number: 20150044644Abstract: A training method for language learning is provided, which includes: building user data of a user in a storage unit; retrieving target contents from a CTC database to a UTC database of the storage unit in accordance with difficulty levels of the target contents and the acquired level; setting a test by selecting test contents of the target contents in the UTC database as questions of the test; scoring the user's answers to the questions; performing an AL adjustment for the acquired level in accordance with a score of the test; performing a DL adjustment for the difficulty levels in accordance with correctness of each answer to the respective question in the test; and uploading the target contents with changed difficulty levels to the CTC database to update the target contents in the CTC database.Type: ApplicationFiled: August 12, 2013Publication date: February 12, 2015Applicant: SHEPHERD DEVELOPMENT LLC.Inventor: Ming-Wei Hsu
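One way the two adjustments in the abstract above could work, sketched with simple threshold rules; the actual AL (acquired level) and DL (difficulty level) update rules are not specified in the abstract, so the thresholds and step sizes below are placeholders:

```python
def adjust_after_test(acquired_level, item_levels, answers, step=1):
    """AL adjustment: move the learner's acquired level according to the
    overall test score. DL adjustment: nudge each test item's difficulty
    level according to whether it was answered correctly (an item answered
    correctly is treated as easier than rated, and vice versa)."""
    score = sum(answers) / len(answers)
    if score >= 0.8:
        acquired_level += step                     # learner advanced
    elif score < 0.5:
        acquired_level = max(1, acquired_level - step)
    new_levels = [max(1, lvl - step) if correct else lvl + step
                  for lvl, correct in zip(item_levels, answers)]
    return acquired_level, new_levels
```

The re-rated items would then be uploaded back to the shared content database so later tests draw on the updated difficulty levels.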