Translation Machine Patents (Class 704/2)
  • Patent number: 10769387
    Abstract: Implementations of the present disclosure are directed to a method, a system, and an article for translating chat messages. An example method can include: receiving an electronic text message from a client device of a user; normalizing the electronic text message to generate a normalized text message; tagging at least one phrase in the normalized text message with a marker to generate a tagged text message, the marker indicating that the at least one phrase will be translated using a rule-based system; translating the tagged text message using the rule-based system and a machine translation system to generate an initial translation; and post-processing the initial translation to generate a final translation.
    Type: Grant
    Filed: September 19, 2018
    Date of Patent: September 8, 2020
    Assignee: MZ IP Holdings, LLC
    Inventors: Pidong Wang, Nikhil Bojja, Shiman Guo
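    A minimal Python sketch of the normalize/tag/translate/post-process pipeline described in 10769387; the rule table, normalizer, and stubbed machine_translate call are illustrative assumptions, not the patented implementation:

```python
import re

# Hypothetical rule-based table for phrases that should bypass the MT system
# (e.g. chat slang); both the table and the normalizer are illustrative stubs.
RULE_TABLE = {"gg": "buena partida", "brb": "ahora vuelvo"}

def normalize(message: str) -> str:
    """Lowercase, collapse repeated characters and whitespace."""
    message = message.lower().strip()
    message = re.sub(r"(.)\1{2,}", r"\1\1", message)   # "sooooo" -> "soo"
    return re.sub(r"\s+", " ", message)

def tag_phrases(message: str) -> str:
    """Wrap phrases found in the rule table with a marker."""
    for phrase in RULE_TABLE:
        message = re.sub(rf"\b{re.escape(phrase)}\b", f"<rb>{phrase}</rb>", message)
    return message

def machine_translate(text: str) -> str:
    """Placeholder for a statistical/neural MT system."""
    return text  # a real system would translate this span

def translate(tagged: str) -> str:
    """Translate marked spans with the rule table and the rest with MT."""
    out = []
    for part in re.split(r"(<rb>.*?</rb>)", tagged):
        if part.startswith("<rb>"):
            out.append(RULE_TABLE[part[4:-5]])
        elif part:
            out.append(machine_translate(part))
    return "".join(out)

def post_process(text: str) -> str:
    """Final clean-up: spacing and capitalization."""
    text = re.sub(r"\s+", " ", text).strip()
    return text[:1].upper() + text[1:]

message = "GG   that was sooooo close, brb"
print(post_process(translate(tag_phrases(normalize(message)))))
```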
  • Patent number: 10762306
    Abstract: A computing system comprising: a control unit configured to: receive an input request for a point of interest; determine a first linguistic context for the input request based on one or more input request characteristics, a user profile, a location, and a first connotation database; translate the input request to a second linguistic context based on a translation flag and a second connotation database, wherein the second connotation database is mapped to the first connotation database; and a user interface, coupled to the control unit, configured to display a translation result for the input request based on the first linguistic context or the second linguistic context.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: September 1, 2020
    Assignee: Telenav, Inc.
    Inventors: Casey Carter, Shalu Grover, Gregory Stewart Aist, Jinghai Ren, Michele Santamaria
  • Patent number: 10762303
    Abstract: Provided are a method by which a translation server collects translated content from at least one device, and a translation server therefor. The translation server may provide, to users using at least one device, a translation request category for participation in translation, and collect translated content from the users. The translation server may provide rewards to the users providing the translated content.
    Type: Grant
    Filed: December 26, 2016
    Date of Patent: September 1, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hyun-kyung Kim, Hak-jung Kim, Yoon-jin Yoon, Haeng-sun Lim
  • Patent number: 10762302
    Abstract: A translation method includes: selecting a source word from a source sentence; generating mapping information including location information of the selected source word mapped to the selected source word in the source sentence; and correcting a target word, which is generated by translating the source sentence, based on location information of a feature value of the target word and the mapping information.
    Type: Grant
    Filed: September 26, 2017
    Date of Patent: September 1, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jihyun Lee, Hwidong Na, Hoshik Lee
  • Patent number: 10755048
    Abstract: The present disclosure discloses an artificial intelligence based method and apparatus for segmenting a sentence. A specific embodiment of the method includes: lexing a to-be-segmented original sentence to obtain a set of words in the original sentence; performing sentence segmentation steps on a to-be-segmented sentence having an initial value of the original sentence; if a sub-sentence fragment not belonging to the set of words exists, using that sub-sentence fragment as the to-be-segmented sentence and continuing to perform the sentence segmentation steps; and storing the original sentence in association with the plurality of sub-sentence fragments obtained each time the sentence segmentation steps are performed. The embodiment generates a segmentation result obtained by performing multi-layered segmentation of the original sentence.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: August 25, 2020
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventor: Yiming Wang
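    A rough Python illustration of the multi-layered segmentation loop in 10755048, assuming a stand-in segmentation step that simply splits a fragment in half at a word boundary (the abstract does not fix the concrete rule):

```python
def lex(sentence: str) -> set:
    """Hypothetical lexer: here, just the whitespace-separated words."""
    return set(sentence.split())

def segment_once(fragment: str) -> list:
    """One segmentation step: split the fragment in half at a word boundary.
    (Illustrative stand-in for whatever rule the patented method uses.)"""
    words = fragment.split()
    mid = max(1, len(words) // 2)
    return [" ".join(words[:mid]), " ".join(words[mid:])]

def multilayer_segment(sentence: str) -> dict:
    """Recursively segment, keeping every layer of fragments for the sentence."""
    vocabulary = lex(sentence)
    layers, queue = [], [sentence]
    while queue:
        fragment = queue.pop(0)
        if fragment in vocabulary or len(fragment.split()) <= 1:
            continue                      # already a known word: stop recursing
        pieces = segment_once(fragment)
        layers.append(pieces)
        queue.extend(p for p in pieces if p not in vocabulary)
    return {"sentence": sentence, "layers": layers}

print(multilayer_segment("the quick brown fox jumps over the lazy dog"))
```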
  • Patent number: 10755701
    Abstract: The present disclosure proposes a method and an apparatus for converting English speech information into a text. The method may include: receiving the English speech information inputted by a user, determining a target speech recognition model according to a preset algorithm, and identifying original phonemes of the English speech information by applying the target speech recognition model; performing a matching on the original phonemes by applying a phonetic model generated by pre-training English texts and a preset probability model, and determining a target phoneme matched successfully; and acquiring a target English text corresponding to the target phoneme, and displaying the target English text on a speech conversion textbox.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: August 25, 2020
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Qiang Cheng, Sheng Qian
  • Patent number: 10747963
    Abstract: A communication system is described. The communication system includes an automatic speech recognizer configured to receive a speech signal and to convert the speech signal into a text sequence. The communication system also includes a speech analyzer configured to receive the speech signal. The speech analyzer is configured to extract paralinguistic characteristics from the speech signal. The communication system further includes a translator coupled with the automatic speech recognizer. The translator is configured to convert the text sequence from a first language to a second language. In addition, the communication system includes a speech output device coupled with the automatic speech recognizer and the speech analyzer. The speech output device is configured to convert the text sequence into an output speech signal based on the extracted paralinguistic characteristics.
    Type: Grant
    Filed: October 30, 2011
    Date of Patent: August 18, 2020
    Assignee: SPEECH MORPHING SYSTEMS, INC.
    Inventor: Fathy Yassa
  • Patent number: 10740571
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating network outputs using insertion operations.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: August 11, 2020
    Assignee: Google LLC
    Inventors: Jakob D. Uszkoreit, Mitchell Thomas Stern, Jamie Ryan Kiros, William Chan
  • Patent number: 10741179
    Abstract: A configuration provides quality control compliance for a plurality of machine language interpreters. A processor receives a plurality of requests for human-spoken language interpretation from a first human-spoken language to a second human-spoken language. Further, the processor routes the plurality of requests to a plurality of machine language interpreters. In addition, an artificial intelligence system associated with the plurality of machine language interpreters determines one or more quality control criteria. The processor also monitors compliance with the one or more quality control criteria by the plurality of machine language interpreters during simultaneously occurring machine language interpretations performed by the machine language interpreters.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: August 11, 2020
    Assignee: Language Line Services, Inc.
    Inventors: Jeffrey Cordell, Lindsay D'Penha, Julia Berke
  • Patent number: 10733389
    Abstract: A computer-implemented method for leveraging computer aided input segmentations is provided. The computer aided input segmentations identify divisions in a source text. The divisions are utilized to translate the source text from a first language to a second language. In this regard, segmentation boundaries are designated within the source text based on the computer aided input segmentations, which determine the divisions. Each division is automatically translated into the second language to generate translated segments corresponding to the division. These translated segments are combined to generate a translated text.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: August 4, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Fei Huang, Jian-Ming Xu
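    A small Python sketch of the segment-translate-combine flow in 10733389; the boundary offsets and the per-segment translation call are hypothetical placeholders:

```python
# Hypothetical example: boundaries come from computer aided input segmentations
# (e.g. markers supplied while the source was typed); the offsets are made up.
def split_by_boundaries(source: str, boundaries: list) -> list:
    """Cut the source text at the given character offsets."""
    offsets = [0] + sorted(boundaries) + [len(source)]
    return [source[a:b] for a, b in zip(offsets, offsets[1:]) if source[a:b]]

def translate_segment(segment: str) -> str:
    """Placeholder for a per-segment machine translation call."""
    return f"[{segment.strip()} -> translated]"

def translate_with_segmentations(source: str, boundaries: list) -> str:
    segments = split_by_boundaries(source, boundaries)
    return " ".join(translate_segment(s) for s in segments)

text = "The contract was signed yesterday and delivery starts next week."
print(translate_with_segmentations(text, boundaries=[33]))
```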
  • Patent number: 10733390
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for language modeling. In one aspect, a system comprises: a masked convolutional decoder neural network that comprises a plurality of masked convolutional neural network layers and is configured to generate a respective probability distribution over a set of possible target embeddings at each of a plurality of time steps; and a modeling engine that is configured to use the respective probability distribution generated by the decoder neural network at each of the plurality of time steps to estimate a probability that a string represented by the target embeddings corresponding to the plurality of time steps belongs to the natural language.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: August 4, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Nal Emmerich Kalchbrenner, Karen Simonyan, Lasse Espeholt
  • Patent number: 10726210
    Abstract: A non-transitory computer-readable storage medium storing a program that causes a computer to execute a process, the process including receiving a first text written in a first language, generating a second text written in a second language, the second text being generated by translating the first text into the second language, generating a third text written in the first language, the third text being generated by translating the second text into the first language, specifying one or more first words included in the third text, extracting one or more documents including the one or more first words from a plurality of documents stored in a storage device, and outputting information regarding the one or more extracted documents.
    Type: Grant
    Filed: May 9, 2018
    Date of Patent: July 28, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Takuya Yoshida, Ryuichi Takagi
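    A toy Python version of the round-trip retrieval idea in 10726210, with the translation engine replaced by a hard-coded stub; function names are not from the patent:

```python
# Round-trip translation as a query-expansion device: the back-translated text
# often surfaces synonyms of the original wording, and those words are then
# used to retrieve documents from storage.  Translation calls are stubbed.
def translate(text: str, src: str, dst: str) -> str:
    """Placeholder for a real MT engine."""
    demo = {("profit rose sharply", "en", "ja"): "利益が急増した",
            ("利益が急増した", "ja", "en"): "earnings increased rapidly"}
    return demo.get((text, src, dst), text)

def round_trip_keywords(first_text: str, src: str, dst: str) -> set:
    second = translate(first_text, src, dst)   # first -> second language
    third = translate(second, dst, src)        # back into the first language
    return set(third.lower().split())

def find_documents(documents: list, keywords: set) -> list:
    return [d for d in documents if keywords & set(d.lower().split())]

docs = ["Quarterly earnings increased rapidly in Asia.",
        "Head count stayed flat this quarter."]
print(find_documents(docs, round_trip_keywords("profit rose sharply", "en", "ja")))
```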
  • Patent number: 10726211
    Abstract: Systems for facilitating acquisition of a language by a user receive initial user input representing a phrase in a first language. The phrase is translated to a second language then parsed to generate constituent data indicative of one or more groups of related words within the phrase. The constituents may constitute comprehensible linguistic inputs for communication to a user when the initial phrase is not a comprehensible input. Output data is generated to output the constituents to the user. Based on the relationships between the words, the characteristics of the second language, or user data indicative of the user's previous comprehension of constituents, the amplitude, frequency, output rate, or other audible characteristic of output may be modified to emphasize or deemphasize portions of the constituents.
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: July 28, 2020
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Nikolas Wolfe, John Thomas Beck, Lee Michael Bossio, Logan Gray Chittenden, Matthew Darren Choate, Matthew Girouard
  • Patent number: 10728405
    Abstract: A display capable of communicating with a server includes: an auxiliary storage that stores character information of a specific language, the character information including a character string contained in a screen when the screen is displayed; an acceptor that accepts selection of a language to be used in displaying the screen; a receiver that receives a translation result obtained by translating the character information of the specific language into a new language from the server when the language, selection of which has been accepted by the acceptor, is the new language different from the specific language; a hardware processor that stores, in the auxiliary storage, character information of the new language, the character information being created based on the translation result received by the receiver; and a display part that displays the screen based on the character information of the new language.
    Type: Grant
    Filed: April 10, 2019
    Date of Patent: July 28, 2020
    Assignee: Konica Minolta, Inc.
    Inventor: Kazutoshi Yu
  • Patent number: 10719763
    Abstract: As provided herein, a domain model, corresponding to a domain of an image, may be merged with a pre-trained fundamental model to generate a trained fundamental model. The trained fundamental model may comprise a feature description of the image converted into a binary code. Responsive to a user submitting a search query, a coarse image search may be performed, using a search query binary code derived from the search query, to identify a candidate group, comprising one or more images, having binary codes corresponding to the search query binary code. A fine image search may be performed on the candidate group utilizing a search query feature description derived from the search query. The fine image search may be used to rank images within the candidate group based upon a similarity between the search query feature description and feature descriptions of the one or more images within the candidate group.
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: July 21, 2020
    Assignee: Oath Inc.
    Inventor: JenHao Hsiao
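    A compact Python sketch of the coarse-then-fine search in 10719763, using Hamming distance on toy binary codes and cosine similarity on made-up feature vectors:

```python
import math

# Hypothetical image index: each image has a short binary code for the coarse
# pass and a float feature vector for the fine pass.  All values are invented.
INDEX = {
    "cat.jpg": ("1011", [0.9, 0.1, 0.4]),
    "dog.jpg": ("1001", [0.2, 0.8, 0.5]),
    "car.jpg": ("0100", [0.1, 0.2, 0.9]),
}

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(x * x for x in v)))

def search(query_code: str, query_features, radius: int = 1, top_k: int = 2):
    # Coarse pass: keep images whose binary code is within a Hamming radius.
    candidates = [name for name, (code, _) in INDEX.items()
                  if hamming(query_code, code) <= radius]
    # Fine pass: rank the candidate group by feature-vector similarity.
    ranked = sorted(candidates,
                    key=lambda n: cosine(query_features, INDEX[n][1]),
                    reverse=True)
    return ranked[:top_k]

print(search("1011", [0.85, 0.15, 0.35]))
```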
  • Patent number: 10719668
    Abstract: A system for translation from a first human language to a second language including one or more processors and one or more non-transitory memory units coupled to said one or more processors storing computer readable program instructions, wherein the computer readable program instructions configure the one or more processors to perform the steps of: receive an input representation of information in the first language, convert the input representation of information in the first language to one or more sets of one or more marked-lemma dependency trees (MDTs), convert said one or more sets of one or more marked-MDTs to a representation of information in said second language, and output said representation of information in said second language, wherein the MDTs are represented in a mathematically-equivalent or isomorphic memory structure using one of word embeddings, sense embeddings, tree kernels, capsules, pose vectors, embeddings, and vectorizations.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: July 21, 2020
    Inventor: Graham Morehead
  • Patent number: 10713444
    Abstract: An apparatus for providing a translations editor on at least one user terminal. The apparatus includes a content data display unit for displaying text data and image data, which are extracted from content data, together; and a text data editor unit including a first-language text display unit for displaying a first-language text included in the text data, and a second-language text display unit in which a translation of the first-language text is input as a second-language text by a user of the user terminal.
    Type: Grant
    Filed: November 25, 2015
    Date of Patent: July 14, 2020
    Assignee: NAVER Webtoon Corporation
    Inventors: Soo Yeon Park, Seung Hwan Kim, Ju Han Lee, Ji Hoon Ha
  • Patent number: 10706236
    Abstract: Disclosed herein is computer technology that applies natural language processing (NLP) techniques to training data to generate information used to train a natural language generation (NLG) system to produce output that stylistically resembles the training data. In this fashion, the NLG system can be readily trained with training data supplied by a user so that the NLG system is adapted to produce output that stylistically resembles such training data. In an example, an NLP system detects a plurality of linguistic features in the training data. These detected linguistic features are then aggregated into a specification data structure that is arranged for training the NLG system to produce natural language output that stylistically resembles the training data.
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: July 7, 2020
    Assignee: NARRATIVE SCIENCE INC.
    Inventors: Daniel Joseph Platt, Nathan D. Nichols, Michael Justin Smathers, Jared Lorince
  • Patent number: 10699073
    Abstract: Implementations of the present disclosure are directed to a method, a system, and a computer program storage device for identifying a language in a message. Non-language characters are removed from a text message to generate a sanitized text message. An alphabet and/or a script are detected in the sanitized text message by performing at least one of (i) an alphabet-based language detection test to determine a first set of scores and (ii) a script-based language detection test to determine a second set of scores. Each score in the first set of scores represents a likelihood that the sanitized text message includes the alphabet for one of a plurality of different languages. Each score in the second set of scores represents a likelihood that the sanitized text message includes the script for one of the plurality of different languages.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: June 30, 2020
    Assignee: MZ IP Holdings, LLC
    Inventors: Nikhil Bojja, Pidong Wang, Shiman Guo
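    A simplified Python sketch of the sanitize-then-score approach in 10699073; the toy alphabets and the use of Unicode script names are illustrative stand-ins for the patented tests:

```python
import re
import unicodedata

def sanitize(message: str) -> str:
    """Strip URLs, digits, emoji and other non-language characters."""
    message = re.sub(r"https?://\S+|\d+", "", message)
    return "".join(c for c in message if c.isalpha() or c.isspace())

# Toy per-language alphabets; a real system would use fuller character sets.
ALPHABETS = {"en": set("abcdefghijklmnopqrstuvwxyz"),
             "es": set("abcdefghijklmnopqrstuvwxyzñáéíóúü"),
             "ru": set("абвгдеёжзийклмнопрстуфхцчшщъыьэюя")}

def alphabet_scores(text: str) -> dict:
    """Score: fraction of letters covered by each language's alphabet."""
    letters = [c for c in text.lower() if c.isalpha()]
    if not letters:
        return {lang: 0.0 for lang in ALPHABETS}
    return {lang: sum(c in alpha for c in letters) / len(letters)
            for lang, alpha in ALPHABETS.items()}

def script_scores(text: str) -> dict:
    """Score by Unicode script name of each letter (LATIN, CYRILLIC, ...)."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return {}
    names = [unicodedata.name(c).split()[0] for c in letters]
    return {script: names.count(script) / len(names) for script in set(names)}

msg = sanitize("¡Hola! ¿Cómo estás? 😀 http://example.com 123")
print(alphabet_scores(msg))
print(script_scores(msg))
```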
  • Patent number: 10693829
    Abstract: The present disclosure is directed toward systems and methods for providing translations of electronic messages via a social networking system. For example, systems and methods described herein involve determining whether to provide an electronic message or a translation of the electronic message to a recipient based on social networking activities of the recipient. Furthermore, systems and methods described herein can provide a translation of an electronic message based on an analysis of social networking activities of one or more recipients of the electronic message.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: June 23, 2020
    Assignee: FACEBOOK, INC.
    Inventors: Matthias Eck, Necip Fazil Ayan, Ying Zhang, Kay Rottman, Lukasz Langa
  • Patent number: 10691898
    Abstract: Disclosed is a method for synchronizing visual information and auditory information characterized by extracting visual information included in video, recognizing auditory information in a first language that is included in a speech in the first language, associating the visual information with the auditory information in the first language, translating the auditory information in the first language to auditory information in a second language, and editing at least one of the visual information with the auditory information in the second language so as to associate the visual information and the auditory information in the second language with each other.
    Type: Grant
    Filed: October 29, 2015
    Date of Patent: June 23, 2020
    Assignee: HITACHI, LTD.
    Inventors: Qinghua Sun, Takeshi Homma, Takashi Sumiyoshi, Masahito Togami
  • Patent number: 10691113
    Abstract: A robotic process control system that is operable to provide automation of at least one electromechanical device, wherein the programming language of the present invention utilizes commands, rules, and arguments within a virtual environment to provide control of an electromechanical device. The present invention includes an object oriented methodology facilitated by the software thereof that defines three object types: an atom object type, a process object type and an event object type. The object types reside in a virtual environment hosted on a computing device that is operably coupled to the electromechanical device, wherein the object types are representative of the electromechanical device or a portion thereof. The present invention utilizes a programming language that uses English language statements and further creates digitope data for all of the objects within the present invention. The methodology of the present invention examines spatial relations between all of the objects.
    Type: Grant
    Filed: February 6, 2018
    Date of Patent: June 23, 2020
    Inventor: Anthony Bergman
  • Patent number: 10685047
    Abstract: A system for processing queries from a user device may first generate an augmented query by determining weight values and synonyms for at least a portion of the parameters in the query, and adding or removing one or more query parameters. Correspondence between the augmented query and an existing set of data entries may be used to determine a subset of data entries that may be responsive to the query. Correspondence may then be determined between the augmented query and previous queries that were addressed by the subset of data entries, to determine a particular previous query having the greatest correspondence with the augmented query. The data entry associated with the particular previous query may be used to generate a response to the query received from the user device.
    Type: Grant
    Filed: December 8, 2016
    Date of Patent: June 16, 2020
    Assignee: TOWNSEND STREET LABS, INC.
    Inventors: Pratyus Patnaik, Marissa Mary Montgomery, Jay Srinivasan, Suchit Agarwal, Rajhans Samdani, David Colby Kaneda, Nathaniel Ackerman Rook
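    A loose Python sketch of the query augmentation and previous-query matching in 10685047; it collapses the subset-selection step, and the synonym table, stopword list, and prior-query log are invented for illustration:

```python
import difflib

# Hypothetical synonym table, low-value parameters to drop, and a log of
# previously answered queries with the data entries that resolved them.
SYNONYMS = {"reset": ["reboot", "restart"], "password": ["credentials"]}
STOPWORDS = {"please", "how", "do", "i", "my"}
PREVIOUS_QUERIES = {"reboot laptop": "KB-101",
                    "restart password credentials": "KB-202"}

def augment(query: str) -> list:
    """Expand with synonyms and drop low-value query parameters."""
    terms = []
    for word in query.lower().split():
        if word in STOPWORDS:
            continue                             # removed parameter
        terms.append(word)
        terms.extend(SYNONYMS.get(word, []))     # added parameters
    return terms

def best_previous_query(augmented_terms: list) -> str:
    """Pick the previous query with the greatest correspondence to the augmented query."""
    joined = " ".join(augmented_terms)
    return max(PREVIOUS_QUERIES,
               key=lambda q: difflib.SequenceMatcher(None, joined, q).ratio())

terms = augment("How do I reset my password")
print(terms, "->", PREVIOUS_QUERIES[best_previous_query(terms)])
```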
  • Patent number: 10685006
    Abstract: A content management system is provided that synchronizes translations between content items using labels. Labels can be persisted in the system as managed objects separate from content objects. Because the labels may be separate managed objects from the content, the labels can be implemented in a manner that does not change the content items or disrupt the lifecycle of the content.
    Type: Grant
    Filed: October 20, 2017
    Date of Patent: June 16, 2020
    Assignee: OPEN TEXT SA ULC
    Inventors: Michael Gerard Jaskiewicz, David Alan Stiles
  • Patent number: 10679014
    Abstract: A translation information providing apparatus includes a forward translator that generates a first translation by translating a first original sentence in a first language into a second language, a back translator that generates a first back translation by back-translating the first translation into the first language, and a translation result outputter that outputs at least either the first original sentence or the first translation and, as the first back translation, a back translation that semantically matches or is semantically similar to the first original sentence.
    Type: Grant
    Filed: May 24, 2018
    Date of Patent: June 9, 2020
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Masaki Yamauchi, Nanami Fujiwara, Masahiro Imade
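    A minimal Python sketch of the translate/back-translate/similarity check in 10679014, with stubbed translation calls and a crude string ratio standing in for semantic similarity:

```python
import difflib

def forward_translate(sentence: str) -> list:
    """Placeholder MT returning candidate translations (first -> second language)."""
    return ["Le contrat prend effet demain.", "Le contrat commence demain."]

def back_translate(translation: str) -> str:
    """Placeholder back-translation (second language -> first language)."""
    demo = {"Le contrat prend effet demain.": "The contract takes effect tomorrow.",
            "Le contrat commence demain.": "The contract starts tomorrow."}
    return demo[translation]

def similarity(a: str, b: str) -> float:
    """Crude stand-in for semantic similarity; a real system would use embeddings."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def translate_with_check(original: str, threshold: float = 0.6):
    """Return the translation whose back translation best matches the original."""
    best = max(forward_translate(original),
               key=lambda t: similarity(original, back_translate(t)))
    back = back_translate(best)
    if similarity(original, back) < threshold:
        return None  # no back translation is close enough to show the user
    return {"original": original, "translation": best, "back_translation": back}

print(translate_with_check("The contract takes effect tomorrow."))
```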
  • Patent number: 10679616
    Abstract: A non-transitory processor-readable medium storing code representing instructions to be executed by a processor includes code to cause the processor to receive acoustic data representing an utterance spoken by a language learner in a non-native language in response to prompting the language learner to recite a word in the non-native language and receive a pronunciation lexicon of the word in the non-native language. The pronunciation lexicon includes at least one alternative pronunciation of the word based on a pronunciation lexicon of a native language of the language learner. The code causes the processor to generate an acoustic model of the at least one alternative pronunciation in the non-native language and identify a mispronunciation of the word in the utterance based on a comparison of the acoustic data with the acoustic model. The code causes the processor to send feedback related to the mispronunciation of the word to the language learner.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: June 9, 2020
    Assignee: ROSETTA STONE LTD.
    Inventors: Theban Stanley, Kadri Hacioglu, Vesa Siivola
  • Patent number: 10671814
    Abstract: A translation device includes an input unit, a controller, a notification unit, an output unit, and a memory. The input unit inputs an utterance of a speaker to generate utterance data. The controller determines accuracy of a translation result when translated utterance data from the utterance data is obtained. The notification unit notifies the speaker of the determination result of the controller. The output unit outputs a translated utterance according to the translated utterance data. The memory stores dictionary data associating utterance data with translated utterance data. The controller performs translation based on the dictionary data. When the controller determines that the accuracy of the translation result is lower than a predetermined value, the controller controls the output unit not to output the translated utterance, and controls the notification unit to issue a notification regarding the determination result.
    Type: Grant
    Filed: October 3, 2017
    Date of Patent: June 2, 2020
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Takayuki Hayashi, Tomokazu Ishikawa
  • Patent number: 10664656
    Abstract: A computer-implemented method of generating an augmented electronic text document comprises establishing a directed multigraph where each vertex is associated with a separate language and is connected to at least one other one of the vertices by an oriented edge indicative of a machine translation engine's ability to translate between languages associated with the vertices connected by the oriented edge with acceptable performance. The directed multigraph is then traversed starting at a predetermined origin vertex associated with an original language of the original electronic text document by randomly selecting an adjacent vertex pointed to by an oriented edge connected to the predetermined origin vertex and causing a machine translation engine to translate the original electronic text document from the original language to a language associated with the selected vertex.
    Type: Grant
    Filed: June 20, 2018
    Date of Patent: May 26, 2020
    Assignee: VADE SECURE INC.
    Inventors: Sebastien Goutal, Maxime Marc Meyer
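    A short Python sketch of the random multigraph traversal in 10664656; the language graph, engine names, and translation stub are all hypothetical:

```python
import random

# Hypothetical multigraph: vertices are languages, each directed edge names an
# MT engine judged to translate that pair with acceptable performance.
EDGES = {
    "en": [("fr", "engineA"), ("de", "engineB"), ("es", "engineA")],
    "fr": [("en", "engineA"), ("de", "engineC")],
    "de": [("en", "engineB")],
    "es": [("en", "engineA")],
}

def machine_translate(text: str, src: str, dst: str, engine: str) -> str:
    """Placeholder for calling the named MT engine."""
    return f"{text} [{src}->{dst} via {engine}]"

def augment(document: str, origin: str = "en", hops: int = 3, seed: int = 0) -> str:
    """Walk the multigraph from the origin vertex, translating at each hop."""
    rng = random.Random(seed)
    text, language = document, origin
    for _ in range(hops):
        if not EDGES.get(language):
            break
        next_language, engine = rng.choice(EDGES[language])
        text = machine_translate(text, language, next_language, engine)
        language = next_language
    return text

print(augment("Please verify your account details."))
```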
  • Patent number: 10664144
    Abstract: An electronic device displays at least a portion of an electronic document with a predefined page layout at a first magnification level on a display; detects a first input indicating a first insertion point in the document, where the first insertion point is proximate to a first portion of text in the document; and in response to detecting the first input: selects a second magnification level different from the first magnification level, where the second magnification level is selected so as to display the first portion of text at a target text display size, and, while maintaining the predefined page layout of the document, displays, at the second magnification level, a portion of the document that includes the first portion of text.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: May 26, 2020
    Assignee: Apple Inc.
    Inventors: Christopher Douglas Weeldreyer, Martin J. Murrett, Matthew Todd Schomer, Kevin R. G. Smyth, Ian Joseph Elseth
  • Patent number: 10666896
    Abstract: An encoder and a re-packager circuit. The encoder may be configured to generate one or more bitstreams each having (i) a video portion, (ii) a subtitle placeholder channel, and (iii) a plurality of caption channels. The re-packager circuit may be configured to generate one or more re-packaged bitstreams in response to (i) one of the bitstreams and (ii) a selected one of the plurality of caption channels. The re-packaged bitstream moves the selected caption channel into the subtitle placeholder channel.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: May 26, 2020
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventor: Brian A. Enigma
  • Patent number: 10664667
    Abstract: This information processing method includes: acquiring a first speech signal including a first utterance; acquiring a second speech signal including a second utterance; recognizing whether the speaker of the second utterance is a first speaker by comparing a feature value for the second utterance and a first speaker model; when the first speaker is recognized, performing speech recognition in a first language on the second utterance, generating text in the first language corresponding to the second utterance subjected to speech recognition in the first language, and translating the text in the first language into a second language; and, in a case where the first speaker is not recognized, performing speech recognition in the second language on the second utterance, generating text in the second language corresponding to the second utterance subjected to speech recognition in the second language, and translating the text in the second language into the first language.
    Type: Grant
    Filed: August 8, 2018
    Date of Patent: May 26, 2020
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Misaki Tsujikawa, Tsuyoki Nishikawa
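    A toy Python sketch of the speaker-gated direction logic in 10664667; the voiceprint vectors, recognition threshold, and ASR/MT stubs are invented for illustration:

```python
# Toy speaker model: a stored "voiceprint" vector for the first speaker, compared
# against a feature vector extracted from each new utterance.
FIRST_SPEAKER_MODEL = [0.2, 0.7, 0.1]

def distance(u, v) -> float:
    return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

def recognize_speech(utterance: dict, language: str) -> str:
    return f"<text of {utterance['id']} in {language}>"   # ASR placeholder

def translate(text: str, src: str, dst: str) -> str:
    return f"<{text} translated {src}->{dst}>"             # MT placeholder

def handle_second_utterance(utterance: dict, threshold: float = 0.3,
                            first_language: str = "ja", second_language: str = "en"):
    """Pick the ASR language and translation direction from who is speaking."""
    if distance(utterance["features"], FIRST_SPEAKER_MODEL) <= threshold:
        text = recognize_speech(utterance, first_language)
        return translate(text, first_language, second_language)
    text = recognize_speech(utterance, second_language)
    return translate(text, second_language, first_language)

print(handle_second_utterance({"id": "utt-2", "features": [0.25, 0.65, 0.1]}))
```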
  • Patent number: 10664293
    Abstract: An example non-transitory computer-readable medium to store machine-readable instructions that when accessed and executed by a processing resource cause a computing device to perform operations is described herein. The operations include connecting a first properties file with a corresponding application. The first properties file includes a plurality of text entries and associated location indicators. Text in the application is identified that is to be translated. The text to be translated corresponds to at least some of the text entries in the first properties file. The identified text is presented in the application, which provides context for the translation. The translation is received and a second properties file that includes the translation of the identified text and an associated location indicator is generated.
    Type: Grant
    Filed: July 29, 2015
    Date of Patent: May 26, 2020
    Assignee: MICRO FOCUS LLC
    Inventors: Asaf Azulai, Ori Abramovsky, Ben David
  • Patent number: 10656907
    Abstract: Embodiments are directed to methods and systems for deriving automation instructions. In one scenario, a computer system derives automation instructions by performing the following: rendering a user interface (UI) based on information from an information source and receiving natural language inputs from a user, where the natural language inputs specify an element description and an action type for UI elements rendered in the UI. The method also includes identifying UI elements in the UI that match the element descriptions in the natural language input and whose actions are performable according to their specified action type, and mapping the natural language inputs into executable code using information that corresponds to the identified UI elements.
    Type: Grant
    Filed: November 3, 2015
    Date of Patent: May 19, 2020
    Assignee: OBSERVEPOINT INC.
    Inventors: Robert K. Seolas, John Raymond Pestana, Tyler Broadbent, Gregory Larson, Alan Martin Feurelein
  • Patent number: 10645105
    Abstract: Provided are a network attack detection method and device.
    Type: Grant
    Filed: August 17, 2016
    Date of Patent: May 5, 2020
    Assignees: NSFOCUS INFORMATION TECHNOLOGY CO., LTD., NSFOCUS TECHNOLOGIES, INC.
    Inventor: Junli Shen
  • Patent number: 10643029
    Abstract: A method is performed at a computer for automatically correcting typographical errors. The computer selects a target sentence, identifies a target word therein as having a typographical error, and identifies first and second sequences of words separated by the target word as context. After identifying, among a database of grammatically correct sentences, a set of sentences having the first and second sequences of words, each sentence including a replacement word, the computer selects a set of candidate grammatically correct sentences whose corresponding replacement words have similarities to the target word above a pre-set threshold. Finally, the computer chooses, among the set of candidate grammatically correct sentences, a fittest grammatically correct sentence according to a linguistic model and replaces the target word in the target sentence with the replacement word within the fittest grammatically correct sentence.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: May 5, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Lou Li, Qiang Cheng, Feng Rao, Li Lu, Xiang Zhang, Shuai Yue, Bo Chen, Duling Lu
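    A small Python sketch of the context-matching correction in 10643029; the sentence database, similarity threshold, and unigram frequency model standing in for the linguistic model are toy assumptions:

```python
import difflib

# Toy database of grammatically correct sentences and a unigram "linguistic
# model"; both are stand-ins for the large resources the method assumes.
CORRECT_SENTENCES = [
    "please review the attached report today",
    "please review the attached receipt today",
    "please review the attached robot today",
]
WORD_FREQUENCY = {"report": 120, "receipt": 40, "robot": 5}

def correct(target_sentence: str, target_index: int, sim_threshold: float = 0.6) -> str:
    words = target_sentence.split()
    target_word = words[target_index]
    left, right = words[:target_index], words[target_index + 1:]
    candidates = []
    for sentence in CORRECT_SENTENCES:
        s_words = sentence.split()
        # the sentence must contain the same left/right context around one word
        if s_words[:target_index] == left and s_words[target_index + 1:] == right:
            replacement = s_words[target_index]
            if difflib.SequenceMatcher(None, target_word, replacement).ratio() >= sim_threshold:
                candidates.append(replacement)
    if not candidates:
        return target_sentence
    best = max(candidates, key=lambda w: WORD_FREQUENCY.get(w, 0))
    return " ".join(left + [best] + right)

print(correct("please review the attached reoprt today", target_index=4))
```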
  • Patent number: 10643033
    Abstract: Embodiments of the present disclosure disclose a method and an apparatus for customizing a word segmentation model based on artificial intelligence, a device and a medium. The method includes the following. A customized segmentation training corpus is acquired. A first preset word segmentation model is rectified with an increment training method or a weight intervention method, based on the customized segmentation training corpus, to obtain a customized word segmentation model corresponding to the customized segmentation training corpus.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: May 5, 2020
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Liqun Zheng, Jinbo Zhan, Qiugen Xiao, Zhihong Fu, Jingzhou He, Guyue Zhou
  • Patent number: 10635862
    Abstract: A method of facilitating natural language interactions, a method of simplifying an expression, a system for facilitating natural language interactions, and a system for simplifying an expression. The method of facilitating natural language interactions includes the steps of: processing a corpus to select a natural language expression from the corpus; simplifying the selected natural language expression into a plurality of simplified expression portions; wherein the plurality of simplified expression portions is representative of the meaning of the selected natural language expression; and presenting the plurality of simplified expression portions to a user so as to receive a user expression from the user for comparison with the selected natural language expression.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: April 28, 2020
    Assignee: City University of Hong Kong
    Inventor: John Sie Yuen Lee
  • Patent number: 10635863
    Abstract: Fragment recall and adaptive automated translation are disclosed herein. An example method includes determining that an exact or fuzzy match for a portion of a source input cannot be found in a translation memory, performing fragment recall by matching subsegments in the portion against one or more whole translation units stored in the translation memory, and matching subsegments in the portion against corresponding one or more subsegments inside the one or more matching whole translation units, and returning any of the one or more matching whole translation units and the one or more matching subsegments as a fuzzy match, as well as the translations of those subsegments.
    Type: Grant
    Filed: October 30, 2017
    Date of Patent: April 28, 2020
    Assignee: SDL Inc.
    Inventors: Erik de Vrieze, Keith Mills
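    A minimal Python sketch of fragment recall as described in 10635863, assuming a toy translation memory and a pre-built subsegment alignment table:

```python
import difflib

# Toy translation memory: (source, target) translation units.  Real TMs also
# store subsegment alignments; here that lookup is faked with a dictionary.
TM = [("the red button resets the device", "le bouton rouge réinitialise l'appareil")]
SUBSEGMENT_ALIGNMENTS = {"the red button": "le bouton rouge",
                         "resets the device": "réinitialise l'appareil"}

def fuzzy_score(a: str, b: str) -> float:
    return difflib.SequenceMatcher(None, a, b).ratio()

def lookup(segment: str, fuzzy_threshold: float = 0.75):
    # 1) exact / fuzzy match against whole translation units
    for source, target in TM:
        if fuzzy_score(segment, source) >= fuzzy_threshold:
            return {"match": "fuzzy", "source": source, "target": target}
    # 2) fragment recall: match subsegments of the new segment against the TM
    recalled = []
    words = segment.split()
    for length in range(len(words) - 1, 1, -1):
        for start in range(len(words) - length + 1):
            fragment = " ".join(words[start:start + length])
            if fragment in SUBSEGMENT_ALIGNMENTS:
                recalled.append((fragment, SUBSEGMENT_ALIGNMENTS[fragment]))
    return {"match": "fragments", "fragments": recalled}

print(lookup("press the red button twice"))
```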
  • Patent number: 10635859
    Abstract: A natural language recognizing apparatus including an input device, a processing device and a storage device is provided. The input device is configured to provide natural language data. The storage device is configured to store a plurality of program modules. The program modules include a grammar analysis module. The processing device executes the grammar analysis module to analyze the natural language data through a formal grammar model and generate a plurality of string data. When at least one of the string data conforms to a preset grammar condition, the processing device judges that the at least one of the string data is intention data, and the processing device outputs a corresponding response signal according to the intention data. In addition, a natural language recognizing method is also provided.
    Type: Grant
    Filed: January 11, 2018
    Date of Patent: April 28, 2020
    Assignee: VIA Technologies, Inc.
    Inventors: Guo-Feng Zhang, Jing-Jing Guo
  • Patent number: 10630734
    Abstract: A method for managing multiple electronic conference sessions. The method includes a computer processor identifying a plurality of conference sessions that a user is attending, wherein the plurality of conference sessions includes a first session and a second session. The method further includes a computer processor identifying one or more triggers that indicate an occurrence of an event in at least one of the plurality of conference sessions. The method further includes a computer processor determining that the user is an active participant in at least the first session of the plurality of conference sessions that the user is attending. The method further includes a computer processor detecting at least one trigger of the one or more identified triggers, within the second session of the plurality of conference sessions that the user is attending.
    Type: Grant
    Filed: August 6, 2018
    Date of Patent: April 21, 2020
    Assignee: International Business Machines Corporation
    Inventors: Anjil R. Chinnapatlolla, Casimer M. DeCusatis, Rajaram B. Krishnamurthy, Ajay Sood
  • Patent number: 10628476
    Abstract: An information processing apparatus includes: an analysis unit configured to analyze a text; an obtaining unit configured to obtain term expressions from the text based on a result of the analysis; a classifying structuring unit configured to classify the term expressions based on a usage type of the term expressions; and a presentation unit configured to present a result of the classification based on a unified presentation sequence.
    Type: Grant
    Filed: February 10, 2016
    Date of Patent: April 21, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventor: Hidetomo Sohma
  • Patent number: 10628408
    Abstract: Embodiments of the present disclosure provide methods, systems, apparatuses, and computer program products for digital content auditing in a group based communication repository, where the group based communication repository comprises a plurality of enterprise-based digital content objects organized among a plurality of group-based communication channels. In one embodiment, a computing entity or apparatus is configured to receive an enterprise audit request, where the enterprise audit request comprises an audit credential and digital content object retrieval parameters. The apparatus is further configured to determine if the audit credential satisfies an enterprise authentication protocol.
    Type: Grant
    Filed: July 20, 2017
    Date of Patent: April 21, 2020
    Assignee: SLACK TECHNOLOGIES, INC.
    Inventors: Brenda Jin, Britton Jamison
  • Patent number: 10621287
    Abstract: A method, system and computer program product for providing translated web content is disclosed. The method includes receiving a request from a user on a web site, the web site having a first web content in a first language, wherein the request calls for a second web content in a second language. The method further includes dividing the first web content into a plurality of translatable components and generating a unique identifier for each translatable component. The method further includes identifying a plurality of translated components of the second web content using the unique identifier of each of the plurality of translatable components of the first web content and putting together the plurality of translated components of the second web content so as to preserve a format that corresponds to the first web content. The method further includes providing the second web content in response to the request that was received.
    Type: Grant
    Filed: January 22, 2018
    Date of Patent: April 14, 2020
    Assignee: MOTIONPOINT CORPORATION
    Inventors: Enrique Travieso, Adam Rubenstein
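    A brief Python sketch of identifier-keyed component translation along the lines of 10621287; the MD5-based identifiers and the simple HTML splitting are assumptions, not the patented mechanism:

```python
import hashlib
import re

# Hypothetical translation store keyed by component identifiers (here, MD5 of
# the source text); a real deployment would populate this from translators.
TRANSLATIONS = {}

def component_id(text: str) -> str:
    """Unique identifier for a translatable component."""
    return hashlib.md5(text.strip().encode("utf-8")).hexdigest()

def store_translation(source_text: str, translated_text: str) -> None:
    TRANSLATIONS[component_id(source_text)] = translated_text

def translate_page(html: str) -> str:
    """Replace each translatable text component by its stored translation,
    keeping the surrounding markup (and hence the page format) intact."""
    out = []
    for piece in re.split(r"(<[^>]+>)", html):
        if piece and not piece.startswith("<"):
            out.append(TRANSLATIONS.get(component_id(piece), piece))
        else:
            out.append(piece)
    return "".join(out)

store_translation("Welcome", "Bienvenue")
store_translation("Contact us", "Contactez-nous")
print(translate_page("<h1>Welcome</h1><p>Contact us</p>"))
```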
  • Patent number: 10614802
    Abstract: Embodiments of the present disclosure provide a method and a device for recognizing a speech based on a Chinese-English mixed dictionary. The method includes acquiring a Chinese-English mixed dictionary marked by an international phonetic alphabet, in which, the Chinese-English mixed dictionary includes a Chinese dictionary and an English dictionary revised by Chinglish; by taking the Chinese-English mixed dictionary as a training dictionary, taking a one-layer Convolutional Neural Network and a five-layer Long Short-Term Memory as a model, taking syllables or words as a target and taking a connectionist temporal classifier as a training criterion, training the model to obtain a trained CTC acoustic model; and performing a speech recognition on a Chinese-English mixed language based on the trained CTC acoustic model.
    Type: Grant
    Filed: January 3, 2018
    Date of Patent: April 7, 2020
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Xiangang Li, Xuewei Zhang
  • Patent number: 10606942
    Abstract: Computer-implemented systems and methods for extracting information during a human-to-human mono-lingual or multi-lingual dialog between two speakers are disclosed. Information from either the recognized speech (or the translation thereof) by the second speaker and/or the recognized speech by the first speaker (or the translation thereof) is extracted. The extracted information is then entered into an electronic form stored in a data store.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: March 31, 2020
    Assignee: Facebook, Inc.
    Inventor: Alexander Waibel
  • Patent number: 10599781
    Abstract: An apparatus and method for evaluating quality of an automatic translation are disclosed. An apparatus for evaluating quality of automatic translation includes a converter which converts an automatic translation and a reference translation of an original text to a first distributed representation and a second distributed representation, respectively, using a distributed representation model and a quality evaluator which evaluates quality of automatic translation data based on similarity between the first distributed representation and the second distributed representation.
    Type: Grant
    Filed: September 1, 2016
    Date of Patent: March 24, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hwidong Na, Inchul Song, Hoshik Lee
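    A toy Python sketch of the embedding-similarity evaluation in 10599781, with a two-dimensional word-embedding table standing in for a real distributed representation model:

```python
import math

# Toy distributed-representation model: a tiny word-embedding table, with a
# sentence embedding formed by averaging word vectors.  Values are made up.
EMBEDDINGS = {"the": [0.1, 0.0], "cat": [0.8, 0.3], "feline": [0.75, 0.35],
              "sat": [0.2, 0.9], "slept": [0.3, 0.8]}

def sentence_embedding(sentence: str):
    """Average the word vectors of the sentence (unknown words map to zero)."""
    vectors = [EMBEDDINGS.get(w, [0.0, 0.0]) for w in sentence.lower().split()]
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def cosine(u, v) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(a * a for a in v))
    return dot / norm if norm else 0.0

def quality_score(machine_translation: str, reference_translation: str) -> float:
    """Higher score = the MT output is semantically closer to the reference."""
    return cosine(sentence_embedding(machine_translation),
                  sentence_embedding(reference_translation))

print(quality_score("the feline slept", "the cat sat"))
```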
  • Patent number: 10599784
    Abstract: An automated interpretation method includes: interpreting a source voice signal expressed in a first language by dividing the source voice signal into at least one word as a unit while the source voice signal is being input, and outputting, as an interpretation result in real time, a first target voice signal expressed in a second language for each unit; determining whether to re-output the interpretation result; and, in response to determining that the interpretation result is to be re-output, interpreting the source voice signal by a sentence as a unit and outputting, as the interpretation result, a second target voice signal expressed in the second language.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: March 24, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Sang Hyun Yoo
  • Patent number: 10599785
    Abstract: A system includes a plurality of sound devices, an electronic device having a serial port emulator configured to generate a serial port emulation corresponding to each of the plurality of sound devices, and a computer-readable storage medium having one or more programming instructions. The system receives compressed and encoded sound input from a first sound device via a serial port emulation associated with the first sound device. The sound input is associated with a first language. The system decodes and decompresses the compressed and encoded sound input to generate decompressed and decoded sound input, generates sound output by translating the decompressed and decoded sound input from the first language to a second language, compresses and encodes the sound output to generate compressed and encoded sound output, and transmits the compressed and encoded sound output to a second sound device via a serial port emulation associated with the second sound device.
    Type: Grant
    Filed: May 10, 2018
    Date of Patent: March 24, 2020
    Assignee: WAVERLY LABS INC.
    Inventors: William O. Goethals, Jainam Shah, Benjamin J. Carlson
  • Patent number: 10585921
    Abstract: A technique for suggesting patterns to search documents for information of interest includes acquiring a working set of spans for a document set that includes one or more documents. A list of one or more suggested patterns is generated by applying a pattern suggestion algorithm (PSA) to the set of spans for each document in the document set. One or more unique patterns are generated by applying a pattern consolidation algorithm (PCA) to the generated list of suggested patterns. Pattern information for each of the unique patterns is then generated. The pattern information includes a respective first count that corresponds to the number of times each of the unique patterns occurs in the document set.
    Type: Grant
    Filed: August 27, 2015
    Date of Patent: March 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Dimple Bhatia, Armageddon R. Brown, Yunyao Li, Margaret Zagelow
  • Patent number: 10585922
    Abstract: A computer receives a search query from a user for finding a resource. The computer extracts one or more words from the search query using morphological analysis. The computer assigns at least one first category to at least one first word of the one or more words using a dictionary. In response to identifying an unknown word not in the dictionary within the one or more words, the computer searches for the unknown word on a net.
    Type: Grant
    Filed: May 23, 2018
    Date of Patent: March 10, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Hiroki Oya