Translation Machine Patents (Class 704/2)
-
Patent number: 11580313
Abstract: Systems and methods for profile-based language translation and filtration are provided. A user language profile specifying one or more translation rules may be stored in memory for a user. A current communication session associated with a user device of the user may be monitored. The current communication session may include messages from one or more other user devices of one or more other users. A language set in at least one of the messages of the current communication session may be detected as triggering at least one of the translation rules in real-time. The language set in the at least one message may further be filtered in real-time based on the at least one translation rule, which may thereby modify the at least one message. Further, a presentation of the current communication session that is provided to the user device may be modified to include the filtered language set of the modified message instead of the triggering language set.
Type: Grant
Filed: August 25, 2021
Date of Patent: February 14, 2023
Assignee: SONY INTERACTIVE ENTERTAINMENT LLC
Inventors: Johannes Muckel, Ellana Fortuna
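A minimal sketch of the kind of rule-driven filtering the abstract describes: a stored per-user profile whose rules are checked against each incoming message, with the triggering language replaced before presentation. The profile structure, patterns, and function name are illustrative assumptions, not taken from the patent.

```python
import re

# Hypothetical user language profile: each rule pairs a trigger pattern with a
# replacement applied to the matched language set (assumed structure).
USER_PROFILE = {
    "rules": [
        {"pattern": r"\bgg ez\b", "replacement": "good game"},
        {"pattern": r"\bnoob\b", "replacement": "new player"},
    ]
}

def filter_message(message: str, profile: dict) -> str:
    """Apply every triggered rule to the message and return the modified text
    that would appear in the session presentation shown to the user."""
    filtered = message
    for rule in profile["rules"]:
        filtered = re.sub(rule["pattern"], rule["replacement"], filtered,
                          flags=re.IGNORECASE)
    return filtered

print(filter_message("gg ez, total noob squad", USER_PROFILE))
# -> "good game, total new player squad"
```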
-
Patent number: 11580310
Abstract: A computing system can include one or more machine-learned models configured to receive context data that describes one or more entities to be named. In response to receipt of the context data, the machine-learned model(s) can generate output data that describes one or more names for the entity or entities described by the context data. The computing system can be configured to perform operations including inputting the context data into the machine-learned model(s). The operations can include receiving, as an output of the machine-learned model(s), the output data that describes the name(s) for the entity or entities described by the context data. The operations can include storing at least one name described by the output data.
Type: Grant
Filed: August 27, 2019
Date of Patent: February 14, 2023
Assignee: GOOGLE LLC
Inventors: Victor Carbune, Alexandru-Marian Damian
-
Patent number: 11568155
Abstract: There is disclosed a method and system for translating a source phrase in a first language into a second language. The method is executable by a device configured to access an index comprising a set of source sentences in the first language and a set of target sentences in the second language, each target sentence corresponding to a translation of a given source sentence. The method comprises: acquiring the source phrase; generating, by a translation algorithm, one or more target phrases, each of the one or more target phrases having a different semantic meaning within the second language; retrieving, from the index, a respective target sentence for each of the one or more target phrases, the respective target sentence comprising one of the one or more target phrases; and selecting each of the one or more target phrases and the respective target sentences for display.
Type: Grant
Filed: May 27, 2020
Date of Patent: January 31, 2023
Assignee: YANDEX EUROPE AG
Inventors: Anton Aleksandrovich Dvorkovich, Ekaterina Vladimirovna Enikeeva
-
Patent number: 11556781
Abstract: In an approach to determining the effectiveness of a proposed solution, one or more computer processors monitor real-time communications. The one or more computer processors identify one or more topics associated with the monitored real-time communications. The one or more computer processors feed the identified one or more topics and associated real-time communications into a solution efficacy model. The one or more computer processors generate, based on one or more calculations by the solution efficacy model, an efficacy rating for the identified real-time communications. The one or more computer processors generate a prioritization of the identified real-time communications based on the generated efficacy rating.
Type: Grant
Filed: July 25, 2019
Date of Patent: January 17, 2023
Assignee: International Business Machines Corporation
Inventors: Trudy L. Hewitt, Kelley Anders, Jonathan D. Dunne, Lisa Ann Cassidy, Jeremy R. Fox, Pauric Grant
-
Patent number: 11554315
Abstract: A method implemented by a processor of a computing device, comprising: receiving an image from a camera; using a machine vision process to recognize the at least one real-world object in the image; displaying on a screen an augmented reality (AR) scene containing the at least one real-world object and a virtual agent; receiving user input; deriving a simplified user intent from the user input; and in response to the user input, animating the virtual agent within the AR scene, the animating being dependent on the simplified user intent. Deriving a simplified user intent from the user input may include converting the user input into a user phrase, determining at least one semantic element in the user phrase, and converting the at least one semantic element into the simplified user intent.
Type: Grant
Filed: August 27, 2019
Date of Patent: January 17, 2023
Assignee: SQUARE ENIX LTD.
Inventors: Anthony Reddan, Renaud Bédard
-
Patent number: 11551002
Abstract: Systems and methods for automatic evaluation of the quality of NLG outputs. In some aspects of the technology, a learned evaluation model may be pretrained first using NLG model pretraining tasks, and then with further pretraining tasks using automatically generated synthetic sentence pairs. In some cases, following pretraining, the evaluation model may be further fine-tuned using a set of human-graded sentence pairs, so that it learns to approximate the grades allocated by the human evaluators.
Type: Grant
Filed: August 26, 2020
Date of Patent: January 10, 2023
Assignee: GOOGLE LLC
Inventors: Thibault Sellam, Dipanjan Das, Ankur Parikh
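A small sketch of one way synthetic sentence pairs for the pretraining stage could be generated automatically, here by perturbing reference sentences with a word drop and a neighbour swap. The perturbation choices are assumptions made for illustration, not the patent's actual procedure.

```python
import random

def synth_pairs(sentences, seed=0):
    """Generate synthetic (reference, candidate) pairs by lightly perturbing
    each sentence, a stand-in for automatic pair generation for pretraining."""
    rng = random.Random(seed)
    pairs = []
    for ref in sentences:
        cand = ref.split()
        if len(cand) > 3:
            cand.pop(rng.randrange(len(cand)))            # drop a random word
            i = rng.randrange(len(cand) - 1)
            cand[i], cand[i + 1] = cand[i + 1], cand[i]   # swap two neighbours
        pairs.append((ref, " ".join(cand)))
    return pairs

for ref, cand in synth_pairs(["the quick brown fox jumps over the lazy dog"]):
    print(ref, "||", cand)
```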
-
Patent number: 11551675
Abstract: An electronic device is provided. The electronic device includes a memory configured to store a speech translation model and at least one processor electronically connected with the memory. The at least one processor is configured to train the speech translation model based on first information related to conversion between a speech in a first language and a text corresponding to the speech in the first language, and second information related to conversion between a text in the first language and a text in a second language corresponding to the text in the first language, and the speech translation model is trained to convert a speech in the first language into a text in the second language and output the text.
Type: Grant
Filed: June 12, 2020
Date of Patent: January 10, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sathish Reddy Indurthi, Hyojung Han, Beomseok Lee, Insoo Chung, Nikhil Kumar Lakumarapu
-
Patent number: 11537800
Abstract: A pharmacy management system for automated sig code translation using machine learning includes a processor and a memory storing instructions that, when executed by the one or more processors, cause the pharmacy management system to train a machine learning model to analyze sig codes, receive a sig code, analyze the sig code utterance and generate an output corresponding to the sig code utterance. A computer-implemented method includes training a machine learning model to analyze sig codes, receiving a sig code, analyzing the sig code utterance, and generating an output corresponding to the sig code utterance. A non-transitory computer readable medium containing program instructions that when executed, cause a computer to: train a machine learning model to analyze sig codes, receive a sig code, analyze the sig code utterance, and generate an output corresponding to the sig code utterance.
Type: Grant
Filed: May 8, 2020
Date of Patent: December 27, 2022
Assignee: WALGREEN CO.
Inventor: Oliver Derza
-
Patent number: 11540054
Abstract: An auxiliary device charging case is used to facilitate translation features of a mobile computing device or auxiliary device. A first user, who may be a foreign language speaker, holds the charging case and speaks into the charging case. The charging case communicates the received speech to the mobile computing device, either directly or through the auxiliary device, which translates the received speech into a second language for a second user, who is the owner of the mobile computing device and auxiliary device. The second user may provide input in the second language, such as by speaking or typing into the auxiliary or mobile computing device. The mobile computing device may translate this second input to the first language, and transmit the translated input to the charging case either directly or through the auxiliary device. The charging case may output the translated second input to the first user, such as through a speaker or display screen.
Type: Grant
Filed: January 2, 2019
Date of Patent: December 27, 2022
Assignee: Google LLC
Inventors: Maksim Shmukler, Adam Champy, Dmitry Svetlov, Jeffrey Kuramoto
-
Patent number: 11538481
Abstract: An apparatus includes at least one processor to, in response to a request to perform speech-to-text conversion: perform a pause detection technique including analyzing speech audio to identify pauses, and analyzing lengths of the pauses to identify likely sentence pauses; perform a speaker diarization technique including dividing the speech audio into fragments, analyzing vocal characteristics of speech sounds of each fragment to identify a speaker of a set of speakers, and identifying instances of a change in speakers between each temporally consecutive pair of fragments to identify likely speaker changes; and perform speech-to-text operations including dividing the speech audio into segments based on at least the likely sentence pauses and likely speaker changes, using at least an acoustic model with each segment to identify likely speech sounds in the speech audio, and generating a transcript of the speech audio based at least on the likely speech sounds.
Type: Grant
Filed: June 28, 2022
Date of Patent: December 27, 2022
Assignee: SAS INSTITUTE INC.
Inventors: Xiaolong Li, Samuel Norris Henderson, Xiaozhuo Cheng, Xu Yang
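A toy sketch of the final segmentation step: likely sentence-pause times and likely speaker-change times are merged into segment boundaries, and each resulting segment would then be passed to the acoustic model. The timestamps and the simple merge rule are illustrative assumptions.

```python
def segment_boundaries(sentence_pauses, speaker_changes, total_len):
    """Merge likely sentence-pause times and likely speaker-change times
    (in seconds) into sorted (start, end) segments over [0, total_len]."""
    cuts = sorted(set(t for t in (sentence_pauses + speaker_changes)
                      if 0 < t < total_len))
    bounds = [0.0] + cuts + [total_len]
    return list(zip(bounds[:-1], bounds[1:]))

# Example: pauses at 4.2 s and 9.8 s, a speaker change at 6.5 s, 12 s of audio.
for start, end in segment_boundaries([4.2, 9.8], [6.5], 12.0):
    print(f"segment {start:.1f}-{end:.1f} s")  # each segment goes to the acoustic model
```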
-
Patent number: 11531824
Abstract: A machine accesses a query in a first natural language. The machine identifies an event corresponding to the query. The machine computes, using a cross-lingual information retrieval module, a ranked list of documents in a second natural language that are related to the event. At least a portion of documents in the ranked list are selected from a collection of documents in the second natural language that are not annotated with events. The cross-lingual information retrieval module is trained using a dataset comprising annotated documents in the first natural language and translations of the annotated documents into the second natural language. Each annotated document is annotated with one or more events. The machine provides an output representing at least a portion of the ranked list of documents in the second natural language. The second natural language is different from the first natural language.
Type: Grant
Filed: May 17, 2019
Date of Patent: December 20, 2022
Assignee: Raytheon BBN Technologies Corp.
Inventors: Bonan Min, Rabih Zbib, Zhongqiang Huang
-
Patent number: 11514672
Abstract: Provided are methods, systems, and devices for generating semantic objects and an output based on the detection or recognition of the state of an environment that includes objects. State data, based in part on sensor output, can be received from one or more sensors that detect a state of an environment including objects. Based in part on the state data, semantic objects are generated. The semantic objects can correspond to the objects and include a set of attributes. Based in part on the set of attributes of the semantic objects, one or more operating modes associated with the semantic objects can be determined. Based in part on the one or more operating modes, object outputs associated with the semantic objects can be generated. The object outputs can include one or more visual indications or one or more audio indications.
Type: Grant
Filed: May 21, 2020
Date of Patent: November 29, 2022
Assignee: GOOGLE LLC
Inventors: Tim Wantland, Donald A. Barnett, David Matthew Jones
-
Patent number: 11507760
Abstract: The accuracy of machine translation is increased. A translated document with high translation accuracy is obtained. An original document is faithfully translated. An original document is translated with a neural network to generate a first translated document; a modification-target word or phrase is determined from words and phrases contained in the original document on the basis of an analysis result for the first translated document; the modification-target word or phrase is replaced with a high frequency word in learning data used for learning in the neural network to modify the original document; and the modified original document is translated with the neural network to generate a second translated document.
Type: Grant
Filed: May 28, 2020
Date of Patent: November 22, 2022
Assignee: Semiconductor Energy Laboratory Co., Ltd.
Inventors: Ayami Hara, Junpei Momo, Tatsuya Okano
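A minimal illustration, under assumed data, of the modification step: words that are rare in the model's learning data are swapped for high-frequency stand-ins before the second translation pass. The frequency table, synonym table, and threshold are invented for the example.

```python
# Hypothetical learning-data frequencies and a replacement table; in the patent
# the replacement target is a high-frequency word from the NMT learning data.
FREQ = {"purchase": 120, "buy": 5400, "automobile": 80, "car": 9100}
SYNONYMS = {"purchase": "buy", "automobile": "car"}

def modify_source(sentence: str, min_freq: int = 1000) -> str:
    """Replace modification-target (low-frequency) words with high-frequency
    stand-ins, producing the modified original document to be retranslated."""
    out = []
    for word in sentence.split():
        key = word.lower()
        if FREQ.get(key, 0) < min_freq and key in SYNONYMS:
            out.append(SYNONYMS[key])
        else:
            out.append(word)
    return " ".join(out)

print(modify_source("please purchase the automobile"))  # -> "please buy the car"
```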
-
Patent number: 11501090
Abstract: A method for remote communication based on a real-time translation service according to an embodiment of the present disclosure, as a method for providing remote communication based on a real-time translation service by a real-time translation application executed by at least one or more processors of a computing device, comprises performing augmented reality-based remote communication; setting an initial value of a translation function for the remote communication; obtaining communication data of other users through the remote communication; performing language detection for the obtained communication data; when a target translation language is detected within the communication data from the performed language detection, translating communication data of the target translation language detected; and providing the translated communication data.
Type: Grant
Filed: December 29, 2021
Date of Patent: November 15, 2022
Assignee: VIRNECT INC.
Inventors: Tae Jin Ha, Chang Kil Jeon
-
Patent number: 11481563
Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods that can generate contextual identifiers indicating context for frames of a video and utilize those contextual identifiers to generate translations of text corresponding to such video frames. By analyzing a digital video file, the disclosed systems can identify video frames corresponding to a scene and a term sequence corresponding to a subset of the video frames. Based on image features of the video frames corresponding to the scene, the disclosed systems can utilize a contextual neural network to generate a contextual identifier (e.g., a contextual tag) indicating context for the video frames. Based on the contextual identifier, the disclosed systems can subsequently apply a translation neural network to generate a translation of the term sequence from a source language to a target language. In some cases, the translation neural network also generates affinity scores for the translation.
Type: Grant
Filed: November 8, 2019
Date of Patent: October 25, 2022
Assignee: Adobe Inc.
Inventors: Mahika Wason, Amol Jindal, Ajay Bedi
-
Patent number: 11475882
Abstract: Methods and systems for training a language processing model. The methods may involve receiving a first log record in a first format, wherein the first log record includes annotations describing items in the first log record, and then creating a second log record in a second format comprising data from the first log record utilizing the annotations in the first log record and a conversion rule set. The second log record may then be used to train a language processing model so that a trained model can identify items in a third log record and the relationships therebetween.
Type: Grant
Filed: June 27, 2019
Date of Patent: October 18, 2022
Assignee: Rapid7, Inc.
Inventor: Wah-Kwan Lin
-
Patent number: 11468227
Abstract: In one embodiment, the disclosure provides a computer-implemented or programmed method, comprising: causing subscribing to a plurality of events provided by a first application programming interface; receiving a layout change event pushed from the first application programming interface; determining that a change in focused element resulted in a currently focused element; receiving, from the currently focused element, a digital electronic object comprising a source text; programmatically dividing the source text into a plurality of source text units; programmatically evaluating each particular source text unit among the plurality of source text units using a machine learning model, and receiving a classification output from the machine learning model; programmatically transforming the classification output to yield an output set of phrase suggestions; and, causing displaying one or more phrase suggestions of the output set of phrase suggestions.
Type: Grant
Filed: November 12, 2021
Date of Patent: October 11, 2022
Assignee: GRAMMARLY, INC.
Inventors: Oleksiy Shevchenko, Victor Pavlychko, Valentyn Gaidylo, Nikita Volobuiev, Ievgen Rysai, Roman Guliak, Yura Tanskiy
-
Patent number: 11468105
Abstract: Systems for processing queries may first determine correspondence between the parameters of the query and a set of existing data entries, a set of previous queries that have been received, or both the existing data entries and the previous queries. If the query parameters do not correspond to the data entries or previous queries, correspondence is determined between the query parameters and group data that associates at least a subset of the query parameters with a particular group that may generate a response to the query. The same group or the generated response may be used when similar queries are received. If the group transmits the query to a different group or if negative user feedback is received, the group data may be modified to indicate the different group or to remove the association with the initial group that received the query.
Type: Grant
Filed: March 10, 2020
Date of Patent: October 11, 2022
Assignee: Okta, Inc.
Inventors: Pratyus Patnaik, Marissa Mary Montgomery, Jay Srinivasan, Suchit Agarwal, Rajhans Samdani, David Colby Kaneda, Nathaniel Ackerman Rook
-
Patent number: 11468247
Abstract: The present disclosure provides an artificial intelligence apparatus which inputs first language data into a machine translation model to economically train a natural language understanding model of a second language and obtains second language data corresponding to the first language data to train the natural language understanding model.
Type: Grant
Filed: January 14, 2020
Date of Patent: October 11, 2022
Assignee: LG ELECTRONICS INC.
Inventor: Jaehwan Lee
-
Patent number: 11461396
Abstract: This disclosure relates to a method of extracting information associated with the design of formulated products and representing it as a graph. A graph domain model of a plurality of vertices, and at least one formulation text as a text file, are received as input. Information extraction is applied to identify at least one sentence and extract at least one subject-verb-object triple from every sentence of the at least one formulation text. A sentence including an ingredient listing and associated weights, indicated by the presence of weight numerals, and a sentence including at least one verb from the at least one subject-verb-object triple are classified based on the graph domain model. A representation of the recipe text is generated in terms of at least one action, ingredients on which the at least one action is performed, and a condition. An insert query string is generated and executed to store the formulations as the graph.
Type: Grant
Filed: December 17, 2020
Date of Patent: October 4, 2022
Assignee: Tata Consultancy Services Limited
Inventors: Sagar Sunkle, Deepak Jain, Krati Saxena, Ashwini Patil, Rinu Chacko, Beena Rai, Vinay Kulkarni
-
Patent number: 11455146
Abstract: Aspects of the disclosure relate to generating a pseudo-code from a text summarization based on a convolutional neural network. A computing platform may receive, by a computing device, a first document comprising text in a natural language different from English. Subsequently, the computing platform may translate, based on a neural machine translation model, the first document to a second document comprising text in English. Then, the computing platform may generate an attention-based convolutional neural network (CNN) for the second document. Then, the computing platform may extract, by applying the attention-based CNN, an abstractive summary of the second document. Subsequently, the computing platform may generate, based on the abstractive summary, a flowchart. Then, the computing platform may generate, based on the flowchart, a pseudo-code. Subsequently, the computing platform may display, via an interactive graphical user interface, the flowchart, and the pseudo-code.
Type: Grant
Filed: June 22, 2020
Date of Patent: September 27, 2022
Assignee: Bank of America Corporation
Inventors: MadhuMathi Rajesh, MadhuSudhanan Krishnamoorthy
-
Patent number: 11455543
Abstract: Examples of a textual entailment generation system are provided. The system obtains a query from a user and implements an artificial intelligence component to identify a premise, a word index, and a premise index associated with the query. The system may implement a first cognitive learning operation to determine a plurality of hypotheses and a hypothesis index corresponding to the premise. The system may generate a confidence index for each of the plurality of hypotheses based on a comparison of the hypothesis index with the premise index. The system may determine an entailment value, a contradiction value, and a neutral entailment value based on the confidence index for each of the plurality of hypotheses. The system may generate an entailment result relevant for resolving the query comprising the plurality of hypotheses along with the corresponding entailed output index.
Type: Grant
Filed: October 15, 2019
Date of Patent: September 27, 2022
Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
Inventors: Shaun Cyprian D'Souza, Ashutosh Pandey, Binit Kumar Bhagat, Vijay Apparao Patil, Chaitanya Teegala, Sangeetha Basavaraj, Eldhose Joy, Nikita Ramesh Rao, Harsha Jawagal
-
Patent number: 11456001
Abstract: Disclosed are a method of encoding a high band of an audio, a method of decoding a high band of an audio, and an encoder and a decoder for performing the methods. The method of decoding a high band of an audio, the method performed by a decoder, includes identifying a parameter extracted through a first neural network, identifying side information extracted through a second neural network, and restoring a high band of an audio by applying the parameter and the side information to a third neural network.
Type: Grant
Filed: March 10, 2020
Date of Patent: September 27, 2022
Assignees: Electronics and Telecommunications Research Institute, KWANGWOON UNIVERSITY INDUSTRY-ACADEMIC COLLABORATION FOUNDATION
Inventors: Seung Kwon Beack, Jongmo Sung, Mi Suk Lee, Tae Jin Lee, Hochong Park
-
Patent number: 11449686
Abstract: Systems, devices, and methods are provided for using an automated assessment and evaluation of machine translations. A system may receive a request associated with translating first content from a first language to a second language. The system may translate the first content from the first language to the second language. The system may determine an attribute associated with the first content. The system may determine a translation score associated with second content translated from the first language to the second language and associated with the attribute, the translation score indicative of a machine translation accuracy. The system may determine, and based on the translation score, a translation protocol, and may execute the translation protocol.
Type: Grant
Filed: July 9, 2019
Date of Patent: September 20, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Brinda Panchal, Abhinav Agarwal, Sandy Barnabas, You Ling, Pavel Fomitchov, Nagaraja Vasudevamurthy, Kaivan Wadia, Emmanuel Addy Lamptey, Meng Kang
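A rough sketch of how a translation score might be mapped to a translation protocol. The thresholds and protocol names are assumptions for illustration, not values from the patent.

```python
def choose_protocol(translation_score: float) -> str:
    """Route content based on the estimated machine-translation accuracy for
    this language pair / content attribute (thresholds are illustrative)."""
    if translation_score >= 0.9:
        return "publish machine translation as-is"
    if translation_score >= 0.6:
        return "machine translate, then human post-edit"
    return "send to full human translation"

print(choose_protocol(0.72))  # -> "machine translate, then human post-edit"
```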
-
Patent number: 11443101
Abstract: An embodiment for extracting information from semi-structured text is provided. The embodiment may include identifying one or more high confidence alignments of one or more entities and identifiers in a set of documents. The embodiment may also include analyzing one or more blocks of semi-structured text containing the one or more entities and identifiers. The embodiment may further include identifying one or more known alignments in each of the one or more blocks of semi-structured text. The embodiment may also include generating a structure template. The embodiment may further include applying the structure template to each of the one or more blocks of semi-structured text. The embodiment may also include annotating the set of documents with metadata reflecting the structure template and a location of each of the one or more blocks of semi-structured text.
Type: Grant
Filed: November 3, 2020
Date of Patent: September 13, 2022
Assignee: International Business Machines Corporation
Inventors: Christopher F. Ackermann, Charles E. Beller, Michael Drzewucki
-
Patent number: 11436419
Abstract: A bilingual corpora screening method includes: acquiring multiple pairs of bilingual corpora, wherein each pair of the bilingual corpora comprises a source corpus and a target corpus; training a machine translation model based on the multiple pairs of bilingual corpora; obtaining a first feature of each pair of bilingual corpora based on the trained machine translation model; training a language model based on the multiple pairs of bilingual corpora; obtaining feature vectors of each pair of bilingual corpora and determining a second feature of each pair of bilingual corpora based on the trained language model; determining a quality value of each pair of bilingual corpora according to the first feature and the second feature of each pair of bilingual corpora; and screening each pair of bilingual corpora according to the quality value of each pair of bilingual corpora.
Type: Grant
Filed: June 3, 2020
Date of Patent: September 6, 2022
Assignee: Beijing Xiaomi Mobile Software Co., Ltd.
Inventors: Jingwei Li, Yuhui Sun, Xiang Li
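A minimal sketch of the screening step under assumed features: each bilingual pair carries a translation-model feature and a language-model feature, the two are combined into a quality value, and pairs below a threshold are dropped. The weighting, threshold, and field names are illustrative assumptions.

```python
def quality(first_feature: float, second_feature: float, w1=0.5, w2=0.5) -> float:
    """Combine the translation-model feature and the language-model feature
    into a single quality value (a simple weighted sum for illustration)."""
    return w1 * first_feature + w2 * second_feature

def screen(corpora, threshold=0.5):
    """Keep only the bilingual pairs whose quality value passes the threshold."""
    return [pair for pair in corpora
            if quality(pair["mt_feature"], pair["lm_feature"]) >= threshold]

pairs = [
    {"src": "bonjour", "tgt": "hello",   "mt_feature": 0.9, "lm_feature": 0.8},
    {"src": "bonjour", "tgt": "goodbye", "mt_feature": 0.2, "lm_feature": 0.4},
]
print(screen(pairs))  # only the well-aligned pair survives screening
```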
-
Patent number: 11430425
Abstract: Computer generated speech can be generated for cross-lingual natural language textual data streams by utilizing a universal phoneme set. In a variety of implementations, the natural language textual data stream includes a primary language portion in a primary language and a secondary language portion that is not in the primary language. Phonemes corresponding to the secondary language portion can be determined from a set of phonemes in a universal data set. These phonemes can be mapped back to a set of phonemes for the primary language. Audio data can be generated for these phonemes to pronounce the secondary language portion of the natural language textual data stream utilizing phonemes associated with the primary language.
Type: Grant
Filed: October 11, 2018
Date of Patent: August 30, 2022
Assignee: GOOGLE LLC
Inventors: Ami Patel, Siamak Tazari
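A toy sketch of the mapping idea: phonemes of the secondary-language span, expressed in a universal set, are mapped onto phonemes the primary-language synthesizer can pronounce. The mapping table is a made-up example, not the patent's inventory.

```python
# Hypothetical mapping from universal-set phonemes (here, some French sounds)
# to the closest phonemes available in the primary (English) voice.
UNIVERSAL_TO_PRIMARY = {"ʁ": "r", "y": "u", "ø": "oe", "ɑ̃": "aa n"}

def render_secondary_span(universal_phonemes):
    """Map universal-set phonemes of a secondary-language span onto phonemes
    the primary-language synthesizer can pronounce; unknown ones pass through."""
    return [UNIVERSAL_TO_PRIMARY.get(p, p) for p in universal_phonemes]

# e.g. the French word "rue" embedded in an otherwise English sentence
print(render_secondary_span(["ʁ", "y"]))  # -> ['r', 'u']
```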
-
Patent number: 11403470
Abstract: A storage unit stores a target term, a substitute term, a substitute translated term, and a representative term. The substitute translated term is a translation of the substitute term and is expressed in a second language. The representative term indicates a type of the target term and is expressed in the second language. A communication unit acquires a provisional translation that is a translation of a processed sentence from a first external device that has a translation function. When the storage unit does not store a target translated term that is a translation of the target term, a controller replaces the substitute translated term contained in the provisional translation with the representative term to generate a second display-purpose translated sentence, and then causes a display unit to display the second display-purpose translated sentence.
Type: Grant
Filed: October 11, 2019
Date of Patent: August 2, 2022
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventor: Tomokazu Ishikawa
-
Patent number: 11403078
Abstract: The layout of network-based interfaces can be defined in markup language files rendered in browsers executed on client devices. Interference problems among interface elements in such interfaces can be detected using the tools and processes described herein. The text nodes in a markup language file can be parsed out for processing. A number of pseudo characters or strings can be inserted into the text nodes to mimic the expansion that might occur if the plaintext in the text nodes was translated into a different language. The positions of those text nodes can then be determined and evaluated for interference with each other. Additionally or alternatively, the text nodes can be machine translated to a different language. In turn, the markup file including the translated text nodes can be rendered to evaluate whether the translated text nodes interfere with each other using optical character recognition, for example.
Type: Grant
Filed: October 21, 2016
Date of Patent: August 2, 2022
Assignee: VMWARE, INC.
Inventors: Rongbo Peng, Demin Yan
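A small sketch of the pseudo-expansion and interference-check ideas: UI strings are padded with pseudo characters to mimic translation growth, and the rendered bounding boxes are then tested for overlap. Both helpers, the expansion factor, and the box format are illustrative assumptions, not the tool's actual API.

```python
def pseudo_expand(text: str, factor: float = 0.4) -> str:
    """Pad a UI string with pseudo characters to mimic the length growth that
    translation (e.g. English -> German) can cause, before layout checking."""
    extra = max(1, int(len(text) * factor))
    return "[" + text + "·" * extra + "]"

def overlaps(box_a, box_b) -> bool:
    """Axis-aligned overlap test on (x, y, width, height) boxes measured after
    rendering the expanded text nodes; True means interference is flagged."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

print(pseudo_expand("Save changes"))
print(overlaps((0, 0, 120, 20), (110, 5, 80, 20)))  # True -> elements interfere
```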
-
Patent number: 11397855
Abstract: A method for generating data standardization rules includes receiving a training data set containing tokenized and tagged data values. A set of machine mining models is built using different learning algorithms for identifying tags and tag patterns using the training set. For each data value in a further data set: a tokenization is applied on the data value, resulting in a set of tokens. For each token of the set of tokens one or more tag candidates are determined using a lookup dictionary of tags and tokens and/or at least part of the set of machine mining models, resulting for each token of the set of tokens in a list of possible tags. Unique combinations of the sets of tags of the further data set having highest aggregated confidence values are provided for use as standardization rules.
Type: Grant
Filed: December 12, 2017
Date of Patent: July 26, 2022
Assignee: International Business Machines Corporation
Inventors: Yannick Saillet, Martin Oberhofer, Namit Kabra
-
Patent number: 11392770
Abstract: The disclosure herein describes a system and method for attentive sentence similarity scoring. A distilled sentence embedding (DSE) language model is trained by decoupling a transformer language model using knowledge distillation. The trained DSE language model calculates sentence embeddings for a plurality of candidate sentences for sentence similarity comparisons. An embedding component associated with the trained DSE language model generates a plurality of candidate sentence representations representing each candidate sentence in the plurality of candidate sentences which are stored for use in analyzing input sentences associated with queries or searches. A representation is created for the selected sentence. This selected sentence representation is used with the plurality of candidate sentence representations to create a similarity score for each candidate sentence-selected sentence pair.
Type: Grant
Filed: February 12, 2020
Date of Patent: July 19, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Oren Barkan, Noam Razin, Noam Koenigstein
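A minimal sketch of the scoring flow, with a toy bag-of-words stand-in for the distilled sentence embeddings: candidate representations are computed once and stored, and each incoming sentence is embedded and compared by cosine similarity. The embedding function is a deliberate simplification of the DSE model described in the abstract.

```python
import math
from collections import Counter

def embed(sentence: str) -> Counter:
    """Toy bag-of-words 'embedding'; the patent uses a distilled transformer,
    but the precompute-then-score flow is the same."""
    return Counter(sentence.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

candidates = ["how do I reset my password", "store opening hours", "reset account password"]
candidate_vecs = [embed(c) for c in candidates]   # computed once and stored
query_vec = embed("password reset help")          # computed per incoming sentence
scores = sorted(zip(candidates, (cosine(query_vec, v) for v in candidate_vecs)),
                key=lambda pair: -pair[1])
print(scores[0])  # the most similar stored candidate
```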
-
Patent number: 11392779
Abstract: A bilingual corpora screening method includes: acquiring multiple pairs of bilingual corpora, wherein each pair of the bilingual corpora comprises a source corpus and a target corpus; training a machine translation model based on the multiple pairs of bilingual corpora; obtaining a first feature of each pair of bilingual corpora based on the trained machine translation model; training a language model based on the multiple pairs of bilingual corpora; obtaining feature vectors of each pair of bilingual corpora and determining a second feature of each pair of bilingual corpora based on the trained language model; determining a quality value of each pair of bilingual corpora according to the first feature and the second feature of each pair of bilingual corpora; and screening each pair of bilingual corpora according to the quality value of each pair of bilingual corpora.
Type: Grant
Filed: June 3, 2020
Date of Patent: July 19, 2022
Assignee: Beijing Xiaomi Mobile Software Co., Ltd.
Inventors: Jingwei Li, Yuhui Sun, Xiang Li
-
Patent number: 11394783
Abstract: A methodology and system for content-driven service discovery and agent monitoring capabilities on Managed Endpoints is disclosed. In a computer-implemented method, content information corresponding to an agent of a monitoring system is generated. Content information is pushed to the agent. The content information is used to alter the agent such that an altered agent is generated. The altered agent is generated without requiring a complete update of the agent.
Type: Grant
Filed: August 14, 2019
Date of Patent: July 19, 2022
Assignee: VMware, Inc.
Inventors: V Vimal Das Kammath, Zacharia George, Narendra Madanapalli, Rahav Vembuli, Aditya Sushilendra Kolhar
-
Patent number: 11392853
Abstract: Logic may adjust communications between customers. Logic may cluster customers into a first group associated with a first subset of synonyms and a second group associated with a second subset of the synonyms. Logic may associate a first tag with the first group and with each of the synonyms of the first subset. Logic may associate a second tag with the second group and with each of the synonyms of the second subset. Logic may associate one or more models with pairs of the groups. A first pair may comprise the first group and the second group. The first model associated with the first pair may adjust words in communications between the first group and the second group, based on the synonyms associated with the first pair, by replacement of words in a communication between customers of the first subset and customers of the second subset.
Type: Grant
Filed: February 27, 2019
Date of Patent: July 19, 2022
Assignee: Capital One Services, LLC
Inventors: Fardin Abdi Taghi Abad, Austin Grant Walters, Jeremy Edward Goodsitt, Reza Farivar, Vincent Pham, Anh Truong
-
Patent number: 11385916
Abstract: A database may contain text strings in a preferred language and in one or more other languages. One or more processors may be configured to: generate a graphical user interface containing the text strings in the preferred language and in the other languages, and a control for dynamic translation, wherein a first set of the text strings in the other languages are displayed within text input controls, and wherein a second set of the text strings in the other languages are not displayed within the text input controls; receive an activation indication of the control for dynamic translation; and generate an update to the graphical user interface that includes translations of the first set into the preferred language appearing adjacent to the first set in the other languages, and also translations of the second set into the preferred language replacing the second set in the other languages.
Type: Grant
Filed: March 16, 2020
Date of Patent: July 12, 2022
Assignee: ServiceNow, Inc.
Inventors: Jebakumar Mathuram Santhosm Swvigaradoss, Ankit Goel, Prashant Pandey, John Alan Botica, Rajesh Voleti, Laxmi Prasanna Mustala
-
Patent number: 11373048
Abstract: In an embodiment, a method includes receiving, by a server, a request from a user for translation of a multi-format embedded file (MFEF) document, where the MFEF document includes a plurality of embedded elements. Responsive to the request for the MFEF document, the server automatically identifies element types of the embedded elements, where the embedded elements include a first element type and a second element type. The server also identifies foreign-language embedded elements from among the plurality of embedded elements, where the foreign-language embedded elements are embedded elements that are in a language other than a preferred language of the user. The server selects translators to translate the foreign-language embedded elements, including selecting a first translator for translating the first element type of foreign-language embedded elements and a second translator for translating the second element type of foreign-language embedded elements.
Type: Grant
Filed: September 11, 2019
Date of Patent: June 28, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Su Liu, Yang Liang, Denise Marie Genty, Fan Yang
-
Patent number: 11361168
Abstract: Systems and methods are described herein for replaying content dialogue in an alternate language in response to a user command. While the content is playing on a media device, a first language in which the content dialogue is spoken is identified. Upon receiving a voice command to repeat a portion of the dialogue, the language in which the command was spoken is identified. The portion of the content dialogue to repeat is identified and translated from the first language to the second language. The translated portion of the content dialogue is then output. In this way, the user can simply ask in their native language for the dialogue to be repeated and the repeated portion of the dialogue is presented in the user's native language.
Type: Grant
Filed: October 16, 2018
Date of Patent: June 14, 2022
Assignee: Rovi Guides, Inc.
Inventors: Carla Mack, Phillip Teich, Mario Sanchez, John Blake
-
Patent number: 11356757
Abstract: A wearable audio device including first and second transducer modules is provided. The first transducer module may include a first transducer. The first transducer module may further include a first enclosure having a top side defining a first slit and a bottom side defining a second slit. The first enclosure may be configured to guide a first sound pressure through the first slit, and a second sound pressure through the second slit. The second transducer module may include a second transducer. The second transducer module may include a second enclosure having a top side defining a third slit and a bottom side defining a fourth slit. The second enclosure may be configured to guide a third sound pressure through the third slit, and a fourth sound pressure through the fourth slit. The third and fourth slits may be arranged diagonally opposite the first and second slits, respectively.
Type: Grant
Filed: March 8, 2021
Date of Patent: June 7, 2022
Assignee: Bose Corporation
Inventors: Cedrik Bacon, Liam Kelly, Ryan Struzik, Michael J. Daley, Jonathan Zonenshine
-
Patent number: 11354518
Abstract: Embodiments of the invention provide a method, system and computer program product for model localization. In an embodiment of the invention, a method for model localization includes parsing a model to identify translatable terms, generating a seed file associating each of the translatable terms with a corresponding tag and replacing each translatable term in the model with a corresponding tag, and submitting each of the translatable terms to machine translation for a target language to produce a different translation file mapping each tag from the seed file with a translated term in the target language of a corresponding one of the translatable terms. Then, the model may be deployed in a data analytics application using the different translation file to dynamically translate each translatable term into a corresponding translated term within a user interface to the data analytics application.
Type: Grant
Filed: March 20, 2020
Date of Patent: June 7, 2022
Assignee: Google LLC
Inventors: Andrew Leahy, Steven Talbot
-
Patent number: 11356795
Abstract: An audio system, method, and computer program product which includes a wearable audio device and a mobile peripheral device. Each device is capable of determining its respective absolute or relative position and orientation. Once the relative positions and orientations between the devices are known, virtual sound sources are generated at fixed positions and orientations relative to the peripheral device such that any change in position and/or orientation of the peripheral device produces a proportional change in the position and/or orientation of the virtual sound sources. Additionally, first order and second order reflected audio paths may be simulated for each virtual sound source to increase the realism of the simulated sources. Each sound path can be produced by modifying the original audio signal using head-related transfer functions (HRTFs) to simulate audio as though it were perceived by the user's left and right ears as coming from each virtual sound source.
Type: Grant
Filed: June 17, 2020
Date of Patent: June 7, 2022
Assignee: Bose Corporation
Inventors: Eric J. Freeman, David Avi Dick, Wade P. Torres, Daniel R. Tengelsen, Eric Raczka Bernstein
-
Patent number: 11347938
Abstract: Disclosed herein is a translation platform making use of both machine translation and crowd sourced manual translation. Translation is performed on pages in an application. Manual translations are applied immediately to local versions of the client application and are either human reviewed or reverse machine translated and compared against the original text. Once verified, the translations are applied to all end-clients.
Type: Grant
Filed: September 30, 2020
Date of Patent: May 31, 2022
Assignee: FinancialForce.com, Inc.
Inventors: Daniel Christian Brown, Stephen Paul Willcock, Andrew Craddock, Luke McMahon, Peter George Wright
-
Patent number: 11328130
Abstract: The present disclosure is directed to systems, methods and devices for providing real-time translation for group communications. A speech input may be received from a first group communication device associated with a first language. One or more groups to distribute the speech input may be determined, wherein each of the one or more groups comprises at least one group communication device associated with a language that is different than the first language. The received speech input may be translated into a corresponding language for each of the one or more groups, and the translated speech may be sent to each group communication device of the one or more groups in a language corresponding to each of the one or more groups.
Type: Grant
Filed: November 6, 2018
Date of Patent: May 10, 2022
Assignee: Orion Labs, Inc.
Inventors: Justin Black, Gregory Albrecht, Dan Phung
-
Patent number: 11328129
Abstract: Based on a candidate set of translations produced by a neural network based machine learning model, a mapping data structure such as a statistical phrase table is generated. The mapping data structure is analyzed to obtain a quality metric of the neural network based model. One or more operations are initiated based on the quality metric.
Type: Grant
Filed: August 14, 2020
Date of Patent: May 10, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Hagen Fuerstenau, Felix Hieber
-
Patent number: 11328463
Abstract: A system and method for creating customized, personally relevant greeting cards, e-cards, mementos, invitations, decorations, and other types of printed and virtual media to convey affection, friendship, emotional connections, celebration, gratitude, condolence and other types of sentiments to relatives, friends, coworkers, business associates, and acquaintances regardless of the cultural backgrounds of the giver and the recipient.
Type: Grant
Filed: October 21, 2016
Date of Patent: May 10, 2022
Assignee: KODAK ALARIS, INC.
Inventors: Young No, Joseph A. Manico
-
Patent number: 11328132
Abstract: A translation-engine suggestion method, system, and computer program product include identifying probes for third-party translation-engines for an input text, segmenting sections of the input text into a plurality of segments according to the identified probes, fragmenting the input text into fragments according to the segments, applying each fragment to the identified probe using the corresponding third-party translation-engine, and outputting a translation by combining each fragment.
Type: Grant
Filed: September 9, 2019
Date of Patent: May 10, 2022
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Sekou Lionel Remy, Charles Muchiri Wachira, Fiona Mugure Matu, Samuel Osebe, Victor Abayomi Akinwande, William Ogallo
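A rough sketch, with stub engines, of fragmenting an input text and routing each fragment to the translation engine selected for it before recombining the outputs. The sentence-based splitting rule and per-fragment engine assignment are simplified assumptions, not the patent's probe-identification procedure.

```python
# Stub third-party translation engines; in practice these would be API clients.
def engine_a(fragment: str) -> str:
    return f"A({fragment})"

def engine_b(fragment: str) -> str:
    return f"B({fragment})"

def translate(text: str, selected_engines):
    """Split the input into fragments, send each fragment to the engine chosen
    for it, and output a translation by combining the fragments in order."""
    fragments = [f.strip() for f in text.split(".") if f.strip()]
    return " ".join(engine(fragment)
                    for fragment, engine in zip(fragments, selected_engines))

print(translate("Legal clause one. Casual greeting.", [engine_a, engine_b]))
# -> "A(Legal clause one) B(Casual greeting)"
```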
-
Patent number: 11328133
Abstract: The present disclosure provides a translation processing method, a translation processing device, and a device. The first speech signal of the first language is obtained, and the speech feature vector of the first speech signal is extracted based on the preset algorithm. Further, the speech feature vector is input into the pre-trained end-to-end translation model for conversion from the first language speech to the second language text for processing, and the text information of the second language corresponding to the first speech signal is obtained. Moreover, speech synthesis is performed on the text information of the second language, and the corresponding second speech signal is obtained and played.
Type: Grant
Filed: September 27, 2019
Date of Patent: May 10, 2022
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Hao Xiong, Zhongjun He, Xiaoguang Hu, Hua Wu, Zhi Li, Zhou Xin, Tian Wu, Haifeng Wang
-
Patent number: 11321530
Abstract: A method includes obtaining a string of words and determining whether two or more words of the string of words are in a word group. When the two or more words are in the word group, the method further includes retrieving a set of word group identigens for the word group and retrieving sets of word identigens for remaining words of the string of words. The method further includes determining whether a word group identigen of the set of word group identigens and word identigens of the sets of word identigens creates an entigen group that is a valid interpretation of the string of words. When the entigen group is the valid interpretation of the string of words, the method further includes outputting the entigen group.
Type: Grant
Filed: April 16, 2019
Date of Patent: May 3, 2022
Assignee: entigenlogic LLC
Inventors: Frank John Williams, David Ralph Lazzara, Donald Joseph Wurzel, Paige Kristen Thompson, Stephen Emerson Sundberg, Stephen Chen, Karl Olaf Knutson, Jessy Thomas, David Michael Corns, II, Andrew Chu, Eric Andrew Faurie, Theodore Mazurkiewicz, Gary W. Grube
-
Patent number: 11321542
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for language modeling. In one aspect, a system comprises: a masked convolutional decoder neural network that comprises a plurality of masked convolutional neural network layers and is configured to generate a respective probability distribution over a set of possible target embeddings at each of a plurality of time steps; and a modeling engine that is configured to use the respective probability distribution generated by the decoder neural network at each of the plurality of time steps to estimate a probability that a string represented by the target embeddings corresponding to the plurality of time steps belongs to the natural language.
Type: Grant
Filed: July 13, 2020
Date of Patent: May 3, 2022
Assignee: DeepMind Technologies Limited
Inventors: Nal Emmerich Kalchbrenner, Karen Simonyan, Lasse Espeholt
-
Patent number: 11314951
Abstract: Provided are an artificial intelligence (AI) system, which simulates functions of a human brain such as recognition and judgement by using a machine learning algorithm such as deep learning, and applications thereof.
Type: Grant
Filed: June 23, 2017
Date of Patent: April 26, 2022
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Sang-ha Kim, Eun-kyoung Kim, Ji-sang Yu, Jong-youb Ryu, Jae-Won Lee
-
Patent number: 11308289
Abstract: Method and apparatus are presented for receiving a medical or medical condition related input term or phrase in a source language, and translating the term or phrase from the source language into at least one target language to obtain a set of translated terms of the input term. For each translated term in the set of translations, the method and apparatus further translate the set of translations back into the source language to obtain an output list of standard versions of the input term, scoring each entry of the output list as to probability of being the most standard version of the input term, and providing the entry of the output list that has the highest score to a user.
Type: Grant
Filed: September 13, 2019
Date of Patent: April 19, 2022
Assignee: International Business Machines Corporation
Inventors: Jian Min Jiang, Jian Wang, Songfang Huang, Jing Li, Ke Wang
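A toy sketch of the round-trip idea, with small dictionaries standing in for the forward and back translators: the input term is translated out and back, and the returned source-language variants are scored so the most standard version can be offered to the user. The dictionaries and the frequency-count scoring are assumptions made for the example.

```python
# Hypothetical bilingual dictionaries standing in for the MT systems used for
# forward translation and back translation of a medical term.
FORWARD = {"heart attack": ["infarto de miocardio",
                            "infarto agudo de miocardio",
                            "ataque cardíaco"]}
BACKWARD = {"infarto de miocardio": ["myocardial infarction"],
            "infarto agudo de miocardio": ["acute myocardial infarction",
                                           "myocardial infarction"],
            "ataque cardíaco": ["heart attack"]}

def standardize(term: str):
    """Translate the term out and back, then score each returned source-language
    variant by how often it occurs across all round trips."""
    counts = {}
    for translated in FORWARD.get(term, []):
        for back in BACKWARD.get(translated, []):
            counts[back] = counts.get(back, 0) + 1
    # The highest-scoring entry is offered as the standard version of the input.
    return sorted(counts.items(), key=lambda kv: -kv[1])

print(standardize("heart attack"))
# -> [('myocardial infarction', 2), ('acute myocardial infarction', 1), ('heart attack', 1)]
```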