Having Particular Input/Output Device Patents (Class 704/3)
  • Patent number: 12147780
    Abstract: A messaging system (400) comprises a user interface (404) coupled with a processor (402) and a memory (406) to display text messages; a translation selection module (414) coupled with the processor (402) and the user interface (404) to: associate a translation input selection button (203) with each of the text messages; send, upon selection of the translation input selection button (203), the associated text message data with a language code to a messaging system server (104) for translation; and receive the translated text message data from the messaging system server (104); and a display module (416) coupled with the processor (402) to display the translated text message, retrieved from the translated text message data, with the associated translation input selection button (203) in place of the text message on the user interface (404, 201).
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: November 19, 2024
    Assignee: DAAKIA PRIVATE LIMITED
    Inventor: Bhawana Mitra
  • Patent number: 12131261
    Abstract: A property vector representing extractable measurable properties, such as musical properties, of a file is mapped to semantic properties for the file. This is achieved by using artificial neural networks “ANNs” in which weights and biases are trained to align a distance dissimilarity measure in property space for pairwise comparative files back towards a corresponding semantic distance dissimilarity measure in semantic space for those same files. The result is that, once optimised, the ANNs can process any file, parsed with those properties, to identify other files sharing common traits reflective of emotional perception, thereby rendering a more reliable and true-to-life result of similarity/dissimilarity. This contrasts with simply training a neural network to consider extractable measurable properties that, in isolation, do not provide a reliable contextual relationship with the real world.
    Type: Grant
    Filed: May 5, 2023
    Date of Patent: October 29, 2024
    Assignee: EMOTIONAL PERCEPTION AI LIMITED
    Inventors: Joseph Michael William Lyske, Nadine Kroher, Angelos Pikrakis
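    Illustrative sketch (Python; not part of the patent): a toy version of the training idea described above, in which a small network embeds measurable property vectors and is optimised so that pairwise distances between embedded files approach the corresponding pairwise distances in a hand-labelled semantic space. The network shape, data, and hyperparameters are all hypothetical placeholders.
      import torch
      import torch.nn as nn

      torch.manual_seed(0)
      n_files, n_props, n_sem = 32, 12, 4
      props = torch.rand(n_files, n_props)        # extractable measurable properties
      semantic = torch.rand(n_files, n_sem)       # hand-labelled semantic coordinates

      embed = nn.Sequential(nn.Linear(n_props, 16), nn.ReLU(), nn.Linear(16, n_sem))
      opt = torch.optim.Adam(embed.parameters(), lr=1e-2)

      def pairwise_dist(x):
          return torch.cdist(x, x)                # NxN matrix of Euclidean distances

      target = pairwise_dist(semantic)            # dissimilarity in semantic space
      for step in range(200):
          opt.zero_grad()
          pred = pairwise_dist(embed(props))      # dissimilarity in embedded property space
          loss = nn.functional.mse_loss(pred, target)
          loss.backward()
          opt.step()
      # After training, nearest neighbours of a new file's embedded property vector
      # should share the perceptual traits of that file.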
  • Patent number: 12118580
    Abstract: Disclosed are a method, a system, and a non-transitory computer-readable record medium for a reward on a cryptocurrency exchange. A cryptocurrency reward method includes determining a reward amount for a user that meets a reward condition in a cryptocurrency exchange, the reward amount being in a fiat currency, setting at least one cryptocurrency as at least one selected cryptocurrency based on a selection of the user, converting the reward amount into the at least one selected cryptocurrency to obtain at least one converted reward amount, and transferring the at least one converted reward amount to an account of the user.
    Type: Grant
    Filed: May 2, 2022
    Date of Patent: October 15, 2024
    Assignee: LY CORPORATION
    Inventors: Yooho Kim, Jisun Ha, Hyeonji Kim
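    Illustrative sketch (Python; hypothetical names and rates, not the patented code): the reward flow described above reduces to converting a fiat reward amount into the user-selected cryptocurrencies at current exchange rates before crediting the user's account.
      from decimal import Decimal

      def convert_reward(fiat_amount, selections, rates):
          """selections: {symbol: fraction of the reward}; rates: fiat price of 1 coin."""
          assert abs(sum(selections.values()) - 1) < 1e-9
          return {sym: Decimal(str(fiat_amount)) * Decimal(str(frac)) / Decimal(str(rates[sym]))
                  for sym, frac in selections.items()}

      # Example: a 10,000 (fiat) reward split 70/30 between two selected coins.
      print(convert_reward(10000, {"BTC": 0.7, "ETH": 0.3},
                           {"BTC": 90_000_000, "ETH": 4_500_000}))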
  • Patent number: 12055719
    Abstract: A sensory eyewear system for a mixed reality device can facilitate a user's interactions with other people or with the environment. As one example, the sensory eyewear system can recognize and interpret a sign language, and present the translated information to a user of the mixed reality device. The wearable system can also recognize text in the user's environment, modify the text (e.g., by changing the content or display characteristics of the text), and render the modified text to occlude the original text.
    Type: Grant
    Filed: July 24, 2023
    Date of Patent: August 6, 2024
    Assignee: Magic Leap, Inc.
    Inventors: Eric Browy, Michael Janusz Woods, Andrew Rabinovich
  • Patent number: 12052392
    Abstract: Methods, apparatus, and systems are disclosed for detecting and resolving bad audio during conferencing. One apparatus includes a processor and a memory that stores code executable by the processor. The processor detects bad audio for a conference call, the conference call involving a plurality of participants. The processor switches a first input stream to an analysis mode, where the bad audio corresponds to a first one of a plurality of input streams, the first input stream associated with a first participant. The processor sends a conference output channel to the first participant while in the analysis mode and concurrently analyzes the first input stream using a plurality of audio tools while in the analysis mode. The processor returns the first input stream to a conferencing mode in response to resolving the bad audio.
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: July 30, 2024
    Assignee: Lenovo (Singapore) Pte. Ltd.
    Inventors: Joshua Smith, Matthew Fardig, Tobias Christensen, Sathish Kumar Ganesan
  • Patent number: 12032924
    Abstract: An improved translation experience is provided using an auxiliary device, such as a pair of earbuds, and a wirelessly coupled mobile device. Microphones on both the auxiliary device and the mobile device simultaneously capture input from, respectively, a primary user (e.g., wearing the auxiliary device) and a secondary user (e.g., a foreign language speaker providing speech that the primary user desires to translate). Both microphones continually listen, rather than alternating between the mobile device and the auxiliary device. Each device may determine when to endpoint and send a block of speech for translation, for example based on pauses in the speech. Each device may accordingly send the received speech for translation and output, such that it is provided in a natural flow of communication.
    Type: Grant
    Filed: October 4, 2021
    Date of Patent: July 9, 2024
    Assignee: GOOGLE LLC
    Inventor: Deric Cheng
  • Patent number: 11973611
    Abstract: Methods, systems, and program products are disclosed for determining the location of audio transmission problems between components connected across a network. A method includes receiving audio and a first transcription of the audio from a source device, and generating a second transcription of the received audio. The method also includes providing an indication of an audio problem responsive to the first transcription not matching the second transcription, and sending the audio and the second transcription to an audio mixing device, a recording device, or a participant device responsive to the first transcription matching the second transcription.
    Type: Grant
    Filed: July 14, 2022
    Date of Patent: April 30, 2024
    Assignee: Lenovo (Singapore) Pte. Ltd.
    Inventors: Joshua Smith, Carl H. Seaver, Matthew Fardig, Inna Zolin
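    Illustrative sketch (Python; assumed logic, not the patented implementation): the described check amounts to regenerating a transcript from the received audio and flagging an audio problem when it diverges too far from the transcript supplied by the source device. The similarity measure and threshold here are arbitrary stand-ins.
      import difflib

      def audio_problem(source_transcript: str, regenerated_transcript: str,
                        threshold: float = 0.8) -> bool:
          ratio = difflib.SequenceMatcher(
              None, source_transcript.lower().split(), regenerated_transcript.lower().split()
          ).ratio()
          return ratio < threshold   # low similarity -> transmission likely degraded the audio

      print(audio_problem("please mute your line", "please mute your line"))   # False
      print(audio_problem("please mute your line", "please boot er lying"))    # True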
  • Patent number: 11961505
    Abstract: Methods and devices for identifying language level are provided. A first automatic speech recognition (ASR) module is identified, from among a plurality of ASR modules, based on information on a target received at the electronic device. First voice data and first image data for the target are received. The first voice data and the first image data are converted to first text data using the first ASR module. A first language level of the target is identified based on the first text data. Data including at least one of a voice output and an image output is output based on the first language level satisfying a condition.
    Type: Grant
    Filed: January 11, 2022
    Date of Patent: April 16, 2024
    Assignee: Samsung Electronics Co., Ltd
    Inventor: Taegu Kim
  • Patent number: 11928439
    Abstract: A translation method is provided, including: encoding to-be-processed text information to obtain a source vector representation sequence, the to-be-processed text information belonging to a first language; obtaining a source context vector corresponding to a first instance according to the source vector representation sequence, the source context vector indicating to-be-processed source content in the to-be-processed text information at the first instance; determining a translation vector according to the source vector representation sequence and the source context vector; and decoding the translation vector and the source context vector, to obtain target information of the first instance, the target information belonging to a second language.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: March 12, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Zhaopeng Tu, Hao Zhou, Shuming Shi
  • Patent number: 11907975
    Abstract: A feedback button is configured to receive a feedback expression characterizing a user experience and submit information in a user interface with a single action. When the user hovers an input cursor near the feedback button, icons/text may be displayed that indicate different types of feedback expressions that may be submitted. As the user moves the cursor back and forth over the button, the icons for the different feedback expressions may be emphasized indicating locations where the button may be clicked to submit specific feedback. When the user is satisfied with the displayed feedback expression, the user may click the feedback button at the current location to submit the feedback expression and perform the final submit action for other data in the user interface.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: February 20, 2024
    Assignee: Oracle International Corporation
    Inventors: Lynn Ann Rampoldi-Hnilo, Hillel Noah Cooperman, Mahlon Connor Houk
  • Patent number: 11848010
    Abstract: Systems for analyzing and categorizing audio content that has been transcribed into text are provided. The systems include at least one machine that has a central processing unit, random access memory, a correlation module, a feature abstraction module, and at least one database. The correlation module is configured to receive written transcripts (each of which has been generated from audio content) and derive a correlation between each written transcript and one or more attributes. The feature abstraction module is configured to receive instructions that identify specific words within the written transcripts; replace the specific words with surrogate words; and associate correlative meanings with each of the surrogate words. The database is configured to receive, record, and make accessible to the feature abstraction module a table of specific words, each of which is associated with corresponding surrogate words and correlative meanings associated with each of the surrogate words.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: December 19, 2023
    Inventors: Walter Bachtiger, Bruce Ramsay
  • Patent number: 11797780
    Abstract: A method includes receiving a set of text documents. The method also includes generating a summary of the set of text documents by a set of large language machine learning models. The method further includes generating a set of keywords from the summary by the set of large language machine learning models. The method additionally includes generating an image prompt from the set of keywords by the set of large language machine learning models. The method also includes generating a set of images from the image prompt by a text-to-image machine learning model. The method further includes generating a video clip from the set of images. The method additionally includes presenting the video clip.
    Type: Grant
    Filed: October 31, 2022
    Date of Patent: October 24, 2023
    Assignee: Intuit Inc.
    Inventors: Corinne Finegan, Richard Becker, Sanuree Gomes
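    Illustrative sketch (Python; every model call is a stub, since the patent does not prescribe particular models): the abstract describes a straight pipeline from text documents to a video clip, which can be expressed as a chain of stages.
      def summarize(documents):            # large language model stage in the patent
          return " ".join(documents)[:200]

      def extract_keywords(summary):       # large language model stage in the patent
          return sorted(set(summary.lower().split()), key=len, reverse=True)[:5]

      def build_image_prompt(keywords):    # large language model stage in the patent
          return "An illustration of " + ", ".join(keywords)

      def generate_images(prompt, n=3):    # text-to-image model stage in the patent
          return [f"image_{i}.png for prompt: {prompt}" for i in range(n)]

      def make_video(images):              # stitch the generated frames into a clip
          return {"frames": images, "fps": 1}

      docs = ["Quarterly expenses fell 4%.", "Marketing spend shifted to video ads."]
      video = make_video(generate_images(build_image_prompt(extract_keywords(summarize(docs)))))
      print(video)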
  • Patent number: 11734521
    Abstract: A method includes: acquiring a bidirectional translation model to be trained and training data, the training data including a source corpus and a target corpus corresponding to the source corpus; training the bidirectional translation model for N cycles, each cycle of training including a forward translation process of translating the source corpus into a pseudo target corpus and a reverse translation process of translating the pseudo target corpus into a pseudo source corpus, N being a positive integer greater than 1; acquiring a forward translation similarity and a reverse translation similarity; and, when a sum of the forward translation similarity and the reverse translation similarity converges, determining that training of the bidirectional translation model is completed, where the trained bidirectional translation model is used to perform translation.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: August 22, 2023
    Assignee: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
    Inventors: Jialiang Jiang, Xiang Li, Jianwei Cui
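    Illustrative skeleton (Python) of the training loop described above: forward-translate, back-translate, score both similarities, and stop when their sum stops improving. The model interface (translate, similarity, update) is an assumed placeholder, not an API from the patent.
      def train_bidirectional(model, pairs, max_cycles=50, tol=1e-3):
          prev = None
          for cycle in range(max_cycles):
              fwd_sim, rev_sim = 0.0, 0.0
              for src, tgt in pairs:
                  pseudo_tgt = model.translate(src, direction="fwd")   # assumed interface
                  pseudo_src = model.translate(pseudo_tgt, direction="rev")
                  fwd_sim += model.similarity(pseudo_tgt, tgt)
                  rev_sim += model.similarity(pseudo_src, src)
                  model.update(src, tgt, pseudo_tgt, pseudo_src)       # assumed interface
              total = fwd_sim + rev_sim
              if prev is not None and abs(total - prev) < tol:
                  return cycle                                         # converged
              prev = total
          return max_cycles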
  • Patent number: 11687736
    Abstract: Systems and methods may be used to provide transcription and translation services. A method may include initializing a plurality of user devices with respective language output selections in a translation group by receiving a shared identifier from the plurality of user devices, and transcribing an audio stream to transcribed text. The method may include translating the transcribed text to one or more of the respective language output selections when an original language of the transcribed text differs from the one or more of the respective language output selections. The method may include sending, to a user device in the translation group, the transcribed text including translated text in a language corresponding to the respective language output selection for the user device. In an example, the method may include customizing the transcription or the translation, such as to a particular topic, location, user, or the like.
    Type: Grant
    Filed: October 23, 2020
    Date of Patent: June 27, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: William D. Lewis, Ivo José Garcia Dos Santos, Tanvi Surti, Arul A. Menezes, Olivier Nano, Christian Wendt, Xuedong Huang
  • Patent number: 11680814
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for providing augmented reality content corresponding to a translation in association with travel. The program and method provide for receiving, by a messaging application running on a device of a user, a request to scan an image captured by a device camera; obtaining, in response to receiving the request, a travel parameter associated with the request, and an attribute of an object depicted in the image; determining, based on the travel parameter and the attribute, to perform a translation with respect to the object; performing, in response to the determining, the translation with respect to the object; and displaying an augmented reality content item, which includes the translation, with the image.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: June 20, 2023
    Assignee: Snap Inc.
    Inventors: Virginia Drummond, Jean Luo, Alek Matthiessen, Celia Nicole Mourkogiannis
  • Patent number: 11645946
    Abstract: A language learning system is provided and includes multilingual content in both text and audio versions, a means for correlating the multilingual content with a translation of the text and audio version, and a computing device. The computing device includes a general user interface permitting a user to choose a specific subset of the multilingual content and a central processing unit to translate native language of the specific subset to a selected language translation.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: May 9, 2023
    Assignee: Zing Technologies Inc.
    Inventors: Michael John Mathias, Kyra Zinaida Pahlen
  • Patent number: 11640812
    Abstract: An augmented reality display system included in a vehicle generates an augmented reality display, on one or more transparent surfaces of the vehicle. The augmented reality display can include an indicator of the vehicle speed which is spatially positioned according to the speed of the vehicle relative to the local speed limit. The augmented reality display can include display elements which conform to environmental objects and can obscure and replace content displayed on the objects. The augmented reality display can include display elements which indicate a position of environmental objects which are obscured from direct perception through the transparent surface. The augmented reality display can include display elements which simulate one or more particular environmental objects in the environment, based on monitoring manual driving performance of the vehicle by a driver. The augmented reality display can include display elements which identify environmental objects and particular zones in the environment.
    Type: Grant
    Filed: May 7, 2021
    Date of Patent: May 2, 2023
    Assignee: Apple Inc.
    Inventors: Kjell F. Bronder, Scott M. Herz, Karlin Y. Bark
  • Patent number: 11626100
    Abstract: An information processing apparatus includes a controller that is configured to identify a first language into which a content of a speech that is input is to be translated, based on first information about a place, estimate an intention of the content of the speech based on the content of the speech that is translated into the first language, select a service to be provided, based on the intention that is estimated, and provide a guide related to the service that is selected, in a language of the speech. The first language is different from the language of the speech.
    Type: Grant
    Filed: February 1, 2021
    Date of Patent: April 11, 2023
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Takashige Hori, Kouji Nishiyama
  • Patent number: 11494568
    Abstract: Systems and methods include acquisition of a plurality of text segments, each of the text segments associated with a flag value indicating whether the text segment is associated with a correct replacement text or an incorrect replacement text, determination of one or more n-grams of each text segment of the plurality of text segments, generation, based on the one or more n-grams of each text segment and the flag value associated with each text segment, of a model to determine a flag value based on one or more input n-grams, reception of an input text segment, determination of a second one or more n-grams of the input text segment, determination, using the model, of an output flag value based on the determined second one or more n-grams, and presentation of the input text segment and the output flag value on a display.
    Type: Grant
    Filed: April 14, 2021
    Date of Patent: November 8, 2022
    Assignee: SAP SE
    Inventors: Lauritz Brandt, Marcus Danei, Benjamin Schork
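    Rough analogue (Python/scikit-learn; the training data and model choice are illustrative, not the patented system): n-grams of each text segment are used to fit a model that predicts the correct/incorrect flag for a new segment.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      segments = ["colour was replaced by color", "the the word was duplicated",
                  "centre was replaced by center", "teh word was mistyped"]
      flags = [1, 0, 1, 0]    # 1 = correct replacement text, 0 = incorrect

      model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
      model.fit(segments, flags)

      print(model.predict(["grey was replaced by gray"]))   # predicted output flag value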
  • Patent number: 11468075
    Abstract: A data analytic system for conducting automated analytics of content within a network-based system. The data analytic system features query management logic that, responsive to a triggering event, initiates queries for retrieval of particular type of content to be verified. The data analytic system further features multi-stage statistical analysis logic, automated intelligence and reporting logic. The statistical analysis logic is configured to concurrently conduct a plurality of statistical analyses on the content and generate corresponding plurality of statistical results, apply weightings to each of the statistical results, perform arithmetic operation(s) on the weighted statistical results to produce an analytic result, and determine whether the analytic result signifies that the content constitutes errored content.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: October 11, 2022
    Assignee: Autodata Solutions, Inc.
    Inventors: Joseph Regnier, Andrew Keyes
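    Simplified illustration (Python; the checks, weights, and threshold are invented for the example): several statistical analyses run on the same content, their results are weighted and combined, and the combined score decides whether the content is flagged as errored.
      def analyze(content: str, checks, weights, threshold=0.5) -> bool:
          results = [check(content) for check in checks]   # run concurrently in the patent
          score = sum(w * r for w, r in zip(weights, results))
          return score >= threshold                        # True -> errored content

      checks = [
          lambda text: 1.0 if len(text) < 10 else 0.0,     # suspiciously short
          lambda text: 1.0 if text.isupper() else 0.0,     # all caps
          lambda text: text.count("???") / max(len(text.split()), 1),
      ]
      print(analyze("GREAT CAR!!!", checks, weights=[0.3, 0.5, 0.2]))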
  • Patent number: 11461549
    Abstract: The present disclosure discloses a method and an apparatus for generating a text based on a semantic representation and relates to a field of natural language processing (NLP) technologies. The method for generating the text includes: obtaining an input text, the input text comprising a source text; obtaining a placeholder of an ith word to be predicted in a target text; obtaining a vector representation of the ith word to be predicted, in which the vector representation of the ith word to be predicted is obtained by calculating the placeholder of the ith word to be predicted, the source text and 1st to (i−1)th predicted words by employing a self-attention mechanism; and generating an ith predicted word based on the vector representation of the ith word to be predicted, to obtain a target text.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: October 4, 2022
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Han Zhang, Dongling Xiao, Yukun Li, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
  • Patent number: 11461559
    Abstract: Techniques and structures to facilitate conversion of a workflow process are disclosed. The techniques include receiving an image, identifying one or more objects included in the image, identifying one or more properties associated with each of the one or more objects, generating a matrix including the identified objects and associated properties, and processing the matrix at a machine learning model to determine whether the image is to be translated based on a determination that one or more objects and associated properties within the image are required to be translated.
    Type: Grant
    Filed: January 28, 2020
    Date of Patent: October 4, 2022
    Assignee: salesforce.com, inc.
    Inventor: Amit Gupta
  • Patent number: 11437042
    Abstract: A communication robot capable of communicating with other electronic devices and an external server in a 5G communication environment by executing loaded artificial intelligence (AI) and/or machine learning algorithms and performing speech recognition, and a driving method thereof, are disclosed. The method for driving a communication robot according to an exemplary embodiment of the present disclosure may include receiving utterance speech uttered by a user who has approached within a predetermined distance of the communication robot, and selecting, from among plural ASR modules, any one ASR module capable of processing the uttered speech as an optimized ASR module. According to the present disclosure, it is possible to improve the user's satisfaction with the communication robot by reducing the inconvenience of having to manually set a first language in a preprocessing operation in order to receive a service from the communication robot.
    Type: Grant
    Filed: September 12, 2019
    Date of Patent: September 6, 2022
    Assignee: LG Electronics Inc.
    Inventor: Gyeong Hun Kim
  • Patent number: 11416509
    Abstract: In some aspects, a computing system can receive, from a client device, a request to perform an analytical operation that involves a query regarding a common entity type. The computing system can extract a query parameter having a particular standardized entity descriptor for the common entity type and parse a transformed dataset that is indexed in accordance with standardized entity descriptors. The computing system can match the particular standardized entity descriptor from the query to records from the transformed dataset having index values with the particular standardized entity descriptor. The computing system can retrieve the subset of the transformed dataset having the index values with the particular standardized entity descriptor. In some aspects, the computing system can generate the transformed dataset by performing conversion operations that transform records in a data structure by converting a set of different entity descriptors into a standardized entity descriptor for the common entity type.
    Type: Grant
    Filed: November 6, 2017
    Date of Patent: August 16, 2022
    Assignee: EQUIFAX INC.
    Inventors: Piyushkumar Patel, Rajkumar Bondugula
  • Patent number: 11379670
    Abstract: A computerized method is disclosed including operations of receiving one or more request texts, including at least a first request text, automatically performing processing on the first request text to determine a most similar request text in a knowledge base, determining a degree of similarity between the first request text and the most similar request text, and in response to a comparison between the degree of similarity and a similarity threshold, retrieving, from the knowledge base, an answer corresponding to the most similar request text. Performing processing may include (i) removing stop words and punctuation and creating tokenized text, (ii) converting the tokenized text into a vector using a trained neural network, and (iii) performing an analysis of the vector with the entries of the knowledge base using one or more of a Word Mover's Distance (WMD) algorithm, or a Soft Cosine Measure (SCM) algorithm.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: July 5, 2022
    Assignee: Splunk, Inc.
    Inventors: Ningwei Liu, Wangyan Feng, Aaron Chan, Joel Fulton
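    Toy retrieval sketch (Python): the patent analyzes request vectors with Word Mover's Distance or a Soft Cosine Measure against a knowledge base; this stand-in uses plain cosine similarity over averaged toy word vectors, with an invented vocabulary and threshold, just to show the retrieve-if-similar-enough flow.
      import numpy as np

      TOY_VECTORS = {"reset": [1, 0], "password": [0.9, 0.1], "forgot": [0.8, 0.2],
                     "invoice": [0, 1], "billing": [0.1, 0.9]}

      def embed(text):
          vecs = [TOY_VECTORS[w] for w in text.lower().split() if w in TOY_VECTORS]
          return np.mean(vecs, axis=0) if vecs else np.zeros(2)

      def answer(request, knowledge_base, threshold=0.8):
          q = embed(request)
          best_answer, best_sim = None, -1.0
          for entry_text, entry_answer in knowledge_base.items():
              v = embed(entry_text)
              denom = np.linalg.norm(q) * np.linalg.norm(v)
              sim = float(q @ v / denom) if denom else 0.0
              if sim > best_sim:
                  best_answer, best_sim = entry_answer, sim
          return best_answer if best_sim >= threshold else None   # below threshold: no answer

      kb = {"forgot password reset": "Use the self-service reset portal.",
            "billing invoice question": "Contact accounts@example.com."}
      print(answer("how do I reset my password", kb))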
  • Patent number: 11354485
    Abstract: In some embodiments, a method can include generating a resume document image having a standardized format, based on a resume document having a set of paragraphs. The method can further include executing a statistical model to generate an annotated resume document image from the resume document image. The annotated resume document image can indicate a bounding box and a paragraph type, for a paragraph from a set of paragraphs of the annotated resume document image. The method can further include identifying a block of text in the resume document corresponding to the paragraph of the annotated resume document image. The method can further include extracting the block of text from the resume document and associating the paragraph type to the block of text.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: June 7, 2022
    Assignee: iCIMS, Inc.
    Inventors: Eoin O'Gorman, Adrian Mihai
  • Patent number: 11347966
    Abstract: Artificial intelligence for machine learning to provide an optimized response sentence in reply to an input sentence.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: May 31, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Donghyeon Lee, Heriberto Cuayahuitl, Seonghan Ryu, Sungja Choi
  • Patent number: 11335348
    Abstract: The disclosure relates to a method, device, apparatus, and storage medium. The method includes recognizing voice data inputted by a user; obtaining a voice text corresponding to the voice data; obtaining, based on the voice text, a text to-be-input corresponding to the voice data, wherein the text to-be-input includes a plurality of words constituting a phrase or a sentence; and displaying the text to-be-input in an input textbox of an input interface.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: May 17, 2022
    Assignee: Beijing Xiaomi Mobile Software Co., Ltd.
    Inventors: Jiefu Tan, Senhua Chen, Dan Li, Xinyi Ren
  • Patent number: 11321675
    Abstract: Disclosed embodiments provide a computer-implemented technique for monitoring deviation from a meeting agenda. A meeting moderator and meeting agenda are obtained. Meeting dialog, along with facial expressions and/or body language of attendees, is monitored. Natural language processing, using entity detection, disambiguation, and other language processing techniques, determines a level of deviation of the meeting dialog from the meeting agenda. Computer-implemented image analysis techniques ascertain participant engagement from facial expressions and/or gestures of participants. A deviation alert is presented to the moderator and/or meeting participants when a deviation is detected, allowing the moderator to steer the meeting conversation back to the planned agenda.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: May 3, 2022
    Assignee: International Business Machines Corporation
    Inventors: Ira Allen, Blaine H. Dolph
  • Patent number: 11281465
    Abstract: A non-transitory computer readable recording medium has stored thereon instructions to be executed on a computer providing a terminal device with a game. The recording medium includes, for example, a main program described with Japanese text data, and language data in which English text data is associated with identification information (hash values). The instructions cause the computer to perform the steps of: setting a language to be displayed on a display section; generating a retrieval key by performing data processing on the first data to be displayed that is included in the main program when the second language is set as the language to be displayed; and extracting the second data to be displayed that includes the identification information corresponding to the generated retrieval key, and replacing the first data to be displayed with the second data to be displayed, to display the second data to be displayed on the display section.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: March 22, 2022
    Assignee: GREE, Inc.
    Inventors: Wataru Sakamoto, Ryosuke Nishida
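    Simplified illustration (Python; hypothetical strings and hash choice): the displayed Japanese text is hashed to form a retrieval key, and that key indexes the English text that replaces it when English is the selected display language.
      import hashlib

      def key_for(text: str) -> str:
          return hashlib.sha1(text.encode("utf-8")).hexdigest()   # identification information

      # language data: identification information (hash) -> English text data
      LANGUAGE_DATA = {key_for("こんにちは"): "Hello",
                       key_for("冒険を始める"): "Start the adventure"}

      def display(text: str, language: str) -> str:
          if language == "en":
              return LANGUAGE_DATA.get(key_for(text), text)   # fall back to the original text
          return text

      print(display("冒険を始める", "en"))    # -> Start the adventure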
  • Patent number: 11263409
    Abstract: A sign language translation system may capture infrared images of the formation of a sign language sign or sequence of signs. The captured infrared images may be used to produce skeletal joints data that includes a temporal sequence of 3D coordinates of skeletal joints of hands and forearms that produced the sign language sign(s). A hierarchical bidirectional recurrent neural network may be used to translate the skeletal joints data into a word or sentence of a spoken language. End-to-end sentence translation may be performed using a probabilistic connectionist temporal classification based approach that may not require pre-segmentation of the sequence of signs or post-processing of the translated sentence.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: March 1, 2022
    Assignee: BOARD OF TRUSTEES OF MICHIGAN STATE UNIVERSITY
    Inventors: Mi Zhang, Biyi Fang
  • Patent number: 11264035
    Abstract: A system and method of automatic transcription using a visual display device and an ear-wearable device. The system is configured to process an input audio signal at the display device to identify a first voice signal and a second voice signal from the input audio signal. A representation of the first voice signal and the second voice signal can be displayed on the display device and input can be received comprising the user selecting one of the first voice signal and the second voice signal as a selected voice signal. The system is configured to convert the selected voice signal to text data and display a transcript on the display device. The system can further generate an output signal sound at the first transducer of the ear-wearable device based on the input audio signal.
    Type: Grant
    Filed: January 2, 2020
    Date of Patent: March 1, 2022
    Assignee: Starkey Laboratories, Inc.
    Inventors: Achintya Kumar Bhowmik, David Alan Fabry, Amit Shahar, Clifford Anthony Tallman
  • Patent number: 11258872
    Abstract: Systems and methods are described herein for accelerating page rendering times. In some embodiments, an intermediary proxy device may be utilized to accelerate page rendering times experienced at an application operating at a user device. The intermediary proxy device may perform operations such as obtaining, on behalf of the application, first webpage content and second webpage content for a webpage, the first webpage content being obtained from the web server. In some embodiments, the intermediary proxy device may execute third-party code to obtain the second webpage content from a third party. In some embodiments, the intermediary proxy device may modify the second webpage content obtained from the content server from a first format to a second format and provide the first webpage content and the second webpage content, as modified, to the application. The application may then perform one or more operations to present data at a display of the user device.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: February 22, 2022
    Assignee: Amazon Technologies, Inc.
    Inventor: Edward Dean Gogel
  • Patent number: 11240390
    Abstract: A server apparatus, a voice operation system, and a voice operation method, each of which: acquires, from an image forming apparatus, a language type of a display language used for display at the image forming apparatus; stores the language type of the display language; acquires, from a speaker, voice operation that instructs to change the display language; identifies a language type of a targeted language based on the voice operation; and determines whether the language type of the display language matches the language type of the targeted language. Based on a determination that the language type of the display language does not match the language type of the targeted language, the server apparatus instructs the image forming apparatus to change from the language type of the display language to the language type of the targeted language.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: February 1, 2022
    Assignee: Ricoh Company, Ltd.
    Inventor: Hajime Kubota
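    Minimal sketch (Python; device identifiers and language codes are invented for the example) of the decision described above: the server changes the image forming apparatus's display language only when the spoken target language differs from the language currently stored for that device.
      stored_display_language = {"printer-001": "ja"}

      def handle_voice_command(device_id: str, requested_language: str) -> str:
          current = stored_display_language.get(device_id)
          if current == requested_language:
              return f"{device_id}: already displaying '{requested_language}', no change"
          stored_display_language[device_id] = requested_language
          return f"{device_id}: instructed to switch display from '{current}' to '{requested_language}'"

      print(handle_voice_command("printer-001", "en"))   # change instructed
      print(handle_voice_command("printer-001", "en"))   # languages match, no change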
  • Patent number: 11227116
    Abstract: A translation device includes: a controller that extracts a proper noun candidate from an original sentence, generates a translation word of the proper noun candidate in a second language, generates a second translated sentence by translating the original sentence into the second language based on the proper noun candidate and the translation word of the proper noun candidate, and generates a second reverse-translated sentence by translating the second translated sentence into the first language based on the proper noun candidate and the translation word of the proper noun candidate; a display that displays the first reverse-translated sentence and the second reverse-translated sentence; and an operation unit that receives a user operation of selecting one of the first reverse-translated sentence and the second reverse-translated sentence.
    Type: Grant
    Filed: June 4, 2020
    Date of Patent: January 18, 2022
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventor: He Cai
  • Patent number: 11216621
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for extracting text from an input document to generate one or more inference boxes. Each inference box may be input into a machine learning network trained on training labels. Each training label provides a human-augmented version of output from a separate machine translation engine. A first translation may be generated by the machine learning network. The first translation may be displayed in a user interface with respect to display of an original version of the input document and a translated version of a portion of the input document.
    Type: Grant
    Filed: April 29, 2021
    Date of Patent: January 4, 2022
    Assignee: Vannevar Labs, Inc.
    Inventors: Daniel Goodman, Nathanial Hartman, Nathaniel Bush, Brett Granberg
  • Patent number: 11217224
    Abstract: A multilingual text-to-speech synthesis method and system are disclosed. The method includes receiving first learning data including a learning text of a first language and learning speech data of the first language corresponding to the learning text of the first language, receiving second learning data including a learning text of a second language and learning speech data of the second language corresponding to the learning text of the second language, and generating a single artificial neural network text-to-speech synthesis model by learning similarity information between phonemes of the first language and phonemes of the second language based on the first learning data and the second learning data.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: January 4, 2022
    Assignee: NEOSAPIENCE, INC.
    Inventors: Taesu Kim, Younggun Lee
  • Patent number: 11205055
    Abstract: A management device is connected to an apparatus and configured to manage the apparatus. The management device includes a multi-language display processing unit configured to, when an input unit receives a change request to change a language of messages to be displayed on a display unit, transmit standard language data to a translation device, and a translated data reception unit configured to acquire translated data translated into a language corresponding to a language environment of a mobile terminal on the translation device with reference to the standard language data. The multi-language display processing unit is configured to change a language of messages to be displayed on the display unit from a default language to a language corresponding to the language environment of the mobile terminal by using the translated data.
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: December 21, 2021
    Assignee: Mitsubishi Electric Corporation
    Inventor: Hiroaki Obana
  • Patent number: 11200510
    Abstract: A mechanism is provided for text classifier training. The mechanism receives a training set of text and class specification pairs to be used as a ground truth for training a text classifier machine learning model for a text classifier. Each text and class specification pair comprises a text and a corresponding class specification. A domain terms selector component identifies at least one domain term in the texts of the training set. A domain terms replacer component replaces the at least one identified domain term in the texts of the training set with a corresponding replacement term to form a revised set of text and class specification pairs. A text classifier trainer component trains the text classifier machine learning model using the revised set to form a trained text classifier machine learning model.
    Type: Grant
    Filed: July 12, 2016
    Date of Patent: December 14, 2021
    Assignee: International Business Machines Corporation
    Inventors: John M. Boyer, Kshitij P. Fadnis, Dinesh Raghu
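    Rough sketch (Python; the domain terms, replacement terms, and training pairs are hypothetical) of the preprocessing step described above: domain terms in the training texts are swapped for generic replacement terms to form the revised set that is then used to train the classifier.
      DOMAIN_TERMS = {"db2": "<database>", "websphere": "<app_server>"}

      def replace_domain_terms(text: str) -> str:
          return " ".join(DOMAIN_TERMS.get(tok, tok) for tok in text.lower().split())

      training_set = [("how do I restart db2", "database_ops"),
                      ("websphere will not start", "middleware_ops")]
      revised_set = [(replace_domain_terms(text), label) for text, label in training_set]
      print(revised_set)   # the revised pairs would then be fed to the classifier trainer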
  • Patent number: 11188714
    Abstract: An electronic apparatus includes a voice receiving unit, a display unit, and a control unit. The control unit is configured to perform control so as to identify the language of a voice input received by the voice receiving unit. In a case where it is determined that the identified language, which is a first language, is different from a second language set as a primary language in the electronic apparatus, the control unit is configured to display, on the display unit, a message for confirming whether to change the primary language from the second language to the first language, in both the first language and the second language.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: November 30, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Shimpei Kitai
  • Patent number: 11190851
    Abstract: Various embodiments provide media based on a detected language being spoken. In one embodiment, the system electronically detects which language of a plurality of languages is being spoken by a user, such as during a conversation or while giving a voice command to the television. Based on which language of a plurality of languages is being spoken by the user, the system electronically presents media to the user that is in the detected language. For example, the media may be television channels and/or programs that are in the detected language and/or a program guide, such as a pop-up menu, including such media that are in the detected language.
    Type: Grant
    Filed: October 29, 2020
    Date of Patent: November 30, 2021
    Assignee: SLING MEDIA PVT. LTD.
    Inventor: Rajesh Palaniswami
  • Patent number: 11163954
    Abstract: Aspects of the invention include systems and methods for the propagation of annotation metadata to overlapping annotations of a synonymous type. A non-limiting example computer-implemented method includes performing a comparison of a set of annotations to detect a subset of annotations that are candidates for being synonymous based on a first analysis. Whether a first annotation of the subset of annotations is synonymous with a second annotation of the subset of annotations is determined based on a second analysis. Distinct annotation metadata of the first annotation are cross-propagated with annotation metadata of the second annotation based on the second analysis.
    Type: Grant
    Filed: September 18, 2019
    Date of Patent: November 2, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Scott Carrier, Brendan Bull, Dwi Sianto Mansjur, Paul Lewis Felt
  • Patent number: 11151335
    Abstract: A machine translation method includes using an encoder of a source language to determine a feature vector from a source sentence expressed in the source language, using an attention model of a target language to determine context information of the source sentence from the determined feature vector, and using a decoder of the target language to determine a target sentence expressed in the target language from the determined context information.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: October 19, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Hwidong Na
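    Toy sketch (Python/numpy) of the three stages named above: an encoder produces feature vectors, an attention step turns them into context information, and a decoder step scores target-language tokens from that context. The weights here are random placeholders rather than a trained model.
      import numpy as np

      rng = np.random.default_rng(0)
      src_tokens, d, target_vocab = 5, 8, 10

      features = rng.normal(size=(src_tokens, d))   # encoder output for the source sentence (stubbed)
      query = rng.normal(size=d)                    # decoder state at the current step

      scores = features @ query
      weights = np.exp(scores - scores.max())
      weights /= weights.sum()                      # attention weights over source positions
      context = weights @ features                  # context information of the source sentence

      decoder_out = rng.normal(size=(d, target_vocab))
      logits = context @ decoder_out                # decoder scores over target-language tokens
      print("next target token id:", int(np.argmax(logits)))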
  • Patent number: 11137976
    Abstract: To provide audio information regarding locations within a geographic area, a client device provides an interactive three-dimensional (3D) display of panoramic street level imagery for a geographic area via a user interface. The panoramic street level imagery includes one or more landmarks. The client device receives a request for audio information describing a selected landmark within the interactive 3D display, and obtains the audio information describing the selected landmark from a server device in response to the request. Then the client device automatically presents the received audio information describing the selected landmark.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: October 5, 2021
    Assignee: GOOGLE LLC
    Inventors: Michael Edgcumbe, Rachel Inman, Kasey Klimes, Anna Roth
  • Patent number: 11132515
    Abstract: A method and a data processing device are disclosed for at least partially automatically transferring a word sequence composed in a source language into a word sequence in a target language with corresponding substantive content. By analyzing the word sequence and identifying terms with lexical ambiguity in the word sequence by comparing with a terminology database comprising terms with lexical ambiguity in the source language which are assigned a plurality of term identifiers depending on their number of meanings, an unambiguous term definition is provided for translating the word sequence into the target language by assigning a term identifier to the term with lexical ambiguity in the source language. This may render a machine translation less susceptible to errors.
    Type: Grant
    Filed: July 24, 2017
    Date of Patent: September 28, 2021
    Assignee: CLAAS Selbstfahrende Erntemaschinen GmbH
    Inventor: Ute Rummel
  • Patent number: 11115355
    Abstract: An information display method, apparatus, and devices are provided. The method includes providing an information editing interface; receiving a first type of information input by a user in the information editing interface; obtaining a second type of information corresponding to the first type of information, wherein the second type of information is translation information of the first type of information; and displaying the second type of information in the information editing interface. By adopting the technical solutions of the present disclosure, when the user inputs information in the information editing interface, the input information may be synchronously translated and the translated information displayed to the user in the same interface, so that while the user is entering information, the translation of what has been entered so far may be seen in real time and modified, thereby improving user experience.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: September 7, 2021
    Assignee: Alibaba Group Holding Limited
    Inventors: Zekun Yan, Yufeng Wang, Yuan Li, You Wu, Qiang Li
  • Patent number: 11100928
    Abstract: A configuration is implemented to establish, with a processor, an interactive voice response system that is operable in a first human-spoken language. Further, the configuration receives, with the processor, a communication request through a designated communication channel for a second human-spoken language. The second human-spoken language is distinct from the first human-spoken language. Moreover, the configuration generates, with the processor, a simulated interactive voice response system that provides a service in the second human-spoken language. The simulated interactive voice response system routes a request in the second human-spoken language to a machine interpreter that translates the request into the first human-spoken language. The translated request is provided to the interactive voice response system to process the request in the first human-spoken language.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: August 24, 2021
    Assignee: Language Line Services, Inc.
    Inventors: Jeffrey Cordell, James Boutcher
  • Patent number: 11074413
    Abstract: Computer-based implementations of context-sensitive salient keyword unit surfacing for multi-language comments are disclosed. A set of target keyword units in a target written language are caused by a computing system to be presented in a graphical user interface such as, for example, as part of a tag cloud or the like. The set of target keyword units are determined by the system by a context-sensitive mapping of a set of source keyword units in an intermediate written language to the set of target keyword units. The context sensitive mapping is constructed based on in-context machine translation of survey comments in the target language to the intermediate language and then identifying translation keyword unit pairs in the target language survey comments and the translated survey comments that represent a mapping of the in-context translation of a keyword unit in the target language to a keyword unit in the intermediate language.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: July 27, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Xiaoran Zhang, Goutham Kurra, Chih-Po Wen, Shane Combest
  • Patent number: 11069341
    Abstract: The speech correction system includes a storage device, an audio receiver and a processing device. The processing device includes a speech recognition engine and a determination module. The storage device is configured to store a database. The audio receiver is configured to receive an audio signal. The speech recognition engine is configured to identify a key speech pattern in the audio signal and generate a candidate vocabulary list and a transcode corresponding to the key speech pattern; wherein the candidate vocabulary list includes a candidate vocabulary corresponding to the key speech pattern and a vocabulary score corresponding to the candidate vocabulary. The determination module is configured to determine whether the vocabulary score is greater than a score threshold. If the vocabulary score is greater than the score threshold, the determination module stores the candidate vocabulary corresponding to the vocabulary score in the database.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: July 20, 2021
    Assignee: QUANTA COMPUTER INC.
    Inventors: Yi-Ling Chen, Chih-Wei Sung, Yu-Cheng Chien, Kuan-Chung Chen
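    Simplified version (Python; the data structures, transcode format, and threshold are hypothetical) of the decision described above: a recognised candidate vocabulary is stored in the database only when its vocabulary score clears the score threshold.
      SCORE_THRESHOLD = 0.85
      database = {}   # transcode -> stored candidate vocabulary

      def process_key_speech(transcode: str, candidates: list[tuple[str, float]]) -> str:
          best_word, best_score = max(candidates, key=lambda c: c[1])
          if best_score > SCORE_THRESHOLD:
              database[transcode] = best_word          # remember the correction
          return database.get(transcode, best_word)

      print(process_key_speech("T123", [("quanta", 0.91), ("kwanta", 0.62)]))
      print(database)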
  • Patent number: 11032676
    Abstract: A method and system for transliteration of a textual message are provided. The method includes receiving, from a first network texting element, the textual message sent from a first mobile device and destined to a second mobile device, wherein the textual message comprises a first character set; determining if the first character set is supported by the second mobile device; determining a second character set supported by the second mobile device when the first character set is not supported by the second mobile device; transliterating the textual message to the second character set; and sending the transliterated textual message to a second network texting element.
    Type: Grant
    Filed: September 10, 2013
    Date of Patent: June 8, 2021
    Assignee: VascoDe Technologies Ltd.
    Inventors: Dorron Mottes, Gil Zaidman, Arnon Yaar
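    Illustrative sketch (Python; the mapping table and character-set names are toy examples, not the patented implementation): if the destination handset does not support the message's character set, the message is transliterated to a set it does support before forwarding.
      CYRILLIC_TO_LATIN = {"п": "p", "р": "r", "и": "i", "в": "v", "е": "e", "т": "t"}

      def supports(device_charsets: set, charset: str) -> bool:
          return charset in device_charsets

      def transliterate(message: str) -> str:
          return "".join(CYRILLIC_TO_LATIN.get(ch, ch) for ch in message)

      def deliver(message: str, charset: str, destination_charsets: set) -> str:
          if supports(destination_charsets, charset):
              return message
          return transliterate(message)    # fall back to a character set the handset supports

      print(deliver("привет", "cyrillic", {"latin"}))   # -> "privet"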