Having Particular Input/output Device Patents (Class 704/3)
  • Patent number: 11973611
    Abstract: Methods, systems, and program products are disclosed for determining the location of audio transmission problems between components connected across a network. A method includes receiving audio and a first transcription of the audio from a source device and generating a second transcription of the received audio. The method also includes providing an indication of an audio problem responsive to the first transcription not matching the second transcription, and sending the audio and the second transcription to an audio mixing device, a recording device, or a participant device responsive to the first transcription matching the second transcription.
    Type: Grant
    Filed: July 14, 2022
    Date of Patent: April 30, 2024
    Assignee: Lenovo (Singapore) Pte. Ltd.
    Inventors: Joshua Smith, Carl H. Seaver, Matthew Fardig, Inna Zolin
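    The mismatch check described above amounts to re-transcribing the received audio and comparing the two transcripts. A minimal Python sketch follows; the callables (transcribe, forward, report_problem) and the whitespace/case normalization are illustrative assumptions, not the patented implementation.

      def normalize(text):
          """Lower-case and collapse whitespace so cosmetic differences do not count as mismatches."""
          return " ".join(text.lower().split())

      def route_audio(audio, source_transcript, transcribe, forward, report_problem):
          """transcribe/forward/report_problem stand in for the local ASR engine, the
          mixer/recorder/participant device, and the problem-indication path."""
          local_transcript = transcribe(audio)
          if normalize(local_transcript) != normalize(source_transcript):
              report_problem(audio, source_transcript, local_transcript)   # audio problem detected
              return False
          forward(audio, local_transcript)                                 # transcripts match
          return True

      # Toy usage with stand-in callables.
      ok = route_audio(
          audio=b"...pcm bytes...",
          source_transcript="hello team",
          transcribe=lambda a: "hello team",
          forward=lambda a, t: print("forwarded:", t),
          report_problem=lambda a, s, l: print("audio problem:", s, "vs", l),
      )
      print("match" if ok else "mismatch")
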
  • Patent number: 11961505
    Abstract: Methods and devices for identifying language level are provided. A first automatic speech recognition (ASR) module is identified, from among a plurality of ASR modules, based on information on a target received at the electronic device. First voice data and first image data for the target are received. The first voice data and the first image data are converted to first text data using the first ASR module. A first language level of the target is identified based on the first text data. Data including at least one of a voice output and an image output is output based on the first language level satisfying a condition.
    Type: Grant
    Filed: January 11, 2022
    Date of Patent: April 16, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Taegu Kim
  • Patent number: 11928439
    Abstract: A translation method is provided, including: encoding to-be-processed text information to obtain a source vector representation sequence, the to-be-processed text information belonging to a first language; obtaining a source context vector corresponding to a first instance according to the source vector representation sequence, the source context vector indicating to-be-processed source content in the to-be-processed text information at the first instance; determining a translation vector according to the source vector representation sequence and the source context vector; and decoding the translation vector and the source context vector, to obtain target information of the first instance, the target information belonging to a second language.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: March 12, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Zhaopeng Tu, Hao Zhou, Shuming Shi
  • Patent number: 11907975
    Abstract: A feedback button is configured to receive a feedback expression characterizing a user experience and submit information in a user interface with a single action. When the user hovers an input cursor near the feedback button, icons/text may be displayed that indicate different types of feedback expressions that may be submitted. As the user moves the cursor back and forth over the button, the icons for the different feedback expressions may be emphasized indicating locations where the button may be clicked to submit specific feedback. When the user is satisfied with the displayed feedback expression, the user may click the feedback button at the current location to submit the feedback expression and perform the final submit action for other data in the user interface.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: February 20, 2024
    Assignee: Oracle International Corporation
    Inventors: Lynn Ann Rampoldi-Hnilo, Hillel Noah Cooperman, Mahlon Connor Houk
  • Patent number: 11848010
    Abstract: Systems for analyzing and categorizing audio content that has been transcribed into text are provided. The systems include at least one machine that has a central processing unit, random access memory, a correlation module, a feature abstraction module, and at least one database. The correlation module is configured to receive written transcripts (each of which has been generated from audio content) and derive a correlation between each written transcript and one or more attributes. The feature abstraction module is configured to receive instructions that identify specific words within the written transcripts; replace the specific words with surrogate words; and associate correlative meanings with each of the surrogate words. The database is configured to receive, record, and make accessible to the feature abstraction module a table of specific words, each of which is associated with corresponding surrogate words and correlative meanings associated with each of the surrogate words.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: December 19, 2023
    Inventors: Walter Bachtiger, Bruce Ramsay
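    The feature abstraction step above replaces specific words with surrogate words drawn from a table and carries their correlative meanings along. A minimal Python sketch, with an illustrative table (the entries and field names are assumptions, not the patented data):

      SURROGATE_TABLE = {
          "acme": {"surrogate": "vendor_a", "meaning": "competitor product"},
          "refund": {"surrogate": "remedy_request", "meaning": "customer seeks compensation"},
      }

      def abstract_features(transcript, table=SURROGATE_TABLE):
          """Replace specific words with surrogates and collect their correlative meanings."""
          tokens, meanings = [], []
          for word in transcript.split():
              key = word.lower().strip(".,!?")
              entry = table.get(key)
              if entry:
                  tokens.append(entry["surrogate"])
                  meanings.append(entry["meaning"])
              else:
                  tokens.append(word)
          return " ".join(tokens), meanings

      text, meanings = abstract_features("The caller asked Acme for a refund today.")
      print(text)      # The caller asked vendor_a for a remedy_request today.
      print(meanings)  # ['competitor product', 'customer seeks compensation']
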
  • Patent number: 11797780
    Abstract: A method includes receiving a set of text documents. The method also includes generating a summary of the set of text documents by a set of large language machine learning models. The method further includes generating a set of keywords from the summary by the set of large language machine learning models. The method additionally includes generating an image prompt from the set of keywords by the set of large language machine learning models. The method also includes generating a set of images from the image prompt by a text-to-image machine learning model. The method further includes generating a video clip from the set of images. The method additionally includes presenting the video clip.
    Type: Grant
    Filed: October 31, 2022
    Date of Patent: October 24, 2023
    Assignee: Intuit Inc.
    Inventors: Corinne Finegan, Richard Becker, Sanuree Gomes
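    The abstract above describes a chain of model calls (summary, keywords, image prompt, images, video). A minimal Python sketch of that chaining, with every model injected as a callable because the patent names no concrete APIs; all function names and the toy stand-ins are assumptions:

      from typing import Callable, List

      def documents_to_video(
          documents: List[str],
          summarize: Callable[[str], str],                 # large language model: text -> summary
          extract_keywords: Callable[[str], List[str]],
          build_image_prompt: Callable[[List[str]], str],
          generate_images: Callable[[str], List[bytes]],   # text-to-image model
          assemble_clip: Callable[[List[bytes]], bytes],   # image sequence -> video bytes
      ) -> bytes:
          summary = summarize("\n\n".join(documents))
          keywords = extract_keywords(summary)
          prompt = build_image_prompt(keywords)
          images = generate_images(prompt)
          return assemble_clip(images)

      # Toy usage with stand-in callables.
      clip = documents_to_video(
          ["Quarterly report text...", "Press release text..."],
          summarize=lambda t: "Revenue grew while costs fell.",
          extract_keywords=lambda s: ["revenue", "growth", "costs"],
          build_image_prompt=lambda kw: "An illustration of " + ", ".join(kw),
          generate_images=lambda p: [b"img1", b"img2"],
          assemble_clip=lambda imgs: b"".join(imgs),
      )
      print(len(clip), "bytes of (toy) video")
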
  • Patent number: 11734521
    Abstract: A method includes: acquiring a bidirectional translation model to be trained and training data, the training data including a source corpus and a target corpus corresponding to the source corpus; training the bidirectional translation model for N cycles, each cycle of training including a forward translation process of translating the source corpus into a pseudo target corpus and a reverse translation process of translating the pseudo target corpus into a pseudo source corpus, N being a positive integer greater than 1; acquiring a forward translation similarity and a reverse translation similarity; and determining, when a sum of the forward translation similarity and the reverse translation similarity converges, that training of the bidirectional translation model is completed, where the trained bidirectional translation model is used to perform translation.
    Type: Grant
    Filed: May 8, 2020
    Date of Patent: August 22, 2023
    Assignee: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
    Inventors: Jialiang Jiang, Xiang Li, Jianwei Cui
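    The training loop above stops when the sum of the forward and reverse translation similarities converges. A minimal Python sketch of that stopping criterion; the translators, the similarity measure, and the tolerance are illustrative stand-ins (a real implementation would also update model parameters each cycle):

      def train_bidirectional(source, target, forward, reverse, similarity, max_cycles=20, tol=1e-3):
          prev_total = None
          for cycle in range(1, max_cycles + 1):
              pseudo_target = forward(source)           # forward translation process
              pseudo_source = reverse(pseudo_target)    # reverse translation process
              fwd_sim = similarity(pseudo_target, target)
              rev_sim = similarity(pseudo_source, source)
              total = fwd_sim + rev_sim
              if prev_total is not None and abs(total - prev_total) < tol:
                  return cycle, total                   # converged: training completed
              prev_total = total
          return max_cycles, prev_total

      # Toy stand-ins: fixed "translators" and a token-overlap similarity.
      overlap = lambda a, b: len(set(a.split()) & set(b.split())) / max(len(set(b.split())), 1)
      cycles, score = train_bidirectional(
          source="the cat sat", target="le chat est assis",
          forward=lambda s: "le chat est assis", reverse=lambda t: "the cat sat",
          similarity=overlap,
      )
      print(cycles, score)
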
  • Patent number: 11687736
    Abstract: Systems and methods may be used to provide transcription and translation services. A method may include initializing a plurality of user devices with respective language output selections in a translation group by receiving a shared identifier from the plurality of user devices and transcribing the audio stream to transcribed text. The method may include translating the transcribed text to one or more of the respective language output selections when an original language of the transcribed text differs from the one or more of the respective language output selections. The method may include sending, to a user device in the translation group, the transcribed text including translated text in a language corresponding to the respective language output selection for the user device. In an example, the method may include customizing the transcription or the translation, such as to a particular topic, location, user, or the like.
    Type: Grant
    Filed: October 23, 2020
    Date of Patent: June 27, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: William D. Lewis, Ivo José Garcia Dos Santos, Tanvi Surti, Arul A. Menezes, Olivier Nano, Christian Wendt, Xuedong Huang
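    The routing described above sends each device the transcript in its selected output language, translating only when that selection differs from the original language. A minimal Python sketch; the shared identifier handling and the translate callable are illustrative stand-ins:

      class TranslationGroup:
          def __init__(self, shared_id, translate):
              self.shared_id = shared_id
              self.translate = translate      # (text, target_language) -> translated text
              self.devices = {}               # device_id -> selected output language

          def join(self, device_id, language):
              self.devices[device_id] = language

          def broadcast(self, transcript, original_language):
              out = {}
              for device_id, language in self.devices.items():
                  if language == original_language:
                      out[device_id] = transcript                     # no translation needed
                  else:
                      out[device_id] = self.translate(transcript, language)
              return out

      group = TranslationGroup("meeting-42", translate=lambda t, lang: f"[{lang}] {t}")
      group.join("phone-1", "en")
      group.join("phone-2", "de")
      print(group.broadcast("good morning everyone", original_language="en"))
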
  • Patent number: 11680814
    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for providing augmented reality content corresponding to a translation in association with travel. The program and method provide for receiving, by a messaging application running on a device of a user, a request to scan an image captured by a device camera; obtaining, in response to receiving the request, a travel parameter associated with the request, and an attribute of an object depicted in the image; determining, based on the travel parameter and the attribute, to perform a translation with respect to the object; performing, in response to the determining, the translation with respect to the object; and displaying an augmented reality content item, which includes the translation, with the image.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: June 20, 2023
    Assignee: Snap Inc.
    Inventors: Virginia Drummond, Jean Luo, Alek Matthiessen, Celia Nicole Mourkogiannis
  • Patent number: 11645946
    Abstract: A language learning system is provided and includes multilingual content in both text and audio versions, a means for correlating the multilingual content with a translation of the text and audio version, and a computing device. The computing device includes a general user interface permitting a user to choose a specific subset of the multilingual content and a central processing unit to translate the native language of the specific subset into a selected language translation.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: May 9, 2023
    Assignee: Zing Technologies Inc.
    Inventors: Michael John Mathias, Kyra Zinaida Pahlen
  • Patent number: 11640812
    Abstract: An augmented reality display system included in a vehicle generates an augmented reality display, on one or more transparent surfaces of the vehicle. The augmented reality display can include an indicator of the vehicle speed which is spatially positioned according to the speed of the vehicle relative to the local speed limit. The augmented reality display can include display elements which conform to environmental objects and can obscure and replace content displayed on the objects. The augmented reality display can include display elements which indicate a position of environmental objects which are obscured from direct perception through the transparent surface. The augmented reality display can include display elements which simulate one or more particular environmental objects in the environment, based on monitoring manual driving performance of the vehicle by a driver. The augmented reality display can include display elements which identify environmental objects and particular zones in the environment.
    Type: Grant
    Filed: May 7, 2021
    Date of Patent: May 2, 2023
    Assignee: Apple Inc.
    Inventors: Kjell F. Bronder, Scott M. Herz, Karlin Y. Bark
  • Patent number: 11626100
    Abstract: An information processing apparatus includes a controller that is configured to identify a first language into which a content of a speech that is input is to be translated, based on first information about a place, estimate an intention of the content of the speech based on the content of the speech that is translated into the first language, select a service to be provided, based on the intention that is estimated, and provide a guide related to the service that is selected, in a language of the speech. The first language is different from the language of the speech.
    Type: Grant
    Filed: February 1, 2021
    Date of Patent: April 11, 2023
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Takashige Hori, Kouji Nishiyama
  • Patent number: 11494568
    Abstract: Systems and methods include acquisition of a plurality of text segments, each of the text segments associated with a flag value indicating whether the text segment is associated with a correct replacement text or an incorrect replacement text, determination of one or more n-grams of each text segment of the plurality of text segments, generation, based on the one or more n-grams of each text segment and the flag value associated with each text segment, of a model to determine a flag value based on one or more input n-grams, reception of an input text segment, determination of a second one or more n-grams of the input text segment, determination, using the model, of an output flag value based on the determined second one or more n-grams, and presentation of the input text segment and the output flag value on a display.
    Type: Grant
    Filed: April 14, 2021
    Date of Patent: November 8, 2022
    Assignee: SAP SE
    Inventors: Lauritz Brandt, Marcus Danei, Benjamin Schork
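    The model described above maps the n-grams of a text segment to a flag value. A minimal Python sketch using per-flag n-gram counts and a smoothed log-odds score in place of any particular learning library; the training pairs and the 0/1 flag encoding are illustrative assumptions:

      import math
      from collections import Counter

      def ngrams(text, n=2):
          tokens = text.lower().split()
          return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)] or tokens

      def train(segments):
          """segments: list of (text, flag) pairs; returns per-flag n-gram counts."""
          counts = {0: Counter(), 1: Counter()}
          for text, flag in segments:
              counts[flag].update(ngrams(text))
          return counts

      def predict(counts, text):
          """Return the flag whose n-gram statistics better explain the input segment."""
          score = 0.0
          total1 = sum(counts[1].values()) + 1
          total0 = sum(counts[0].values()) + 1
          for g in ngrams(text):
              p1 = (counts[1][g] + 1) / total1   # add-one smoothing
              p0 = (counts[0][g] + 1) / total0
              score += math.log(p1 / p0)
          return 1 if score >= 0 else 0

      model = train([("replace gross amount with net amount", 1),
                     ("replace net amount with gross salary", 1),
                     ("replace invoice date with shipping address", 0)])
      print(predict(model, "replace gross salary with net amount"))   # likely 1 (correct replacement)
      print(predict(model, "replace invoice date with address"))      # likely 0 (incorrect replacement)
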
  • Patent number: 11468075
    Abstract: A data analytic system for conducting automated analytics of content within a network-based system. The data analytic system features query management logic that, responsive to a triggering event, initiates queries for retrieval of a particular type of content to be verified. The data analytic system further features multi-stage statistical analysis logic, automated intelligence and reporting logic. The statistical analysis logic is configured to concurrently conduct a plurality of statistical analyses on the content and generate a corresponding plurality of statistical results, apply weightings to each of the statistical results, perform arithmetic operation(s) on the weighted statistical results to produce an analytic result, and determine whether the analytic result signifies that the content constitutes errored content.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: October 11, 2022
    Assignee: Autodata Solutions, Inc.
    Inventors: Joseph Regnier, Andrew Keyes
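    The multi-stage analysis above weights several statistical results and combines them into a single analytic result that is compared against a threshold. A minimal Python sketch; the individual checks, weights, and threshold are illustrative assumptions:

      def analyze(content, analyses, weights, threshold=0.5):
          """analyses: dict name -> callable(content) returning a score in [0, 1]."""
          results = {name: check(content) for name, check in analyses.items()}
          combined = sum(weights[name] * score for name, score in results.items())
          return combined, combined >= threshold   # True means "errored content"

      analyses = {
          "length_outlier": lambda text: 1.0 if len(text) < 20 else 0.0,
          "digit_ratio": lambda text: sum(c.isdigit() for c in text) / max(len(text), 1),
          "shouting": lambda text: 1.0 if text.isupper() else 0.0,
      }
      weights = {"length_outlier": 0.4, "digit_ratio": 0.3, "shouting": 0.3}
      score, errored = analyze("2019 CAMRY LE 4DR SDN!!!", analyses, weights)
      print(round(score, 3), errored)
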
  • Patent number: 11461559
    Abstract: Techniques and structures to facilitate conversion of a workflow process are disclosed. The techniques include receiving an image, identifying one or more objects included in the image, identifying one or more properties associated with each of the one or more objects, generating a matrix of data including the identified objects and associated properties, and processing the matrix at a machine learning model to determine whether the image is to be translated based on a determination that one or more objects and associated properties within the image are required to be translated.
    Type: Grant
    Filed: January 28, 2020
    Date of Patent: October 4, 2022
    Assignee: salesforce.com, inc.
    Inventor: Amit Gupta
  • Patent number: 11461549
    Abstract: The present disclosure discloses a method and an apparatus for generating a text based on a semantic representation and relates to a field of natural language processing (NLP) technologies. The method for generating the text includes: obtaining an input text, the input text comprising a source text; obtaining a placeholder of an ith word to be predicted in a target text; obtaining a vector representation of the ith word to be predicted, in which the vector representation of the ith word to be predicted is obtained by calculating the placeholder of the ith word to be predicted, the source text and 1st to (i−1)th predicted words by employing a self-attention mechanism; and generating an ith predicted word based on the vector representation of the ith word to be predicted, to obtain a target text.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: October 4, 2022
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Han Zhang, Dongling Xiao, Yukun Li, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
  • Patent number: 11437042
    Abstract: A communication robot capable of communicating with other electronic devices and an external server in a 5G communication environment by loading and executing artificial intelligence (AI) algorithms and/or machine learning algorithms and performing speech recognition, and a driving method thereof, are disclosed. The method for driving a communication robot according to an exemplary embodiment of the present disclosure may include receiving an utterance spoken by a user who has approached within a predetermined distance of the communication robot, and selecting, from among plural ASR modules, an ASR module capable of processing the uttered speech as an optimized ASR module. According to the present disclosure, it is possible to improve the user's satisfaction with the use of the communication robot by reducing the inconvenience of the user having to manually set a first language in the preprocessing operation in order to receive a service from the communication robot.
    Type: Grant
    Filed: September 12, 2019
    Date of Patent: September 6, 2022
    Assignee: LG Electronics Inc.
    Inventor: Gyeong Hun Kim
  • Patent number: 11416509
    Abstract: In some aspects, a computing system can receive, from a client device, a request to perform an analytical operation that involves a query regarding a common entity type. The computing system can extract a query parameter having a particular standardized entity descriptor for the common entity type and parse a transformed dataset that is indexed in accordance with standardized entity descriptors. The computing system can match the particular standardized entity descriptor from the query to records from the transformed dataset having index values with the particular standardized entity descriptor. The computing system can retrieve the subset of the transformed dataset having the index values with the particular standardized entity descriptor. In some aspects, the computing system can generate the transformed dataset by performing conversion operations that transform records in a data structure by converting a set of different entity descriptors into a standardized entity descriptor for the common entity type.
    Type: Grant
    Filed: November 6, 2017
    Date of Patent: August 16, 2022
    Assignee: EQUIFAX INC.
    Inventors: Piyushkumar Patel, Rajkumar Bondugula
  • Patent number: 11379670
    Abstract: A computerized method is disclosed including operations of receiving one or more request texts, including at least a first request text, automatically performing processing on the first request text to determine a most similar request text in a knowledge base, determining a degree of similarity between the first request text and the most similar request text, and in response to a comparison between the degree of similarity and a similarity threshold, retrieving, from the knowledge base, an answer corresponding to the most similar request text. Performing processing may include (i) removing stop words and punctuation and creating tokenized text, (ii) converting the tokenized text into a vector using a trained neural network, and (iii) performing an analysis of the vector with the entries of the knowledge base using one or more of a Word Mover's Distance (WMD) algorithm, or a Soft Cosine Measure (SCM) algorithm.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: July 5, 2022
    Assignee: Splunk, Inc.
    Inventors: Ningwei Liu, Wangyan Feng, Aaron Chan, Joel Fulton
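    The retrieval step above finds the most similar stored request and returns its answer only if the similarity clears a threshold. A minimal Python sketch using stop-word removal and a plain bag-of-words cosine as a stand-in for the Word Mover's Distance / Soft Cosine Measure analysis; the stop-word list and threshold are assumptions:

      import math
      import string
      from collections import Counter

      STOP_WORDS = {"the", "a", "an", "is", "to", "how", "do", "i", "my"}

      def tokenize(text):
          text = text.lower().translate(str.maketrans("", "", string.punctuation))
          return [t for t in text.split() if t not in STOP_WORDS]

      def cosine(a, b):
          ca, cb = Counter(a), Counter(b)
          dot = sum(ca[t] * cb[t] for t in ca)
          norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
          return dot / norm if norm else 0.0

      def answer(request, knowledge_base, threshold=0.4):
          """knowledge_base: list of (request_text, answer_text); returns best answer or None."""
          best_sim, best_answer = 0.0, None
          tokens = tokenize(request)
          for kb_request, kb_answer in knowledge_base:
              sim = cosine(tokens, tokenize(kb_request))
              if sim > best_sim:
                  best_sim, best_answer = sim, kb_answer
          return (best_answer, best_sim) if best_sim >= threshold else (None, best_sim)

      kb = [("How do I reset my password?", "Use the account settings page."),
            ("How do I export a report?", "Open the report and choose Export.")]
      print(answer("I need to reset a password", kb))
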
  • Patent number: 11354485
    Abstract: In some embodiments, a method can include generating a resume document image having a standardized format, based on a resume document having a set of paragraphs. The method can further include executing a statistical model to generate an annotated resume document image from the resume document image. The annotated resume document image can indicate a bounding box and a paragraph type, for a paragraph from a set of paragraphs of the annotated resume document image. The method can further include identifying a block of text in the resume document corresponding to the paragraph of the annotated resume document image. The method can further include extracting the block of text from the resume document and associating the paragraph type to the block of text.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: June 7, 2022
    Assignee: iCIMS, Inc.
    Inventors: Eoin O'Gorman, Adrian Mihai
  • Patent number: 11347966
    Abstract: Artificial intelligence for machine learning to provide an optimized response sentence in reply to an input sentence.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: May 31, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Donghyeon Lee, Heriberto Cuayahuitl, Seonghan Ryu, Sungja Choi
  • Patent number: 11335348
    Abstract: The disclosure relates to a method, device, apparatus, and storage medium. The method includes recognizing voice data inputted by a user; obtaining a voice text corresponding to the voice data; obtaining, based on the voice text, a text to-be-input corresponding to the voice data, wherein the text to-be-input includes a plurality of words constituting a phrase or a sentence; and displaying the text to-be-input in an input textbox of an input interface.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: May 17, 2022
    Assignee: Beijing Xiaomi Mobile Software Co., Ltd.
    Inventors: Jiefu Tan, Senhua Chen, Dan Li, Xinyi Ren
  • Patent number: 11321675
    Abstract: Disclosed embodiments provide a computer-implemented technique for monitoring deviation from a meeting agenda. A meeting moderator and meeting agenda are obtained. Meeting dialog, along with facial expressions and/or body language of attendees, is monitored. Natural language processing, using entity detection, disambiguation, and other language processing techniques, determines a level of deviation in the meeting dialog from the meeting agenda. Computer-implemented image analysis techniques ascertain participant engagement from facial expressions and/or gestures of participants. A deviation alert is presented to the moderator and/or meeting participants when a deviation is detected, allowing the moderator to steer the meeting conversation back to the planned agenda.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: May 3, 2022
    Assignee: International Business Machines Corporation
    Inventors: Ira Allen, Blaine H. Dolph
  • Patent number: 11281465
    Abstract: A non-transitory computer readable recording medium has stored thereon instructions to be executed on a computer providing a terminal device with a game. The recording medium includes, for example, a main program described with Japanese text data, and language data in which English text data is associated with identification information (hash value). The instructions cause the computer to perform the steps of: setting a language to be displayed on a display section; generating a retrieval key by performing data processing on the first data to be displayed that is included in the main program when the second language is set as a language to be displayed; and extracting the second data to be displayed that includes the identification information corresponding to the generated key, and replacing the first data to be displayed with the second data to be displayed to display the second data to be displayed on the display section.
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: March 22, 2022
    Assignee: GREE, Inc.
    Inventors: Wataru Sakamoto, Ryosuke Nishida
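    The replacement mechanism above derives a retrieval key from the source-language display text and looks up the second-language text associated with that identification information. A minimal Python sketch using a SHA-256 prefix as the key; the hashing scheme and table contents are illustrative assumptions:

      import hashlib

      def retrieval_key(text):
          """Derive a stable identifier from the source-language display text."""
          return hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]

      # Language data shipped alongside the main program: key -> text per language.
      LANGUAGE_DATA = {
          "en": {retrieval_key("こうげき"): "Attack", retrieval_key("にげる"): "Run away"},
      }

      def display_text(source_text, language):
          """Return the source text itself, or its replacement for the selected language."""
          if language == "ja":
              return source_text
          return LANGUAGE_DATA.get(language, {}).get(retrieval_key(source_text), source_text)

      print(display_text("こうげき", "en"))  # Attack
      print(display_text("こうげき", "ja"))  # こうげき
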
  • Patent number: 11264035
    Abstract: A system and method of automatic transcription using a visual display device and an ear-wearable device. The system is configured to process an input audio signal at the display device to identify a first voice signal and a second voice signal from the input audio signal. A representation of the first voice signal and the second voice signal can be displayed on the display device and input can be received comprising the user selecting one of the first voice signal and the second voice signal as a selected voice signal. The system is configured to convert the selected voice signal to text data and display a transcript on the display device. The system can further generate an output signal sound at the first transducer of the ear-wearable device based on the input audio signal.
    Type: Grant
    Filed: January 2, 2020
    Date of Patent: March 1, 2022
    Assignee: Starkey Laboratories, Inc.
    Inventors: Achintya Kumar Bhowmik, David Alan Fabry, Amit Shahar, Clifford Anthony Tallman
  • Patent number: 11263409
    Abstract: A sign language translation system may capture infrared images of the formation of a sign language sign or sequence of signs. The captured infrared images may be used to produce skeletal joints data that includes a temporal sequence of 3D coordinates of skeletal joints of hands and forearms that produced the sign language sign(s). A hierarchical bidirectional recurrent neural network may be used to translate the skeletal joints data into a word or sentence of a spoken language. End-to-end sentence translation may be performed using a probabilistic connectionist temporal classification based approach that may not require pre-segmentation of the sequence of signs or post-processing of the translated sentence.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: March 1, 2022
    Assignee: BOARD OF TRUSTEES OF MICHIGAN STATE UNIVERSITY
    Inventors: Mi Zhang, Biyi Fang
  • Patent number: 11258872
    Abstract: Systems and methods are described herein for accelerating page rendering times. In some embodiments, an intermediary proxy device may be utilized to accelerate page rendering times experienced at an application operating at a user device. The intermediary proxy device may perform operations such as obtaining, on behalf of the application, first webpage content and second webpage content for a webpage, the first webpage content being obtained from the web server. In some embodiments, the intermediary proxy device may execute third-party code to obtain the second webpage content from a third party. In some embodiments, the intermediary proxy device may modify the second webpage content obtained from the content server from a first format to a second format and provide the first webpage content and the second webpage content, as modified, to the application. The application may then perform one or more operations to present data at a display of the user device.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: February 22, 2022
    Assignee: Amazon Technologies, Inc.
    Inventor: Edward Dean Gogel
  • Patent number: 11240390
    Abstract: A server apparatus, a voice operation system, and a voice operation method, each of which: acquires, from an image forming apparatus, a language type of a display language used for display at the image forming apparatus; stores the language type of the display language; acquires, from a speaker, voice operation that instructs to change the display language; identifies a language type of a targeted language based on the voice operation; and determines whether the language type of the display language matches the language type of the targeted language. Based on a determination that the language type of the display language does not match the language type of the targeted language, the server apparatus instructs the image forming apparatus to change from the language type of the display language to the language type of the targeted language.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: February 1, 2022
    Assignee: Ricoh Company, Ltd.
    Inventor: Hajime Kubota
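    The server-side decision above compares the stored display language with the language targeted by the voice operation and issues a change instruction only when they differ. A minimal Python sketch; the language-identification callable and the device-state dictionary are illustrative assumptions:

      def handle_voice_operation(device_state, utterance, identify_target_language, send_change):
          """device_state holds the stored 'display_language' of the image forming apparatus."""
          target = identify_target_language(utterance)
          if target is None or target == device_state["display_language"]:
              return False                      # languages match (or no target found): nothing to change
          send_change(target)                   # instruct the apparatus to switch languages
          device_state["display_language"] = target
          return True

      state = {"display_language": "ja"}
      changed = handle_voice_operation(
          state,
          "change the display language to English",
          identify_target_language=lambda u: "en" if "english" in u.lower() else None,
          send_change=lambda lang: print("instructing device to switch to", lang),
      )
      print(changed, state)
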
  • Patent number: 11227116
    Abstract: A translation device includes: a controller that extracts a proper noun candidate from an original sentence, generates a translation word of the proper noun candidate in a second language, generates a second translated sentence by translating the original sentence into the second language based on the proper noun candidate and the translation word of the proper noun candidate, and generates a second reverse-translated sentence by translating the second translated sentence into the first language based on the proper noun candidate and the translation word of the proper noun candidate; a display that displays the first reverse-translated sentence and the second reverse-translated sentence; and an operation unit that receives a user operation of selecting one of the first reverse-translated sentence and the second reverse-translated sentence.
    Type: Grant
    Filed: June 4, 2020
    Date of Patent: January 18, 2022
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventor: He Cai
  • Patent number: 11216621
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for extracting text from an input document to generate one or more inference boxes. Each inference box may be input into a machine learning network trained on training labels. Each training label provides a human-augmented version of output from a separate machine translation engine. A first translation may be generated by the machine learning network. The first translation may be displayed in a user interface with respect to display of an original version of the input document and a translated version of a portion of the input document.
    Type: Grant
    Filed: April 29, 2021
    Date of Patent: January 4, 2022
    Assignee: Vannevar Labs, Inc.
    Inventors: Daniel Goodman, Nathanial Hartman, Nathaniel Bush, Brett Granberg
  • Patent number: 11217224
    Abstract: A multilingual text-to-speech synthesis method and system are disclosed. The method includes receiving first learning data including a learning text of a first language and learning speech data of the first language corresponding to the learning text of the first language, receiving second learning data including a learning text of a second language and learning speech data of the second language corresponding to the learning text of the second language, and generating a single artificial neural network text-to-speech synthesis model by learning similarity information between phonemes of the first language and phonemes of the second language based on the first learning data and the second learning data.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: January 4, 2022
    Assignee: NEOSAPIENCE, INC.
    Inventors: Taesu Kim, Younggun Lee
  • Patent number: 11205055
    Abstract: A management device is connected to an apparatus and configured to manage the apparatus. The management device includes a multi-language display processing unit configured to, when an input unit receives a change request to change a language of messages to be displayed on a display unit, transmit standard language data to a translation device, and a translated data reception unit configured to acquire translated data translated into a language corresponding to a language environment of a mobile terminal on the translation device with reference to the standard language data. The multi-language display processing unit is configured to change a language of messages to be displayed on the display unit from a default language to a language corresponding to the language environment of the mobile terminal by using the translated data.
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: December 21, 2021
    Assignee: Mitsubishi Electric Corporation
    Inventor: Hiroaki Obana
  • Patent number: 11200510
    Abstract: A mechanism is provided for text classifier training. The mechanism receives a training set of text and class specification pairs to be used as a ground truth for training a text classifier machine learning model for a text classifier. Each text and class specification pair comprises a text and a corresponding class specification. A domain terms selector component identifies at least one domain term in the texts of the training set. A domain terms replacer component replaces the at least one identified domain term in the texts of the training set with a corresponding replacement term to form a revised set of text and class specification pairs. A text classifier trainer component trains the text classifier machine learning model using the revised set to form a trained text classifier machine learning model.
    Type: Grant
    Filed: July 12, 2016
    Date of Patent: December 14, 2021
    Assignee: International Business Machines Corporation
    Inventors: John M. Boyer, Kshitij P. Fadnis, Dinesh Raghu
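    The revision step above swaps identified domain terms in the training texts for replacement terms before the classifier is trained. A minimal Python sketch; the term table, labels, and regex-based replacement are illustrative assumptions:

      import re

      DOMAIN_TERMS = {"watson": "product_name", "armonk": "site_name"}

      def replace_domain_terms(text, table=DOMAIN_TERMS):
          for term, replacement in table.items():
              text = re.sub(rf"\b{re.escape(term)}\b", replacement, text, flags=re.IGNORECASE)
          return text

      def revise_training_set(pairs):
          """pairs: list of (text, class_label); returns the revised ground truth."""
          return [(replace_domain_terms(text), label) for text, label in pairs]

      ground_truth = [("How do I install Watson in Armonk?", "install_question"),
                      ("Watson keeps crashing on startup", "defect_report")]
      for text, label in revise_training_set(ground_truth):
          print(label, "->", text)
      # A classifier would then be trained on the revised pairs instead of the originals.
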
  • Patent number: 11190851
    Abstract: Various embodiments provide media based on a detected language being spoken. In one embodiment, the system electronically detects which language of a plurality of languages is being spoken by a user, such as during a conversation or while giving a voice command to the television. Based on which language of a plurality of languages is being spoken by the user, the system electronically presents media to the user that is in the detected language. For example, the media may be television channels and/or programs that are in the detected language and/or a program guide, such as a pop-up menu, including such media that are in the detected language.
    Type: Grant
    Filed: October 29, 2020
    Date of Patent: November 30, 2021
    Assignee: SLING MEDIA PVT. LTD.
    Inventor: Rajesh Palaniswami
  • Patent number: 11188714
    Abstract: An electronic apparatus includes a voice receiving unit, a display unit, and a control unit. The control unit is configured to perform control so as to identify the language of a voice input received by the voice receiving unit. In a case where it is determined that the identified language, which is a first language, is different from a second language set as a primary language in the electronic apparatus, the control unit is configured to display, on the display unit, a message for confirming whether to change the primary language from the second language to the first language, in both the first language and the second language.
    Type: Grant
    Filed: August 15, 2019
    Date of Patent: November 30, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Shimpei Kitai
  • Patent number: 11163954
    Abstract: Aspects of the invention include systems and methods for the propagation of annotation metadata to overlapping annotations of a synonymous type. A non-limiting example computer-implemented method includes performing a comparison of a set of annotations to detect a subset of annotations that are candidates of being synonymous based on a first analysis. Whether a first annotation of the subset of annotations is synonymous with a second annotation of the subset of annotations is determined based on a second analysis. Distinct annotation metadata of the first annotation are cross-propagated with annotation metadata of the second annotation based on the second analysis.
    Type: Grant
    Filed: September 18, 2019
    Date of Patent: November 2, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Scott Carrier, Brendan Bull, Dwi Sianto Mansjur, Paul Lewis Felt
  • Patent number: 11151335
    Abstract: A machine translation method includes using an encoder of a source language to determine a feature vector from a source sentence expressed in the source language, using an attention model of a target language to determine context information of the source sentence from the determined feature vector, and using a decoder of the target language to determine a target sentence expressed in the target language from the determined context information.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: October 19, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Hwidong Na
  • Patent number: 11137976
    Abstract: To provide audio information regarding locations within a geographic area, a client device provides an interactive three-dimensional (3D) display of panoramic street level imagery for a geographic area via a user interface. The panoramic street level imagery includes one or more landmarks. The client device receives a request for audio information describing a selected landmark within the interactive 3D display, and obtains the audio information describing the selected landmark from a server device in response to the request. Then the client device automatically presents the received audio information describing the selected landmark.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: October 5, 2021
    Assignee: GOOGLE LLC
    Inventors: Michael Edgcumbe, Rachel Inman, Kasey Klimes, Anna Roth
  • Patent number: 11132515
    Abstract: A method and a data processing device are disclosed for at least partially automatically transferring a word sequence composed in a source language into a word sequence in a target language with corresponding substantive content. By analyzing the word sequence and identifying terms with lexical ambiguity in the word sequence by comparing with a terminology database comprising terms with lexical ambiguity in the source language which are assigned a plurality of term identifiers depending on their number of meanings, an unambiguous term definition is provided for translating the word sequence into the target language by assigning a term identifier to the term with lexical ambiguity in the source language. This may render a machine translation less susceptible to errors.
    Type: Grant
    Filed: July 24, 2017
    Date of Patent: September 28, 2021
    Assignee: CLAAS Selbstfahrende Erntemaschinen GmbH
    Inventor: Ute Rummel
  • Patent number: 11115355
    Abstract: An information display method, apparatus, and devices are provided. The method includes providing an information editing interface; receiving a first type of information input by a user in the information editing interface; obtaining a second type of information corresponding to the first type of information, wherein the second type of information is translation information of the first type of information; and displaying the second type of information in the information editing interface. By adopting the technical solutions of the present disclosure, when the user inputs information in the information editing interface, the input information may be synchronously translated and the translated information displayed to the user in the information editing interface, so that while the user is inputting information the translation of the currently entered information can be seen in real time and modified, thereby improving the user experience.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: September 7, 2021
    Assignee: Alibaba Group Holding Limited
    Inventors: Zekun Yan, Yufeng Wang, Yuan Li, You Wu, Qiang Li
  • Patent number: 11100928
    Abstract: A configuration is implemented to establish, with a processor, an interactive voice response system that is operable in a first human-spoken language. Further, the configuration receives, with the processor, a communication request through a designated communication channel for a second human-spoken language. The second human-spoken language is distinct from the first human-spoken language. Moreover, the configuration generates, with the processor, a simulated interactive voice response system that provides a service in the second human-spoken language. The simulated interactive voice response system routes a request in the second human-spoken language to a machine interpreter that translates the request into the first human-spoken language. The translated request is provided to the interactive voice response system to process the request in the first human-spoken language.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: August 24, 2021
    Assignee: Language Line Services, Inc.
    Inventors: Jeffrey Cordell, James Boutcher
  • Patent number: 11074413
    Abstract: Computer-based implementations of context-sensitive salient keyword unit surfacing for multi-language comments are disclosed. A set of target keyword units in a target written language are caused by a computing system to be presented in a graphical user interface such as, for example, as part of a tag cloud or the like. The set of target keyword units are determined by the system by a context-sensitive mapping of a set of source keyword units in an intermediate written language to the set of target keyword units. The context sensitive mapping is constructed based on in-context machine translation of survey comments in the target language to the intermediate language and then identifying translation keyword unit pairs in the target language survey comments and the translated survey comments that represent a mapping of the in-context translation of a keyword unit in the target language to a keyword unit in the intermediate language.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: July 27, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Xiaoran Zhang, Goutham Kurra, Chih-Po Wen, Shane Combest
  • Patent number: 11069341
    Abstract: The speech correction system includes a storage device, an audio receiver and a processing device. The processing device includes a speech recognition engine and a determination module. The storage device is configured to store a database. The audio receiver is configured to receive an audio signal. The speech recognition engine is configured to identify a key speech pattern in the audio signal and generate a candidate vocabulary list and a transcode corresponding to the key speech pattern; wherein the candidate vocabulary list includes a candidate vocabulary corresponding to the key speech pattern and a vocabulary score corresponding to the candidate vocabulary. The determination module is configured to determine whether the vocabulary score is greater than a score threshold. If the vocabulary score is greater than the score threshold, the determination module stores the candidate vocabulary corresponding to the vocabulary score in the database.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: July 20, 2021
    Assignee: QUANTA COMPUTER INC.
    Inventors: Yi-Ling Chen, Chih-Wei Sung, Yu-Cheng Chien, Kuan-Chung Chen
  • Patent number: 11032676
    Abstract: A method and system for transliteration of a textual message are provided. The method includes receiving, from a first network texting element, the textual message sent from a first mobile device and destined to a second mobile device, wherein the textual message comprises a first character set; determining if the first character set is supported by the second mobile device; determining a second character set supported by the second mobile device when the first character set is not supported by the second mobile device; transliterating the textual message to the second character set; and sending the transliterated textual message to a second network texting element.
    Type: Grant
    Filed: September 10, 2013
    Date of Patent: June 8, 2021
    Assignee: VascoDe Technologies Ltd.
    Inventors: Dorron Mottes, Gil Zaidman, Arnon Yaar
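    The delivery logic above checks whether the destination device supports the message's character set and transliterates the text when it does not. A minimal Python sketch; the capability table, the pure-ASCII-as-GSM-7 simplification, and the transliteration map are illustrative assumptions:

      DEVICE_CHARSETS = {"feature-phone-1": {"GSM-7"}, "smartphone-2": {"GSM-7", "UCS-2"}}

      # Toy transliteration map from a few non-ASCII characters to ASCII approximations.
      TRANSLIT = {"é": "e", "ü": "u", "ß": "ss", "α": "a"}

      def charset_of(text):
          # Simplification: treat pure-ASCII text as GSM-7, everything else as UCS-2.
          return "GSM-7" if text.isascii() else "UCS-2"

      def transliterate(text):
          return "".join(TRANSLIT.get(ch, ch if ch.isascii() else "?") for ch in text)

      def deliver(text, destination):
          supported = DEVICE_CHARSETS.get(destination, {"GSM-7"})
          if charset_of(text) in supported:
              return text                      # character set supported: deliver unchanged
          return transliterate(text)           # otherwise transliterate into a supported set

      print(deliver("Café müsli", "feature-phone-1"))  # Cafe musli
      print(deliver("Café müsli", "smartphone-2"))     # Café müsli (UCS-2 supported)
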
  • Patent number: 11003852
    Abstract: In an embodiment, a method includes presenting, on a display, sample text in a given language to a user. The method further includes recording eye fixation times for each word of the sample text for the user and recording saccade times between each pair of fixations of the sample text. The method further includes comparing features of the gaze pattern of the user to features of a gaze pattern of a plurality of training readers. Each training reader (e.g., training user) has a known native language. The method further generates a probability of at least one estimated native language of the user based on the results of the comparison.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: May 11, 2021
    Assignee: Massachusetts Institute of Technology
    Inventors: Yevgeni Berzak, Boris Katz
  • Patent number: 10977451
    Abstract: The language translation system comprises a headphone, a personal data device, and a communication link. The communication link exchanges data between the headphone and the personal data device. The headphone captures the audible data associated with a first natural language. The headphone transmits the captured audible data associated with the first natural language to the personal data device. The headphone translates the captured audible data associated with the first natural language into a second natural language. The headphone simultaneously repeats the audible data associated with the first natural language and announces a translation in the second natural language over a plurality of speakers. The personal data device receives the captured audible data associated with the first natural language and associates the captured audible data with visual data associated with the second natural language. The personal data device displays the associated visual data.
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: April 13, 2021
    Inventor: Benjamin Muiruri
  • Patent number: 10971132
    Abstract: An electronic system is provided. The electronic system includes a host, an audio output device and a display. The host includes an audio processing module, a relay processing module, a smart interpreter engine and a driver. The audio processing module is utilized for acquiring audio data corresponding to a first language from audio streams processed by an application program executed on the host. The smart interpreter engine is utilized for converting the audio data corresponding to the first language into text data corresponding to a second language. The relay processing module is utilized for transmitting the text data corresponding to the second language to the display for displaying. The driver is utilized for converting the audio data corresponding to the first language into an analog audio signal corresponding to the first language and transmitting the analog audio signal corresponding to the first language to the audio output device for playback.
    Type: Grant
    Filed: June 3, 2019
    Date of Patent: April 6, 2021
    Assignee: ACER INCORPORATED
    Inventors: Gianna Tseng, Szu-Ting Chou, Shang-Yao Lin, Shih-Cheng Huang
  • Patent number: 10956725
    Abstract: Methods, apparatus and systems for recognizing sign language movements using multiple input and output modalities. One example method includes capturing a movement associated with the sign language using a set of visual sensing devices, the set of visual sensing devices comprising multiple apertures oriented with respect to the subject to receive optical signals corresponding to the movement from multiple angles, generating digital information corresponding to the movement based on the optical signals from the multiple angles, collecting depth information corresponding to the movement in one or more planes perpendicular to an image plane captured by the set of visual sensing devices, producing a reduced set of digital information by removing at least some of the digital information based on the depth information, generating a composite digital representation by aligning at least a portion of the reduced set of digital information, and recognizing the movement based on the composite digital representation.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: March 23, 2021
    Assignee: AVODAH, INC.
    Inventors: Michael Menefee, Dallas Nash, Trevor Chandler
  • Patent number: 10933321
    Abstract: The present technology relates to an information processing device that includes an acquisition unit that acquires a recognition result indicating at least one of a user movement or state recognized by a recognition unit utilizing a detection result from one or a plurality of sensors, and a control unit that, in a case of determining that the recognition result indicates a specific user movement or state, controls a degree of reflection in which information related to the specific user movement or state is reflected in notification information that transmits the information related to the specific user movement or state to a target different from the user, in accordance with the target.
    Type: Grant
    Filed: May 6, 2019
    Date of Patent: March 2, 2021
    Assignee: SONY CORPORATION
    Inventors: Yusuke Nakagawa, Tomohisa Takaoka, Shinichi Kawano
  • Patent number: 10929455
    Abstract: Embodiments build a knowledge base that includes a list of acronyms and their expansions. The list of acronyms may be associated with a particular organization, e.g. a product team, such that the acronym may have a different meaning to a different organization. In some embodiments, acronyms and their expansions are extracted from artifacts associated with the organization, e.g. documents, emails, attachments, calendar items, etc. Multiple potential definitions identified within the artifacts may be ranked based on contextual data extracted from the artifacts, e.g. who authored the artifact, when was the artifact modified, how often did the author use the acronym, an author's rank in the organization, how long has an author been part of the organization, an author's relationship to other authors, etc. By basing the analysis on artifacts associated with the organization, the resulting definitions may be more accurate than if broader resources, such as dictionary definitions, were used.
    Type: Grant
    Filed: April 25, 2018
    Date of Patent: February 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Divya Jetley, Hong Hong, Xiaojiang Huang, Xiaocheng Deng, Yu Gu
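    The ranking described above scores candidate acronym expansions with weighted contextual signals extracted from an organization's artifacts. A minimal Python sketch; the signal names, weights, and saturation constants are illustrative assumptions:

      from dataclasses import dataclass
      from typing import List

      @dataclass
      class Candidate:
          expansion: str
          author_uses: int        # how often the author used the acronym with this expansion
          author_rank: float      # 0..1, the author's seniority in the organization
          days_since_edit: int    # recency of the artifact the expansion came from

      WEIGHTS = {"author_uses": 0.5, "author_rank": 0.3, "recency": 0.2}

      def score(candidate: Candidate) -> float:
          usage = min(candidate.author_uses / 10.0, 1.0)             # saturate at 10 uses
          recency = 1.0 / (1.0 + candidate.days_since_edit / 30.0)   # newer artifacts score higher
          return (WEIGHTS["author_uses"] * usage
                  + WEIGHTS["author_rank"] * candidate.author_rank
                  + WEIGHTS["recency"] * recency)

      def best_expansion(candidates: List[Candidate]) -> str:
          return max(candidates, key=score).expansion

      print(best_expansion([
          Candidate("customer relationship management", author_uses=12, author_rank=0.4, days_since_edit=10),
          Candidate("change request manager", author_uses=2, author_rank=0.9, days_since_edit=200),
      ]))
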