Translation Patents (Class 704/277)
  • Patent number: 11295742
    Abstract: In a voice output apparatus, an acquisition unit acquires speech from an occupant of a vehicle. A determination unit determines whether or not the acquired speech asks for repetition or rephrasing. When it does, a classification unit classifies the speech according to the type of asking. An output unit outputs a voice sound in accordance with the classified type of asking, based on the content of the voice sound that is the target of the asking.
    Type: Grant
    Filed: December 18, 2019
    Date of Patent: April 5, 2022
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Kazuya Nishimura, Yoshihiro Oe, Naoki Uenoyama, Hirofumi Kamimaru
  • Patent number: 11272137
    Abstract: This disclosure describes techniques that include modifying text associated with a sequence of images or a video sequence to thereby generate new text and overlaying the new text as captions in the video sequence. In one example, this disclosure describes a method that includes receiving a sequence of images associated with a scene occurring over a time period; receiving audio data of speech uttered during the time period; transcribing into text the audio data of the speech, wherein the text includes a sequence of original words; associating a timestamp with each of the original words during the time period; generating, responsive to input, a sequence of new words; and generating a new sequence of images by overlaying each of the new words on one or more of the images.
    Type: Grant
    Filed: February 8, 2021
    Date of Patent: March 8, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Vincent Charles Cheung, Marc Layne Hemeon, Nipun Mathur
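The timestamp-and-overlay flow in the abstract above can be sketched briefly. This is an illustrative toy, not the patented implementation: the even spread of timestamps, the frame rate, and all names are assumptions.

```python
# Each transcribed word gets a timestamp; a replacement word is overlaid
# on whichever frames fall inside the original word's time window.

def assign_timestamps(words, start, end):
    """Spread word timestamps evenly across the utterance interval."""
    step = (end - start) / len(words)
    return [(w, start + i * step) for i, w in enumerate(words)]

def frames_for_word(timestamp, duration, fps=30):
    """Return the frame indices a word's caption should overlay."""
    first = int(timestamp * fps)
    last = int((timestamp + duration) * fps)
    return list(range(first, last + 1))

original = assign_timestamps(["hello", "world"], start=0.0, end=1.0)
# replace each original word with a new word, keeping its timestamp
new_words = {"hello": "hi", "world": "earth"}
captions = [(new_words[w], t) for w, t in original]
```

A real system would take per-word timestamps from the ASR engine rather than spreading them evenly.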
  • Patent number: 11256881
    Abstract: Techniques are disclosed for data valuation using language-neutral content addressing techniques in an information processing system. For example, a method comprises the following steps. The method obtains original content in an original language. The method generates a language-neutral representation of the original content. The method then generates an object comprising the language-neutral representation of the original content and at least one valuation algorithm, wherein the at least one valuation algorithm is configured to perform content valuation. The method generates a cryptographic hash value of the object, and stores the object for access using the cryptographic hash value.
    Type: Grant
    Filed: January 24, 2019
    Date of Patent: February 22, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Stephen J. Todd, Mikhail Danilov
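The content-addressing step above can be sketched with a hash-keyed store. This is a minimal sketch under stated assumptions: sorted lowercase tokens stand in for the patent's language-neutral representation, and the valuation algorithm is reduced to a name.

```python
import hashlib
import json

def neutral_representation(text):
    """Crude stand-in for a language-neutral representation."""
    return sorted(set(text.lower().split()))

store = {}

def put(text, valuation_algorithm="token_count"):
    """Bundle representation and valuation, store under a SHA-256 key."""
    obj = {"repr": neutral_representation(text),
           "valuation": valuation_algorithm}
    digest = hashlib.sha256(
        json.dumps(obj, sort_keys=True).encode()).hexdigest()
    store[digest] = obj
    return digest

key = put("Hello hello world")
```

Because the key is derived from the normalized content, texts that normalize identically address the same stored object.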
  • Patent number: 11250837
    Abstract: A speech synthesis system includes an operating interface, a storage unit and a processor. The operating interface provides a plurality of language options for a user to select one output language option therefrom. The storage unit stores a plurality of acoustic models. Each acoustic model corresponds to one of the language options and includes a plurality of phoneme labels corresponding to a specific vocal. The processor receives a text file and generates output speech data corresponding to the specific vocal according to the text file, a speech synthesizer, and one of the acoustic models which corresponds to the output language option.
    Type: Grant
    Filed: December 1, 2019
    Date of Patent: February 15, 2022
    Assignee: INSTITUTE FOR INFORMATION INDUSTRY
    Inventors: Guang-Feng Deng, Cheng-Hung Tsai, Han-Wen Liu, Chih-Chung Chien, Chuan-Wen Chen
  • Patent number: 11238348
    Abstract: Systems and methods for neural machine translation are provided. In one example, a neural machine translation system translates text and comprises processors and a memory storing instructions that, when executed by at least one processor among the processors, cause the system to perform operations comprising, at least, obtaining a text as an input to a neural network system, supplementing the input text with meta information as an extra input to the neural network system, and delivering an output of the neural network system to a user as a translation of the input text, leveraging the meta information for translation.
    Type: Grant
    Filed: May 2, 2017
    Date of Patent: February 1, 2022
    Assignee: eBay Inc.
    Inventors: Evgeny Matusov, Wenhu Chen, Shahram Khadivi
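One widely used way to supplement a translation input with meta information, and a plausible reading of the abstract above (the patent does not pin down this mechanism), is to prepend the metadata to the source sentence as special tokens. The token format here is an assumption.

```python
def add_meta_tokens(source_tokens, meta):
    """Prefix the source with <key=value> pseudo-tokens the
    neural network can condition on."""
    prefix = [f"<{k}={v}>" for k, v in sorted(meta.items())]
    return prefix + source_tokens

tokens = add_meta_tokens(["cheap", "shoes"],
                         {"category": "fashion", "tgt_lang": "de"})
```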
  • Patent number: 11222652
    Abstract: A learning based system such as a deep neural network (DNN) is disclosed to estimate a distance from a device to a speech source. The deep learning system may estimate the distance of the speech source at each time frame based on speech signals received by a compact microphone array. Supervised deep learning may be used to learn the effect of the acoustic environment on the non-linear mapping between the speech signals and the distance using multi-channel training data. The deep learning system may estimate the direct speech component that contains information about the direct signal propagation from the speech source to the microphone array and the reverberant speech signal that contains the reverberation effect and noise. The deep learning system may extract signal characteristics of the direct signal component and the reverberant signal component and estimate the distance based on the extracted signal characteristics using the learned mapping.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: January 11, 2022
    Assignee: APPLE INC.
    Inventors: Ante Jukic, Mehrez Souden, Joshua D. Atkins
  • Patent number: 11222373
    Abstract: In an example embodiment, text is received at an ecommerce service from a first user, the text in a first language and pertaining to a first listing on the ecommerce service. Contextual information about the first listing may be retrieved. The text may be translated to a second language. Then, a plurality of text objects, in the second language, similar to the translated text may be located in a database, each of the text objects corresponding to a listing. Then, the plurality of text objects similar to the translated text may be ranked based on a comparison of the contextual information about the first listing and contextual information stored in the database for the listings corresponding to the plurality of text objects similar to the translated text. At least one of the ranked plurality of text objects may then be translated to the first language.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: January 11, 2022
    Assignee: eBay Inc.
    Inventor: Yan Chelly
  • Patent number: 11205416
    Abstract: An utterance detection apparatus includes a processor configured to: detect an utterance start based on a first sound pressure based on first audio data acquired from a first microphone and a second sound pressure based on second audio data acquired from a second microphone; suppress an utterance start direction sound pressure when the utterance start direction sound pressure, which is one of the first sound pressure and the second sound pressure being larger at a time point of detecting the utterance start, falls below a non-utterance start direction sound pressure, which is the other one of the first sound pressure and the second sound pressure being smaller at the time point of detecting the utterance start; and detect an utterance end based on the suppressed utterance start direction sound pressure.
    Type: Grant
    Filed: October 30, 2019
    Date of Patent: December 21, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Nobuyuki Washio, Chisato Shioda, Masanao Suzuki
  • Patent number: 11198196
    Abstract: The present disclosure relates to a method of modifying a surface of a material, in situ, while the material is being used to form or modify a portion of a part, removing flaws layer by layer to improve a layerwise-built part or a coating. The method may involve generating first, second and third beams. The third beam may act on a surface of a material to heat a portion of the surface into a flowable state and thus modify a surface characteristic of the material. The first beam may control an optically addressable light valve (OALV) which modifies an energy of the third beam. The second beam may control an optically addressable electric field modulator (OAEFM) to generate an electric field in a vicinity of the surface and to influence a movement of the portion of material while it is in the flowable state. The beams are modulated based on a sensing element feedback loop.
    Type: Grant
    Filed: March 21, 2018
    Date of Patent: December 14, 2021
    Assignee: Lawrence Livermore National Security, LLC
    Inventors: Selim Elhadj, Jae Hyuck Yoo
  • Patent number: 11183173
    Abstract: Disclosed is an artificial intelligence voice recognition apparatus including: a microphone configured to receive a voice command; a memory configured to store a first voice recognition algorithm; a communication module configured to transmit the voice command to a server system and receive first voice recognition algorithm-related update data from the server system; and a controller configured to perform control to update the first voice recognition algorithm, which is stored in the memory, based on the first voice recognition algorithm-related update data. Accordingly, the voice recognition apparatus is able to provide a voice recognition algorithm fitting to a user's characteristics.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: November 23, 2021
    Assignee: LG ELECTRONICS INC.
    Inventors: Joongeon Park, Duho Ro, Sungshin Lee
  • Patent number: 11182180
    Abstract: Methods and systems for previewing an application user interface (UI) for multiple locales are described herein. A first device, running an application capable of rendering views for multiple locales, may receive selections of a first locale and a second locale from a second device via a web console running on the second device. The first device may render a plurality of UI screens including a first UI screen, corresponding to a current view of the application, for the first locale, and a second UI screen, corresponding to the current view, for the second locale. The first device may generate screenshots of the plurality of UI screens and send the generated screenshots to the second device to be displayed on the web console. A developer of the application may inspect the multi-locale UI of the application through the displayed screenshots and make adjustments as necessary.
    Type: Grant
    Filed: August 2, 2019
    Date of Patent: November 23, 2021
    Assignee: Citrix Systems, Inc.
    Inventors: Yang Wang, Jingxin Peng
  • Patent number: 11159684
    Abstract: An image forming system is configured to receive an input of natural language speech. Regardless of whether the natural language speech includes a combination of first words or second words, the image forming system can recognize the natural language speech as an instruction to select a specific print setting displayed on a screen.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: October 26, 2021
    Assignee: Canon Kabushiki Kaisha
    Inventors: Toru Takahashi, Yuji Naya, Takeshi Matsumura
  • Patent number: 11119722
    Abstract: An embodiment of the present invention controls a mobile body device to carry out a natural action. A mobile body control device (1) includes: an image acquiring section (21) configured to acquire an image of a surrounding environment of a specific mobile body device; and a control section (2) which is configured to (i) refer to the image and infer, in accordance with the image, a scene in which the specific mobile body device is located, (ii) determine an action in accordance with the scene inferred, and (iii) control the mobile body device to carry out the action determined.
    Type: Grant
    Filed: August 10, 2017
    Date of Patent: September 14, 2021
    Assignee: SHARP KABUSHIKI KAISHA
    Inventor: Shinji Tanaka
  • Patent number: 11100926
    Abstract: The disclosure provides an intelligent voice system and a method for controlling a projector. The system includes a voice assistant, a cloud service platform, a projector, and a management server. When the voice assistant receives a voice signal for controlling the projector, the voice assistant extracts keywords from the voice signal and transmits the keywords to the cloud service platform, wherein the keywords include an alias corresponding to the projector and a first control command, and the cloud service platform includes second control commands. The cloud service platform analyzes the first control command, retrieves the corresponding second control command according to the first control command, and transmits the alias of the projector and the corresponding second control command to the management server. The management server accesses/controls the projector in response to the alias and adjusts the projector as a first operating state according to the corresponding second control command.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: August 24, 2021
    Assignee: Coretronic Corporation
    Inventors: Ming-Cheng Lin, Yu-Meng Chen, Wei-Hsin Kan, Ji-Cheng Dai
  • Patent number: 11094326
    Abstract: One embodiment of the present invention sets forth a technique for performing ensemble modeling of ASR output. The technique includes generating input to a machine learning model from snippets of voice activity in the recording and transcriptions produced by multiple automatic speech recognition (ASR) engines from the recording. The technique also includes applying the machine learning model to the input to select, based on transcriptions of the snippet produced by at least one contributor ASR engine of the multiple ASR engines and at least one selector ASR engine of the multiple ASR engines, a best transcription of the snippet from possible transcriptions of the snippet produced by the multiple ASR engines. The technique further includes storing the best transcription in association with the snippet.
    Type: Grant
    Filed: August 6, 2018
    Date of Patent: August 17, 2021
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Ahmad Abdulkader, Mohamed Gamal Mohamed Mahmoud
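The ensemble selection step above can be illustrated with a simple proxy. The patent applies a learned model over contributor and selector engines; majority vote among engine outputs is a deliberately simplified stand-in, and all names are assumptions.

```python
from collections import Counter

def best_transcription(candidates):
    """Pick, per snippet, the transcription most ASR engines agree on.
    candidates: list of transcriptions, one per engine."""
    counts = Counter(candidates)
    # most_common breaks ties by first-insertion order (CPython 3.7+)
    return counts.most_common(1)[0][0]

snippet_outputs = ["set a timer", "set a timer", "set the timer"]
chosen = best_transcription(snippet_outputs)
```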
  • Patent number: 11062615
    Abstract: Embodiments of the present application relate to language learning techniques. According to exemplary embodiments, a pronunciation dictionary and/or a verse dictionary may be provided. The pronunciation dictionary and/or verse dictionary may be used to train a student's pronunciation of words and phrases, particularly in cantillated languages. According to some embodiments, words appearing on a screen may be visually distinguished (e.g., highlighted) in a sequence of a text. The text may be made to change smoothly and continuously in a manner that allows the changes to be easily followed by a student with a disability. Further embodiments provide techniques for performing generalized forced alignment. For example, forced alignment may be performed based on a phonetic analysis, based on an analysis of pitch patterns, and/or may involve breaking a large audio file into smaller audio files on a verse-by-verse basis.
    Type: Grant
    Filed: November 24, 2015
    Date of Patent: July 13, 2021
    Assignee: Intelligibility Training LLC
    Inventors: Michael Speciner, Norman Abramovitz, Alice J. Stiebel, Jonathan Stiebel
  • Patent number: 11062228
    Abstract: Examples of the present disclosure describe systems and methods of transfer learning techniques for disparate label sets. In aspects, a data set may be accessed on a server device. The data set may comprise labels and word sets associated with the labels. The server device may induce label embedding within the data set. The embedded labels may be represented by multi-dimensional vectors that correspond to particular labels. The vectors may be used to construct label mappings for the data set. The label mappings may be used to train a model to perform domain adaptation or transfer learning techniques. The model may be used to provide results to a statement/query or to train a different model.
    Type: Grant
    Filed: July 6, 2015
    Date of Patent: July 13, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Young-Bum Kim, Ruhi Sarikaya
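The label-mapping idea above can be sketched with toy vectors. The hand-made two-dimensional embeddings below stand in for the induced label embeddings; mapping a source-domain label to the nearest target-domain label by cosine similarity is one plausible construction, not the patent's exact method.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def map_label(label, source_emb, target_emb):
    """Map a source-domain label to its nearest target-domain label."""
    vec = source_emb[label]
    return max(target_emb, key=lambda t: cosine(vec, target_emb[t]))

source_emb = {"city": [1.0, 0.1], "date": [0.1, 1.0]}
target_emb = {"location": [0.9, 0.2], "time": [0.0, 0.8]}
mapped = map_label("city", source_emb, target_emb)
```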
  • Patent number: 11056116
    Abstract: Local wireless networking is used to establish a multi-language translation group between users that speak more than two different languages such that when one user speaks into his or her computing device, that user's computing device may perform automated speech recognition, and in some instances, translation to a different language, to generate non-audio data for communication to the computing devices of other users in the group. The other users' computing devices then generate spoken audio outputs suitable for their respective users using the received non-audio data. The generation of the spoken audio outputs on the other users' computing devices may also include performing a translation, thereby enabling each user to receive spoken audio output in their desired language in response to speech input from another user, and irrespective of the original language of the speech input.
    Type: Grant
    Filed: April 11, 2018
    Date of Patent: July 6, 2021
    Assignee: GOOGLE LLC
    Inventors: Shijing Xian, Deric Cheng
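The message flow above can be modeled in miniature: the speaker's device sends text (non-audio data), and each listener's device translates into its own preferred language before synthesizing speech. Translation is stubbed with lookup tables; the networking and text-to-speech layers are omitted.

```python
# Illustrative phrase tables; a real system would call an MT service.
TRANSLATIONS = {("en", "es"): {"hello": "hola"},
                ("en", "fr"): {"hello": "bonjour"}}

def translate(text, src, dst):
    if src == dst:
        return text
    return TRANSLATIONS[(src, dst)].get(text, text)

def broadcast(speaker_text, speaker_lang, group_langs):
    """Return what each member's device would speak aloud."""
    return {lang: translate(speaker_text, speaker_lang, lang)
            for lang in group_langs}

outputs = broadcast("hello", "en", ["es", "fr"])
```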
  • Patent number: 11049501
    Abstract: One embodiment provides a method that includes obtaining a default language corpus. A second language corpus is obtained based on a second language preference. A first transcription of an utterance is received using the default language corpus and natural language processing (NLP). At least one problem word in the first transcription is determined based on its grammatical relevance to neighboring words in the first transcription. Upon determining that a first probability score is below a first threshold, an acoustic lookup is performed for an audible match for the problem word in the first transcription based on acoustical relevance. Upon determining that a second probability score is below a second threshold, it is determined whether a match for the problem word exists in the second language corpus. Upon determining that the match exists in the second language corpus, a second transcription for the utterance is provided.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: June 29, 2021
    Assignee: International Business Machines Corporation
    Inventors: Raphael Arar, Chris Kau, Robert J. Moore, Chung-hao Tan
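The cascaded fallback described above can be sketched as a three-stage check. The corpora, scores, thresholds, and sounds-like table below are illustrative stubs, not the patent's models.

```python
SOUNDS_LIKE = {"reis": ["rice"]}            # acoustic neighbours
DEFAULT_CORPUS = {"rice", "rise", "order"}
SECONDARY_CORPUS = {"reis"}                 # e.g. a German corpus

def resolve(word, grammar_score, acoustic_score, t1=0.5, t2=0.5):
    """Keep a confident word; otherwise try an acoustic match in the
    default corpus; otherwise fall back to the second-language corpus."""
    if grammar_score >= t1:
        return word, "default"
    matches = [m for m in SOUNDS_LIKE.get(word, [])
               if m in DEFAULT_CORPUS]
    if matches and acoustic_score >= t2:
        return matches[0], "acoustic"
    if word in SECONDARY_CORPUS:
        return word, "secondary"
    return word, "unresolved"
```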
  • Patent number: 10996930
    Abstract: Assisting automation of repeated edits of code by automated generation of rules that, when applied, perform code transformations. The transformations are synthesized while observing developers make repeated code edits, and automatically perform similar modifications as those observed. This synthesized transformation defines an initial state of code to which the transformation can be applied, and defines a modification from that initial state. A rule is then generated that includes a detector mechanism that, when selected, is configured to find locations in code that have the defined corresponding initial state of the corresponding transformation. Thus, the transformation may be applied to any code to which the rule is exposed.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: May 4, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Mark Alistair Wilson-Thomas, Gustavo Araujo Soares, Peter Groenewegen, Jonathan Preston Carter, German David Obando Chacon
  • Patent number: 10977513
    Abstract: A method for identifying information carried on a sheet is disclosed. The method comprises: identifying, using one or more computing devices, each of one or more areas on the sheet based on an image of the sheet and a pre-trained first model, wherein each of the one or more areas is associated with all or part of the information carried on the sheet, and the first model is a neural network based model; and identifying, using one or more computing devices, characters in each of the one or more areas based on the image of the sheet, each of the one or more areas and a pre-trained second model so as to determine the information carried on the sheet, wherein the second model is a neural network based model.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: April 13, 2021
    Assignee: Hangzhou Glorify Software Limited
    Inventors: Qingsong Xu, Mingquan Chen, Huan Luo
  • Patent number: 10977444
    Abstract: Disclosed is a method and a system for identifying key terms in a digital document. The method comprises providing the digital document and analysing the digital document to identify key terms in the digital document. The digital document includes a first text in a first language. Furthermore, analysing the digital document comprises translating the first text in the first language to obtain a second text in a second language, translating the first text in the first language to obtain a third text in a third language, translating the obtained second text in the second language to obtain a fourth text in the third language, comparing at least one pair of first text, second text, third text and fourth text to identify at least one set of similar text between the compared at least one pair, and processing the set of similar text to obtain key terms in the digital document.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: April 13, 2021
    Assignee: Innoplexus AG
    Inventors: Gaurav Tripathi, Vatsal Agarwal, Sudhanshu Shekhar
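The comparison step above can be illustrated with stubbed translators. Treating "key terms" as the words whose two independent translation routes into the third language agree is one plausible reading of the abstract, and the tiny phrase tables are assumptions.

```python
# Toy phrase tables standing in for real MT systems.
EN_TO_DE = {"contract": "vertrag", "the": "der", "signed": "signiert"}
EN_TO_FR = {"contract": "contrat", "the": "le", "signed": "signe"}
DE_TO_FR = {"vertrag": "contrat", "der": "le", "signiert": "paraphe"}

def translate(words, table):
    return [table.get(w, w) for w in words]

def key_terms(first_text):
    second = translate(first_text, EN_TO_DE)   # L1 -> L2
    third = translate(first_text, EN_TO_FR)    # L1 -> L3 (direct)
    fourth = translate(second, DE_TO_FR)       # L1 -> L2 -> L3 (pivot)
    # keep terms whose direct and pivot routes into L3 agree
    return [w for w, a, b in zip(first_text, third, fourth) if a == b]

terms = key_terms(["the", "signed", "contract"])
```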
  • Patent number: 10963223
    Abstract: Assisting automation of repeated edits of code by automated generation of rules that, when applied, perform code transformations. The transformations are synthesized while observing developers make repeated code edits, and automatically perform similar modifications as those observed. This synthesized transformation defines an initial state of code to which the transformation can be applied, and defines a modification from that initial state. A rule is then generated that includes a detector mechanism that, when selected, is configured to find locations in code that have the defined corresponding initial state of the corresponding transformation. Thus, the transformation may be applied to any code to which the rule is exposed.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: March 30, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Mark Alistair Wilson-Thomas, Gustavo Araujo Soares, Peter Groenewegen, Jonathan Preston Carter, German David Obando Chacon
  • Patent number: 10936813
    Abstract: A context-aware spell checker to detect non-word spelling errors and/or suggest corrections. The context-aware spell checker may utilize n-gram conditional probabilities to suggest corrections based on a context of the non-word spelling error. The suggested corrections may be presented as a prioritized list of words based on calculated scores of the n-gram conditional probabilities. Utilizing n-gram conditional probabilities may permit the context-aware spell checker to be integrated across a multitude of languages or configured according to a particular language. The context-aware spell checker may perform spell checking and suggest corrections in real-time, or may be at least partially automated, to reduce user perceived latency and delay.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: March 2, 2021
    Assignee: Amazon Technologies, Inc.
    Inventor: Prabhakar Gupta
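The n-gram ranking described above can be sketched in a few lines. A tiny corpus, bigram counts, and add-one smoothing stand in for the patent's language models and scoring; candidate generation is assumed to have happened already.

```python
from collections import Counter

corpus = "i like green tea i like green apples we like tea".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def conditional(word, prev):
    """P(word | prev) with add-one smoothing over the toy vocabulary."""
    vocab = len(unigrams)
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)

def rank_corrections(prev, candidates):
    """Order candidate corrections by how well they fit the context."""
    return sorted(candidates, key=lambda w: conditional(w, prev),
                  reverse=True)

# non-word "grean" after "like": context favours "green" over "groan"
ranked = rank_corrections("like", ["groan", "green"])
```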
  • Patent number: 10915183
    Abstract: An electronic messaging method is provided, the method implemented by one or more processors. The method includes launching a textual communication application by a user device including a user interface. In the user interface a data entry interface is enabled including language elements in a particular language determined based on an international calling code of a stored textual communication involving a user of the user device or a language of a stored textual communication involving a user of the user device, the stored textual communication comprising text transmitted by the user of the user device or text received by the user of the user device from a particular party. Textual input is received via the data entry interface including the language elements in the particular language.
    Type: Grant
    Filed: March 30, 2018
    Date of Patent: February 9, 2021
    Assignee: Avast Software s.r.o.
    Inventors: Thomas Wespel, Rajarshi Gupta
  • Patent number: 10909582
    Abstract: Systems and methods for providing authentication circles to pursue financial goals and/or share expenses with others are provided. One or more provider computing systems are communicatively coupled to one or more user devices. Users may join a circle and make contributions via electronic messages that may allow for acceptance in a one-click fashion. Members may, for example, plan for and share expenses for a trip and compare the expenses with budgets.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: February 2, 2021
    Assignee: Wells Fargo Bank, N.A.
    Inventors: Balin Kina Brandt, Laura Fisher, Marie Jeannette Floyd, Katherine J. McGee, Teresa Lynn Rench, Sruthi Vangala
  • Patent number: 10848443
    Abstract: Embodiments of the disclosure provide systems and methods for utilizing chatbots to support interactions with human users and, more particularly, for using multiple chatbots from different communication channels and/or in different domains to support an interaction with a user on a single communication channel and/or in a single domain. Generally speaking, embodiments of the present disclosure are directed to allowing multiple chatbots that operate on different communication channels and/or in different domains to socialize amongst one another. This socialization of chatbots operating on different communication channels and/or in different domains allows each chatbot to call upon one another to help engage in a transaction with a customer. In addition to facilitating a communication session in which two or more chatbots are socialized together to help prepare coherent responses to user inputs, the socialization of chatbots can also facilitate the automated training of chatbots.
    Type: Grant
    Filed: July 23, 2018
    Date of Patent: November 24, 2020
    Assignee: Avaya Inc.
    Inventor: Ahmed Helmy
  • Patent number: 10824817
    Abstract: A facility for representing a mandate occurring in an authority document with a control is described. For each of one or more controls in a set of existing controls, the facility determines a similarity score measuring the similarity of the mandate and the control; where the similarity score exceeds a similarity threshold, the facility links the mandate to the control. Where the mandate is not linked to any control in the set of controls, the facility adds a control to the set of controls that is based on the mandate, and links the mandate to the added control.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: November 3, 2020
    Assignee: Unified Compliance Framework (Network Frontiers)
    Inventors: Dorian J. Cougias, Vicki McEwen, Steven Piliero, Lucian Hontau, Zike Huang, Sean Kohler
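The link-or-add loop from the abstract above can be sketched directly. Jaccard word overlap stands in for the unspecified similarity score, and the threshold is an assumption.

```python
def similarity(a, b):
    """Jaccard overlap of the two texts' word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def link_or_add(mandate, controls, threshold=0.5):
    """Link the mandate to a similar-enough existing control, or add a
    new control based on the mandate and link to that."""
    for control in controls:
        if similarity(mandate, control) > threshold:
            return control
    controls.append(mandate)
    return mandate

controls = ["encrypt data at rest"]
linked = link_or_add("encrypt all data at rest", controls)
added = link_or_add("review access logs monthly", controls)
```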
  • Patent number: 10795686
    Abstract: Aspects of the present invention provide devices with a first computer processor that in response to receiving a token from an agent associated with a second computer processor, returns language requirements to the agent associated with the second computer processor identified by the token for translating first data processed by the second computer processor. The translated first data is returned by the second computer processor to a third computer processor. The second computer processor and the third computer processor are different computer processors.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: October 6, 2020
    Assignee: International Business Machines Corporation
    Inventor: Arthur De Magalhaes
  • Patent number: 10789431
    Abstract: A method and system for translating a source sentence in a first language into a target sentence in a second language is disclosed. The method comprises acquiring the source sentence and generating a first translation hypothesis and a second translation hypothesis. A first score value is assigned to the first translation hypothesis, the first score value being representative of a likelihood that the first translation hypothesis is a semantically illogical translation. A second score value is assigned to the first translation hypothesis, the second score value being representative of an expected difference in translation quality between the first translation hypothesis and the second translation hypothesis. The target sentence corresponds to: the first translation hypothesis, upon determining that both the first score value and the second score value meet a condition; and the second translation hypothesis, upon determining that the condition is not met.
    Type: Grant
    Filed: July 10, 2018
    Date of Patent: September 29, 2020
    Assignee: YANDEX EUROPE AG
    Inventors: Sergey Dmitrievich Gubanov, Anton Aleksandrovich Dvorkovich, Boris Andreevich Kovarsky, Mikhail Alekseevich Nokel, Aleksey Anatolievich Noskov, Anton Viktorovich Frolov
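The selection rule above reduces to a two-score gate. The scores, thresholds, and the reading of "meet a condition" below are illustrative assumptions, not the patented scoring models.

```python
def pick_target(first_hyp, second_hyp, illogical_score, quality_gap,
                max_illogical=0.3, max_gap=0.1):
    """Return the first hypothesis only if it is unlikely to be
    semantically illogical AND the expected quality gap in favour of
    the second hypothesis is small; otherwise return the second."""
    condition_met = (illogical_score <= max_illogical
                     and quality_gap <= max_gap)
    return first_hyp if condition_met else second_hyp

sentence = pick_target("the cat sat", "a cat was sitting",
                       illogical_score=0.05, quality_gap=0.02)
```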
  • Patent number: 10755092
    Abstract: An image forming apparatus including: an image scanning unit scanning an image of an original document; an image forming unit forming the image onto a recording sheet; a text extraction section extracting a text area from the image for each of various kinds of languages; an editing section, for each of the various kinds of languages, giving color different from one another to the text area or selectively deleting the text area; and a control section controlling the image forming unit so as to cause the image forming unit to form the text area for each of the various kinds of languages on the recording sheet in respective color given to the text area or to form a text area, which is among the text areas of the various kinds of languages and is not being deleted by the editing section, on the recording sheet.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: August 25, 2020
    Assignee: KYOCERA Document Solutions Inc.
    Inventor: Nobuhiro Hara
  • Patent number: 10747817
    Abstract: Systems and methods for a media guidance application that generates results in multiple languages for search queries. In particular, the media guidance application ranks search results according to the language model associated with each search result.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: August 18, 2020
    Assignee: Rovi Guides, Inc.
    Inventor: Arun Sreedhara
  • Patent number: 10733386
    Abstract: A terminal device includes: a sound receiving device configured to receive a sound emitted according to an audio signal to generate a received-audio signal, the audio signal including an audio signal that represents a guide voice and including a modulated signal that includes identification information of the guide voice; an information extractor configured to extract the identification information from the received-audio signal generated by the sound receiving device; a transmitter configured to transmit an information request that includes the identification information extracted by the information extractor; an acquisitor configured to acquire, from among multiple pieces of related information that correspond to multiple pieces of identification information, a piece of related information that corresponds to the identification information in the information request; and an output device configured to output the piece of related information acquired by the acquisitor.
    Type: Grant
    Filed: July 27, 2015
    Date of Patent: August 4, 2020
    Assignee: YAMAHA CORPORATION
    Inventors: Shota Moriguchi, Takahiro Iwata, Yuki Seto, Hiroyuki Iwase
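The terminal-side flow described above (extract an identifier embedded in the received-audio signal, then request the related information it corresponds to) can be sketched schematically. This toy version replaces the audio-modulated signal with a simple text framing and the server round trip with a dictionary lookup; both are assumptions for illustration only.

```python
# Hedged sketch of the terminal flow: extract identification information
# from a received signal, then fetch the matching related information.
# The framing scheme ('ID:' ... ';') and the RELATED_INFO table are toy
# stand-ins for the patent's audio modulation and server.

def extract_identification(received_signal):
    """Pull the identifier payload out of a framed signal (toy framing)."""
    start = received_signal.index("ID:") + 3
    end = received_signal.index(";", start)
    return received_signal[start:end]

# Server-side table mapping guide-voice identifiers to related information.
RELATED_INFO = {
    "guide-042": "Exhibit 42: Edo-period woodblock prints",
}

def request_related_info(identification):
    """Simulate the information-request / response round trip."""
    return RELATED_INFO.get(identification)

signal = "...audio...ID:guide-042;...audio..."
info = request_related_info(extract_identification(signal))
```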
  • Patent number: 10691400
    Abstract: An information management system includes: an audio signal acquisitor configured to acquire an audio signal representing a guide voice; a related information acquisitor configured to acquire related information that is related to the guide voice; an association manager configured to associate the related information acquired by the related information acquisitor for the guide voice with identification information that is notified to a terminal device upon emission of the guide voice corresponding to the audio signal; and an information provider configured to receive from the terminal device an information request including the identification information notified to the terminal device and to transmit to the terminal device the related information associated by the association manager with the identification information.
    Type: Grant
    Filed: July 27, 2015
    Date of Patent: June 23, 2020
    Assignee: YAMAHA CORPORATION
    Inventors: Shota Moriguchi, Takahiro Iwata, Yuki Seto
  • Patent number: 10657375
    Abstract: In certain embodiments, augmented-reality-based currency conversion may be facilitated. In some embodiments, a wearable device (or other device of a user) may capture a live video stream of the user's environment. One or more indicators representing at least one of a currency or units of the currency may be determined from the live video stream, where at least one of the indicators corresponds to a feature in the live video stream. Based on the indicators from the live video stream, a predicted equivalent price corresponding to the units of the currency may be generated for a user-selected currency. In some embodiments, the corresponding feature in the live video stream may be continuously tracked, and, based on the continuous tracking, the corresponding feature may be augmented in the live video stream with the predicted equivalent price.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: May 19, 2020
    Assignee: Capital One Services, LLC
    Inventors: Joshua Edwards, Abdelkader Benkreira, Michael Mossoba
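The conversion step at the core of the abstract above (detected amount plus user-selected target currency yields a predicted equivalent price) reduces to a rate lookup and a multiplication. The rate table below is an assumption; the patent's detection and tracking steps are out of scope here.

```python
# Minimal sketch of the currency-conversion step. Exchange rates are
# made-up placeholders; a real system would fetch live rates.
EXCHANGE_RATES = {("JPY", "USD"): 0.0067, ("EUR", "USD"): 1.08}

def predicted_equivalent(amount, source, target):
    """Predicted equivalent price of `amount` in the target currency."""
    if source == target:
        return amount
    rate = EXCHANGE_RATES[(source, target)]
    return round(amount * rate, 2)

# A detected 1500 JPY price tag, with USD as the user-selected currency.
price_usd = predicted_equivalent(1500, "JPY", "USD")
```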
  • Patent number: 10657972
    Abstract: A method to interactively convert a source-language video/audio stream into one or more target languages in high-definition video format using a computer. The spoken words in the converted language are synchronized with synthesized movements of a rendered mouth. Original audio and video streams from pre-recorded or live sermons are synthesized into another language with the original emotional and tonal characteristics. The original sermon could be in any language and be translated into any other language. The mouth and jaw are digitally rendered with viseme and phoneme morphing targets that are pre-generated for lip synching with the synthesized target-language audio. Each video image frame has the simulated lips and jaw inserted over the original. The new audio and video image are then encoded and uploaded for internet viewing or recording to a storage medium.
    Type: Grant
    Filed: August 3, 2018
    Date of Patent: May 19, 2020
    Inventors: Max T. Hall, Edwin J. Sarver
  • Patent number: 10642934
    Abstract: An augmented conversational understanding architecture may be provided. Upon receiving a natural language phrase from a user, the phrase may be translated into a search phrase and a search action may be performed on the search phrase.
    Type: Grant
    Filed: March 31, 2011
    Date of Patent: May 5, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Larry Paul Heck, Madhusudan Chinthakunta, David Mitby, Lisa Stifelman
  • Patent number: 10614173
    Abstract: The disclosed subject matter provides a system, computer readable storage medium, and a method providing an audio and textual transcript of a communication. A conferencing service may receive audio or audiovisual signals from a plurality of different devices that receive voice communications from participants in a communication, such as a chat or teleconference. The audio signals represent voice (speech) communications input into the respective devices by the participants. A translation services server may receive, over a separate communication channel, the audio signals for translation into a second language. As managed by the translation services server, the audio signals may be converted into textual data. The textual data may be translated into text of different languages based on the language preferences of the end-user devices in the teleconference. The translated text may be further translated into audio signals.
    Type: Grant
    Filed: July 9, 2019
    Date of Patent: April 7, 2020
    Assignee: Google LLC
    Inventors: Trausti Kristjansson, John Huang, Yu-Kuan Lin, Hung-ying Tyan, Jakob David Uszkoreit, Joshua James Estelle, Chung-yi Wang, Kirill Buryak, Yusuke Konishi
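The pipeline in the abstract above (speech to text, then per-device translation by language preference) can be sketched schematically. The speech-to-text and translation functions here are stubs over a tiny lookup table, not Google's services; every name is an assumption for illustration.

```python
# Schematic sketch of the teleconference translation pipeline: audio is
# transcribed, then each receiving device gets a caption translated into
# its preferred language. Stub functions stand in for real ASR/MT.

TRANSLATIONS = {("hello", "es"): "hola", ("hello", "fr"): "bonjour"}

def speech_to_text(audio):
    """Stub: a real system would run speech recognition here."""
    return audio["transcript"]

def translate(text, target_lang):
    """Stub translation via table lookup; falls back to the original."""
    return TRANSLATIONS.get((text, target_lang), text)

def route_captions(audio, device_prefs):
    """Map each device to the caption in its preferred language."""
    text = speech_to_text(audio)
    return {device: translate(text, lang)
            for device, lang in device_prefs.items()}

captions = route_captions({"transcript": "hello"},
                          {"alice": "es", "bob": "fr"})
```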
  • Patent number: 10606960
    Abstract: A system and method to facilitate translation of communications between entities over a network are provided. The system receives a first language construct transmitted by a first entity that is directed to a second entity. The system identifies a construct identifier corresponding to the first language construct and determines a language preference of the second entity. A second language construct is retrieved by the system by locating an entry in a translated construct table that contains both the construct identifier and a language identifier corresponding to the language preference of the second entity, whereby the second language construct is a translation of the first language construct into a second language corresponding to the language preference. The second language construct is used to update information associated with the first entity.
    Type: Grant
    Filed: December 1, 2016
    Date of Patent: March 31, 2020
    Assignee: eBay Inc.
    Inventor: Steve Grove
  • Patent number: 10599864
    Abstract: Systems and methods for sensitive audio zone rearrangement are provided that protect confidential and sensitive information, such as a user identifier, during query processing and authentication. The sensitive information rearrangement system generates or permutes the actual user identifier in a privacy-preserving manner. The sensitive information is extracted from an input, either speech or DTMF tones, and a virtual user identifier specific to the transaction to be performed, or to a query initiated by the user, is generated in real time. The sensitive information, which can be either DTMF tones or the user's speech, is rearranged to generate the virtual user identifier.
    Type: Grant
    Filed: March 15, 2016
    Date of Patent: March 24, 2020
    Assignee: Tata Consultancy Services Limited
    Inventors: Sutapa Mondal, Sumesh Manjunath, Rohit Saxena, Manish Shukla, Purushotam Gopaldas Radadia, Shirish Subhash Karande, Sachin Premsukh Lodha
  • Patent number: 10593323
    Abstract: A keyword generation apparatus, comprises a vocabulary acquisition unit that acquires a keyword uttered by a first user; a first positional information acquisition unit that acquires first positional information including information representing a location at which the first user has uttered the keyword; a storage unit that stores the first positional information and the keyword in association with each other; a second positional information acquisition unit that acquires second positional information including information representing a current position of a second user; and an extraction unit that extracts a keyword unique to a locality in which the second user is positioned from the storage unit based on the second positional information.
    Type: Grant
    Filed: September 15, 2017
    Date of Patent: March 17, 2020
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Takuma Minemura, Sei Kato, Junichi Ito, Youhei Wakisaka, Atsushi Ikeno, Toshifumi Nishijima, Fuminori Kataoka, Hiromi Tonegawa, Norihide Umeyama
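The extraction step described above (return keywords unique to the locality where the second user is currently positioned) can be sketched with a simple locality-to-keywords store. The storage layout and the "unique" test (present in this locality but in no other) are assumptions; the patent does not pin down this concrete rule.

```python
# Hedged sketch: keywords are stored with the locality where they were
# uttered; for a user's current locality, return the keywords not also
# recorded elsewhere. Data and uniqueness rule are illustrative.
from collections import defaultdict

storage = defaultdict(set)  # locality -> keywords uttered there
for locality, keyword in [
    ("kyoto", "kamogawa"), ("kyoto", "matcha"),
    ("tokyo", "matcha"), ("tokyo", "shibuya"),
]:
    storage[locality].add(keyword)

def unique_keywords(current_locality):
    """Keywords seen in this locality and in no other locality."""
    others = set().union(*(kws for loc, kws in storage.items()
                           if loc != current_locality))
    return sorted(storage[current_locality] - others)

kyoto_unique = unique_keywords("kyoto")
# "matcha" appears in both localities, so only "kamogawa" is unique.
```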
  • Patent number: 10542323
    Abstract: An approach to modifying text captioning is provided, comprising receiving, by a captioning modifier program, input data associated with a video stream, analyzing, by the captioning modifier program, the input data, altering, by the captioning modifier program, text captioning associated with the video stream to indicate eventful aspects based on an analysis of the input data and generating, by the captioning modifier program, supplementary information associated with the video stream based on the analysis and providing the supplementary information as an addition to the text captioning.
    Type: Grant
    Filed: December 16, 2017
    Date of Patent: January 21, 2020
    Assignee: International Business Machines Corporation
    Inventors: Philip J. Chou, Rajaram B. Krishnamurthy, Christine D. Mikijanic, Conner W. Simmons
  • Patent number: 10536464
    Abstract: A login engine for a network device can operate to secure or determine login access in response to a login request according to different sensor data of one or more physical properties. In response to the login request, a first sensor related to a first physical property can operate to detect individual characteristic data and content data. The individual characteristic data can be compared other individual characteristic data of a user profile, which can be stored in a data store. The content data can also be compared to other content data of the user profile. Based on these comparisons satisfying predetermined thresholds, a successful login stage of a plurality of login stages can be enabled.
    Type: Grant
    Filed: June 22, 2016
    Date of Patent: January 14, 2020
    Assignee: Intel Corporation
    Inventor: Dietmar Schoppmeier
  • Patent number: 10496714
    Abstract: In general, the subject matter described in this specification can be embodied in methods, systems, and program products for receiving user input that defines a search query, and providing the search query to a server system. Information that a search engine system determined was responsive to the search query is received at a computing device. The computing device is identified as in a first state, and a first output mode for audibly outputting at least a portion of the information is selected. The first output mode is selected from a collection of the first output mode and a second output mode. The second output mode is selected in response to the computing device being in a second state and is for visually outputting at least the portion of the information and not audibly outputting the at least portion of the information. At least the portion of information is audibly output.
    Type: Grant
    Filed: August 6, 2010
    Date of Patent: December 3, 2019
    Assignee: Google LLC
    Inventors: John Nicholas Jitkoff, Michael J. Lebeau, William J. Byrne, David P. Singleton
  • Patent number: 10489399
    Abstract: Methods, systems, and apparatus, including computer program products, for identifying the language of a search query. In one embodiment, the language of each term of a query is determined from the query terms and the language of the user interface a user used to enter the query. In another embodiment, an automatic interface language classifier is generated from a collection of past queries each submitted by a user. In some embodiments, a score is determined for each of multiple languages, each score indicating a likelihood that the query language is the corresponding one of the multiple languages.
    Type: Grant
    Filed: July 11, 2017
    Date of Patent: November 26, 2019
    Assignee: Google LLC
    Inventor: Fabio Lopiano
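The scoring idea above (a per-language score combining the query terms with the user's interface language) can be sketched as a naive probabilistic model. The likelihood tables and the multiplicative combination are placeholder assumptions, not the classifier the patent describes.

```python
# Hedged sketch of query-language identification: combine a prior from
# the user's interface language with per-term likelihoods under each
# language. All probabilities are made-up placeholders.

# Assumed likelihood of each query term under each language's model.
TERM_LIKELIHOOD = {
    "gato": {"es": 0.9, "en": 0.05},
    "negro": {"es": 0.8, "en": 0.1},
}
# Prior from the interface language the user chose.
INTERFACE_PRIOR = {"es": 0.6, "en": 0.4}

def score_language(query_terms, language):
    """Score = interface prior times each term's likelihood."""
    score = INTERFACE_PRIOR.get(language, 0.01)
    for term in query_terms:
        score *= TERM_LIKELIHOOD.get(term, {}).get(language, 0.01)
    return score

def best_language(query_terms, languages=("es", "en")):
    return max(languages, key=lambda lang: score_language(query_terms, lang))

lang = best_language(["gato", "negro"])
```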
  • Patent number: 10453108
    Abstract: In an example embodiment, input is received from a first user of a computer system. A text object relating to a first item from the input is created, and translated from a first language to a second language. A plurality of text objects, in the second language, having text similar to the translated text object, are located in a database, each text object comprising textual information pertaining to the first item. The plurality of text objects having text similar to the translated text are then ranked based on a comparison of the contextual information about the first item and the contextual information stored in the database for the plurality of text objects having text similar to the translated text object. At least one of the ranked text objects is translated to the first language.
    Type: Grant
    Filed: September 22, 2017
    Date of Patent: October 22, 2019
    Assignee: eBay Inc.
    Inventor: Yan Chelly
  • Patent number: 10438590
    Abstract: A method for voice recognition includes acquiring a sound input, obtaining a plurality of feedback results from a plurality of recognition engines different from each other, and determining a recognition result of the sound input based on the plurality of feedback results.
    Type: Grant
    Filed: November 7, 2017
    Date of Patent: October 8, 2019
    Assignee: LENOVO (BEIJING) CO., LTD.
    Inventors: Shuyong Liu, Yuanyi Zhang, Hongwei Li
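One simple way to "determine a recognition result based on the plurality of feedback results", as the abstract above puts it, is a majority vote over the engines' candidate transcriptions. The patent does not specify this concrete combination rule; the vote below is an illustrative assumption.

```python
# Toy sketch: combine feedback results from multiple recognition engines
# by majority vote. The real combination rule may be more sophisticated.
from collections import Counter

def determine_recognition_result(feedback_results):
    """Return the transcription produced by the most engines."""
    counts = Counter(feedback_results)
    result, _ = counts.most_common(1)[0]
    return result

# Three engines return candidate transcriptions; two agree.
engines_output = ["hello world", "hello word", "hello world"]
result = determine_recognition_result(engines_output)
```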
  • Patent number: 10430523
    Abstract: A terminal control method may include: establishing, by a first terminal used by a first user who speaks a first language, connection to a second terminal used by a second user who speaks a second language different from the first language; and acquiring a second-language greeting transliterated into the first language.
    Type: Grant
    Filed: September 8, 2017
    Date of Patent: October 1, 2019
    Assignee: Hyperconnect, Inc.
    Inventor: Sangil Ahn
  • Patent number: 10395658
    Abstract: An apparatus comprising a memory and a processor coupled to the memory. The processor receives input from a user, processes a first portion of the input via more than one service module while receiving a second portion of the input to determine a first speculative result, wherein processing the first portion of the input comprises executing at least one service module coupled to a corresponding speculation buffer, processes a second portion of the input via the more than one service module to determine a second speculative result, wherein processing the second portion of the input comprises executing the at least one service module coupled to the corresponding speculation buffer, processes the input via the more than one service module to determine a final output, wherein processing the input comprises executing the at least one service module coupled to the corresponding speculation buffer, and outputs the final output to the user.
    Type: Grant
    Filed: May 22, 2017
    Date of Patent: August 27, 2019
    Assignee: International Business Machines Corporation
    Inventors: Christopher M. Durham, Anne E. Gattiker, Thomas S. Hubregtsen, Inseok Hwang, Jinho Lee
  • Patent number: 10372831
    Abstract: The disclosed subject matter provides a system, computer readable storage medium, and a method providing an audio and textual transcript of a communication. A conferencing service may receive audio or audiovisual signals from a plurality of different devices that receive voice communications from participants in a communication, such as a chat or teleconference. The audio signals represent voice (speech) communications input into the respective devices by the participants. A translation services server may receive, over a separate communication channel, the audio signals for translation into a second language. As managed by the translation services server, the audio signals may be converted into textual data. The textual data may be translated into text of different languages based on the language preferences of the end-user devices in the teleconference. The translated text may be further translated into audio signals.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: August 6, 2019
    Assignee: Google LLC
    Inventors: Trausti Kristjansson, John Huang, Yu-Kuan Lin, Hung-ying Tyan, Jakob David Uszkoreit, Joshua James Estelle, Chung-yih Wang, Kirill Buryak, Yusuke Konishi