Patents Examined by Abdelali Serrou
  • Patent number: 10860803
    Abstract: A system is described that accepts corporate title data and employee data associated with that title at a first company, passes the corporate title and employee data through a configured network, and generates a vector of terms and a set of coefficients associated with that title. Information about an employee is then passed through a second network using those terms and coefficients to determine whether the employee would have the same or a similar title at the first company. (See the sketch after this entry.)
    Type: Grant
    Filed: May 2, 2018
    Date of Patent: December 8, 2020
    Assignee: 8x8, Inc.
    Inventors: Solomon Fung, Soumyadeb Mitra, Abishek Kashyap, Arunim Samat, Venkat Nagaswamy, Justin Driemeyer
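    Sketch: the abstract above does not disclose the network architecture, so this minimal sketch stands in a bag-of-words vectorizer and a logistic-regression model for the "configured network": it derives a vector of terms and coefficients for a title at a first company and scores an employee of a second company against that title. The toy data, title, and library choice are illustrative assumptions.

```python
# Learn term coefficients for a title at company A, then score an
# employee from company B against that title. Logistic regression is a
# stand-in for the patent's "configured network"; the data is hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Employee descriptions at the first company, labelled 1 if the employee
# holds the title of interest ("Account Executive").
company_a_profiles = [
    "closes enterprise deals manages pipeline quota carrying",
    "negotiates contracts prospects new accounts quota carrying",
    "writes backend services deploys microservices on call",
    "designs user interfaces runs usability studies",
]
holds_title = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(company_a_profiles)    # vector of terms
model = LogisticRegression().fit(X, holds_title)    # coefficient per term

# Employee at a second company: same or similar title at the first company?
candidate = "carries a quota and closes deals with enterprise accounts"
probability = model.predict_proba(vectorizer.transform([candidate]))[0, 1]
print(f"probability of same/similar title: {probability:.2f}")
```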
  • Patent number: 10818311
    Abstract: An auditory selection method based on a memory and attention model, including: step S1, encoding an original speech signal into a time-frequency matrix; step S2, encoding and transforming the time-frequency matrix to convert the matrix into a speech vector; step S3, using a long-term memory unit to store a speaker and a speech vector corresponding to the speaker; step S4, obtaining a speech vector corresponding to a target speaker, and separating a target speech from the original speech signal through an attention selection model. A storage device includes a plurality of programs stored in it; the programs are configured to be loaded and executed by a processor to carry out the auditory selection method based on the memory and attention model. A processing unit includes the processor and the storage device. (See the sketch after this entry.)
    Type: Grant
    Filed: November 14, 2018
    Date of Patent: October 27, 2020
    Assignee: INSTITUTE OF AUTOMATION, CHINESE ACADEMY OF SCIENCES
    Inventors: Jiaming Xu, Jing Shi, Bo Xu
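    Sketch: the four steps in the abstract above map naturally onto a short, self-contained example. The encoders below (magnitude STFT, mean pooling, cosine-similarity attention) and the numpy implementation are illustrative stand-ins, not the patented model.

```python
import numpy as np

def stft_magnitude(signal, frame=256, hop=128):
    """Step S1: encode a waveform into a time-frequency matrix."""
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))   # shape (T, F)

def speech_vector(tf_matrix):
    """Step S2: transform the time-frequency matrix into a speech vector."""
    v = tf_matrix.mean(axis=0)
    return v / (np.linalg.norm(v) + 1e-9)

long_term_memory = {}            # Step S3: speaker id -> speech vector

def attend_to(target_id, mixture):
    """Step S4: build an attention mask for the target speaker and apply it."""
    tf = stft_magnitude(mixture)
    target = long_term_memory[target_id]
    frame_vecs = tf / (np.linalg.norm(tf, axis=1, keepdims=True) + 1e-9)
    attention = frame_vecs @ target            # per-frame similarity scores
    mask = attention / (attention.max() + 1e-9)
    return tf * mask[:, None]                  # emphasised target speech

# Toy usage with synthetic signals.
rng = np.random.default_rng(0)
enrolment = rng.standard_normal(4000)
long_term_memory["alice"] = speech_vector(stft_magnitude(enrolment))
separated = attend_to("alice", rng.standard_normal(8000))
print(separated.shape)
```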
  • Patent number: 10796712
    Abstract: The disclosure provides a method and an apparatus for detecting voice activity in an input audio signal composed of frames. A noise characteristic of the input audio signal is determined based on a received frame of the signal. A voice activity detection (VAD) parameter is derived from the noise characteristic using an adaptive function. The derived VAD parameter is compared with a threshold value to produce a voice activity detection decision, and the input audio signal is processed according to that decision. (See the sketch after this entry.)
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: October 6, 2020
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Zhe Wang
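    Sketch: the decision flow in the abstract above (estimate a noise characteristic, derive a VAD parameter through an adaptive function, compare it to a threshold) can be illustrated with a simple energy-based detector. The SNR-style parameter, smoothing constant, and threshold value are assumptions, not the patented algorithm.

```python
import numpy as np

class SimpleVad:
    def __init__(self, threshold=2.0, noise_smoothing=0.95):
        self.noise_energy = None        # noise characteristic of the signal
        self.threshold = threshold
        self.alpha = noise_smoothing

    def decide(self, frame):
        energy = float(np.mean(frame ** 2)) + 1e-12
        if self.noise_energy is None:
            self.noise_energy = energy  # initialise from the first frame
        # Adaptive function: the VAD parameter is the frame-to-noise energy
        # ratio, so it adapts as the noise estimate evolves.
        vad_parameter = energy / self.noise_energy
        is_speech = vad_parameter > self.threshold
        if not is_speech:               # update the noise estimate on noise frames
            self.noise_energy = self.alpha * self.noise_energy + (1 - self.alpha) * energy
        return is_speech

# Toy usage: three low-energy noise frames followed by a louder "speech" frame.
vad = SimpleVad()
rng = np.random.default_rng(1)
for gain in (0.1, 0.1, 0.1, 1.0):
    print(gain, vad.decide(gain * rng.standard_normal(160)))
```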
  • Patent number: 10796697
    Abstract: Systems and methods are provided for associating meetings with projects. Some implementations evaluate the similarity between a conversation among two or more users, captured by sensor data, and a set of keywords characterizing at least one project associated with one of those users. Based on the similarity, a listening mode is activated on a user device associated with the user. Contextual information associated with the conversation is generated from portions of the sensor data provided by the activated listening mode. A meeting corresponding to the conversation is assigned to a project associated with the user based on the contextual information, and content is personalized to the user based on that assignment. (See the sketch after this entry.)
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: October 6, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Haim Somech, Ido Priness, Dikla Dotan-Cohen
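    Sketch: the gating step in the abstract above (compare a conversation with a project's keywords and activate a listening mode when they are similar enough) is easy to illustrate. Jaccard similarity and the 0.2 threshold are illustrative assumptions.

```python
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def should_activate_listening(snippet, project_keywords, threshold=0.2):
    tokens = snippet.lower().split()
    return jaccard(tokens, {k.lower() for k in project_keywords}) >= threshold

project_keywords = {"roadmap", "beta", "launch", "onboarding"}
snippet = "let's review the beta launch roadmap tomorrow"
print(should_activate_listening(snippet, project_keywords))   # True
```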
  • Patent number: 10777201
    Abstract: A server is provided, including a processor configured to execute a bot server program. The bot server program may receive, from a computing device, an input whose type includes one or more of speech and text, and may programmatically generate an output based on that input. The bot server program may detect one or more output types the computing device is capable of producing and select an output type, such as speech or text, that the device supports. The bot server program may modify the programmatically generated output to produce a modified output of the selected type, and may convey the modified output to the computing device for output on a display and/or speaker. (See the sketch after this entry.)
    Type: Grant
    Filed: June 22, 2017
    Date of Patent: September 15, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adina Magdalena Trufinescu, Khuram Shahid, Daniel J. Driscoll, Adarsh Sridhar
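    Sketch: the selection logic in the abstract above (detect which output types a device supports, pick one, and adapt the generated output) could look roughly like the following; the capability flags, preference order, and payload format are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class DeviceCapabilities:
    has_display: bool
    has_speaker: bool

def select_output_type(caps, preferred=("speech", "text")):
    supported = set()
    if caps.has_speaker:
        supported.add("speech")
    if caps.has_display:
        supported.add("text")
    for output_type in preferred:
        if output_type in supported:
            return output_type
    raise ValueError("device supports no known output type")

def modify_output(generated_text, output_type):
    # A real bot server would synthesise audio for "speech"; here we just tag it.
    return {"type": output_type, "payload": generated_text}

caps = DeviceCapabilities(has_display=False, has_speaker=True)
print(modify_output("Your meeting starts at 3 pm.", select_output_type(caps)))
```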
  • Patent number: 10776070
    Abstract: There is provided an information processing device, control method, and program that can improve the convenience of a speech recognition system by deciding on an appropriate response output method in accordance with the current surrounding environment. A response to speech from a user is generated, a response output method is decided in accordance with the current surrounding environment, and control is performed such that the generated response is output using the decided response output method.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: September 15, 2020
    Assignee: SONY CORPORATION
    Inventor: Junki Ohmura
  • Patent number: 10777202
    Abstract: An exemplary speech presentation system receives a simulated binaural audio signal associated with a media player device that is presenting an artificial reality world to a user. The simulated binaural audio signal is representative of a simulation of sound propagating to an avatar representing the user within the artificial reality world. The speech presentation system further receives acoustic propagation data representative of an aspect affecting propagation of sound to the avatar within the artificial reality world. Based on the acoustic propagation data, the speech presentation system extracts an auto-transcribable speech signal from the simulated binaural audio signal. The auto-transcribable speech signal is representative of speech originating from a speaker within the artificial reality world. Based on the auto-transcribable speech signal, the speech presentation system generates a closed captioning dataset representative of the speech and provides the dataset to the media player device.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: September 15, 2020
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Samuel Charles Mindlin, Kunal Jathal, Mohammad Raheel Khalid
  • Patent number: 10765956
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for translating word strings while preserving named entities. A plurality of word strings in a first language is received, each comprising a plurality of words. One or more named entities are identified in each received word string using a statistical classifier trained on data comprising a plurality of features, one of which is a word shape feature: a respective token for each letter of a word, where each token signifies the case of the letter or whether the character is a digit. The received word strings are then translated from the first language to a second language while preserving the identified named entities in each word string. (See the sketch after this entry.)
    Type: Grant
    Filed: January 7, 2016
    Date of Patent: September 8, 2020
    Assignee: Machine Zone Inc.
    Inventors: Nikhil Bojja, Shivasankari Kannan, Pidong Wang
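    Sketch: the word-shape feature in the abstract above is concrete enough to show directly: one token per character indicating its case or digit status. The token alphabet ("X", "x", "d", and "o" for anything else) is an assumption, not the patent's exact encoding.

```python
def word_shape(word):
    tokens = []
    for ch in word:
        if ch.isupper():
            tokens.append("X")
        elif ch.islower():
            tokens.append("x")
        elif ch.isdigit():
            tokens.append("d")
        else:
            tokens.append("o")
    return "".join(tokens)

for w in ("London", "iPhone7", "USA", "e-mail"):
    print(w, "->", word_shape(w))
# London -> Xxxxxx, iPhone7 -> xXxxxxd, USA -> XXX, e-mail -> xoxxxx
```

    Shape features like these let a classifier flag capitalised or mixed-case tokens as likely named entities even when the words never appeared in the training data, which is what allows them to be preserved rather than translated.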
  • Patent number: 10762908
    Abstract: An audio packet error concealment system includes an encoding unit for encoding an audio signal consisting of a plurality of frames, and an auxiliary information encoding unit for estimating and encoding auxiliary information about a temporal change in the power of the audio signal. The auxiliary information is used for packet loss concealment when decoding the audio signal. The auxiliary information about the temporal change of power may contain a parameter that functionally approximates the powers of several subframes shorter than one frame, or information about a vector obtained by vector quantization of those subframe powers. (See the sketch after this entry.)
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: September 1, 2020
    Assignee: NTT DOCOMO, INC.
    Inventors: Kimitaka Tsutsumi, Kei Kikuiri
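    Sketch: the first of the two auxiliary-information options in the abstract above can be illustrated by computing subframe powers within a frame and fitting a low-order functional approximation whose few parameters would be transmitted as side information. The subframe count (4) and the linear fit are illustrative assumptions.

```python
import numpy as np

def subframe_powers(frame, n_subframes=4):
    subframes = np.array_split(frame, n_subframes)
    return np.array([float(np.mean(s ** 2)) for s in subframes])

def encode_auxiliary_info(frame):
    powers = subframe_powers(frame)
    x = np.arange(len(powers))
    slope, intercept = np.polyfit(x, powers, deg=1)   # only 2 parameters to send
    return slope, intercept

def concealed_power_envelope(slope, intercept, n_subframes=4):
    """Decoder side: reconstruct the temporal power shape from the parameters."""
    return intercept + slope * np.arange(n_subframes)

rng = np.random.default_rng(2)
frame = np.linspace(0.1, 1.0, 320) * rng.standard_normal(320)   # fading-in frame
slope, intercept = encode_auxiliary_info(frame)
print("auxiliary parameters:", slope, intercept)
print("reconstructed power envelope:", concealed_power_envelope(slope, intercept))
```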
  • Patent number: 10750015
    Abstract: A virtual assistant device is configured to perform operations that include receiving a user request to use a service provided by a service provider and, based on an identifier associated with the service provider, determining the set of authentication credential types that the service provider accepts. The operations also include determining, based on information collected from one or more hardware sensors, whether other people besides the user are in proximity to the device. Based on a calculated security risk level, the operations select a first authentication credential type from the set of accepted types and a first communication mode from a set of communication modes. (See the sketch after this entry.)
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: August 18, 2020
    Assignee: PAYPAL, INC.
    Inventors: Jiri Medlen, Anush Vishwanath, Braden Christopher Ericson, Michael Charles Todasco, Cheng Tian, Gautam Madaan, Titus Woo
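    Sketch: the selection described in the abstract above (accepted credential types per provider, a risk level raised when other people are nearby, and a credential type and communication mode chosen accordingly) could look roughly like this; the provider table, risk rule, and preference orderings are illustrative assumptions.

```python
ACCEPTED_CREDENTIALS = {                 # keyed by service-provider identifier
    "provider-123": {"voice_pin", "fingerprint", "password"},
}

def select_authentication(provider_id, people_nearby):
    accepted = ACCEPTED_CREDENTIALS[provider_id]
    risk_level = "high" if people_nearby else "low"
    if risk_level == "high":
        # Prefer credentials and modes that bystanders cannot overhear or see.
        preference, mode = ["fingerprint", "password", "voice_pin"], "on_screen"
    else:
        preference, mode = ["voice_pin", "password", "fingerprint"], "speech"
    credential = next(c for c in preference if c in accepted)
    return credential, mode

print(select_authentication("provider-123", people_nearby=True))    # ('fingerprint', 'on_screen')
print(select_authentication("provider-123", people_nearby=False))   # ('voice_pin', 'speech')
```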
  • Patent number: 10748536
    Abstract: An electronic device includes: a first processing circuit configured to detect voice in ambient sound in first processing and, when voice is detected after a state of detecting no voice has continued for a first period of time or longer, to shift the procedure to second processing; a second processing circuit configured to determine, in the second processing, whether the detected voice includes a specific word, shifting the procedure to third processing when the specific word appears within a second period of time after the shift to the second processing and not shifting it when the word does not appear within that period; and a third processing circuit configured to activate a specific function in the third processing. (See the sketch after this entry.)
    Type: Grant
    Filed: August 21, 2018
    Date of Patent: August 18, 2020
    Assignee: LENOVO (SINGAPORE) PTE. LTD.
    Inventors: Hidehisa Mori, Masaharu Yoneda, Koji Kawakita, Toshikazu Horino
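    Sketch: the three processing stages and the two timing windows in the abstract above behave like a small state machine. Each call to step() below represents one time step with two boolean observations; the period lengths and this per-step abstraction are illustrative assumptions.

```python
FIRST_PERIOD = 5    # steps of silence required before the second stage can start
SECOND_PERIOD = 3   # steps allowed for the specific word once the second stage starts

class VoiceTrigger:
    def __init__(self):
        self.silence_run = 0
        self.second_stage_timer = None

    def step(self, voice_detected, keyword_detected):
        if self.second_stage_timer is not None:       # second processing
            if keyword_detected:
                self.second_stage_timer = None
                return "activate_function"            # third processing
            self.second_stage_timer -= 1
            if self.second_stage_timer <= 0:          # window expired, no activation
                self.second_stage_timer = None
            return "listening_for_keyword"
        if voice_detected:                            # first processing
            if self.silence_run >= FIRST_PERIOD:
                self.second_stage_timer = SECOND_PERIOD
                return "listening_for_keyword"
            self.silence_run = 0
            return "idle"
        self.silence_run += 1
        return "idle"

trigger = VoiceTrigger()
steps = [(False, False)] * 6 + [(True, False), (True, False), (True, True)]
for voice, keyword in steps:
    print(trigger.step(voice, keyword))
```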
  • Patent number: 10733375
    Abstract: Systems and processes for operating an intelligent automated assistant are provided. An example process receives natural language input and determines first and second parsing results for it. Each parsing result maps one or more properties of a domain corresponding to the natural language input to one or more words of that input. The process determines whether the second parsing result corresponds to a data item in a knowledge base and, if it does, ranks the second parsing result higher than the first. Based on the ranking, the process generates a task flow using the second parsing result and executes the task flow to provide an output based on the data item. (See the sketch after this entry.)
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: August 4, 2020
    Assignee: Apple Inc.
    Inventors: Lin Li, Deepak Muralidharan, Xiao Yang, Justine Kao, Lavanya Colinjivadi Viswanathan, Mubarak Ali Seyed Ibrahim, Ashish Garg
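    Sketch: the ranking rule in the abstract above (promote the candidate parse whose property values actually match a data item in a knowledge base, then build the task flow from the winner) is shown below with a toy knowledge base; the parse representation is an illustrative assumption.

```python
KNOWLEDGE_BASE = [
    {"domain": "music", "artist": "Daft Punk", "album": "Discovery"},
]

def matches_knowledge_base(parse):
    return any(all(item.get(k) == v for k, v in parse["properties"].items())
               for item in KNOWLEDGE_BASE)

def rank_parses(parses):
    # Stable sort: knowledge-base matches first, original order otherwise.
    return sorted(parses, key=lambda p: not matches_knowledge_base(p))

first_parse = {"domain": "music", "properties": {"artist": "Discovery"}}
second_parse = {"domain": "music", "properties": {"album": "Discovery"}}
best = rank_parses([first_parse, second_parse])[0]
print("execute task flow with:", best)    # the second parse wins
```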
  • Patent number: 10720151
    Abstract: Systems and methods are disclosed for end-to-end neural networks for speech recognition and classification, along with additional machine learning techniques that may be used with them or separately. Some embodiments comprise multiple neural networks connected directly to each other to form an end-to-end network. One embodiment comprises a convolutional network, a first fully-connected network, a recurrent network, a second fully-connected network, and an output network. Some embodiments generate speech transcriptions, and others classify speech into a number of classes. (See the sketch after this entry.)
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: July 21, 2020
    Assignee: Deepgram, Inc.
    Inventors: Adam Sypniewski, Jeff Ward, Scott Stephenson
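    Sketch: the five-block stack named in the abstract above (convolutional, fully-connected, recurrent, fully-connected, output) is shown in PyTorch. Layer sizes, the GRU choice, and per-frame class logits are illustrative assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

class EndToEndSpeechNet(nn.Module):
    def __init__(self, n_features=80, n_classes=29):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 128, kernel_size=5, padding=2)
        self.fc1 = nn.Linear(128, 256)
        self.rnn = nn.GRU(256, 256, batch_first=True, bidirectional=True)
        self.fc2 = nn.Linear(512, 256)
        self.out = nn.Linear(256, n_classes)    # characters or speech classes

    def forward(self, features):                # features: (batch, time, n_features)
        x = torch.relu(self.conv(features.transpose(1, 2))).transpose(1, 2)
        x = torch.relu(self.fc1(x))
        x, _ = self.rnn(x)
        x = torch.relu(self.fc2(x))
        return self.out(x)                      # (batch, time, n_classes) logits

model = EndToEndSpeechNet()
logits = model(torch.randn(2, 100, 80))         # 2 utterances, 100 frames each
print(logits.shape)                             # torch.Size([2, 100, 29])
```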
  • Patent number: 10714113
    Abstract: An objective of the present invention is to correct the temporal envelope shape of a decoded signal with a small amount of information and to reduce perceptible distortion.
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: July 14, 2020
    Assignee: NTT DOCOMO, INC.
    Inventors: Kei Kikuiri, Atsushi Yamaguchi
  • Patent number: 10714081
    Abstract: Systems, methods, and computer-readable media are disclosed for dynamic voice assistant interaction. Example methods may include receiving first voice data, determining a first meaning of the first voice data, conducting an auction for an audio segment to present in response to the first voice data, wherein the auction is based at least in part on the first meaning, and determining a first audio response for presentation via a speaker in response to the first voice data.
    Type: Grant
    Filed: March 7, 2016
    Date of Patent: July 14, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: John Martin Miller, Michael Lee Loritsch, Ross Tucker
  • Patent number: 10706233
    Abstract: Provided is a computer-implemented method including receiving a digital communication; analyzing the communication using natural language processing to identify any semantic reference to one or more digital artifacts; and identifying and locating the one or more digital artifacts. In some embodiments, one or more digital artifacts are not specifically identified in the digital communication; in some embodiments, they are not specifically included in it. Related apparatus, systems, techniques, and articles are also described.
    Type: Grant
    Filed: March 7, 2016
    Date of Patent: July 7, 2020
    Assignee: M-Files Oy
    Inventors: Trevor Cookson, Jayson deVries, Mostafa Karamibekr, Glenn Owen, Ramanpreet Singh, Christopher Towler
  • Patent number: 10698951
    Abstract: A method of automatically generating a digital soundtrack intended for synchronised playback with associated speech audio, executed by one or more processing devices having associated memory. The method syntactically and/or semantically analyses text representing or corresponding to the speech audio at the text-segment level to generate an emotional profile for each text segment in the context of a continuous emotion model. It then generates a soundtrack comprising one or more audio regions configured or selected for playback during corresponding speech regions of the speech audio, where the audio configured for playback in each audio region is based on, or a function of, the emotional profile of one or more text segments within the respective speech region. (See the sketch after this entry.)
    Type: Grant
    Filed: July 28, 2017
    Date of Patent: June 30, 2020
    Assignee: Booktrack Holdings Limited
    Inventors: Paul Charles Cameron, Craig Andrew Wilson, Petrus Matheus Godefridus De Vocht, Brock David Moore
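    Sketch: the core mapping in the abstract above can be illustrated by placing each text segment in a continuous valence/arousal space and choosing the nearest track for the corresponding speech region. The keyword lexicon, the two-dimensional emotion model, and the track catalogue are illustrative assumptions.

```python
LEXICON = {              # word -> (valence, arousal), both in [-1, 1]
    "happy": (0.8, 0.5), "storm": (-0.4, 0.7),
    "calm": (0.5, -0.6), "terror": (-0.9, 0.9),
}
TRACKS = [
    {"name": "bright_strings", "valence": 0.7, "arousal": 0.3},
    {"name": "low_drone", "valence": -0.7, "arousal": 0.8},
    {"name": "soft_piano", "valence": 0.4, "arousal": -0.5},
]

def emotional_profile(segment):
    points = [LEXICON[w] for w in segment.lower().split() if w in LEXICON]
    if not points:
        return (0.0, 0.0)
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))

def select_track(profile):
    return min(TRACKS, key=lambda t: (t["valence"] - profile[0]) ** 2 +
                                     (t["arousal"] - profile[1]) ** 2)

for segment in ("the storm brought terror to the village",
                "a calm and happy morning"):
    profile = emotional_profile(segment)
    print(segment, "->", profile, "->", select_track(profile)["name"])
```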
  • Patent number: 10679618
    Abstract: An approach for controlling an electronic device is provided. The approach acquires voice information and image information for setting an action to be executed according to a condition, the voice information and the image information being generated, respectively, from a voice of a user and a behavior associated with that voice. Based on the acquired voice and image information, the approach determines an event to be detected according to the condition and a function to be executed according to the action when the event is detected. The approach determines at least one detection resource to detect the determined event and, in response to the at least one determined detection resource detecting at least one event satisfying the condition, executes the function according to the action.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: June 9, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Young-chul Sohn, Gyu-tae Park, Ki-beom Lee, Jong-ryul Lee
  • Patent number: 10664665
    Abstract: Computer-implemented techniques can include receiving a selected word in a source language, obtaining one or more parts of speech for the selected word, and, for each of the one or more parts of speech, obtaining candidate translations of the selected word into a different target language, each candidate translation corresponding to a particular semantic meaning of the selected word. The techniques can include, for each semantic meaning of the selected word: obtaining an image corresponding to that semantic meaning, and compiling translation information including (i) the semantic meaning, (ii) a corresponding part of speech, (iii) the image, and (iv) at least one corresponding candidate translation. The techniques can also include outputting the translation information.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: May 26, 2020
    Assignee: Google LLC
    Inventors: Alexander Jay Cuthbert, Barak Turovsky
  • Patent number: 10656830
    Abstract: The proposed invention relates to the field of inputting simplified Chinese characters, as well as characters of other writing systems based on the Chinese character writing system. The invention offers increased efficiency and speed of character input.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: May 19, 2020
    Inventor: Boris Mikhailovich Putko