Patents Examined by Charlotte M. Baker
  • Patent number: 11216729
    Abstract: A recognition method includes: receiving a training voice or a training image; and extracting a plurality of voice features in the training voice, or extracting a plurality of image features in the training image; wherein when extracting the voice features, a specific number of voice parameters are generated according to the voice features, and the voice parameters are input into a deep neural network (DNN) to generate a recognition model. When extracting the image features, the specific number of image parameters are generated according to the image features, and the image parameters are input into the deep neural network to generate the recognition model.
    Type: Grant
    Filed: November 20, 2019
    Date of Patent: January 4, 2022
    Assignee: Nuvoton Technology Corporation
    Inventors: Woan-Shiuan Chien, Tzu-Lan Shen
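The shared-input idea in this abstract — reduce either modality to a fixed number of parameters so one network can consume both — can be sketched as below. The parameter count, featurizers, and the stand-in "model" are illustrative assumptions, not the patent's actual DNN.

```python
# Sketch: voice or image features are reduced to a fixed-size parameter vector
# so a single recognition model can consume either modality.
# All names and values here are illustrative, not from the patent.

FIXED_SIZE = 8  # the "specific number" of parameters (assumed value)

def to_fixed_parameters(features, size=FIXED_SIZE):
    """Pad with zeros or truncate so every modality yields `size` parameters."""
    params = list(features)[:size]
    params += [0.0] * (size - len(params))
    return params

def recognition_model(params):
    """Stand-in for the DNN: any model that accepts the fixed-size vector."""
    score = sum(p * w for p, w in zip(params, range(1, len(params) + 1)))
    return "match" if score > 0 else "no-match"

voice_features = [0.2, 0.5, 0.1]            # e.g. MFCC-like values
image_features = [0.9, 0.3, 0.4, 0.7, 0.2]  # e.g. pooled activations

assert len(to_fixed_parameters(voice_features)) == FIXED_SIZE
assert len(to_fixed_parameters(image_features)) == FIXED_SIZE
```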
  • Patent number: 11216497
    Abstract: The disclosure relates to an artificial intelligence (AI) system for simulating human brain functions such as perception and judgement by using a machine learning algorithm such as deep learning, and an application thereof. An operation method of an electronic device comprises the steps of: receiving an input message; determining a user's language information included in the input message; determining language information for a response corresponding to the user's language information; and outputting the response on the basis of the language information for the response.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: January 4, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Pawel Bujnowski, Dawid Wisniewski, Hee Sik Jeon, Joanna Ewa Marhula, Katarzyna Beksa, Maciej Zembrzuski
  • Patent number: 11210462
    Abstract: Systems and methods are described for processing voice input to detect and remove voice recognition errors in the context of a product attribute query. Spoken-word input may be processed to tentatively identify a query regarding a product and an attribute. A hierarchical product catalog is then used to identify categories that include the identified product, and an affinity score is determined for each category to indicate the relative strength of the relationship between the category and the attribute. The affinity score for each category is determined based on historical questions submitted to a question and answer service with regard to other products in the category. An affinity score for the product-attribute pairing is then determined based on a weighted average of the affinity scores for the product categories, and the affinity score is used to determine whether the question is valid and the voice input has been correctly processed.
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: December 28, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Ayan Sircar, Abhishek Mehrotra, Aniruddha Deshpande, Padmini Rajanna, Pawan Kaunth, Vaibhav Jain
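The weighted-average affinity check described above can be sketched in a few lines. The weights, threshold, and category scores are assumed values; per the abstract, the per-category scores would come from historical Q&A data.

```python
# Illustrative sketch of the affinity-score check: each catalog category that
# contains the product has an attribute-affinity score; the product-attribute
# score is their weighted average. Weights and threshold are assumptions.

def product_attribute_affinity(category_scores):
    """category_scores: list of (affinity, weight) pairs, one per category."""
    total_weight = sum(w for _, w in category_scores)
    if total_weight == 0:
        return 0.0
    return sum(a * w for a, w in category_scores) / total_weight

def query_is_valid(category_scores, threshold=0.5):
    """Treat the tentatively recognized query as valid when the weighted
    affinity clears the threshold; otherwise flag a likely recognition error."""
    return product_attribute_affinity(category_scores) >= threshold

# e.g. "battery life" for a product in categories electronics (0.9) and gifts (0.1)
scores = [(0.9, 3.0), (0.1, 1.0)]
assert query_is_valid(scores)  # weighted average 0.7 clears the threshold
```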
  • Patent number: 11203362
    Abstract: Enclosed are embodiments for scoring one or more trajectories of a vehicle through a given traffic scenario using a machine learning model that predicts reasonableness scores for the trajectories. In an embodiment, human annotators, referred to as a “reasonable crowd,” are presented with renderings of two or more vehicle trajectories traversing the same or different traffic scenarios. The annotators are asked to indicate their preference for one trajectory over the other(s). Inputs collected from the human annotators are used to train the machine learning model to predict reasonableness scores for one or more trajectories for a given traffic scenario. These predicted scores can be used to rank trajectories generated by a route planner, to compare AV software stacks, or in any other application that could benefit from a machine learning model that scores vehicle trajectories.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: December 21, 2021
    Assignee: Motional AD LLC
    Inventors: Oscar Olof Beijbom, Bassam Helou, Radboud Duintjer Tebbens, Calin Belta, Anne Collin, Tichakorn Wongpiromsarn
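Learning a score from pairwise crowd preferences can be sketched with a Bradley-Terry-style logistic update, as below. The linear model, the toy trajectory features, and the learning rate are illustrative assumptions, not the patent's actual method.

```python
import math

# Minimal sketch of learning from pairwise "reasonable crowd" preferences:
# each trajectory has a feature vector; a linear model scores it; annotator
# (winner, loser) pairs drive a Bradley-Terry-style logistic update.

def score(weights, features):
    return sum(w * f for w, f in zip(weights, features))

def train_pairwise(pairs, n_features, lr=0.1, epochs=200):
    """pairs: list of (preferred_features, other_features) tuples."""
    weights = [0.0] * n_features
    for _ in range(epochs):
        for winner, loser in pairs:
            margin = score(weights, winner) - score(weights, loser)
            grad = 1.0 / (1.0 + math.exp(margin))  # gradient of -log sigmoid
            for i in range(n_features):
                weights[i] += lr * grad * (winner[i] - loser[i])
    return weights

# toy features [max_jerk, min_clearance]; annotators prefer smooth, safe paths
pairs = [([0.1, 2.0], [0.9, 0.5]), ([0.2, 1.5], [0.8, 0.3])]
w = train_pairwise(pairs, 2)
assert score(w, [0.1, 2.0]) > score(w, [0.9, 0.5])  # preferred ranks higher
```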
  • Patent number: 11200468
    Abstract: The present subject matter provides various technical solutions to technical problems facing advanced driver assistance systems (ADAS) and autonomous vehicle (AV) systems. In particular, disclosed embodiments provide systems and methods that may use cameras and other sensors to detect objects and events and identify them as predefined signal classifiers, such as detecting and identifying a red stoplight. These signal classifiers are used within ADAS and AV systems to control the vehicle or alert a vehicle operator based on the type of signal. These ADAS and AV systems may provide full vehicle operation without requiring human input. The embodiments disclosed herein provide systems and methods that can be used as part of or in combination with ADAS and AV systems.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: December 14, 2021
    Assignee: Mobileye Vision Technologies Ltd.
    Inventors: Eran Malach, Yaakov Shambik, Jacob Bentolila, Idan Geller

  • Patent number: 11200676
    Abstract: Systems and methods of improving alignment in dense prediction neural networks are disclosed. A method includes identifying, at a computing system, an input data set and a label data set in which one or more first parts of the input data set correspond to a label. The computing system processes the input data set using a neural network to generate a predicted label data set that identifies one or more second parts of the input data set predicted to correspond to the label. The computing system determines an alignment result using the predicted label data set and the label data set, and determines a transformation of the one or more first parts (a shift, rotation, scaling, and/or deformation) based on the alignment result. The computing system computes a loss score using the transformation, the label data set, and the predicted label data set, and updates the neural network based on the loss score.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: December 14, 2021
    Assignee: VERILY LIFE SCIENCES LLC
    Inventors: Cheng-Hsun Wu, Ali Behrooz
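The align-then-score idea can be illustrated on 1-D masks: find the transformation of the labels that best matches the prediction, then compute the loss against the transformed labels. This toy covers only integer shifts; the abstract's rotation, scaling, and deformation are omitted.

```python
# Sketch of the alignment idea on 1-D masks (assumed simplification).

def shift(seq, k):
    """Shift right by k (left if negative), padding with zeros."""
    n = len(seq)
    out = [0] * n
    for i, v in enumerate(seq):
        if 0 <= i + k < n:
            out[i + k] = v
    return out

def best_alignment(label, pred, max_shift=3):
    """Return (best_shift, aligned_label) maximizing overlap with pred."""
    best = max(range(-max_shift, max_shift + 1),
               key=lambda k: sum(a * b for a, b in zip(shift(label, k), pred)))
    return best, shift(label, best)

def loss(aligned_label, pred):
    return sum((a - p) ** 2 for a, p in zip(aligned_label, pred))

label = [0, 1, 1, 0, 0, 0]
pred  = [0, 0, 0, 1, 1, 0]   # same structure, annotated two pixels off
k, aligned = best_alignment(label, pred)
assert k == 2 and loss(aligned, pred) == 0  # misalignment no longer penalized
```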
  • Patent number: 11188775
    Abstract: Using sensor hubs for tracking an object. One system includes a first sensor hub and a second sensor hub. The first sensor hub includes a first audio sensor and a first electronic processor. In response to determining that one or more words captured by the first audio sensor are included in a list of trigger words, the first electronic processor generates a first voice signature of a voice of an unidentified person, generates a tracking profile, and transmits the tracking profile to the second sensor hub. The second sensor hub receives the tracking profile and includes a second electronic processor, a second audio sensor, and a camera. In response to determining that a second voice signature matches the first voice signature, the second electronic processor is configured to determine a visual characteristic of the unidentified person based on an image from the camera and update the tracking profile.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: November 30, 2021
    Assignee: MOTOROLA SOLUTIONS, INC.
    Inventors: Shervin Sabripour, Goktug Duman, John B. Preston, Belfug Sener, Bert Van Der Zaag
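The two-hub handoff can be sketched as follows. The signature format (a plain feature vector), the trigger-word list, and the cosine-similarity threshold are all assumptions for illustration.

```python
import math

# Toy sketch of the two-hub flow: hub 1 hears a trigger word, builds a voice
# signature and a tracking profile; hub 2 matches a new signature against the
# profile and, on a match, attaches a visual characteristic from its camera.

TRIGGER_WORDS = {"help", "gun", "fire"}  # assumed list

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hub1_process(words, voice_signature):
    """Create a tracking profile only when a trigger word is heard."""
    if TRIGGER_WORDS & set(words):
        return {"signature": voice_signature, "visual": None}
    return None

def hub2_process(profile, heard_signature, camera_visual, threshold=0.9):
    """On a signature match, enrich the profile with a visual characteristic."""
    if profile and cosine(profile["signature"], heard_signature) >= threshold:
        profile["visual"] = camera_visual
    return profile

profile = hub1_process(["someone", "has", "a", "gun"], [0.9, 0.1, 0.4])
profile = hub2_process(profile, [0.88, 0.12, 0.41], "red jacket")
assert profile["visual"] == "red jacket"
```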
  • Patent number: 11188746
    Abstract: Disclosed are systems and methods for extracting content based on image analysis. A method may include receiving content including at least an image depicting a coupon; converting the received content into a larger image including the image depicting the coupon; determining, utilizing one or more neural networks, the image depicting the coupon within the larger image, wherein determining the image depicting the coupon comprises: segmenting a foreground bounding box including the image depicting the coupon from background image portions of the image; cropping the larger image based on the bounding box, wherein the cropped image consists of the image depicting the coupon; determining text included in the cropped image; and extracting information included in the coupon based on the determined text.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: November 30, 2021
    Assignee: Verizon Media Inc.
    Inventors: Umang Patel, Sridharan Palaniappan, Rofaida Abdelaal, Chun-Han Yao
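The crop-then-read step can be sketched as below. The "image" is a list of character rows and the recognizer is a stand-in; a real pipeline would use a segmentation network for the bounding box and an OCR engine for the text.

```python
# Sketch of the crop-then-read step (stand-in OCR, assumed coordinates).

def crop(image, box):
    """box = (top, left, bottom, right), exclusive bottom/right."""
    top, left, bottom, right = box
    return [row[left:right] for row in image[top:bottom]]

def recognize_text(cropped):
    """Stand-in OCR: here the 'pixels' are already characters."""
    return "\n".join("".join(row) for row in cropped)

def extract_coupon_info(text):
    """Toy extraction: pull the percentage discount out of the OCR text."""
    for token in text.split():
        if token.endswith("%"):
            return {"discount": token}
    return {}

page = [list("............"),
        list("..20% OFF!.."),
        list("............")]
box = (1, 2, 2, 10)  # detected foreground region (assumed)
text = recognize_text(crop(page, box))
assert extract_coupon_info(text) == {"discount": "20%"}
```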
  • Patent number: 11183188
    Abstract: Various embodiments discussed herein enable applications to seamlessly contribute to executing voice commands of users via voice assistant functionality. In response to receiving a user request to open an application or web page, the application can request and responsively receive a voice assistant runtime component along with the application or web page. The application, using a particular universal application interface component can compile or interpret the voice assistant runtime component from a source code format to an intermediate code format. In response to the application or web page being rendered and the detection of a key word or phrase, the application can activate voice assistant command execution functionality. The user can issue a voice command after which the application along with specific services can help execute the voice command.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: November 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Rene Huangtian Brandel, Jason Eric Voldseth, Biao Kuang
  • Patent number: 11182433
    Abstract: A question and answer (Q&A) system is enhanced to support natural language queries into any document format regardless of where the underlying documents are stored. The Q&A system may be implemented “as-a-service,” e.g., a network-accessible information retrieval platform. Preferably, the techniques herein enable a user to quickly and reliably locate a document, page, chart, or data point that he or she is looking for across many different datasets. This provides for a unified view of all of the user's (or, more generally, an enterprise's) information assets (such as Adobe® PDFs, Microsoft® Word documents, Microsoft Excel spreadsheets, Microsoft PowerPoint presentations, Google Docs, scanned materials, etc.), and to be able to deeply search all of these sources for the right document, page, sheet, chart, or even answer to a question.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: November 23, 2021
    Assignee: Searchable AI Corp
    Inventors: Aaron Sisto, Nick Martin, Brian Shin, Hung Nguyen
  • Patent number: 11183191
    Abstract: An information processing apparatus includes a processor. The processor is configured to identify, from a character string recognition result for a form, a form feature that indicates at least a field in which the form is used or an attribute of a person filling out the form, accumulate past correction tendencies for character string recognition results for forms having respective identified form features, obtain a correction tendency for a form having a form feature that is the same as the identified form feature from among the accumulated correction tendencies, and perform control to display a candidate correct expression for the character string recognition result for the form in accordance with the obtained correction tendency.
    Type: Grant
    Filed: October 30, 2019
    Date of Patent: November 23, 2021
    Assignee: FUJIFILM Business Innovation Corp.
    Inventor: Mami Iwanari
  • Patent number: 11182922
    Abstract: Provided is an AI apparatus for determining a location of a user including: a communication unit configured to communicate with at least one external AI apparatus obtaining first image data and first sound data; a memory configured to store location information on the at least one external AI apparatus and the AI apparatus; a camera configured to obtain second image data; a microphone configured to obtain second sound data; and a processor configured to: generate first recognition information on the user based on the second image data; generate second recognition information on the user based on the second sound data; obtain, from the at least one external AI apparatus, third recognition information on the user generated based on the first image data and fourth recognition information on the user generated based on the first sound data; determine the user's location based on the location information, the first recognition information, and the third recognition information; and calibrate the determined user's location.
    Type: Grant
    Filed: June 5, 2019
    Date of Patent: November 23, 2021
    Assignee: LG ELECTRONICS INC.
    Inventor: Sihyuk Yi
  • Patent number: 11176462
    Abstract: A system and method for computationally tractable prediction of protein-ligand interactions and their bioactivity. According to an embodiment, the system and method comprise two machine learning processing streams and concatenating their outputs. One of the machine learning streams is trained using information about ligands and their bioactivity interactions with proteins. The other machine learning stream is trained using information about proteins and their bioactivity interactions with ligands. After the machine learning algorithms for each stream have been trained, they can be used to predict the bioactivity of a given protein-ligand pair by inputting a specified ligand into the ligand processing stream and a specified protein into the protein processing stream. The machine learning algorithms of each stream predict possible protein-ligand bioactivity interactions based on the training data.
    Type: Grant
    Filed: February 9, 2021
    Date of Patent: November 16, 2021
    Assignee: Ro5 Inc.
    Inventors: Orestis Bastas, Alwin Bucher, Aurimas Pabrinkis, Mikhail Demtchenko, Zeyu Yang, Cooper Stergis Jamieson, Žygimantas Jočys, Roy Tal, Charles Dazler Knuff
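The two-stream architecture can be sketched as two encoders whose outputs are concatenated and fed to a prediction head. The trivial hand-rolled featurizers and the linear head below are stand-ins for the trained streams described in the abstract.

```python
# Minimal two-stream sketch: one encoder embeds the ligand, another embeds the
# protein; the embeddings are concatenated and a head predicts bioactivity.

def ligand_encoder(smiles):
    """Toy featurizer: counts of a few atom symbols in a SMILES string."""
    return [smiles.count("C"), smiles.count("N"), smiles.count("O")]

def protein_encoder(sequence):
    """Toy featurizer: counts of a few residues in the protein sequence."""
    return [sequence.count("G"), sequence.count("L"), sequence.count("K")]

def predict_bioactivity(smiles, sequence, head_weights):
    """Concatenate both streams and apply a linear head (weights assumed)."""
    combined = ligand_encoder(smiles) + protein_encoder(sequence)
    return sum(w * x for w, x in zip(head_weights, combined))

weights = [0.2, 0.5, 0.3, 0.1, 0.1, 0.4]  # illustrative learned parameters
bioactivity = predict_bioactivity("CCO", "GLKKG", weights)
assert bioactivity > 0  # some predicted interaction for this toy pair
```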
  • Patent number: 11170210
    Abstract: A gesture identification method includes: performing gesture information detection on an image by means of a neural network, to obtain a potential hand region, a potential gesture category and a potential gesture category probability in the image, the potential gesture category including a gesture-free category and at least one gesture category; and if the obtained potential gesture category with the maximum probability is the gesture-free category, not outputting position information of the potential hand region of the image; or otherwise, outputting the position information of the potential hand region of the image and the obtained potential gesture category with the maximum probability.
    Type: Grant
    Filed: September 29, 2019
    Date of Patent: November 9, 2021
    Assignee: Beijing SenseTime Technology Development Co., Ltd.
    Inventors: Quan Wang, Wentao Liu, Chen Qian
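The output rule in this abstract is a simple decision: take the most probable potential gesture category; if it is the gesture-free category, suppress the hand position, otherwise emit both. Only that decision logic is sketched here; the detector itself is mocked.

```python
# Sketch of the gesture-free suppression rule (detector mocked).

GESTURE_FREE = "none"

def gesture_output(region, category_probs):
    """region: (x, y, w, h); category_probs: {category: probability}."""
    best = max(category_probs, key=category_probs.get)
    if best == GESTURE_FREE:
        return None  # no position information is output
    return {"region": region, "gesture": best}

probs = {"none": 0.1, "fist": 0.7, "palm": 0.2}
assert gesture_output((10, 20, 40, 40), probs)["gesture"] == "fist"
assert gesture_output((10, 20, 40, 40), {"none": 0.9, "fist": 0.1}) is None
```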
  • Patent number: 11163986
    Abstract: Some embodiments provide a method for training a machine-trained (MT) network that processes inputs using network parameters. The method propagates a set of input training items through the MT network to generate a set of output values. The set of input training items comprises multiple training items for each of multiple categories. The method identifies multiple training item groupings in the set of input training items. Each grouping includes at least two training items in a first category and at least one training item in a second category. The method calculates a value of a loss function as a summation of individual loss functions for each of the identified training item groupings. The individual loss function for each particular training item grouping is based on the output values for the training items of the grouping. The method trains the network parameters using the calculated loss function value.
    Type: Grant
    Filed: April 17, 2020
    Date of Patent: November 2, 2021
    Assignee: PERCEIVE CORPORATION
    Inventors: Eric A. Sather, Steven L. Teig, Andrew C. Mihal
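The grouping loss — each grouping holding two items from one category and one from another, with the total loss a summation over groupings — can be sketched with a triplet-style margin. That margin form is one plausible reading of the abstract, not necessarily the patent's exact function.

```python
# Sketch of the grouping loss (triplet-style margin, assumed form).

def individual_loss(out_a, out_b, out_other, margin=1.0):
    """out_a/out_b share a category; out_other is from a different one.
    Pushes same-category outputs together and the odd one away."""
    same_gap = abs(out_a - out_b)
    diff_gap = abs(out_a - out_other)
    return max(0.0, same_gap - diff_gap + margin)

def total_loss(groupings):
    """Summation of individual losses over all identified groupings."""
    return sum(individual_loss(a, b, o) for a, b, o in groupings)

# output values for (cat1 item, cat1 item, cat2 item) groupings
groupings = [(0.9, 0.8, 0.1), (0.7, 0.75, 0.2)]
assert total_loss(groupings) > 0.0  # network not yet fully separated
```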
  • Patent number: 11164045
    Abstract: Disclosed herein are systems, methods, and software for providing a platform for complex image data analysis using artificial intelligence and/or machine learning algorithms. One or more subsystems allow for capturing user input, such as eye gaze and dictation, for automated generation of findings. Additional features include quality metric tracking and feedback, a worklist management system, and communications queueing.
    Type: Grant
    Filed: May 6, 2021
    Date of Patent: November 2, 2021
    Assignee: SIRONA MEDICAL, INC.
    Inventors: David Seungwon Paik, Vernon Marshall, Mark D. Longo, Cameron Andrews, Kojo Worai Osei, Berk Norman, Ankit Goyal
  • Patent number: 11151385
    Abstract: A method of detecting deception in an Audio-Video response of a user, using a server in a distributed computing architecture, the method including: enabling an Audio-Video connection with a user device upon receiving a request from a user; obtaining, from the user device, an Audio-Video response of the user corresponding to a first set of questions that are provided to the user by the server; extracting audio signals and video signals from the Audio-Video response; detecting an activity of the user by determining a plurality of Natural Language Processing (NLP) features from the extracted audio signals by (i) performing a speech-to-text translation and (ii) extracting the plurality of NLP features from the translated text, and determining a plurality of speech features from the extracted audio signals by (i) splitting the extracted audio signals into a plurality of short interval audio signals and (ii) extracting the plurality of speech features from the plurality of short interval audio signals.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: October 19, 2021
    Assignee: RTScaleAI Inc
    Inventors: Vivek Iyer, Peter Walker
  • Patent number: 11151998
    Abstract: An artificial intelligence device according to an embodiment of the present invention may include a microphone configured to receive voice; a sound output unit configured to output sound; an artificial intelligence unit configured to acquire context information of a target, based on at least one of an image received from a camera disposed outside and a voice received from the microphone, generate feedback information according to the acquired context information, and determine output volume intensity of the generated feedback information; and a controller configured to control the sound output unit to output the feedback information at the determined output volume intensity.
    Type: Grant
    Filed: June 13, 2017
    Date of Patent: October 19, 2021
    Assignee: LG ELECTRONICS INC.
    Inventors: Jongwoo Han, Hangil Choi, Yoojin Choi
  • Patent number: 11144728
    Abstract: Provided is a computer-implemented method for inter-sententially determining a semantic relationship between a first entity and a second entity in a natural language document, comprising at least the steps of: generating a first dependency parse tree, DPT, for a first origin sentence of the document which comprises the first entity, wherein each DPT comprises at least a root node; generating a second DPT for a second origin sentence of the document which mentions the second entity; linking the root nodes of the first DPT and the second DPT so as to create a chain of words, COW; determining for each word in the COW a subtree; generating for each word in the COW a subtree embedding vector cw which is based at least on word embedding vectors xw of the words of the subtree; generating a representation vector pw for each word in the COW; and classifying, using a recurrent neural network, the semantic relationship between the first entity and the second entity, based on the input representation vectors pw.
    Type: Grant
    Filed: July 19, 2019
    Date of Patent: October 12, 2021
    Assignee: SIEMENS AKTIENGESELLSCHAFT
    Inventors: Bernt Andrassy, Pankaj Gupta, Subburam Rajaram, Thomas Runkler
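The inter-sentence pipeline can be illustrated in miniature: link the roots of two dependency parse trees into a chain of words (COW), then give every chain word a subtree embedding built from the word vectors in its subtree. The parses and vectors below are hand-made; a real system would obtain them from a dependency parser and pretrained embeddings, and the abstract leaves the exact aggregation unspecified (a mean is used here as one simple choice).

```python
# Toy sketch of subtree embeddings over a chain of words (COW).

def subtree_words(tree, word):
    """Collect `word` plus all its descendants in the parse tree."""
    collected = [word]
    for child in tree.get(word, []):
        collected.extend(subtree_words(tree, child))
    return collected

def subtree_embedding(tree, word, vectors):
    """c_w: mean of the word embedding vectors x_w over the subtree of `word`."""
    words = subtree_words(tree, word)
    dims = len(next(iter(vectors.values())))
    return [sum(vectors[w][d] for w in words) / len(words) for d in range(dims)]

# two sentences with roots "acquired" and "develops", linked root-to-root
tree = {"acquired": ["Siemens", "startup", "develops"],
        "develops": ["robots"]}
vectors = {"acquired": [1.0, 0.0], "Siemens": [0.0, 1.0],
           "startup": [0.5, 0.5], "develops": [1.0, 1.0], "robots": [0.0, 0.0]}
chain = ["acquired", "develops"]  # the COW built by linking the two roots
embeddings = {w: subtree_embedding(tree, w, vectors) for w in chain}
assert embeddings["develops"] == [0.5, 0.5]  # mean of "develops" and "robots"
```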
  • Patent number: 11144789
    Abstract: Provided is a model parameter learning device and the like capable of learning model parameters such that the influence of a noise in input data can be suppressed. A model parameter learning device (1) alternately carries out first learning processing for learning model parameters W1, b1, W2 and b2 such that an error between data Xout and data Xorg is minimized, and second learning processing for learning model parameters W1, b1, Wm, bm, Wq and bq such that a loss function LAE is minimized.
    Type: Grant
    Filed: September 18, 2019
    Date of Patent: October 12, 2021
    Assignees: HONDA MOTOR CO., LTD., KYOTO UNIVERSITY
    Inventors: Kosuke Nakanishi, Yuji Yasui, Wataru Sasaki, Shin Ishii
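The alternating scheme from the abstract — a first learning phase minimizing a reconstruction error, a second minimizing the loss function LAE, repeated in alternation — can be sketched with toy scalar losses. The quadratic losses below stand in for the patent's W1/b1/W2/b2 and Wm/bm/Wq/bq parameter groups.

```python
# Sketch of alternating two-phase training (toy quadratic losses, assumed).

def alternating_train(theta_recon, theta_ae, epochs=20, lr=0.2):
    """Each epoch runs both phases; each phase takes one gradient step on its
    own loss (theta**2) while the other parameter group is held fixed."""
    for _ in range(epochs):
        theta_recon -= lr * 2 * theta_recon  # phase 1: reconstruction error
        theta_ae -= lr * 2 * theta_ae        # phase 2: loss function L_AE
    return theta_recon, theta_ae

final_recon, final_ae = alternating_train(4.0, 2.0)
assert abs(final_recon) < 1e-3 and abs(final_ae) < 1e-3  # both losses shrink
```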