Patents Examined by Charlotte M. Baker
  • Patent number: 11227187
    Abstract: Artificial intelligence systems are created for end users based on raw data received from the end users or obtained from any source. Training, validation and testing data is maintained securely and subject to authentication prior to use. A machine learning model is selected for providing solutions of any type or form and trained, verified and tested by an artificial intelligence engine using such data. A trained model is distributed to end users, and feedback regarding the performance of the trained model is returned to the artificial intelligence engine, which updates the model on account of such feedback before redistributing the model to the end users. When an end user provides data to an artificial intelligence engine and requests a trained model, the end user monitors progress of the training of the model, along with the performance of the model in providing quality artificial intelligence solutions, via one or more dashboards.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: January 18, 2022
    Assignee: Augustus Intelligence Inc.
    Inventor: Pascal Christian Weinberger
  • Patent number: 11227179
    Abstract: An apparatus, method, system and computer readable medium for video tracking. An exemplar crop is selected to be tracked in an initial frame of a video. Bayesian optimization is applied with each subsequent frame of the video by building a surrogate model of an objective function using Gaussian Process Regression (GPR) based on similarity scores of candidate crops collected from a search space in a current frame of the video. A next candidate crop in the search space is determined using an acquisition function. The next candidate crop is compared to the exemplar crop using a Siamese neural network. Comparisons of new candidate crops to the exemplar crop are made using the Siamese neural network until the exemplar crop has been found in the current frame. The new candidate crops are selected based on an updated surrogate model.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: January 18, 2022
    Assignee: Intel Corporation
    Inventors: Anthony Rhodes, Manan Goel
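The search loop this abstract describes can be illustrated in one dimension: a from-scratch Gaussian Process Regression surrogate is fit to similarity scores of candidates tried so far, and an upper-confidence-bound acquisition function picks the next candidate. The `similarity` function below is a hypothetical stand-in for the Siamese-network score, and all constants (lengthscale, noise, UCB weight) are assumptions, not values from the patent.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gpr_posterior(x_train, y_train, x_query, noise=1e-4):
    # Standard GPR posterior mean and standard deviation at the query points.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y_train
    var = 1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks)
    return mu, np.sqrt(np.clip(var, 0.0, None))

def similarity(x):
    # Hypothetical stand-in for the Siamese-network similarity, peaked at 0.6.
    return np.exp(-((x - 0.6) ** 2) / 0.02)

rng = np.random.default_rng(0)
xs = list(rng.uniform(0, 1, 3))          # initial candidate crop positions
ys = [similarity(x) for x in xs]
grid = np.linspace(0, 1, 201)            # search space in the current frame

for _ in range(10):
    mu, sd = gpr_posterior(np.array(xs), np.array(ys), grid)
    ucb = mu + 2.0 * sd                  # acquisition function (UCB)
    x_next = grid[np.argmax(ucb)]        # next candidate crop to evaluate
    xs.append(float(x_next))
    ys.append(similarity(x_next))

best = xs[int(np.argmax(ys))]            # best-matching crop position found
```

With each new score the surrogate is refit, so the acquisition function concentrates candidates near the peak of the similarity landscape, which is the role the updated surrogate model plays in the claimed tracker.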
  • Patent number: 11227183
    Abstract: A data extraction and expansion system receives documents with data to be processed, extracts a set of a specific type of entities from the received documents, expands the set of entities by retrieving additional entities of the specific type from an ontology and other external data sources to improve the match between the received documents. The ontology includes data regarding entities and relationships between entities. The ontology is built by extracting the entity and relationship information from external data sources and can be constantly updated. If the additional entities to expand the set of entities cannot be retrieved from the ontology then a real-time search of the external data sources is executed to retrieve the additional entities from the external data sources.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: January 18, 2022
    Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
    Inventors: Colin Connors, Ditty Mathew, Emmanuel Munguia Tapia, Anwitha Paruchuri, Anshuma Chandak, Tsunghan Wu
  • Patent number: 11222247
    Abstract: Computer-implemented techniques for semantic image retrieval. According to one technique, digital images are classified into N number of categories based on their visual content. The classification provides a set of N-dimensional image vectors for the digital images. Each image vector contains up to N number of probability values for up to N number of corresponding categories. An N-dimensional image match vector is generated that projects an input keyword query into the vector space of the set of image vectors by computing the vector similarities between a word vector for the input query and a word vector for each of the N number of categories. Vector similarities between the image match vectors and the set of image vectors can be computed to determine images semantically relevant to the input query.
    Type: Grant
    Filed: August 21, 2020
    Date of Patent: January 11, 2022
    Assignee: Dropbox, Inc.
    Inventors: Thomas Berg, Peter Neil Belhumeur
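The match-vector construction above can be sketched with toy data: word vectors, category names, and probability values below are all hypothetical, chosen only to show how the query is projected into the classifier's category space and compared against per-image probability vectors.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy word vectors for N = 3 categories (hypothetical values).
category_words = {"dog": np.array([1.0, 0.1, 0.0]),
                  "cat": np.array([0.9, 0.2, 0.1]),
                  "car": np.array([0.0, 0.1, 1.0])}
query_vec = np.array([0.95, 0.15, 0.05])   # e.g. a word vector for "puppy"

# N-dimensional match vector: query similarity to each category's word vector.
match_vec = np.array([cosine(query_vec, w) for w in category_words.values()])

# Image vectors: per-image classifier probabilities over the N categories.
image_vecs = np.array([[0.80, 0.15, 0.05],   # mostly "dog"
                       [0.05, 0.05, 0.90]])  # mostly "car"

# Rank images by similarity between the match vector and each image vector.
scores = [cosine(match_vec, iv) for iv in image_vecs]
ranking = np.argsort(scores)[::-1]           # dog-like image ranks first
```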
  • Patent number: 11216497
    Abstract: The disclosure relates to an artificial intelligence (AI) system for simulating human brain functions such as perception and judgement by using a machine learning algorithm such as deep learning, and an application thereof. An operation method of an electronic device comprises the steps of: receiving an input message; determining a user's language information included in the input message; determining language information for a response corresponding to the user's language information; and outputting the response on the basis of the language information for the response.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: January 4, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Pawel Bujnowski, Dawid Wisniewski, Hee Sik Jeon, Joanna Ewa Marhula, Katarzyna Beksa, Maciej Zembrzuski
  • Patent number: 11216729
    Abstract: A recognition method includes: receiving a training voice or a training image; and extracting a plurality of voice features in the training voice, or extracting a plurality of image features in the training image; wherein when extracting the voice features, a specific number of voice parameters are generated according to the voice features, and the voice parameters are input into a deep neural network (DNN) to generate a recognition model. When extracting the image features, the specific number of image parameters are generated according to the image features, and the image parameters are input into the deep neural network to generate the recognition model.
    Type: Grant
    Filed: November 20, 2019
    Date of Patent: January 4, 2022
    Assignee: Nuvoton Technology Corporation
    Inventors: Woan-Shiuan Chien, Tzu-Lan Shen
  • Patent number: 11210462
    Abstract: Systems and methods are described for processing voice input to detect and remove voice recognition errors in the context of a product attribute query. Spoken-word input may be processed to tentatively identify a query regarding a product and an attribute. A hierarchical product catalog is then used to identify categories that include the identified product, and an affinity score is determined for each category to indicate the relative strength of the relationship between the category and the attribute. The affinity score for each category is determined based on historical questions submitted to a question and answer service with regard to other products in the category. An affinity score for the product-attribute pairing is then determined based on a weighted average of the affinity scores for the product categories, and the affinity score is used to determine whether the question is valid and the voice input has been correctly processed.
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: December 28, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Ayan Sircar, Abhishek Mehrotra, Aniruddha Deshpande, Padmini Rajanna, Pawan Kaunth, Vaibhav Jain
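The final scoring step described above reduces to a weighted average over category affinities. The sketch below uses hypothetical category names, affinities, weights, and validity threshold; none of these values come from the patent.

```python
# Hypothetical per-category affinity scores (derived from historical Q&A
# activity) and weights reflecting each category's relevance to the product.
categories = {
    "Electronics > Batteries": {"affinity": 0.9, "weight": 3.0},
    "Electronics":             {"affinity": 0.6, "weight": 1.0},
}

def product_attribute_affinity(cats):
    # Weighted average of category affinity scores.
    total_w = sum(c["weight"] for c in cats.values())
    return sum(c["affinity"] * c["weight"] for c in cats.values()) / total_w

score = product_attribute_affinity(categories)
VALID_THRESHOLD = 0.5    # assumed cutoff for accepting the parsed query
is_valid = score >= VALID_THRESHOLD
```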
  • Patent number: 11203362
    Abstract: Enclosed are embodiments for scoring one or more trajectories of a vehicle through a given traffic scenario using a machine learning model that predicts reasonableness scores for the trajectories. In an embodiment, human annotators, referred to as a &#8220;reasonable crowd,&#8221; are presented with renderings of two or more vehicle trajectories traversing through the same or different traffic scenarios. The annotators are asked to indicate their preference for one trajectory over the other(s). Inputs collected from the human annotators are used to train the machine learning model to predict reasonableness scores for one or more trajectories for a given traffic scenario. These predicted scores can be used to rank trajectories generated by a route planner, to compare AV software stacks, or by any other application that could benefit from a machine learning model that scores vehicle trajectories.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: December 21, 2021
    Assignee: Motional AD LLC
    Inventors: Oscar Olof Beijbom, Bassam Helou, Radboud Duintjer Tebbens, Calin Belta, Anne Collin, Tichakorn Wongpiromsarn
  • Patent number: 11200468
    Abstract: The present subject matter provides various technical solutions to technical problems facing advanced driver assistance systems (ADAS) and autonomous vehicle (AV) systems. In particular, disclosed embodiments provide systems and methods that may use cameras and other sensors to detect objects and events and identify them as predefined signal classifiers, such as detecting and identifying a red stoplight. These signal classifiers are used within ADAS and AV systems to control the vehicle or alert a vehicle operator based on the type of signal. These ADAS and AV systems may provide full vehicle operation without requiring human input. The embodiments disclosed herein provide systems and methods that can be used as part of or in combination with ADAS and AV systems.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: December 14, 2021
    Assignee: Mobileye Vision Technologies Ltd.
    Inventors: Eran Malach, Yaakov Shambik, Jacob Bentolila, Idan Geller
  • Patent number: 11200676
    Abstract: Systems and methods of improving alignment in dense prediction neural networks are disclosed. A method includes identifying, at a computing system, an input data set and a label data set with one or more first parts of the input data set corresponding to a label. The computing system processes the input data set using a neural network to generate a predicted label data set that identifies one or more second parts of the input data set predicted to correspond to the label. The computing system determines an alignment result using the predicted label data set and the label data set and a transformation of the one or more first parts, including a shift, rotation, scaling, and/or deformation, based on the alignment result. The computing system computes a loss score using the transformation, label data and the predicted label data set and updates the neural network based on the loss score.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: December 14, 2021
    Assignee: VERILY LIFE SCIENCES LLC
    Inventors: Cheng-Hsun Wu, Ali Behrooz
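The shift case of the alignment step described above can be illustrated with a small grid search: find the integer translation that best aligns the label mask to the predicted mask, apply it, then compute the loss on the aligned pair. The masks, search radius, and mean-squared loss below are illustrative assumptions, not the patent's actual procedure.

```python
import numpy as np

def best_shift(label, pred, max_shift=3):
    # Grid-search the integer (dy, dx) shift that maximizes overlap
    # between the shifted label mask and the predicted mask.
    best, best_score = (0, 0), -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = float(np.sum(np.roll(label, (dy, dx), axis=(0, 1)) * pred))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

label = np.zeros((8, 8)); label[2:4, 2:4] = 1.0
pred  = np.zeros((8, 8)); pred[3:5, 3:5] = 1.0   # same mask, shifted by (1, 1)

dy, dx = best_shift(label, pred)
aligned = np.roll(label, (dy, dx), axis=(0, 1))
loss = float(np.mean((aligned - pred) ** 2))     # loss computed after alignment
```

Because the loss is computed against the shifted label, the network is not penalized for a prediction that is correct up to a small misalignment, which is the motivation the abstract gives for the transformation step.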
  • Patent number: 11188746
    Abstract: Disclosed are systems and methods for extracting content based on image analysis. A method may include receiving content including at least an image depicting a coupon; converting the received content into a larger image including the image depicting the coupon; determining, utilizing one or more neural networks, the image depicting the coupon within the larger image, wherein determining the image depicting the coupon comprises: segmenting a foreground bounding box including the image depicting the coupon from background image portions of the image; cropping the larger image based on the bounding box, wherein the cropped image consists of the image depicting the coupon; determining text included in the cropped image; and extracting information included in the coupon based on the determined text.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: November 30, 2021
    Assignee: Verizon Media Inc.
    Inventors: Umang Patel, Sridharan Palaniappan, Rofaida Abdelaal, Chun-Han Yao
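The cropping step in this pipeline amounts to slicing the larger image to the segmented foreground bounding box. The sketch below uses a synthetic grayscale page and an assumed (x, y, width, height) box convention; it stands in for the neural-network segmentation and OCR stages, which are not reproduced here.

```python
import numpy as np

def crop_to_box(image, box):
    # box = (x, y, width, height) of the segmented foreground region.
    x, y, w, h = box
    return image[y:y + h, x:x + w]

page = np.zeros((100, 200), dtype=np.uint8)   # synthetic page image
page[10:40, 20:80] = 255                      # hypothetical coupon region
cropped = crop_to_box(page, (20, 10, 60, 30)) # crop consists of the coupon only
```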
  • Patent number: 11188775
    Abstract: Using sensor hubs for tracking an object. One system includes a first sensor hub and a second sensor hub. The first sensor hub includes a first audio sensor and a first electronic processor. In response to determining that one or more words captured by the first audio sensor are included in a list of trigger words, the first electronic processor generates a first voice signature of a voice of an unidentified person, generates a tracking profile, and transmits the tracking profile to the second sensor hub. The second sensor hub receives the tracking profile and includes a second electronic processor, a second audio sensor, and a camera. In response to determining that a second voice signature matches the first voice signature, the second electronic processor is configured to determine a visual characteristic of the unidentified person based on an image from the camera and update the tracking profile.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: November 30, 2021
    Assignee: MOTOROLA SOLUTIONS, INC.
    Inventors: Shervin Sabripour, Goktug Duman, John B. Preston, Belfug Sener, Bert Van Der Zaag
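The hand-off between the two hubs can be sketched as a signature comparison followed by a profile update. The embedding vectors, cosine-similarity matcher, and threshold below are illustrative assumptions; the patent does not specify how voice signatures are represented or compared.

```python
import numpy as np

MATCH_THRESHOLD = 0.95   # assumed similarity cutoff for a voice match

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# First hub transmits a tracking profile containing the first voice signature.
profile = {"voice_sig": np.array([0.2, 0.7, 0.1]), "visual": None}

def update_profile(profile, local_sig, visual_characteristic):
    # Second hub: on a voice-signature match, add a camera-derived
    # visual characteristic to the tracking profile.
    if cosine(profile["voice_sig"], local_sig) >= MATCH_THRESHOLD:
        profile["visual"] = visual_characteristic
    return profile

p = update_profile(profile, np.array([0.22, 0.68, 0.12]), "red jacket")
```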
  • Patent number: 11182922
    Abstract: Provided is an AI apparatus for determining a location of a user including: a communication unit configured to communicate with at least one external AI apparatus obtaining first image data and first sound data; a memory configured to store location information on the at least one external AI apparatus and the AI apparatus; a camera configured to obtain second image data; a microphone configured to obtain second sound data; and a processor configured to: generate first recognition information on the user based on the second image data; generate second recognition information on the user based on the second sound data; obtain, from the at least one external AI apparatus, third recognition information on the user generated based on the first image data and fourth recognition information on the user generated based on the first sound data; determine the user's location based on the location information, the first recognition information, and the third recognition information; and calibrate the determined user's location based on the second recognition information and the fourth recognition information.
    Type: Grant
    Filed: June 5, 2019
    Date of Patent: November 23, 2021
    Assignee: LG ELECTRONICS INC.
    Inventor: Sihyuk Yi
  • Patent number: 11182433
    Abstract: A question and answer (Q&A) system is enhanced to support natural language queries into any document format regardless of where the underlying documents are stored. The Q&A system may be implemented “as-a-service,” e.g., a network-accessible information retrieval platform. Preferably, the techniques herein enable a user to quickly and reliably locate a document, page, chart, or data point that he or she is looking for across many different datasets. This provides for a unified view of all of the user's (or, more generally, an enterprise's) information assets (such as Adobe® PDFs, Microsoft® Word documents, Microsoft Excel spreadsheets, Microsoft PowerPoint presentations, Google Docs, scanned materials, etc.), and to be able to deeply search all of these sources for the right document, page, sheet, chart, or even answer to a question.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: November 23, 2021
    Assignee: Searchable AI Corp
    Inventors: Aaron Sisto, Nick Martin, Brian Shin, Hung Nguyen
  • Patent number: 11183191
    Abstract: An information processing apparatus includes a processor. The processor is configured to identify, from a character string recognition result for a form, a form feature that indicates at least a field in which the form is used or an attribute of a filling-out person filling out the form, accumulate past correction tendencies for character string recognition results for forms having respective identified form features, and obtain a correction tendency for a form having a form feature that is the same as the identified form feature from among the accumulated correction tendencies, and perform control to display a candidate correct expression for the character string recognition result for the form in accordance with the obtained correction tendency.
    Type: Grant
    Filed: October 30, 2019
    Date of Patent: November 23, 2021
    Assignee: FUJIFILM Business Innovation Corp.
    Inventor: Mami Iwanari
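The accumulate-and-suggest behavior described above can be sketched as a tally of past corrections keyed by form feature. The form feature, OCR strings, and counts below are invented for illustration; the patent does not specify the data structure.

```python
from collections import Counter, defaultdict

# Accumulated past corrections, keyed by (form_feature, recognized_string).
corrections = defaultdict(Counter)
corrections[("tax-form", "1O0")]["100"] += 3   # users corrected "1O0" to "100"
corrections[("tax-form", "1O0")]["IOO"] += 1   # a rarer correction

def candidate(form_feature, recognized):
    # Suggest the most frequent past correction for a form with the same
    # form feature; fall back to the raw recognition result otherwise.
    tallies = corrections.get((form_feature, recognized))
    return tallies.most_common(1)[0][0] if tallies else recognized

suggested = candidate("tax-form", "1O0")       # suggests "100"
unchanged = candidate("tax-form", "abc")       # no history, kept as-is
```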
  • Patent number: 11183188
    Abstract: Various embodiments discussed herein enable applications to seamlessly contribute to executing voice commands of users via voice assistant functionality. In response to receiving a user request to open an application or web page, the application can request and responsively receive a voice assistant runtime component along with the application or web page. The application, using a particular universal application interface component can compile or interpret the voice assistant runtime component from a source code format to an intermediate code format. In response to the application or web page being rendered and the detection of a key word or phrase, the application can activate voice assistant command execution functionality. The user can issue a voice command after which the application along with specific services can help execute the voice command.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: November 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Rene Huangtian Brandel, Jason Eric Voldseth, Biao Kuang
  • Patent number: 11176462
    Abstract: A system and method for computationally tractable prediction of protein-ligand interactions and their bioactivity. According to an embodiment, the system and method comprise two machine learning processing streams and concatenating their outputs. One of the machine learning streams is trained using information about ligands and their bioactivity interactions with proteins. The other machine learning stream is trained using information about proteins and their bioactivity interactions with ligands. After the machine learning algorithms for each stream have been trained, they can be used to predict the bioactivity of a given protein-ligand pair by inputting a specified ligand into the ligand processing stream and a specified protein into the protein processing stream. The machine learning algorithms of each stream predict possible protein-ligand bioactivity interactions based on the training data.
    Type: Grant
    Filed: February 9, 2021
    Date of Patent: November 16, 2021
    Assignee: Ro5 Inc.
    Inventors: Orestis Bastas, Alwin Bucher, Aurimas Pabrinkis, Mikhail Demtchenko, Zeyu Yang, Cooper Stergis Jamieson, Žygimantas Jočys, Roy Tal, Charles Dazler Knuff
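The two-stream-and-concatenate architecture described above can be sketched with two tiny feed-forward stacks. Input dimensions, layer sizes, random weights, and the sigmoid head are all illustrative assumptions; the patent does not disclose the actual network shapes.

```python
import numpy as np

rng = np.random.default_rng(1)

def stream(x, w1, w2):
    # One processing stream: a tiny two-layer MLP producing an embedding.
    return np.tanh(np.tanh(x @ w1) @ w2)

# Hypothetical featurizations: a 16-d ligand fingerprint, a 24-d protein
# descriptor, with random (untrained) weights for each stream.
ligand, protein = rng.normal(size=16), rng.normal(size=24)
wl1, wl2 = rng.normal(size=(16, 8)), rng.normal(size=(8, 4))
wp1, wp2 = rng.normal(size=(24, 8)), rng.normal(size=(8, 4))
w_head = rng.normal(size=8)

# Concatenate the two stream outputs and score bioactivity with a linear
# head squashed to (0, 1).
joint = np.concatenate([stream(ligand, wl1, wl2), stream(protein, wp1, wp2)])
bioactivity = float(1.0 / (1.0 + np.exp(-(joint @ w_head))))
```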
  • Patent number: 11170210
    Abstract: A gesture identification method includes: performing gesture information detection on an image by means of a neural network, to obtain a potential hand region, a potential gesture category and a potential gesture category probability in the image, the potential gesture category including a gesture-free category and at least one gesture category; and if the obtained potential gesture category with the maximum probability is the gesture-free category, not outputting position information of the potential hand region of the image; or otherwise, outputting the position information of the potential hand region of the image and the obtained potential gesture category with the maximum probability.
    Type: Grant
    Filed: September 29, 2019
    Date of Patent: November 9, 2021
    Assignee: Beijing SenseTime Technology Development Co., Ltd.
    Inventors: Quan Wang, Wentao Liu, Chen Qian
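The output rule in this abstract is a simple post-processing step on the network's category probabilities: suppress the hand-region position when the top category is gesture-free, otherwise emit position and label. Category names and probabilities below are hypothetical.

```python
import numpy as np

GESTURE_FREE = 0
CLASSES = ["gesture-free", "fist", "palm", "point"]   # assumed label set

def postprocess(box, probs):
    # Emit (box, label, probability) only when the maximum-probability
    # category is a real gesture; otherwise output nothing.
    k = int(np.argmax(probs))
    if k == GESTURE_FREE:
        return None                       # suppress the position output
    return box, CLASSES[k], float(probs[k])

detected = postprocess((10, 20, 50, 60), np.array([0.1, 0.7, 0.1, 0.1]))
suppressed = postprocess((10, 20, 50, 60), np.array([0.8, 0.1, 0.05, 0.05]))
```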
  • Patent number: 11164045
    Abstract: Disclosed herein are systems, methods, and software for providing a platform for complex image data analysis using artificial intelligence and/or machine learning algorithms. One or more subsystems allow for the capturing of user input such as eye gaze and dictation for automated generation of findings. Additional features include quality-metric tracking and feedback, a worklist management system, and communications queueing.
    Type: Grant
    Filed: May 6, 2021
    Date of Patent: November 2, 2021
    Assignee: SIRONA MEDICAL, INC.
    Inventors: David Seungwon Paik, Vernon Marshall, Mark D. Longo, Cameron Andrews, Kojo Worai Osei, Berk Norman, Ankit Goyal
  • Patent number: 11163986
    Abstract: Some embodiments provide a method for training a machine-trained (MT) network that processes inputs using network parameters. The method propagates a set of input training items through the MT network to generate a set of output values. The set of input training items comprises multiple training items for each of multiple categories. The method identifies multiple training item groupings in the set of input training items. Each grouping includes at least two training items in a first category and at least one training item in a second category. The method calculates a value of a loss function as a summation of individual loss functions for each of the identified training item groupings. The individual loss function for each particular training item grouping is based on the output values for the training items of the grouping. The method trains the network parameters using the calculated loss function value.
    Type: Grant
    Filed: April 17, 2020
    Date of Patent: November 2, 2021
    Assignee: PERCEIVE CORPORATION
    Inventors: Eric A. Sather, Steven L. Teig, Andrew C. Mihal
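The grouping loss described above resembles a triplet-style objective: each grouping pairs two outputs from one category with one from another, and the total loss sums the individual grouping losses. The margin-based form, embeddings, and margin value below are illustrative assumptions rather than the patent's exact loss function.

```python
import numpy as np

def grouping_loss(anchor, positive, negative, margin=0.2):
    # Individual loss for one grouping: the two same-category outputs should
    # be closer together than the anchor is to the other-category output,
    # by at least `margin`.
    d_pos = float(np.sum((anchor - positive) ** 2))
    d_neg = float(np.sum((anchor - negative) ** 2))
    return max(0.0, d_pos - d_neg + margin)

# Hypothetical network outputs: a1, a2 in category A; b1, b2 in category B.
out = {"a1": np.array([0.10, 0.90]), "a2": np.array([0.15, 0.85]),
       "b1": np.array([0.90, 0.10]), "b2": np.array([0.20, 0.80])}

# Loss over all groupings is the sum of the individual grouping losses.
groupings = [("a1", "a2", "b1"), ("a1", "a2", "b2")]
total = sum(grouping_loss(out[a], out[p], out[n]) for a, p, n in groupings)
```

Here the first grouping is already well separated and contributes zero loss, while the second (with b2 close to the A items) contributes a positive term, so training would push b2's output away from category A.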