Patents Examined by Leon Flores
  • Patent number: 11334769
    Abstract: In an approach to augmenting caption datasets, one or more computer processors sample a ratio lambda from a probability distribution based on a pair of datapoints contained in a dataset, wherein each datapoint in the pair of datapoints comprises an image and an associated caption; extend the dataset by generating one or more new datapoints based on the sampled ratio lambda for each pair of datapoints in the dataset, wherein the sampled ratio lambda incorporates an interpolation of features associated with the pair of datapoints into the generated one or more new datapoints; identify one or more objects contained within a subsequent image utilizing an image model trained utilizing the extended dataset; generate a subsequent caption for one or more identified objects contained within the subsequent image utilizing a language generating model trained utilizing the extended dataset.
    Type: Grant
    Filed: July 7, 2020
    Date of Patent: May 17, 2022
    Assignee: International Business Machines Corporation
    Inventors: Shiwan Zhao, Yi Ke Wu, Hao Kai Zhang, Zhong Su
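    Illustrative sketch (not the patented implementation): a minimal mixup-style augmentation, assuming image features and caption embeddings are already numeric vectors and that the ratio lambda is drawn from a Beta distribution; the distribution choice, the consecutive-pairing scheme, and all names below are illustrative assumptions.
```python
import numpy as np

def augment_pairs(dataset, alpha=0.4, rng=np.random.default_rng(0)):
    """Generate new (image_feature, caption_embedding) datapoints by
    interpolating each pair with a ratio lambda sampled per pair.
    `dataset` is a list of (image_vec, caption_vec) numpy arrays."""
    new_points = []
    for (img_a, cap_a), (img_b, cap_b) in zip(dataset, dataset[1:]):
        lam = rng.beta(alpha, alpha)                    # sampled ratio lambda
        new_img = lam * img_a + (1.0 - lam) * img_b     # feature interpolation
        new_cap = lam * cap_a + (1.0 - lam) * cap_b
        new_points.append((new_img, new_cap))
    return dataset + new_points                         # extended dataset

# toy usage: two datapoints with 4-d image features and 3-d caption embeddings
toy = [(np.ones(4), np.zeros(3)), (np.zeros(4), np.ones(3))]
print(len(augment_pairs(toy)))  # 3
```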
  • Patent number: 11334978
    Abstract: A platform that accurately detects a user pose, verifies it against a reference ground truth, and provides feedback using an accuracy score that represents the deviation of the user pose from the reference ground truth, which is typically established by an expert.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: May 17, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Udupi Ramanath Bhat, Yasushi Okumura, Fabio Cappello
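    Illustrative sketch (not Sony's implementation): one plausible way to turn the deviation between a detected pose and an expert reference into an accuracy score, assuming poses are arrays of normalized keypoint coordinates; the linear decay and the `tolerance` parameter are assumptions.
```python
import numpy as np

def pose_accuracy_score(user_pose, reference_pose, tolerance=0.2):
    """Score in [0, 1]: 1.0 means the detected pose matches the expert
    reference exactly; the score decays with mean per-joint deviation.
    Poses are (num_joints, 2) arrays of normalised keypoint coordinates."""
    deviation = np.linalg.norm(user_pose - reference_pose, axis=1).mean()
    return float(np.clip(1.0 - deviation / tolerance, 0.0, 1.0))

reference = np.array([[0.5, 0.1], [0.5, 0.4], [0.4, 0.7], [0.6, 0.7]])
user = reference + 0.03   # slightly off the ground truth
print(round(pose_accuracy_score(user, reference), 2))
```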
  • Patent number: 11334772
    Abstract: A label feature extraction means 71 extracts, from reference information, a label feature that is a vector representing a feature of the reference information. A label feature dimension reduction means 72 performs dimension reduction of the label feature. An image feature extraction means 73 extracts an image feature from a target image that is an image in which an object to be recognized is captured. A feature transformation means 74 performs feature transformation on the image feature in such a manner that comparison with the label feature after the dimension reduction becomes possible. The class recognition means 75 recognizes a class of the object to be recognized by comparing the image feature after the feature transformation with the label feature after the dimension reduction.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: May 17, 2022
    Assignee: NEC CORPORATION
    Inventors: Takahiro Toizumi, Yuzo Senda
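    Illustrative sketch (not the claimed system): a toy version of the pipeline, assuming the label-feature dimension reduction and the image-feature transformation are fixed linear maps and that class recognition is a cosine-similarity nearest match; a real system would learn these maps rather than draw them at random.
```python
import numpy as np

rng = np.random.default_rng(0)

# toy label features (one per class) plus reduction / transformation matrices
label_features = {"cat": rng.normal(size=16), "dog": rng.normal(size=16)}
reduce_mat = rng.normal(size=(16, 4))       # label feature dimension reduction
transform_mat = rng.normal(size=(32, 4))    # maps image features to same space

reduced_labels = {c: v @ reduce_mat for c, v in label_features.items()}

def recognize(image_feature):
    """Transform the image feature and pick the class whose reduced label
    feature is closest (cosine similarity)."""
    q = image_feature @ transform_mat
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return max(reduced_labels, key=lambda c: cos(q, reduced_labels[c]))

print(recognize(rng.normal(size=32)))   # "cat" or "dog"
```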
  • Patent number: 11328179
    Abstract: An information processing apparatus includes a processor to input each sample image into feature extracting components to obtain at least two features of the sample image, and to cause a classifying component to calculate a classification loss of the sample image based on the at least two features; extract, from each pair of features, a plurality of sample pairs for calculating mutual information between each pair of features; input the plurality of sample pairs into a machine learning architecture corresponding to each pair of features, to calculate an information loss between each pair of features.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: May 10, 2022
    Assignee: FUJITSU LIMITED
    Inventors: Wei Shen, Rujie Liu
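    Illustrative sketch (not Fujitsu's implementation): a Donsker-Varadhan-style mutual-information estimate between two feature batches, of the kind that could serve as the information loss alongside the classification loss; the fixed random statistic network and the batch construction are assumptions made only to keep the sketch self-contained.
```python
import numpy as np

rng = np.random.default_rng(0)

def mine_lower_bound(feat_a, feat_b, statistic_net):
    """Donsker-Varadhan estimate of mutual information between two feature
    batches: joint pairs come from the same images, marginal pairs from a
    shuffled batch. `statistic_net` maps a concatenated pair to a scalar."""
    joint = statistic_net(np.concatenate([feat_a, feat_b], axis=1))
    shuffled = feat_b[rng.permutation(len(feat_b))]
    marginal = statistic_net(np.concatenate([feat_a, shuffled], axis=1))
    return joint.mean() - np.log(np.exp(marginal).mean())

# toy statistic network: a fixed random projection (a real one would be trained)
w = rng.normal(size=8)
t_net = lambda pairs: pairs @ w

a = rng.normal(size=(64, 4))
b = a + 0.1 * rng.normal(size=(64, 4))          # correlated second feature
info_loss = mine_lower_bound(a, b, t_net)       # used next to the classification loss
print(round(float(info_loss), 3))
```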
  • Patent number: 11321815
    Abstract: A method for generating a digital image pair for training a neural network to correct noisy image components of noisy images includes determining an extent of object movements within an overlapping region of a stored first digital image and a stored second digital image of an environment of a mobile platform, and determining a respective acquired solid angle of the environment of the mobile platform of the first and second digital images. The method further includes generating the digital image pair from the first digital image and the second digital image, when the respective acquired solid angles of the environment of the first and the second digital image do not differ from one another by more than a defined difference, and the extent of the object movements within the overlapping region of the first and the second digital image is less than a defined value.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: May 3, 2022
    Assignee: Robert Bosch GmbH
    Inventor: Martin Meinke
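    Illustrative sketch (not Bosch's method): the pair-selection conditions expressed as a simple predicate, assuming the solid-angle difference and the object-movement extent have already been measured; the units and threshold values are placeholders, not figures from the patent.
```python
def forms_training_pair(solid_angle_1, solid_angle_2, movement_extent,
                        max_angle_diff=2.0, max_movement=0.05):
    """Accept two captures as a training pair only if their acquired solid
    angles (here in steradians) differ by no more than `max_angle_diff` and
    the measured object movement inside the overlap is below `max_movement`."""
    return (abs(solid_angle_1 - solid_angle_2) <= max_angle_diff
            and movement_extent < max_movement)

print(forms_training_pair(1.20, 1.25, movement_extent=0.01))   # True
print(forms_training_pair(1.20, 4.00, movement_extent=0.01))   # False
```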
  • Patent number: 11308367
    Abstract: A classification learning section executes training of a feature quantity extraction section and training of a classification section resulting from a comparison between an output generated when feature quantity data is inputted to the classification section and training data regarding a plurality of classes associated with a source domain training image. A dividing section divides feature quantity data outputted from the feature quantity extraction section in accordance with input of an image into a plurality of pieces of partial feature quantity data corresponding to the image including a feature map of one or more of the classes. A domain identification learning section executes training of the feature quantity extraction section resulting from a comparison between an output generated when partial feature quantity data corresponding to the image is inputted to a domain identification section and data indicating whether the image belongs to a source domain or to the target domain.
    Type: Grant
    Filed: January 26, 2018
    Date of Patent: April 19, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Daichi Ono
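    Illustrative sketch (not the claimed learning procedure): the dividing step as a channel-wise split of a feature map into per-class partial features, each scored by a toy domain identifier; the split rule, the linear identifier, and the adversarial training of the feature extractor that would normally follow are all assumptions.
```python
import numpy as np

def split_per_class(feature_map, num_classes):
    """Divide a (channels, H, W) feature map into `num_classes` partial
    feature maps along the channel axis, one chunk per class."""
    return np.split(feature_map, num_classes, axis=0)

rng = np.random.default_rng(0)
fmap = rng.normal(size=(12, 8, 8))            # output of the feature extractor
parts = split_per_class(fmap, num_classes=4)  # four (3, 8, 8) partial features

def domain_logit(partial, w):
    """Toy domain identifier: a linear score, > 0 read as 'source domain'.
    A trained extractor would be updated adversarially against this output."""
    return float(partial.mean(axis=(1, 2)) @ w)

w = rng.normal(size=3)
print([round(domain_logit(p, w), 2) for p in parts])
```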
  • Patent number: 11308604
    Abstract: An embodiment provides a method of operating wire harness manufacturing equipment, including: adding, using the manufacturing equipment, an element to a wire to form a combination of the element and the wire; capturing, using an imaging device, an upper image and a lower image of the combination; analyzing, using one or more processors operatively coupled to the imaging device, the upper image and the lower image to detect a defect; and thereafter indicating that the defect has been detected. Other embodiments are described and claimed.
    Type: Grant
    Filed: August 17, 2020
    Date of Patent: April 19, 2022
    Assignee: CRIMPING & STAMPING TECHNOLOGIES INC
    Inventor: Edgar Iván Sanchez Ortega
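    Illustrative sketch (not the claimed equipment or its image analysis): a minimal flow showing both the upper and the lower image of the element/wire combination being analyzed before a defect is indicated; the `analyze` routine here is a caller-supplied stand-in, not the patented analysis.
```python
def inspect_combination(upper_image, lower_image, analyze):
    """Run the supplied defect analysis on both views of the element/wire
    combination and indicate a defect if either view produces a finding."""
    findings = analyze(upper_image) + analyze(lower_image)
    if findings:
        print("Defect detected:", findings)   # the 'indicating' step
    return bool(findings)

# toy analysis: flag any view that contains an out-of-range bright spot
toy_analyze = lambda img: ["bright spot"] if max(max(r) for r in img) > 0.9 else []
upper = [[0.1, 0.2], [0.3, 0.95]]
lower = [[0.1, 0.1], [0.2, 0.2]]
print(inspect_combination(upper, lower, toy_analyze))   # True
```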
  • Patent number: 11308775
    Abstract: Methods and systems monitor activity in a retail environment, such as activity in a display area (e.g., aisle) of the retail environment. A convolutional neural network is used to detect objects (e.g., inventory items) or events (e.g., instances of people picking up inventory items from shelves). Various algorithms may be used to determine whether suspicious activity occurs, and/or to determine whether to trigger alerts. Monitored/detected activity may be stored in a database to facilitate a deeper understanding of operations within the retail environment.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: April 19, 2022
    Assignee: SAI GROUP LIMITED
    Inventors: Somnath Sinha, Abhijit Sanyal
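    Illustrative sketch (not SAI Group's system): a toy rule layered on top of detector output to decide whether pick-up events look suspicious; the event fields, the unreturned-item rule, and the threshold are assumptions standing in for whatever algorithms the system actually applies to its CNN detections.
```python
from dataclasses import dataclass
from typing import List

@dataclass
class PickupEvent:
    person_id: int
    item: str
    returned: bool   # whether the item was later put back or scanned

def flag_suspicious(events: List[PickupEvent], max_unreturned: int = 3) -> List[int]:
    """Flag any person with more than `max_unreturned` pickups that were
    neither returned nor scanned; a real system would feed CNN detections
    from aisle cameras into rules like this and store them in a database."""
    counts = {}
    for e in events:
        if not e.returned:
            counts[e.person_id] = counts.get(e.person_id, 0) + 1
    return [pid for pid, n in counts.items() if n > max_unreturned]

log = [PickupEvent(7, "item", False) for _ in range(4)] + [PickupEvent(2, "item", True)]
print(flag_suspicious(log))   # [7] -> trigger an alert
```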
  • Patent number: 11301976
    Abstract: An inspection support system comprising: determination devices that determine pass or fail based on a result of non-destructive inspection of an inspection object; and a learning device that learns a determination algorithm used to determine pass or fail based on information collected from the determination devices. Each determination device transmits an ultimate determination result, yielded by an inspection person who has checked a determination result, to the learning device along with the corresponding result of non-destructive inspection of the object. The learning device includes: a determination result reception unit that receives the ultimate determination result and the result of non-destructive inspection of the inspection object; a learning unit that learns the determination algorithm based on the received information; and a provision unit that provides the learned determination algorithm to the determination devices.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: April 12, 2022
    Assignee: CHIYODA CORPORATION
    Inventors: Kazuya Furuichi, Akihito Ikarashi, Shizuka Ikawa, Kenichi Mimura
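    Illustrative sketch (not Chiyoda's system): the collect-and-relearn loop with the determination algorithm reduced to a single threshold on a defect score, learned from inspectors' ultimate pass/fail decisions; the threshold rule is an assumption chosen only to make the loop concrete.
```python
class LearningDevice:
    """Collects non-destructive inspection results together with the
    inspector's ultimate pass/fail decision and re-fits a simple
    determination rule (here: a single threshold on a defect score)."""
    def __init__(self):
        self.records = []          # (defect_score, passed) pairs from devices

    def receive(self, defect_score, passed):
        self.records.append((defect_score, passed))

    def learn_threshold(self):
        fails = [s for s, ok in self.records if not ok]
        passes = [s for s, ok in self.records if ok]
        if not fails or not passes:
            return None
        return (min(fails) + max(passes)) / 2.0   # provided back to the devices

ld = LearningDevice()
for score, ok in [(0.1, True), (0.3, True), (0.7, False), (0.9, False)]:
    ld.receive(score, ok)                          # device -> learning device
print(ld.learn_threshold())                        # 0.5
```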
  • Patent number: 11301721
    Abstract: Various embodiments of the teachings herein include a method for training and updating a backend-side classifier comprising: receiving, in a backend-device, from at least one vehicle, classification data along with a respective classification result generated by a vehicle-side classifier; and training the backend-side classifier using the classification data and, if available, a corrected respective classification result as annotation.
    Type: Grant
    Filed: December 15, 2017
    Date of Patent: April 12, 2022
    Assignee: CONTINENTAL AUTOMOTIVE GMBH
    Inventors: Henning Hamer, Robert Thiel
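    Illustrative sketch (not Continental's implementation): assembling a backend-side training batch from vehicle uploads, preferring a corrected label as the annotation when one is available; the upload tuple layout is an assumption.
```python
def build_training_batch(uploads):
    """Each upload carries the raw classification data, the vehicle-side
    classifier's result, and an optional correction. The corrected label,
    when present, is preferred as the training annotation."""
    batch = []
    for data, vehicle_label, corrected_label in uploads:
        label = corrected_label if corrected_label is not None else vehicle_label
        batch.append((data, label))
    return batch   # fed to backend-side classifier training

uploads = [([0.2, 0.9], "pedestrian", None),
           ([0.8, 0.1], "pedestrian", "cyclist")]   # second result was corrected
print(build_training_batch(uploads))
```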
  • Patent number: 11295153
    Abstract: The present disclosure relates to systems and methods for positioning a subject. The method may include generating a first image of the subject disposed on a scanning board of an imaging device. The first image may include position information of the subject. The method may further include generating a second image of the subject which includes information associated with one or more organs of the subject. Additionally, the method may include determining the position of a region of interest (ROI) based on the first image and the second image. The method may further include operating the imaging device to scan a target portion of the subject.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: April 5, 2022
    Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
    Inventors: Jianhui Cao, Lei Zhang, Yaguang Fu, Suming Wang, Longzi Yang
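    Illustrative sketch (not the claimed method): combining a subject offset taken from the first (positioning) image with an organ bounding box taken from the second image to place the ROI in scanning-board coordinates; the millimetre coordinate convention is an assumption.
```python
def roi_on_board(subject_offset_mm, organ_bbox_in_subject_mm):
    """Combine the subject's position from the first (positioning) image with
    an organ bounding box from the second (anatomical) image to get the region
    of interest in scanning-board coordinates, e.g. to set the scan range."""
    x0, y0 = subject_offset_mm
    ox, oy, w, h = organ_bbox_in_subject_mm
    return (x0 + ox, y0 + oy, w, h)

print(roi_on_board((120.0, 40.0), (35.0, 210.0, 90.0, 120.0)))
# -> (155.0, 250.0, 90.0, 120.0): target portion to scan
```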
  • Patent number: 11288805
    Abstract: A computer-implemented method and a data processing apparatus provide and apply a trained probabilistic graphical model for verifying and/or improving the consistency of labels within the scope of medical image processing. Also provided are a computer-implemented method for verifying and/or improving the consistency of labels within the scope of medical imaging processing, a data processing apparatus embodied to verify and/or improve the consistency of labels within the scope of medical image processing, and a corresponding computer program product and a computer-readable medium.
    Type: Grant
    Filed: April 1, 2020
    Date of Patent: March 29, 2022
    Assignee: Siemens Healthcare Diagnostics Inc.
    Inventors: Markus Michael Geipel, Florian Büttner, Gaby Marquardt, Daniela Seidel, Christoph Tietz
  • Patent number: 11288542
    Abstract: An image is received for classification. Thereafter, features are extracted from the image which are used by a machine learning model to classify the image. Thereafter, data is provided that characterizes the classification. The machine learning model can be trained using a training data set labeled, in part, using a generative model conditioned on label attribute information in combination with a directed relation graph having a plurality of nodes, in which each node without images at training time is given a predefined probability distribution. Related apparatus, systems, techniques and articles are also described.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: March 29, 2022
    Assignee: SAP SE
    Inventors: Colin Samplawski, Jannik Wolff, Tassilo Klein, Moin Nabi
  • Patent number: 11282179
    Abstract: A system and method for assessing video quality of a video-based application inserts frame identifiers (IDs) into video content from the video-based application and recognizes the frame IDs from the video content using a text recognition neural network. Based on recognized frame IDs, a frame per second (FPS) metric of the video content is calculated. Based on the FPS metric of the video content, objective video quality of the video-based application is assessed.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: March 22, 2022
    Assignee: VMWARE, INC.
    Inventors: Lan Vu, Hari Sivaraman, Uday Pundalik Kurkure, Xuwen Yu
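    Illustrative sketch (not VMware's implementation): the FPS metric computed from frame IDs that were stamped into the video and read back by text recognition on the receiving side; counting distinct IDs per second of capture is one straightforward reading of the abstract, not the claimed formula.
```python
def fps_from_frame_ids(recognized_ids, capture_seconds):
    """Frame IDs are inserted into the video by the application and read back
    by text recognition. The delivered frame rate is the number of distinct
    IDs observed over the capture window; dropped or repeated frames
    therefore lower the metric."""
    return len(set(recognized_ids)) / capture_seconds

# 10-second capture in which only every other stamped frame arrived
ids = [i for i in range(0, 600, 2)]
print(fps_from_frame_ids(ids, capture_seconds=10.0))   # 30.0
```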
  • Patent number: 11275932
    Abstract: This application discloses a human attribute recognition method performed at a computing device. The method includes: determining a human body region image in a surveillance image; inputting the human body region image into a multi-attribute convolutional neural network model, to obtain, for each of a plurality of human attributes in the human body region image, a probability that the human attribute corresponds to a respective predefined attribute value, the multi-attribute convolutional neural network model being obtained by performing multi-attribute recognition and training on a set of pre-obtained training images by using a multi-attribute convolutional neural network; determining, for each of the plurality of human attributes in the human body region image, the attribute value of the human attribute based on the corresponding probability; and displaying the attribute values of the plurality of human attributes next to the human body region image.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: March 15, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Siqian Yang, Jilin Li, Yongjian Wu, Yichao Yan, Keke He, Yanhao Ge, Feiyue Huang, Chengjie Wang
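    Illustrative sketch (not Tencent's model): the per-attribute decision step, assuming the multi-attribute network has already produced a probability for each predefined value of each attribute; the attribute names and values are made up for the example.
```python
def decide_attributes(attribute_probs):
    """`attribute_probs` maps each human attribute to a dict of
    {predefined value: probability} produced for one detected human body
    region; the most probable value is kept for each attribute."""
    return {attr: max(values, key=values.get) for attr, values in attribute_probs.items()}

probs = {
    "gender":    {"male": 0.31, "female": 0.69},
    "top_color": {"red": 0.10, "blue": 0.72, "white": 0.18},
    "backpack":  {"yes": 0.81, "no": 0.19},
}
print(decide_attributes(probs))   # values shown next to the body region image
```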
  • Patent number: 11276162
    Abstract: Embodiments include a computer implemented method of processing images of material surfaces to identify defects on the imaged material surface, the method including training a neural network to generate reduced-defect versions of training images of material surfaces; acquiring an image of a subject material surface; inputting the acquired image to the neural network to generate a reduced-defect version of the acquired image; and comparing the reduced-defect version of the acquired image with the acquired image to identify differences. Defects on the subject material surface at locations of the identified differences are identifiable.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: March 15, 2022
    Assignee: FUJITSU LIMITED
    Inventor: Thomas Chaton
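    Illustrative sketch (not Fujitsu's network): the compare step, with the trained reduced-defect generator replaced by a trivial smoothing function so the example runs on its own; the residual threshold is an assumption.
```python
import numpy as np

def defect_mask(image, reduce_defects, threshold=0.3):
    """Compare the acquired image with its reduced-defect version and mark
    pixels whose absolute difference exceeds `threshold` as defect candidates.
    `reduce_defects` stands in for the trained network from the abstract."""
    residual = np.abs(image - reduce_defects(image))
    return residual > threshold

# stand-in 'network': replaces every pixel with the image median,
# which removes small bright anomalies
smooth = lambda img: np.full_like(img, np.median(img))

surface = np.zeros((8, 8)); surface[3, 5] = 1.0    # one scratch-like defect
print(np.argwhere(defect_mask(surface, smooth)))   # [[3 5]]
```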
  • Patent number: 11275931
    Abstract: A human pose prediction method is provided for an electronic device. The method includes using a basic neural network based on image-feature-based prediction to perform prediction on an inputted target image, to obtain an initial prediction map of a human key-point; inputting the initial prediction map of the human key-point and a human structure diagram into a pose graph neural network based on spatial information mining, each node in the human structure diagram corresponding to a human joint respectively, and each edge connecting adjacent human joints; using the pose graph neural network to initialize the human structure diagram by using the initial prediction map of the human key-point, to obtain an initialized human structure diagram; and using the pose graph neural network to perform iterative prediction on the initialized human structure diagram, to obtain a final prediction map, the final prediction map indicating a predicted human pose.
    Type: Grant
    Filed: June 17, 2020
    Date of Patent: March 15, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Hong Zhang, Xiaoyong Shen, Jiaya Jia
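    Illustrative sketch (not Tencent's pose graph network): iterative refinement over a toy human structure graph, pulling each joint toward the mean of its neighbours as a crude stand-in for the learned iterative prediction over the initialised structure diagram; the skeleton, mixing weight, and step count are assumptions.
```python
import numpy as np

# toy skeleton: joints 0-3 in a chain (e.g. shoulder-elbow-wrist-hand)
edges = [(0, 1), (1, 2), (2, 3)]
initial = np.array([[0.50, 0.10],     # initial key-point estimates from the
                    [0.52, 0.35],     # image-feature-based network
                    [0.70, 0.60],
                    [0.10, 0.80]])    # outlier joint

def refine(positions, edges, steps=3, mix=0.5):
    """Iteratively pull each joint toward the mean of its neighbours on the
    structure graph, a crude stand-in for the iterative graph prediction."""
    pos = positions.copy()
    for _ in range(steps):
        nxt = pos.copy()
        for j in range(len(pos)):
            nbrs = [b if a == j else a for a, b in edges if j in (a, b)]
            nxt[j] = (1 - mix) * pos[j] + mix * pos[nbrs].mean(axis=0)
        pos = nxt
    return pos

print(refine(initial, edges).round(2))
```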
  • Patent number: 11270121
    Abstract: The technology described herein is directed to a media indexer framework including a character recognition engine that automatically detects and groups instances (or occurrences) of characters in a multi-frame animated media file. More specifically, the character recognition engine automatically detects and groups the instances (or occurrences) of the characters in the multi-frame animated media file such that each group contains images associated with a single character. The character groups are then labeled and used to train an image classification model. Once trained, the image classification model can be applied to subsequent multi-frame animated media files to automatically classify the animated characters included therein.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: March 8, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Oron Nir, Maria Zontak, Tucker Cunningham Burns, Apar Singhal, Lei Zhang, Irit Ofer, Avner Levi, Haim Sabo, Ika Bar-Menachem, Eylon Ami, Ella Ben Tov
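    Illustrative sketch (not Microsoft's media indexer): greedy grouping of per-frame character detections by embedding distance so that each group holds a single character, after which the groups would be labeled and used to train a classifier; the distance threshold and centroid update are assumptions.
```python
import numpy as np

def group_detections(embeddings, distance_threshold=0.5):
    """Greedy grouping: a detection joins an existing group if it is close to
    that group's mean embedding, otherwise it starts a new group."""
    groups = []                       # each group: list of embedding indices
    centroids = []
    for i, e in enumerate(embeddings):
        dists = [np.linalg.norm(e - c) for c in centroids]
        if dists and min(dists) < distance_threshold:
            g = int(np.argmin(dists))
            groups[g].append(i)
            centroids[g] = np.mean([embeddings[j] for j in groups[g]], axis=0)
        else:
            groups.append([i]); centroids.append(e)
    return groups

rng = np.random.default_rng(1)
char_a = rng.normal(0.0, 0.05, size=(5, 8))   # detections of one character
char_b = rng.normal(3.0, 0.05, size=(4, 8))   # detections of another character
print([len(g) for g in group_detections(np.vstack([char_a, char_b]))])  # [5, 4]
```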
  • Patent number: 11263495
    Abstract: A device and computer implemented method for digital image content recognition. The method includes determining, depending on a digital image, a first candidate class for the content of the digital image by a baseline model neural network comprising a first feature extractor and a first classifier for classifying digital images; determining a second candidate class for the content of the digital image by a prototypical neural network comprising a second feature extractor and a second classifier for classifying digital images; and classifying the content of the digital image into either the first candidate class or the second candidate class depending on the result of a comparison of a first confidence score for the first candidate class to a threshold, or of a comparison of the first confidence score to a second confidence score for the second candidate class.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: March 1, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Nicole Ying Finnie, Benedikt Sebastian Staffler
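    Illustrative sketch (not Bosch's device): the class-selection rule from the abstract as a small function over (class, confidence) pairs from the baseline and prototypical models; the threshold value and example labels are assumptions.
```python
def pick_class(baseline_out, proto_out, threshold=0.8):
    """Each argument is (candidate_class, confidence). The baseline model's
    class is kept when its confidence clears the threshold or beats the
    prototypical model's confidence; otherwise the prototypical network's
    class wins."""
    (c1, s1), (c2, s2) = baseline_out, proto_out
    return c1 if (s1 >= threshold or s1 >= s2) else c2

print(pick_class(("stop_sign", 0.95), ("yield_sign", 0.60)))   # stop_sign
print(pick_class(("stop_sign", 0.40), ("road_works", 0.75)))   # road_works
```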
  • Patent number: 11256922
    Abstract: A semantic representation method and system based on an aerial surveillance video, and an electronic device are provided. The semantic representation method includes: taking a pedestrian and a vehicle in the aerial surveillance video as a target for tracking; inputting a coordinate track of the target into a first semantic classifier to output a first semantic result of the target; performing semantic merging on the first semantic result, and inputting an obtained semantic merging result into a second semantic classifier to output a second semantic result of the target; and performing cluster analysis on the first semantic result to obtain a target group of the target, and according to the target group of the target, the obtained scene analysis result and the second semantic result, determining semantics of the aerial surveillance video.
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: February 22, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Xiaoming Tao, Yiping Duan, Ziqi Zhao, Danlan Huang, Ning Ge
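    Illustrative sketch (not Tsinghua's system): a toy two-stage semantic pipeline over one target's coordinate track, with first-level semantics derived from per-step displacement, a merging step over runs of identical labels, and a trivial second-level classifier; the label vocabulary and rules are assumptions, and the clustering/scene-analysis stages are omitted.
```python
from itertools import groupby

def first_semantics(track, still_threshold=1.0):
    """Per-step semantics from a target's coordinate track: 'still' when the
    displacement between consecutive points is small, otherwise 'moving'."""
    labels = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        labels.append("still" if dist < still_threshold else "moving")
    return labels

def merge_semantics(labels):
    """Collapse runs of identical first-level semantics before the second
    classifier, e.g. ['moving', 'moving', 'still'] -> ['moving', 'still']."""
    return [k for k, _ in groupby(labels)]

def second_semantics(merged):
    """Toy second-level classifier over the merged sequence."""
    return "stop-and-go" if merged.count("still") >= 2 else "through-traffic"

track = [(0, 0), (3, 0), (6, 0), (6.2, 0), (6.3, 0), (9, 0), (9.1, 0)]
merged = merge_semantics(first_semantics(track))
print(merged, "->", second_semantics(merged))
```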