Patents Examined by Leon Flores
  • Patent number: 11410348
    Abstract: An imaging method. The method comprises the following steps: determining a target by identifying target-related position information or characteristic information (S101); implementing a two-dimensional scan of the target to collect image data of the target in a three-dimensional space (S102); processing, during the scanning, and on a real-time basis, the image data and relevant spatial information to obtain a plurality of image contents of the target, and displaying the image content on a real-time basis (S103); and arranging the plurality of image contents in an incremental sequence to form an image of the target (S104). The imaging method prevents collection of unusable image information, shortens image data collection time, and increases the speed of an imaging process. The application further provides an imaging device.
    Type: Grant
    Filed: April 26, 2016
    Date of Patent: August 9, 2022
    Assignee: Telefield Medical Imaging Limited
    Inventor: Yongping Zheng
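The steps S102–S104 above (scan, process each slice in real time, arrange the results incrementally) can be sketched as a slice-assembly loop. This is a minimal illustrative NumPy sketch, not the patented implementation; the per-slice normalization merely stands in for the real-time processing and display step.

```python
import numpy as np

def assemble_volume(slices):
    """Stack 2-D scan slices, in acquisition order, into a 3-D volume.

    Each slice is processed as it arrives (here: simple normalization,
    standing in for the real-time processing step), then appended
    incrementally so the volume grows as the scan proceeds.
    """
    processed = []
    for s in slices:
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        if rng > 0:                      # normalize to [0, 1] for display
            s = (s - s.min()) / rng
        processed.append(s)              # incremental arrangement (S104)
    return np.stack(processed, axis=0)   # depth x height x width volume
```

Because each slice is handled as it is acquired, unusable frames could be rejected inside the loop before they ever enter the volume, which is the time saving the abstract points to.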
  • Patent number: 11410035
    Abstract: Disclosed is a real-time object detection method deployable on platforms with limited computing resources, which belongs to the field of deep learning and image processing. In the present invention, the YOLO-v3-tiny neural network is improved: Tinier-YOLO retains the front five convolutional layers and pooling layers of YOLO-v3-tiny and makes predictions at two different scales. Fire modules from SqueezeNet, 1×1 bottleneck layers, and dense connections are introduced, so that the resulting network is smaller, faster, and more lightweight and can run in real time on an embedded AI platform. The model size of Tinier-YOLO in the present invention is only 7.9 MB, about ¼ of the 34.9 MB of YOLO-v3-tiny and a still smaller fraction of the size of YOLO-v2-tiny. The reduction in model size does not affect the real-time performance or accuracy of Tinier-YOLO. Real-time performance of Tinier-YOLO in the present invention is 21.8% higher than that of YOLO-v3-tiny and 70.8% higher than that of YOLO-v2-tiny.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: August 9, 2022
    Assignee: Jiangnan University
    Inventors: Wei Fang, Peiming Ren, Lin Wang, Jun Sun, Xiaojun Wu
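The size reduction attributed to fire modules and 1×1 bottleneck layers comes from replacing full 3×3 convolutions with a squeeze-then-expand pattern. The channel sizes below are illustrative assumptions, not figures from the patent; the sketch only shows why the pattern cuts the weight count.

```python
def conv_params(in_ch, out_ch, k):
    """Weight count of a k x k convolution (biases ignored)."""
    return in_ch * out_ch * k * k

# A plain 3x3 convolution versus a fire-module-style design:
# squeeze with a 1x1 bottleneck to fewer channels, then expand
# with a mix of 1x1 and 3x3 convolutions (channel sizes assumed).
plain = conv_params(256, 256, 3)
squeeze = conv_params(256, 32, 1)                    # 1x1 bottleneck
expand = conv_params(32, 128, 1) + conv_params(32, 128, 3)
fire = squeeze + expand
ratio = fire / plain                                 # fraction of the plain cost
```

With these assumed channel counts the fire-style block needs one twelfth of the weights of the plain 3×3 layer, which is the mechanism behind shrinking the model to a few megabytes.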
  • Patent number: 11409989
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for model co-occurrence object detection. One of the methods includes accessing, for a training image, first data that indicates a detected bounding box for a first object depicted in the training image and a predicted type label, accessing, for the training image, ground truth data for one or more ground truth objects, determining, using the first data and the ground truth data, that i) the detected bounding box represents an object that is not a ground truth object represented by the ground truth data or ii) the predicted type label for the first object does not match a ground truth label for the first object identified by the ground truth data, determining a penalty to adjust the model using a distance between the detected bounding box and the labeled bounding box, and training the model using the penalty.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: August 9, 2022
    Assignee: ObjectVideo Labs, LLC
    Inventors: Sima Taheri, Gang Qian, Sung Chun Lee, Sravanthi Bondugula, Allison Beach
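The distance-based penalty described above can be sketched as follows. The box format, labels, and the exact penalty formula are assumptions for illustration; the patent abstract only describes the general idea of penalizing a wrong prediction using the distance between the detected box and the labeled box.

```python
def center_distance(box_a, box_b):
    """Euclidean distance between box centers; boxes are (x1, y1, x2, y2)."""
    ax = (box_a[0] + box_a[2]) / 2
    ay = (box_a[1] + box_a[3]) / 2
    bx = (box_b[0] + box_b[2]) / 2
    by = (box_b[1] + box_b[3]) / 2
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def mislabel_penalty(detected_box, truth_box, predicted_label, truth_label):
    """Zero penalty for a correct prediction; otherwise a penalty that
    grows with how far the detection sits from the labeled box
    (illustrative formula, not the patent's)."""
    if predicted_label == truth_label:
        return 0.0
    return 1.0 + center_distance(detected_box, truth_box)
```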
  • Patent number: 11393124
    Abstract: There is provided an information processing apparatus, an information processing method, and a program that are capable of easily predicting the posture of an object. An information processing apparatus according to an aspect of the present technology specifies, on the basis of learned data used in specifying corresponding points, obtained by performing learning using data of a predetermined portion that has symmetry with respect to other portions of an entire model that represents an object as a recognition target, second points on the model included in an input scene that correspond to first points on the model, as the corresponding points, and predicts the posture of the model included in the scene on the basis of the corresponding points. The present technology is applicable to an apparatus for controlling a projection system to project images according to projection mapping.
    Type: Grant
    Filed: February 20, 2019
    Date of Patent: July 19, 2022
    Assignee: SONY CORPORATION
    Inventor: Gaku Narita
  • Patent number: 11386662
    Abstract: The disclosure herein enables tracking of multiple objects in a real-time video stream. For each individual frame received from the video stream, a frame type of the frame is determined. Based on the individual frame being an object detection frame type, a set of object proposals is detected in the individual frame, associations between the set of object proposals and a set of object tracks are assigned, and statuses of the set of object tracks are updated based on the assigned associations. Based on the individual frame being an object tracking frame type, single-object tracking is performed on the frame based on each object track of the set of object tracks and the set of object tracks is updated based on the performed single-object tracking. For each frame received, a real-time object location data stream is provided based on the set of object tracks.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: July 12, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ishani Chakraborty, Yi-Ling Chen, Lu Yuan
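The per-frame dispatch between detection frames and tracking frames can be sketched as a simple loop; the `detect_every` schedule and the string frame types are illustrative assumptions, not the patented scheme for deciding frame types.

```python
def frame_type(index, detect_every=10):
    """Alternate frame types: run the detector periodically, track otherwise."""
    return "detect" if index % detect_every == 0 else "track"

def process_stream(frames, detect_every=10):
    """Route each frame to detection or single-object tracking and emit
    a per-frame record for the real-time location data stream
    (here just the frame index and the chosen path)."""
    stream = []
    for i, _frame in enumerate(frames):
        kind = frame_type(i, detect_every)
        # "detect": propose objects, associate them with tracks,
        #           and update track statuses from the associations.
        # "track":  advance each existing track with a single-object tracker.
        stream.append((i, kind))
    return stream
```

The split keeps the expensive detector off most frames while the cheaper trackers maintain continuity, which is what makes the multi-object stream real-time.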
  • Patent number: 11386288
    Abstract: A movement state recognition multitask DNN model training section 46 trains the parameters of a DNN model based on an image data time series and a sensor data time series, and based on first annotation data, second annotation data, and third annotation data generated for those time series. Training is performed such that the movement state recognized by the DNN model when input with the image data time series and the sensor data time series matches the movement states indicated by the first, second, and third annotation data. This enables information to be efficiently extracted and combined from both video data and sensor data, and enables movement state recognition to be implemented with high precision even for a data set including data that does not fall into any movement state class.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: July 12, 2022
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shuhei Yamamoto, Hiroyuki Toda
  • Patent number: 11379970
    Abstract: A method for training a deep learning model of a patterning process. The method includes obtaining (i) training data including an input image of at least a part of a substrate having a plurality of features and including a truth image, (ii) a set of classes, each class corresponding to a feature of the plurality of features of the substrate within the input image, and (iii) a deep learning model configured to receive the training data and the set of classes, generating a predicted image, by modeling and/or simulation with the deep learning model using the input image, assigning a class of the set of classes to a feature within the predicted image based on matching of the feature with a corresponding feature within the truth image, and generating, by modeling and/or simulation, a trained deep learning model by iteratively assigning weights using a loss function.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: July 5, 2022
    Assignee: ASML Netherlands B.V.
    Inventors: Adrianus Cornelis Matheus Koopman, Scott Anderson Middlebrooks, Antoine Gaston Marie Kiers, Mark John Maslow
  • Patent number: 11379697
    Abstract: An FPGA device receives an input matrix. A first convolutional kernel is determined by performing exclusive-NOR (XNOR) operations between the input matrix and a first weight vector. A first binary kernel is determined based on the first convolutional kernel. A first layer feature map is determined by convoluting the input matrix using the first binary kernel. A second convolutional kernel is determined by performing XNOR operations between the first layer feature map and a second weight vector. A pooled kernel is determined based on the second convolutional kernel. A second binary kernel is determined based on the pooled kernel. A second layer feature map is determined by convoluting the first layer feature map using the second binary kernel. A probability is determined that the input matrix is associated with a predetermined class of images. If the probability is greater than a threshold, classification results are provided.
    Type: Grant
    Filed: May 20, 2020
    Date of Patent: July 5, 2022
    Assignee: Bank of America Corporation
    Inventor: Madhusudhanan Krishnamoorthy
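The XNOR arithmetic underlying binary kernels can be sketched in NumPy. Over values in {-1, +1}, XNOR of the sign bits equals elementwise multiplication, so a binary convolution reduces to popcount-style sums of products; this sketch shows a single "valid" 2-D convolution, not the patent's full FPGA pipeline.

```python
import numpy as np

def binarize(x):
    """Map values to {-1, +1} by sign (zero treated as +1)."""
    return np.where(np.asarray(x) >= 0, 1, -1)

def xnor_conv2d(image, kernel):
    """'Valid' 2-D convolution over {-1, +1} inputs. XNOR of two sign
    bits equals their product, so each window reduces to a sum of
    elementwise products, which hardware can compute with XNOR+popcount."""
    image, kernel = binarize(image), binarize(kernel)
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow), dtype=int)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = int(np.sum(image[i:i + kh, j:j + kw] * kernel))
    return out
```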
  • Patent number: 11380090
    Abstract: A method for object recognition at an interactive information system (IIS) includes capturing, using an imaging device of the IIS, a first image of a first representative object which represents a first one or more object disposed about the IIS; analyzing, by a computer processor of the IIS and based on a category model, the first image to determine a first representative category of the first one or more object; retrieving, by the computer processor and based on the first representative category, a first representative object model of a plurality of object models that are stored on a remote server; and analyzing, by the computer processor and based on the first representative object model, the first image to determine a first representative inventory identifier of the first representative object, which represents a first one or more inventory identifier corresponding to the first one or more object respectively.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: July 5, 2022
    Assignee: Flytech Technology Co., Ltd.
    Inventors: Tung-Ying Lee, Yi-Heng Tseng, Tzu-Wei Huang, Che-Wei Lin
  • Patent number: 11373426
    Abstract: A method for detecting key points of a skeleton, an apparatus, an electronic device, and a storage medium are provided. The method is implemented as follows. An original image is acquired. The original image includes a plurality of skeleton key points. Based on a pre-trained stacked hourglass network structure, skeleton key point identification is performed on the original image to obtain heat maps of the plurality of key points. The stacked hourglass network structure includes at least one hourglass network. The at least one hourglass network is configured to perform deep-layer feature learning on feature maps of the plurality of key points based on weight values corresponding to the feature maps.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: June 28, 2022
    Assignee: Beijing Dajia Internet Information Technology Co., Ltd.
    Inventors: Jili Gu, Lei Zhang, Wen Zheng
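Hourglass-style keypoint networks are conventionally trained against Gaussian heat maps centered on each key point, one map per point, and the network's output heat maps are read the same way. A minimal sketch of such a target (a standard construction, not something specific to this patent):

```python
import numpy as np

def keypoint_heatmap(height, width, cx, cy, sigma=1.5):
    """Gaussian heat map peaked at the keypoint location (cx, cy).

    The peak equals 1.0 at the key point and falls off with distance,
    so the argmax of a predicted map recovers the keypoint position.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
```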
  • Patent number: 11373309
    Abstract: A method of facilitating image analysis in pathology involves receiving a sample image representing a sample for analysis, the sample image including sample image elements, causing one or more functions to be applied to the sample image to determine a plurality of property specific confidence related scores, each associated with a sample image element and a respective sample property and representing a level of confidence that the associated element represents the associated sample property, sorting a set of elements based at least in part on the confidence related scores, producing signals for causing one or more of the set of elements to be displayed to a user in an order based on the sorting, for each of the one or more elements displayed, receiving user input, and causing the user input to be used to update the one or more functions. Other methods, systems, and computer-readable media are disclosed.
    Type: Grant
    Filed: October 8, 2020
    Date of Patent: June 28, 2022
    Assignee: Aiforia Technologies Oyj
    Inventors: Juha Reunanen, Liisa-Maija Keinänen, Tuomas Ropponen
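The sort-then-display loop above can be sketched as a review queue ordered by confidence-related score. The ascending default (least-confident elements shown first) is an assumption about how a pathologist would typically want the queue ordered:

```python
def review_queue(elements, scores, ascending=True):
    """Order image elements for expert review by confidence-related score.

    Showing the least-confident elements first concentrates expert effort
    where the model is most uncertain; the resulting user input then feeds
    back into updating the scoring functions.
    """
    order = sorted(range(len(elements)), key=lambda i: scores[i],
                   reverse=not ascending)
    return [elements[i] for i in order]
```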
  • Patent number: 11366988
    Abstract: This disclosure relates to a method and system for dynamically annotating data or validating annotated data. The method may include receiving input data comprising a plurality of input data points. The method may further include one of: a) generating a plurality of annotations for each of the plurality of input data points using at least one of a state-label mapping model and a comparative artificial neural network (ANN) model, or b) receiving the plurality of annotations for each of the plurality of input data points from an external device or from a user, and validating the plurality of annotations using at least one of the state-label mapping model and the comparative ANN model.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: June 21, 2022
    Assignee: Wipro Limited
    Inventors: Ghulam Mohiuddin Khan, Deepanker Singh, Sethuraman Ulaganathan
  • Patent number: 11347974
    Abstract: A method for automating performance evaluation of a test object detection system includes providing at least one frame of image data to the test object detection system, processing the image data via an image processor of the test object detection system, and receiving, from the test object detection system, a list of objects detected by the test object detection system in the at least one frame of image data. The frame of image data is provided to a validation object detection system, and a list of objects detected by the validation object detection system is received from the validation object detection system. The list of objects detected by the test object detection system is compared to the list of objects detected by the validation object detection system and discrepancies are determined between the lists. The determined discrepancies between the lists of objects detected are reported.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: May 31, 2022
    Assignee: MAGNA ELECTRONICS INC.
    Inventors: Sai Sunil Charugundla Gangadhar, Navdeep Singh
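Comparing the two detectors' object lists and reporting discrepancies can be sketched with set arithmetic. Matching by label alone is a simplifying assumption; a real harness would also associate detections by bounding-box overlap before declaring a discrepancy.

```python
def compare_detections(test_objects, validation_objects):
    """Report discrepancies between a test detector's output and a
    trusted validation detector's output for the same frame.

    Returns objects the test system missed, objects it reported
    spuriously, and objects both systems agreed on.
    """
    test_set, val_set = set(test_objects), set(validation_objects)
    return {
        "missed": sorted(val_set - test_set),    # validation saw, test did not
        "spurious": sorted(test_set - val_set),  # test saw, validation did not
        "agreed": sorted(test_set & val_set),
    }
```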
  • Patent number: 11348349
    Abstract: A training data increment method, an electronic apparatus and a computer-readable medium are provided. The training data increment method is adapted for the electronic apparatus and includes the following steps. A training data set is obtained, wherein the training data set includes a first image and a second image. An incremental image is generated based on the first image and the second image. A deep learning model is trained based on the incremental image.
    Type: Grant
    Filed: August 18, 2020
    Date of Patent: May 31, 2022
    Assignee: Wistron Corporation
    Inventors: Zhe-Yu Lin, Chih-Yi Chien, Kuan-I Chung
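One common way to generate an incremental image from two training images is a pixelwise blend; the abstract does not state that the patent uses exactly this operation, so treat this as an illustrative NumPy sketch.

```python
import numpy as np

def incremental_image(first, second, weight=0.5):
    """Generate a new training image as a pixelwise blend of two
    existing images from the training set (illustrative operation)."""
    first = np.asarray(first, dtype=float)
    second = np.asarray(second, dtype=float)
    if first.shape != second.shape:
        raise ValueError("images must share a shape")
    return weight * first + (1 - weight) * second
```

Each blended image is a new sample lying between two real ones, which is how the method enlarges the training set before the deep learning model is trained.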
  • Patent number: 11334772
    Abstract: A label feature extraction means 71 extracts, from reference information, a label feature that is a vector representing a feature of the reference information. A label feature dimension reduction means 72 performs dimension reduction of the label feature. An image feature extraction means 73 extracts an image feature from a target image that is an image in which an object to be recognized is captured. A feature transformation means 74 performs feature transformation on the image feature in such a manner that comparison with the label feature after the dimension reduction becomes possible. The class recognition means 75 recognizes a class of the object to be recognized by comparing the image feature after the feature transformation with the label feature after the dimension reduction.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: May 17, 2022
    Assignee: NEC CORPORATION
    Inventors: Takahiro Toizumi, Yuzo Senda
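The pipeline above (reduce the label features' dimension, transform the image feature into the same space, compare) can be sketched with an SVD-based projection and cosine similarity. Both choices are assumptions for illustration; the abstract does not commit to PCA or to cosine comparison.

```python
import numpy as np

def reduce_dim(label_features, k):
    """Project label features onto their top-k principal directions
    (SVD on centered data), one simple form of dimension reduction."""
    x = label_features - label_features.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:k].T, vt[:k]

def recognize(image_feature, reduced_labels, components, label_mean):
    """Transform an image feature into the reduced label space and
    return the index of the closest label by cosine similarity."""
    z = (image_feature - label_mean) @ components.T
    sims = [
        float(z @ l / (np.linalg.norm(z) * np.linalg.norm(l) + 1e-12))
        for l in reduced_labels
    ]
    return int(np.argmax(sims))
```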
  • Patent number: 11334769
    Abstract: In an approach to augmenting caption datasets, one or more computer processors sample a ratio lambda from a probability distribution based on a pair of datapoints contained in a dataset, wherein each datapoint in the pair of datapoints comprises an image and an associated caption; extend the dataset by generating one or more new datapoints based on the sampled ratio lambda for each pair of datapoints in the dataset, wherein the sampled ratio lambda incorporates an interpolation of features associated with the pair of datapoints into the generated one or more new datapoints; identify one or more objects contained within a subsequent image utilizing an image model trained utilizing the extended dataset; generate a subsequent caption for one or more identified objects contained within the subsequent image utilizing a language generating model trained utilizing the extended dataset.
    Type: Grant
    Filed: July 7, 2020
    Date of Patent: May 17, 2022
    Assignee: International Business Machines Corporation
    Inventors: Shiwan Zhao, Yi Ke Wu, Hao Kai Zhang, Zhong Su
  • Patent number: 11334978
    Abstract: A platform to accurately detect a user pose, verify it against a reference ground truth, and provide feedback using an accuracy score that represents the deviation of the user pose from the reference ground truth, which is typically established by an expert.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: May 17, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Udupi Ramanath Bhat, Yasushi Okumura, Fabio Cappello
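An accuracy score of the kind described can be sketched as a function of mean keypoint deviation. The linear mapping and the `tolerance` parameter are illustrative assumptions, not the platform's actual scoring rule.

```python
import numpy as np

def pose_accuracy(user_pose, reference_pose, tolerance=50.0):
    """Score in [0, 1] from the mean joint deviation between a user pose
    and an expert reference pose, both given as N x 2 keypoint arrays.

    `tolerance` is the mean per-joint error (in the pose's units)
    that maps to a score of 0; zero deviation maps to 1.
    """
    user = np.asarray(user_pose, dtype=float)
    ref = np.asarray(reference_pose, dtype=float)
    deviation = np.linalg.norm(user - ref, axis=1).mean()
    return max(0.0, 1.0 - deviation / tolerance)
```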
  • Patent number: 11328179
    Abstract: An information processing apparatus includes a processor to input each sample image into feature extracting components to obtain at least two features of the sample image, and to cause a classifying component to calculate a classification loss of the sample image based on the at least two features; extract, from each pair of features, a plurality of sample pairs for calculating mutual information between each pair of features; input the plurality of sample pairs into a machine learning architecture corresponding to each pair of features, to calculate an information loss between each pair of features.
    Type: Grant
    Filed: June 24, 2020
    Date of Patent: May 10, 2022
    Assignee: FUJITSU LIMITED
    Inventors: Wei Shen, Rujie Liu
  • Patent number: 11321815
    Abstract: A method for generating a digital image pair for training a neural network to correct noisy image components of noisy images includes determining an extent of object movements within an overlapping region of a stored first digital image and a stored second digital image of an environment of a mobile platform, and determining a respective acquired solid angle of the environment of the mobile platform of the first and second digital images. The method further includes generating the digital image pair from the first digital image and the second digital images, when the respective acquired solid angles of the environment of the first and the second digital image do not differ from one another by more than a defined difference, and the extent of the object movements within the overlapping region of the first and the second digital image is less than a defined value.
    Type: Grant
    Filed: April 23, 2020
    Date of Patent: May 3, 2022
    Assignee: Robert Bosch GmbH
    Inventor: Martin Meinke
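The pair-selection criterion above (acquired solid angles that nearly coincide, object movement in the overlap below a defined value) can be sketched as a simple predicate; the threshold values are placeholder assumptions.

```python
def eligible_pair(angle_a, angle_b, movement_extent,
                  max_angle_diff=2.0, max_movement=0.05):
    """Decide whether two stored images form a valid training pair:
    their acquired solid angles must not differ by more than a defined
    difference, and object movement within the overlapping region must
    stay below a defined value (thresholds are placeholders)."""
    return (abs(angle_a - angle_b) <= max_angle_diff
            and movement_extent < max_movement)
```

Pairs passing the predicate show essentially the same scene content, so their differences can be attributed to noise, which is what makes them usable for training a denoising network.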
  • Patent number: 11308367
    Abstract: A classification learning section executes training of a feature quantity extraction section and training of a classification section resulting from a comparison between an output generated when feature quantity data is inputted to the classification section and training data regarding a plurality of classes associated with a source domain training image. A dividing section divides feature quantity data outputted from the feature quantity extraction section in accordance with input of an image into a plurality of pieces of partial feature quantity data corresponding to the image including a feature map of one or more of the classes. A domain identification learning section executes training of the feature quantity extraction section resulting from a comparison between an output generated when partial feature quantity data corresponding to the image is inputted to a domain identification section and data indicating whether the image belongs to a source domain or to the target domain.
    Type: Grant
    Filed: January 26, 2018
    Date of Patent: April 19, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Daichi Ono