Learning Systems Patents (Class 382/155)
  • Patent number: 11961219
    Abstract: Methods and systems for generating a simulated image for a specimen are provided. One system includes one or more computer subsystems and one or more components executed by the one or more computer subsystems. The one or more components include a generative adversarial network (GAN), e.g., a conditional GAN (cGAN), trained with a training set that includes portions of design data for one or more specimens designated as training inputs and corresponding images of the one or more specimens designated as training outputs. The one or more computer subsystems are configured for generating a simulated image for a specimen by inputting a portion of design data for the specimen into the GAN.
    Type: Grant
    Filed: February 8, 2021
    Date of Patent: April 16, 2024
    Assignee: KLA Corp.
    Inventor: Bjorn Brauer
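    The cGAN setup in patent 11961219 above pairs design-data clips (training inputs) with specimen images (training outputs) and then generates a simulated image from a new design clip. A minimal Python/PyTorch sketch of that arrangement follows; the network sizes, the random stand-in tensors, and the plain adversarial loss are illustrative assumptions, not the patented implementation.

      import torch
      import torch.nn as nn

      class Generator(nn.Module):
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())
          def forward(self, design):                 # design-data patch in, image out
              return self.net(design)

      class Discriminator(nn.Module):
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))
          def forward(self, design, image):          # judges (design, image) pairs
              return self.net(torch.cat([design, image], dim=1))

      G, D = Generator(), Discriminator()
      opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
      opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
      bce = nn.BCEWithLogitsLoss()

      design_patch = torch.rand(4, 1, 64, 64)   # stand-in for design-data clips
      real_image = torch.rand(4, 1, 64, 64)     # stand-in for specimen images

      # Discriminator step: push real pairs toward 1, generated pairs toward 0.
      fake = G(design_patch).detach()
      d_loss = (bce(D(design_patch, real_image), torch.ones(4, 1)) +
                bce(D(design_patch, fake), torch.zeros(4, 1)))
      opt_d.zero_grad(); d_loss.backward(); opt_d.step()

      # Generator step: make conditioned fakes that the discriminator accepts.
      fake = G(design_patch)
      g_loss = bce(D(design_patch, fake), torch.ones(4, 1))
      opt_g.zero_grad(); g_loss.backward(); opt_g.step()

      # Inference: generate a simulated specimen image from a new design clip.
      simulated = G(torch.rand(1, 1, 64, 64))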
  • Patent number: 11941867
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a classification neural network.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: March 26, 2024
    Assignee: Google LLC
    Inventors: Geoffrey E. Hinton, Nicholas Myles Wisener Frosst, Nicolas Guy Robert Papernot
  • Patent number: 11941805
    Abstract: The present disclosure relates to systems and methods for image processing. The methods may include obtaining imaging data of a subject, generating a first image based on the imaging data, and generating at least two intermediate images based on the first image. At least one of the at least two intermediate images may be generated based on a machine learning model. And the at least two intermediate images may include a first intermediate image and a second intermediate image. The first intermediate image may include feature information of the first image, and the second intermediate image may have lower noise than the first image. The methods may further include generating, based on the first intermediate image and at least one of the first image or the second intermediate image, a target image of the subject.
    Type: Grant
    Filed: July 17, 2021
    Date of Patent: March 26, 2024
    Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
    Inventor: Yang Lyu
  • Patent number: 11935507
    Abstract: Apparatus, methods, and systems that operate to provide interactive streaming content identification and processing are disclosed. An example apparatus includes a classifier to determine an audio characteristic value representative of an audio characteristic in audio; a transition detector to detect a transition between a first category and a second category by comparing the audio characteristic value to a threshold value among a set of threshold values, the set of threshold values corresponding to the first category and the second category; and a context manager to control a device to switch from a first fingerprinting algorithm to a second fingerprinting algorithm different than the first fingerprinting algorithm, responsive to the detected transition between the first category and the second category.
    Type: Grant
    Filed: August 15, 2022
    Date of Patent: March 19, 2024
    Assignee: GRACENOTE, INC.
    Inventors: Michael Jeffrey, Markus K. Cremer, Dong-In Lee
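    The transition-detection logic in patent 11935507 above compares an audio characteristic value to a threshold and, on a category change, switches fingerprinting algorithms. A small Python sketch of that control flow is below; the threshold value, the category names, and the stand-in fingerprinting functions are hypothetical.

      def fingerprint_music(samples):    # stand-in algorithm for the "music" category
          return hash(tuple(samples)) & 0xFFFF

      def fingerprint_speech(samples):   # stand-in algorithm for the "speech" category
          return hash(tuple(reversed(samples))) & 0xFFFF

      THRESHOLD = 0.5   # assumed boundary between the two categories

      class ContextManager:
          def __init__(self):
              self.category = "music"
              self.algorithm = fingerprint_music

          def update(self, characteristic_value):
              """Detect a category transition and switch fingerprinting algorithms."""
              new_category = "music" if characteristic_value >= THRESHOLD else "speech"
              if new_category != self.category:          # transition detected
                  self.category = new_category
                  self.algorithm = (fingerprint_music if new_category == "music"
                                    else fingerprint_speech)
              return self.algorithm

      manager = ContextManager()
      for value, frame in [(0.8, [1, 2, 3]), (0.2, [4, 5, 6])]:
          algo = manager.update(value)       # switches on the 0.2 frame
          print(manager.category, algo(frame))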
  • Patent number: 11928589
    Abstract: Disclosed herein is an image preprocessing/analysis apparatus using machine learning-based artificial intelligence. The image preprocessing apparatus includes a computing system, and the computing system includes: a processor; a communication interface configured to receive an input image; and an artificial neural network configured to generate first and second preprocessing conditions through inference on the input image. The processor includes a first preprocessing module configured to generate a first preprocessed image and a second preprocessing module configured to generate a second preprocessed image.
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: March 12, 2024
    Assignee: Korea Institute of Science and Technology
    Inventors: Kihwan Choi, Jangho Kwon, Laehyun Kim
  • Patent number: 11915465
    Abstract: A method for converting a lineless table into a lined table includes associating a first set of tables with a second set of tables to form a set of multiple table pairs that includes tables with lines and tables without lines. A conditional generative adversarial network (cGAN) is trained, using the table pairs, to produce a trained cGAN. Using the trained cGAN, lines are identified for overlaying onto a lineless table. The lines are overlaid onto the lineless table to produce a lined table.
    Type: Grant
    Filed: August 21, 2019
    Date of Patent: February 27, 2024
    Inventors: Mehrdad Jabbarzadeh Gangeh, Hamid Reza Motahari-Nezad
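    Patent 11915465 above trains a cGAN on (lineless, lined) table pairs and then overlays the identified lines onto the lineless table. A tiny Python sketch of the overlay step follows, with a hand-made line mask standing in for the cGAN's prediction.

      import numpy as np

      lineless = np.full((64, 128), 255, dtype=np.uint8)   # white page of assumed size
      line_mask = np.zeros_like(lineless, dtype=bool)      # placeholder for the cGAN output
      line_mask[16, :] = True                              # a predicted horizontal rule
      line_mask[:, 40] = True                              # a predicted vertical rule

      lined = lineless.copy()
      lined[line_mask] = 0                                 # overlay the lines in black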
  • Patent number: 11907668
    Abstract: The present disclosure provides a method for selecting an annotated sample. The method includes: determining a first attribute and a second attribute of a sample characteristic; in which the first attribute is a characteristic attribute of the sample characteristic in a source field sample set, and the second attribute is a characteristic attribute of the sample characteristic in a target field sample set; and determining a target annotated sample from a plurality of candidate annotated samples of the source field sample set according to the first attribute and the second attribute; in which the target annotated sample is configured to train a classification model, the classification model including a model for determining an emotion polarity by analyzing an input sample to be classified.
    Type: Grant
    Filed: December 30, 2022
    Date of Patent: February 20, 2024
    Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Minlong Peng, Mingming Sun, Ping Li
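    Patent 11907668 above selects source-domain annotated samples by comparing a characteristic's attributes in the source and target field sample sets. The Python sketch below illustrates one such selection under an assumed frequency-based scoring; the scoring rule and toy review texts are not from the patent.

      from collections import Counter

      source_samples = [("the battery life is great", "positive"),
                        ("screen cracked after a week", "negative"),
                        ("shipping was slow", "negative")]
      target_samples = ["battery drains quickly", "great battery and decent screen"]

      source_freq = Counter(w for text, _ in source_samples for w in text.split())
      target_freq = Counter(w for text in target_samples for w in text.split())

      def score(text):
          # Reward words whose attributes carry weight in both the source and target fields.
          return sum(min(source_freq[w], target_freq[w]) for w in text.split())

      # Pick the candidate annotated sample most relevant to the target field.
      best = max(source_samples, key=lambda pair: score(pair[0]))
      print(best)   # -> ("the battery life is great", "positive")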
  • Patent number: 11893486
    Abstract: In one embodiment, a method includes by a computing device, detecting a sensory input, identifying, using a machine-learning model, one or more attributes associated with the machine-learning model, wherein the attributes are identified based on the sensory input in accordance with the model's training, and presenting the attributes as output. The identifying may be performed at least in part by an inference engine that interacts with the model. The sensory input may include an input image received from a camera, and the model may identify the attributes based on an input object in the input image in accordance with the model's training. The model may include a convolutional neural network trained using training data that associates training sensory input with the attributes. The training sensory input may include a training image of a training object, and the input object may be classified in the same class as the training object.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: February 6, 2024
    Assignee: Apple Inc.
    Inventor: Peter Zatloukal
  • Patent number: 11887270
    Abstract: The technology employs a patch-based multi-scale Transformer (300) that is usable with various imaging applications. This avoids constraints on image fixed input size and predicts the quality effectively on a native resolution image. A native resolution image (304) is transformed into a multi-scale representation (302), enabling the Transformer's self-attention mechanism to capture information on both fine-grained detailed patches and coarse-grained global patches. Spatial embedding (316) is employed to map patch positions to a fixed grid, in which patch locations at each scale are hashed to the same grid. A separate scale embedding (318) is employed to distinguish patches coming from different scales in the multiscale representation. Self-attention (508) is performed to create a final image representation. In some instances, prior to performing self-attention, the system may prepend a learnable classification token (322) to the set of input tokens.
    Type: Grant
    Filed: July 1, 2021
    Date of Patent: January 30, 2024
    Assignee: Google LLC
    Inventors: Junjie Ke, Feng Yang, Qifei Wang, Yilin Wang, Peyman Milanfar
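    Patent 11887270 above tokenizes a native-resolution image at several scales, hashes patch positions onto a shared grid for the spatial embedding, adds a scale embedding, prepends a learnable classification token, and applies self-attention. The Python/PyTorch sketch below walks through those steps; the patch size, grid size, embedding width, and single attention layer are illustrative assumptions.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      D, GRID, PATCH = 64, 8, 16
      proj = nn.Linear(3 * PATCH * PATCH, D)          # patch pixels -> token
      spatial_emb = nn.Embedding(GRID * GRID, D)      # positions hashed to a fixed grid
      scale_emb = nn.Embedding(2, D)                  # distinguishes the two scales
      cls_token = nn.Parameter(torch.zeros(1, 1, D))  # learnable classification token
      attn = nn.MultiheadAttention(D, num_heads=4, batch_first=True)

      def tokenize(img, scale_id):
          # img: (1, 3, H, W). Cut into PATCH x PATCH tiles and embed each tile.
          patches = img.unfold(2, PATCH, PATCH).unfold(3, PATCH, PATCH)
          n_h, n_w = patches.shape[2], patches.shape[3]
          patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(1, n_h * n_w, -1)
          tokens = proj(patches)
          # Hash each patch location at this scale onto the shared GRID x GRID grid.
          ys = (torch.arange(n_h) * GRID // n_h).repeat_interleave(n_w)
          xs = (torch.arange(n_w) * GRID // n_w).repeat(n_h)
          return tokens + spatial_emb(ys * GRID + xs) + scale_emb(torch.tensor(scale_id))

      native = torch.rand(1, 3, 128, 96)                        # native-resolution image
      coarse = F.interpolate(native, size=(64, 48))             # coarse global view
      tokens = torch.cat([cls_token, tokenize(native, 0), tokenize(coarse, 1)], dim=1)
      out, _ = attn(tokens, tokens, tokens)                     # self-attention over all patches
      image_repr = out[:, 0]                                    # final image representation (CLS)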
  • Patent number: 11880747
    Abstract: An image recognition method, a training system for an object recognition model and a training method for an object recognition model are provided. The image recognition method includes the following steps. At least one original sample image of an object in a field and an object range information and an object type information in the original sample image are obtained. At least one physical parameter is adjusted to generate plural simulated sample images of the object. The object range information and the object type information of the object in each of the simulated sample images are automatically marked. A machine learning procedure is performed to train an object recognition model. An image recognition procedure is performed on an input image.
    Type: Grant
    Filed: December 27, 2019
    Date of Patent: January 23, 2024
    Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Hsin-Cheng Lin, Sen-Yih Chou
  • Patent number: 11878433
    Abstract: A method for detecting a grasping position of a robot in grasping a target object includes: collecting a target RGB image and a target Depth image of the target object at different view angles; inputting each of the target RGB image to a target object segmentation network for calculation to obtain an RGB pixel region of the target object in the target RGB image and a Depth pixel region of the target object; inputting the RGB pixel region to an optimal grasping position generation network to obtain an optimal grasping position for grasping the target object; inputting the Depth pixel region of the target object and the optimal grasping position to a grasping position quality evaluation network to calculate a score of the optimal grasping position; and selecting an optimal grasping position corresponding to a highest score as a global optimal grasping position of the robot.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: January 23, 2024
    Assignee: CLOUDMINDS ROBOTICS CO., LTD.
    Inventors: Guoguang Du, Kai Wang, Shiguo Lian
  • Patent number: 11854225
    Abstract: A method for determining a localization pose of an at least partially automated mobile platform, the mobile platform being equipped to generate ground images of an area surrounding the mobile platform, and being equipped to receive aerial images of the area surrounding the mobile platform from an aerial-image system. The method includes: providing a digital ground image of the area surrounding the mobile platform; receiving an aerial image of the area surrounding the mobile platform; generating the localization pose of the mobile platform with the aid of a trained convolutional neural network, which has a first trained encoder convolutional-neural-network part and a second trained encoder convolutional-neural-network part.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: December 26, 2023
    Assignee: ROBERT BOSCH GMBH
    Inventors: Carsten Hasberg, Piyapat Saranrittichai, Tayyab Naseer
  • Patent number: 11842278
    Abstract: An example system includes a processor to receive an image containing an object to be detected. The processor is to detect the object in the image via a binary object detector trained via a self-supervised training on raw and unlabeled videos.
    Type: Grant
    Filed: January 26, 2023
    Date of Patent: December 12, 2023
    Assignee: International Business Machines Corporation
    Inventors: Elad Amrani, Tal Hakim, Rami Ben-Ari, Udi Barzelay
  • Patent number: 11829443
    Abstract: Disclosed are techniques for augmenting video datasets for training machine learning algorithms with additional video datasets that are cropped copies of the video datasets. Frames of a received video dataset are divided into a plurality of subframes. For each subframe, a count is tallied corresponding to the cumulative number of pixels changed across the frames of the received video. Counts are compared to determine which subframe includes the most changed pixels across the frames of the video dataset, which is selected as a cropping candidate. The cropping candidate is used to generate copies of the video dataset that are cropped to include at least the cropping candidate and exclude at least some of the remaining portions of each frame of the video dataset that are outside of the cropping candidate. In some embodiments, boundaries of cropping candidates are transformed to generate a plurality of cropped variations of the video dataset.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: November 28, 2023
    Assignee: International Business Machines Corporation
    Inventors: Hiroki Kawasaki, Shingo Nagai
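    Patent 11829443 above picks a cropping candidate by counting, per subframe, the pixels that change across the frames of a video. A short Python/NumPy sketch of that selection follows; the 2x2 subframe grid and the random toy video are assumptions.

      import numpy as np

      video = np.random.randint(0, 256, size=(10, 64, 64), dtype=np.uint8)   # frames (T, H, W)
      changed = np.abs(np.diff(video.astype(np.int16), axis=0)) > 0          # per-pixel changes

      T, H, W = video.shape
      counts = {}
      for r in range(2):
          for c in range(2):
              sub = changed[:, r * H // 2:(r + 1) * H // 2, c * W // 2:(c + 1) * W // 2]
              counts[(r, c)] = int(sub.sum())      # cumulative changed pixels in this subframe

      r, c = max(counts, key=counts.get)           # cropping candidate: most changed pixels
      cropped = video[:, r * H // 2:(r + 1) * H // 2, c * W // 2:(c + 1) * W // 2]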
  • Patent number: 11797864
    Abstract: Systems and methods for training a conditional generator model are described. Methods receive a sample, and determine a discriminator loss for the received sample. The discriminator loss is based on an ability to determine whether the sample is generated by the conditional generator model or is a ground truth sample. The method determines a secondary loss for the generated sample and updates the conditional generator model based on an aggregate of the discriminator loss and the secondary loss.
    Type: Grant
    Filed: November 16, 2018
    Date of Patent: October 24, 2023
    Inventors: Shabab Bazrafkan, Peter Corcoran
  • Patent number: 11797603
    Abstract: Techniques are disclosed for using and training a descriptor network. An image may be received and provided to the descriptor network. The descriptor network may generate an image descriptor based on the image. The image descriptor may include a set of elements distributed between a major vector comprising a first subset of the set of elements and a minor vector comprising a second subset of the set of elements. The second subset of the set of elements may include more elements than the first subset of the set of elements. A hierarchical normalization may be imposed onto the image descriptor by normalizing the major vector to a major normalization amount and normalizing the minor vector to a minor normalization amount. The minor normalization amount may be less than the major normalization amount.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: October 24, 2023
    Assignee: Magic Leap, Inc.
    Inventor: Koichi Sato
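    The hierarchical normalization in patent 11797603 above splits the descriptor into a major vector and a longer minor vector and normalizes each to its own amount, with the minor amount smaller. A Python/NumPy sketch follows; the split point and the 0.8/0.6 amounts are assumed values.

      import numpy as np

      descriptor = np.random.randn(128).astype(np.float32)
      major, minor = descriptor[:32], descriptor[32:]     # the minor vector has more elements

      MAJOR_NORM, MINOR_NORM = 0.8, 0.6                   # minor amount less than major amount
      major = major / np.linalg.norm(major) * MAJOR_NORM
      minor = minor / np.linalg.norm(minor) * MINOR_NORM

      normalized = np.concatenate([major, minor])
      print(np.linalg.norm(normalized[:32]), np.linalg.norm(normalized[32:]))   # ~0.8, ~0.6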
  • Patent number: 11797858
    Abstract: A method for training a generator is provided. The generator is supplied with at least one actual signal that includes real or simulated physical measured data from at least one observation of the first area. The actual signal is translated by the generator into a transformed signal that represents the associated synthetic measured data in a second area. Using a cost function, an assessment is made of the extent to which the transformed signal is consistent with one or multiple setpoint signals, at least one setpoint signal being formed from real or simulated measured data of the second physical observation modality for the situation represented by the actual signal. Trainable parameters that characterize the behavior of the generator are optimized with the objective of obtaining transformed signals that are better assessed by the cost function. A method for operating the generator and a method that encompasses the complete process chain are also provided.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: October 24, 2023
    Assignee: ROBERT BOSCH GMBH
    Inventors: Gor Hakobyan, Kilian Rambach, Jasmin Ebert
  • Patent number: 11775818
    Abstract: A training system for training a generator neural network arranged to transform measured sensor data into generated sensor data. The generator network is arranged to receive as input sensor data and a transformation goal selected from a plurality of transformation goals and is arranged to transform the sensor data according to the transformation goal.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: October 3, 2023
    Assignee: ROBERT BOSCH GMBH
    Inventors: Anna Khoreva, Dan Zhang
  • Patent number: 11775770
    Abstract: Systems described herein may use machine classifiers to perform a variety of natural language understanding tasks including, but not limited to, multi-turn dialogue generation. Machine classifiers in accordance with aspects of the disclosure may model multi-turn dialogue as a one-to-many prediction task. The machine classifier may be trained using adversarial bootstrapping between a generator and a discriminator with multi-turn capabilities. The machine classifiers may be trained in both auto-regressive and traditional teacher-forcing modes, with the maximum likelihood loss of the auto-regressive outputs being weighted by the score from a metric-based discriminator model. The discriminator's input may include a mixture of ground truth labels, the teacher-forcing outputs of the generator, and/or negative examples from the dataset. This mixture of input may allow for richer feedback on the autoregressive outputs of the generator.
    Type: Grant
    Filed: May 21, 2020
    Date of Patent: October 3, 2023
    Assignee: Capital One Services, LLC
    Inventors: Oluwatobi Olabiyi, Erik T. Mueller
  • Patent number: 11767028
    Abstract: This document describes change detection criteria for updating sensor-based maps. Based on an indication that a registered object is detected near a vehicle, a processor determines differences between features of the registered object and features of a sensor-based reference map. A machine-learned model is trained using self-supervised learning to identify change detections from inputs. This model is executed to determine whether the differences satisfy change detection criteria for updating the sensor-based reference map. If the change detection criteria are satisfied, the processor causes the sensor-based reference map to be updated to reduce the differences, which enables the vehicle to safely operate in an autonomous mode using the updated reference map for navigating the vehicle in proximity to the coordinate location of the registered object.
    Type: Grant
    Filed: February 22, 2021
    Date of Patent: September 26, 2023
    Assignee: Aptiv Technologies Limited
    Inventors: Kai Zhang, Walter K. Kosiak
  • Patent number: 11748975
    Abstract: The present disclosure discloses a method and device for optimizing an object-class model based on a neural network. The method includes: establishing the object-class model based on the neural network, training the object-class model, and classifying target images by using the trained object-class model; and when a new target image is generated, where the new target image corresponds to a new condition of a target and can still be classified into the original classification system, judging the result of the object-class model's identification of the new target image, and if the object-class model cannot correctly classify the new target image, selecting some of the parameters according to the new target image, adjusting those parameters, and training to obtain an object-class model that can correctly classify the new target image.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: September 5, 2023
    Assignee: GOERTEK INC.
    Inventors: Shunran Di, Yifan Zhang, Jie Liu, Jifeng Tian
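    Patent 11748975 above reacts to a misclassified new target image by selecting and adjusting only some of the model's parameters. The Python/PyTorch sketch below freezes a backbone and fine-tunes only the final layer; the toy model and the choice of which parameters to adjust are assumptions.

      import torch
      import torch.nn as nn

      model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
      new_image, new_label = torch.rand(1, 1, 28, 28), torch.tensor([3])

      if model(new_image).argmax(dim=1).item() != new_label.item():   # misclassified new image
          for p in model.parameters():
              p.requires_grad = False                # keep most parameters fixed
          for p in model[-1].parameters():           # "some of the parameters": the last layer
              p.requires_grad = True
          opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.01)
          loss_fn = nn.CrossEntropyLoss()
          for _ in range(20):                        # adjust until the new image is learned
              opt.zero_grad()
              loss_fn(model(new_image), new_label).backward()
              opt.step()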
  • Patent number: 11741701
    Abstract: A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
    Type: Grant
    Filed: February 8, 2022
    Date of Patent: August 29, 2023
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jeremy Ma, Kevin Stone, Max Bajracharya, Krishna Shankar
  • Patent number: 11734810
    Abstract: A laser system for amplifying laser light generated from a laser light source and emitting the laser light includes an optical element in an optical path of the laser light and transmits the laser light, a control device to control power to be supplied to the laser system, an imager to capture an image of the optical element, and an image processing circuitry to process the image of the optical element captured by the imager. The image processing circuitry in which reference images of the optical element corresponding to power information relating to the power are prepared in advance includes a comparison unit to compare a captured image of the optical element captured by the imager with a reference image selected by a reference image selection unit, the reference image corresponding to the power information at a time of image capturing by the imager.
    Type: Grant
    Filed: July 15, 2020
    Date of Patent: August 22, 2023
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventors: Takuya Kawashima, Sei Ebihara, Tatsuya Yamamoto, Masashi Naruse, Ken Hamachiyo
  • Patent number: 11715047
    Abstract: The image processing apparatus for performing display restriction processing on a captured image captured by a moving robot includes: a task acquisition unit configured to acquire information that corresponds to a property of a task to be executed via the remote operation performed on the moving robot; a target object identification unit configured to identify target objects in the captured image; a restricted target object specification unit configured to specify a target object for which a display restriction is required among the target objects identified in the target object identification unit in accordance with the property of the task to be executed by the moving robot based on the above information; and a display restriction processing unit configured to perform the display restriction processing on a restricted area in the captured image that corresponds to the target object for which display restriction is required.
    Type: Grant
    Filed: July 23, 2019
    Date of Patent: August 1, 2023
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Takuya Ikeda
  • Patent number: 11679506
    Abstract: One embodiment of the present invention sets forth a technique for generating simulated training data for a physical process. The technique includes receiving, as input to at least one machine learning model, a first simulated image of a first object, wherein the at least one machine learning model includes mappings between simulated images generated from models of physical objects and real-world images of the physical objects. The technique also includes performing, by the at least one machine learning model, one or more operations on the first simulated image to generate a first augmented image of the first object. The technique further includes transmitting the first augmented image to a training pipeline for an additional machine learning model that controls a behavior of the physical process.
    Type: Grant
    Filed: March 10, 2022
    Date of Patent: June 20, 2023
    Assignee: AUTODESK, INC.
    Inventors: Hui Li, Evan Patrick Atherton, Erin Bradner, Nicholas Cote, Heather Kerrick
  • Patent number: 11582400
    Abstract: A method of image processing based on a plurality of frames of images, an electronic device, and a storage medium are provided. The method includes: capturing a plurality of frames of original images; obtaining a high dynamic range (HDR) image by performing image synthesis on the plurality of frames of original images; performing artificial intelligent-based denoising on the HDR image to obtain a target denoised image.
    Type: Grant
    Filed: April 3, 2020
    Date of Patent: February 14, 2023
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventor: Jiewen Huang
  • Patent number: 11580673
    Abstract: The subject matter described herein includes methods, systems, and computer readable media for mask embedding for realistic high-resolution image synthesis. One method for mask embedding for realistic high-resolution image synthesis includes receiving, as input, a mask embedding vector and a latent features vector, wherein the mask embedding vector acts as a semantic constraint; generating, using a trained image synthesis algorithm and the input, a realistic image, wherein the realistic image is constrained by the mask embedding vector; and outputting, by the trained image synthesis algorithm, the realistic image to a display or a storage device.
    Type: Grant
    Filed: June 4, 2020
    Date of Patent: February 14, 2023
    Assignee: Duke University
    Inventors: Yinhao Ren, Joseph Yuan-Chieh Lo
  • Patent number: 11580743
    Abstract: A system and method for providing unsupervised domain adaption for spatio-temporal action localization that includes receiving video data associated with a source domain and a target domain that are associated with a surrounding environment of a vehicle. The system and method also include analyzing the video data associated with the source domain and the target domain and determining a key frame of the source domain and a key frame of the target domain. The system and method additionally include completing an action localization model to model a temporal context of actions occurring within the key frame of the source domain and the key frame of the target domain and completing an action adaption model to localize individuals and their actions and to classify the actions based on the video data. The system and method further include combining losses to complete spatio-temporal action localization of individuals and actions.
    Type: Grant
    Filed: March 25, 2022
    Date of Patent: February 14, 2023
    Assignee: HONDA MOTOR CO., LTD.
    Inventors: Yi-Ting Chen, Behzad Dariush, Nakul Agarwal, Ming-Hsuan Yang
  • Patent number: 11541428
    Abstract: A system for categorizing seeds of plants into hybrid and non-hybrid categories. Seeds sorted according to the disclosed system are also disclosed.
    Type: Grant
    Filed: December 3, 2018
    Date of Patent: January 3, 2023
    Assignee: SeedX Technologies Inc.
    Inventors: Mordekhay Shniberg, Elad Carmon, Sarel Ashkenazy, David Gedalyaho Vaisberger, Sharon Ayal
  • Patent number: 11521043
    Abstract: An information processing method for embedding watermark bits into weights of a first neural network includes: obtaining an output of a second neural network by inputting a plurality of input values obtained from a plurality of weights of the first neural network to the second neural network; obtaining second gradients of the respective plurality of input values based on an error between the output of the second neural network and the watermark bits; and updating the weights based on values obtained by adding first gradients of the weights of the first neural network that have been obtained based on backpropagation and the respective second gradients.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: December 6, 2022
    Assignee: KDDI CORPORATION
    Inventors: Yusuke Uchida, Shigeyuki Sakazawa
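    Patent 11521043 above feeds values obtained from the first network's weights into a second network, measures the error against the watermark bits, and adds the resulting gradients to the task gradients before updating the weights. A Python/PyTorch sketch follows; the layer sizes, the flattening of the weights, and the equal weighting of the two losses are assumptions.

      import torch
      import torch.nn as nn

      task_net = nn.Linear(16, 4)                       # first neural network (the host model)
      wm_net = nn.Linear(64, 8)                         # second neural network
      watermark = torch.randint(0, 2, (1, 8)).float()   # watermark bits to embed

      x, y = torch.rand(32, 16), torch.randint(0, 4, (32,))
      task_loss = nn.CrossEntropyLoss()(task_net(x), y)              # yields the first gradients
      wm_input = task_net.weight.reshape(1, -1)                      # values obtained from the weights
      wm_loss = nn.BCEWithLogitsLoss()(wm_net(wm_input), watermark)  # yields the second gradients

      (task_loss + wm_loss).backward()      # grads on task_net.weight are the sum of both
      with torch.no_grad():
          for p in task_net.parameters():
              p -= 0.1 * p.grad             # update the weights with the combined gradients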
  • Patent number: 11507826
    Abstract: A computer system uses Learning from Demonstration (LfD) techniques in which a multitude of tasks are demonstrated without requiring careful task set up, labeling, and engineering, and learns multiple modes of behavior from visual data, rather than averaging the multiple modes. As a result, the computer system may be used to control a robot or other system to exhibit the multiple modes of behavior in appropriate circumstances.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: November 22, 2022
    Assignee: Osaro
    Inventors: Khashayar Rohanimanesh, Aviv Tamar, Yinlam Chow
  • Patent number: 11501109
    Abstract: Methods and apparatus are disclosed for implementing machine learning data augmentation within the die of a non-volatile memory (NVM) apparatus using on-chip circuit components formed on or within the die. Some particular aspects relate to configuring under-the-array or next-to-the-array components of the die to generate augmented versions of images for use in training a Deep Learning Accelerator of an image recognition system by rotating, translating, skewing, cropping, etc., a set of initial training images obtained from a host device. Other aspects relate to configuring under-the-array or next-to-the-array components of the die to generate noise-augmented images by, for example, storing and then reading training images from worn regions of a NAND array to inject noise into the images.
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: November 15, 2022
    Assignee: Western Digital Technologies, Inc.
    Inventors: Alexander Bazarsky, Ariel Navon
  • Patent number: 11478169
    Abstract: Action recognition methods are disclosed.
    Type: Grant
    Filed: April 13, 2020
    Date of Patent: October 25, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yu Qiao, Wenbin Du, Yali Wang, Lihui Jiang, Jianzhuang Liu
  • Patent number: 11481681
    Abstract: A system for training a classification model to be robust against perturbations of multiple perturbation types. A perturbation type defines a set of allowed perturbations. The classification model is trained by, in an outer iteration, selecting a set of training instances of a training dataset; selecting, among perturbations allowed by the multiple perturbation types, one or more perturbations for perturbing the selected training instances to maximize a loss function; and updating the set of parameters of the classification model to decrease the loss for the perturbed instances. A perturbation is determined by, in an inner iteration, determining updated perturbations allowed by respective perturbation types of the multiple perturbation types and selecting an updated perturbation that most increases the loss of the classification model.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: October 25, 2022
    Assignees: Robert Bosch GmbH, CARNEGIE MELLON UNIVERSITY
    Inventors: Eric Wong, Frank Schmidt, Jeremy Zieg Kolter, Pratyush Maini
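    Patent 11481681 above trains a classifier by crafting, for each batch, the worst perturbation allowed by any of several perturbation types and then descending on the perturbed instances. The Python/PyTorch sketch below uses one-step L-infinity and L2 perturbations as the two types; the budgets and the single inner step are simplifying assumptions.

      import torch
      import torch.nn as nn

      model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
      opt = torch.optim.SGD(model.parameters(), lr=0.1)
      loss_fn = nn.CrossEntropyLoss()
      x, y = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))

      def perturb(x, grad, kind, eps=0.1):
          if kind == "linf":                       # one sign step inside an L-inf ball
              return (x + eps * grad.sign()).clamp(0, 1)
          flat = grad.flatten(1)                   # one normalized step inside an L2 ball
          unit = (flat / (flat.norm(dim=1, keepdim=True) + 1e-12)).view_as(grad)
          return (x + eps * unit).clamp(0, 1)

      x_adv = x.clone().requires_grad_(True)
      loss_fn(model(x_adv), y).backward()          # gradient of the loss w.r.t. the inputs
      candidates = [perturb(x, x_adv.grad, k) for k in ("linf", "l2")]

      # Inner selection: keep the perturbation that most increases the loss.
      worst = max(candidates, key=lambda xp: loss_fn(model(xp), y).item())

      # Outer update: decrease the loss on the worst-case perturbed batch.
      opt.zero_grad()
      loss_fn(model(worst), y).backward()
      opt.step()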
  • Patent number: 11436443
    Abstract: A model testing system administers tests to machine learning (ML) models to test the accuracy and the robustness of the ML models. A user interface (UI) associated with the model testing system receives selections of one or more of a plurality of tests to be administered to a ML model under test. Test data produced by one or more of a plurality of testing ML models that correspond to the plurality of tests is provided to the ML model under test based on the selected tests. One or more of a generative patches test, a generative perturbations test and a counterfeit data test can be administered to the ML model under test based on the selections.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: September 6, 2022
    Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
    Inventors: Indrajit Kar, Shalini Agarwal, Vishal Pandey, Mohammed C. Salman, Sushresulagna Rath
  • Patent number: 11428535
    Abstract: A system, a method, and a computer program product for determining a sign type of a road sign are disclosed herein. The system comprises a memory configured to store computer-executable instructions and one or more processors configured to execute the instructions to obtain sensor data associated with the road sign, wherein the sensor data comprises data associated with counts of road sign observations, determine one or more features associated with the road sign, based on the obtained sensor data, and determine the sign type of the road sign, based on the one or more features.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: August 30, 2022
    Assignee: HERE GLOBAL B.V.
    Inventors: Advait Mohan Raut, Leon Stenneth, Bruce Bernhardt
  • Patent number: 11423598
    Abstract: A method for generating a synthetic image with predefined properties. The method includes the steps of providing first values which characterize the predefined properties of the image that is to be generated and attention weights which characterize a weighting of one of the first values, and sequentially feeding the first values and assigned attention weights as input value pairs into a generative automated learning system that includes at least a recurrent connection. An image generation system and a computer program that are configured to carry out the method are also described.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: August 23, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Wenling Shang, Kihyuk Sohn
  • Patent number: 11413753
    Abstract: A control method includes: deriving an approach location at which the end effector grips an operation object; deriving a scan location for scanning an identifier of the operation object; and based on the approach location and the scan location, creating or deriving a control sequence to instruct the robot to execute the control sequence. The control sequence includes (1) gripping the operation object from a start location; (2) scanning an identifier of the operation object with a scanner located between the start location and a task location; (3) temporarily releasing the operation object from the end effector and regripping the operation object by the end effector to be shifted, at a shift location, when a predetermined condition is satisfied; and (4) moving the operation object to the task location.
    Type: Grant
    Filed: December 2, 2020
    Date of Patent: August 16, 2022
    Assignee: MUJIN, Inc.
    Inventors: Rosen Nikolaev Diankov, Yoshiki Kanemoto, Denys Kanunikov
  • Patent number: 11416707
    Abstract: An information processing method is executed by a computer, and includes: obtaining a first image generated by a multi-pinhole camera; extracting at least one point spread function (PSF) in each of a plurality of regions in the first image; obtaining a second image different from the first image, and reference data used in machine learning for the second image; generating a third image, by convolving each of a plurality of regions in the second image with at least one PSF extracted in a corresponding region of the plurality of regions in the first image; and outputting a pair of the reference data and the third image.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: August 16, 2022
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Satoshi Sato, Yasunori Ishii, Ryota Fujimura, Pongsak Lasang, Changxin Zhou
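    Patent 11416707 above convolves each region of a second image with a PSF extracted from the corresponding region of the multi-pinhole image, yielding a third image paired with the reference data. A Python sketch of that region-wise convolution follows; the 2x2 region grid and the Gaussian stand-in PSFs are assumptions.

      import numpy as np
      from scipy.signal import convolve2d

      def gaussian_psf(sigma, size=9):
          ax = np.arange(size) - size // 2
          g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
          return g / g.sum()

      second = np.random.rand(128, 128)               # the second (ordinary) image
      psfs = {(r, c): gaussian_psf(0.5 + r + c)       # stand-ins for PSFs from the first image
              for r in range(2) for c in range(2)}

      third = np.zeros_like(second)
      H, W = second.shape
      for (r, c), psf in psfs.items():
          ys = slice(r * H // 2, (r + 1) * H // 2)
          xs = slice(c * W // 2, (c + 1) * W // 2)
          third[ys, xs] = convolve2d(second[ys, xs], psf, mode="same", boundary="symm")
      # (reference_data, third) would then be output as a training pair.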
  • Patent number: 11392122
    Abstract: The technology relates to assisting large self-driving vehicles, such as cargo vehicles, as they maneuver towards and/or park at a destination facility. This may include a given vehicle transitioning between different autonomous driving modes. Such vehicles may be permitted to drive in a fully autonomous mode on certain roadways for the majority of a trip, but may need to change to a partially autonomous mode on other roadways or when entering or leaving a destination facility such as a warehouse, depot or service center. Large vehicles such as cargo trucks may have limited room to maneuver in and park at the destination, which may also prevent operation in a fully autonomous mode. Here, information from the destination facility and/or a remote assistance service can be employed to aid in real-time semi-autonomous maneuvering.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: July 19, 2022
    Assignee: Waymo LLC
    Inventors: Vijaysai Patnaik, William Grossman
  • Patent number: 11385901
    Abstract: A system including: at least one processor; and at least one memory having stored thereon computer program code that, when executed by the at least one processor, controls the system to: receive a data model identification and a dataset; in response to determining that the data model does not contain a hierarchical structure, perform expectation propagation on the dataset to approximate the data model with a hierarchical structure; divide the dataset into a plurality of channels; for each of the plurality of channels: divide the data into a plurality of microbatches; process each microbatch of the plurality of microbatches through parallel iterators; and process the output of the parallel iterators through single-instruction multiple-data (SIMD) layers; and asynchronously merge results of the SIMD layers.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: July 12, 2022
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventors: Matthew van Adelsberg, Rohit Joshi, Siqi Wang
  • Patent number: 11373285
    Abstract: An image generation means 81 generates an image using a generator. A discrimination means 82 discriminates whether an object image includes a feature of a target image, using a discriminator. A first update means 83 updates the generator so as to minimize a first error representing a degree of divergence between a result of discriminating a generated image using the discriminator and a correct answer label associated with the generated image, the generated image being the image generated using the generator. A second update means 84 updates the discriminator so as to minimize a second error representing a degree of divergence between each of respective results of discriminating the generated image, a first actual image including the feature of the target image, and a second actual image not including the feature of the target image using the discriminator and a correct answer label associated with a corresponding image.
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: June 28, 2022
    Assignee: NEC CORPORATION
    Inventors: Kyota Higa, Azusa Sawada
  • Patent number: 11354792
    Abstract: Technologies for image processing based on a creation workflow for creating a type of images are provided. Both multi-stage image generation as well as multi-stage image editing of an existing image are supported. To accomplish this, one system models the sequential creation stages of the creation workflow. In the backward direction, inference networks can backward transform an image into various intermediate stages. In the forward direction, generation networks can forward transform an earlier-stage image into a later-stage image based on stage-specific operations. Advantageously, this technical solution overcomes the limitations of the single-stage generation strategy with a multi-stage framework to model different types of variation at various creation stages. Resultantly, both novices and seasoned artists can use these technologies to efficiently perform complex artwork creation or editing tasks.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: June 7, 2022
    Assignee: Adobe Inc.
    Inventors: Matthew David Fisher, Hung-Yu Tseng, Yijun Li, Jingwan Lu
  • Patent number: 11341361
    Abstract: An analysis method executed by a computer includes acquiring a refine image that maximizes a score for inferring a correct label by an inferring process using a trained model, the refine image being generated from an input image used when an incorrect label is inferred; generating a map indicating a region of pixels having the same or similar level of attention degree related to inference in the inferring process, of a plurality of pixels in the generated refine image, based on a feature amount used in the inferring process; extracting an image corresponding to a pixel region whose level in the generated map is a predetermined level, from calculated images calculated based on the input image and the refine image; and generating an output image that specifies a portion related to an inference error in the inferring process, among the calculated images, based on image processing on the extracted image.
    Type: Grant
    Filed: October 6, 2020
    Date of Patent: May 24, 2022
    Assignee: FUJITSU LIMITED
    Inventors: Tomonori Kubota, Takanori Nakao, Yasuyuki Murata
  • Patent number: 11328184
    Abstract: Disclosed are an image classification and conversion method, apparatus, image processor and training method thereof, and medium. The image classification method includes receiving a first input image and a second input image; performing image encoding on the first input image by utilizing n stages of encoding units connected in cascades to produce a first output image, wherein n is an integer greater than 1, and wherein, for 1 ≤ i < n, the output of the i-th stage of encoding unit is the input of the (i+1)-th stage of encoding unit, and wherein m is an integer greater than 1; and outputting the first output image, the first output image comprising m^n output sub-images, each of the m^n output sub-images corresponding to an image category.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: May 10, 2022
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Pablo Navarrete Michelini, Hanwen Liu
  • Patent number: 11321529
    Abstract: A date extractor disclosed herein allows extracting dates and date ranges from documents. An implementation of the date extractor uses various computer process instructions including scanning a document to generate a plurality of tokens, assigning labels to tokens using a named entity recognition machine to generate a named entity vector, extracting dates from the named entity vector by comparing each of the named entities of the named entity vector to predetermined patterns of dates to generate a date vector, generating a plurality of date pairs from the date vector, and extracting date ranges by comparing the plurality of date pairs to predetermined patterns of date ranges.
    Type: Grant
    Filed: December 25, 2018
    Date of Patent: May 3, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ying Wang, Min Li, Mengyan Lu
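    Patent 11321529 above tokenizes a document, labels tokens with named-entity recognition, matches entities against date patterns to build a date vector, pairs the dates, and matches the pairs against date-range patterns. The Python sketch below compresses that flow, with a regex standing in for the NER step and a simplified cue-based range check.

      import re
      from itertools import combinations
      from datetime import date

      text = "Employed from January 5, 2018 to March 12, 2020; reviewed June 1, 2019."

      MONTHS = {m: i + 1 for i, m in enumerate(
          ["January", "February", "March", "April", "May", "June", "July",
           "August", "September", "October", "November", "December"])}
      DATE_PATTERN = re.compile(r"(January|February|March|April|May|June|July|August|"
                                r"September|October|November|December) (\d{1,2}), (\d{4})")

      # Date vector: every span matching a predetermined date pattern.
      dates = [date(int(y), MONTHS[m], int(d)) for m, d, y in DATE_PATTERN.findall(text)]

      # Date pairs, then ranges: keep ordered pairs joined by "from ... to ..." cues.
      ranges = [(a, b) for a, b in combinations(dates, 2)
                if a < b and f"from {a.strftime('%B')}" in text
                and f"to {b.strftime('%B')}" in text]
      print(dates)    # the three extracted dates
      print(ranges)   # the 2018-01-05 to 2020-03-12 range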
  • Patent number: 11308319
    Abstract: A technique makes use of a few-shot model to determine graphical features present in an image based on a small set of examples with known graphical features. Given a support set including a number of images that each have a known combination of graphical features, the image recognition can identify unknown combinations of those graphical features in any number of query images. In an embodiment of the present disclosure, examples of a filled-out form are used to interpret any number of additional filled-out versions of the form.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: April 19, 2022
    Assignee: DST Technologies, Inc.
    Inventors: Hui Peng Hu, Ramesh Sridharan
  • Patent number: 11301754
    Abstract: An information processing device and method for sharing of compressed training data for neural network training is provided. The information processing device receives a first image which includes an object of interest. The information processing device extracts, from the received first image, a region of interest which includes the object of interest. Once extracted, the extracted region of interest is provided to an input layer of N numbers of layers of a first neural network, trained on an object detection task. The information processing device selects an intermediate layer of the first neural network and extracts a first intermediate result as an output generated by the selected intermediate layer of the first neural network based on the input RoI. Once extracted, the information processing device shares the extracted first intermediate result as compressed training data with a server to train a second neural network on the object detection task.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: April 12, 2022
    Assignee: SONY CORPORATION
    Inventor: Nikolaos Georgis
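    Patent 11301754 above extracts the output of a selected intermediate layer of a trained first network, for an extracted region of interest, and shares it as compressed training data. The Python/PyTorch sketch below captures such an intermediate activation with a forward hook; the backbone and the chosen layer are stand-ins.

      import torch
      import torch.nn as nn

      first_net = nn.Sequential(                    # stand-in for the trained first network
          nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
          nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
          nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())

      captured = {}
      def hook(module, inputs, output):
          captured["intermediate"] = output.detach()     # the compressed training data

      handle = first_net[2].register_forward_hook(hook)  # select an intermediate layer

      roi = torch.rand(1, 3, 128, 128)    # region of interest extracted from the first image
      first_net(roi)                      # forward pass triggers the hook
      handle.remove()

      payload = captured["intermediate"]  # shape (1, 32, 32, 32): fewer values than the RoI
      # `payload` would be shared with the server to train the second network.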
  • Patent number: 11295172
    Abstract: A method of detecting objects in non-perspective images starts by generating an arrangement of tiles based on a field of view of a non-perspective camera lens, a predetermined size of the tiles, and a predetermined maximum object radius. The arrangement of the tiles includes the minimum number of tiles to cover the field of view. A non-perspective image is then captured using the non-perspective camera lens. The non-perspective image may be a still image frame or a video. Using the tiles, a plurality of images are generated, respectively, and at least a portion of a first object is detected in one or more images. The first object is generated using the one or more images that include at least the portion of the first object, and the first object is displayed on a display interface. Other embodiments are described herein.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: April 5, 2022
    Assignee: Snap Inc.
    Inventors: Kevin Xie Chen, Shree K. Nayar
  • Patent number: 11288883
    Abstract: A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: March 29, 2022
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Jeremy Ma, Kevin Stone, Max Bajracharya, Krishna Shankar