Patents Examined by Dustin Bilodeau
  • Patent number: 12387487
    Abstract: An apparatus includes a display control unit, a receiving unit, an adjusting unit, and a determination unit. The display control unit is configured to display an image showing a result of detection of a defect from a captured image of a structure on a display device. The receiving unit is configured to receive an operation to specify part of the displayed image as a first region and an operation to give an instruction to correct at least part of the detection data corresponding to the first region. The adjusting unit is configured to adjust a parameter to be applied to the first region according to the instruction. The determination unit is configured to determine one or more second regions to which the adjusted parameter is to be applied from a plurality of segmented regions of the image.
    Type: Grant
    Filed: February 9, 2022
    Date of Patent: August 12, 2025
    Assignee: Canon Kabushiki Kaisha
    Inventors: Kei Takayama, Yusuke Mitarai, Atsushi Nogami, Shoichi Hoshino
  • Patent number: 12380729
    Abstract: Computerized systems, methods, and computer readable media. The method may include receiving, by a neural network, input face visual information; wherein the neural network comprises multiple convolutional layers, an embedding layer and one or more conversion layers; generating, by the embedding layer, a face recognition (FR) feature vector that comprises multiple FR feature elements; and generating a binary representation of the face recognition features based on the FR feature vector.
    Type: Grant
    Filed: February 25, 2022
    Date of Patent: August 5, 2025
    Assignee: CORSIGHT .AI
    Inventors: Ran Vardimon, Yarden Yaniv
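    Illustrative sketch: one common way to realize the final step of 12380729 — turning an FR feature vector into a binary representation — is element-wise thresholding of the embedding. The sign/threshold rule and the 512-dimensional toy embedding below are assumptions, not the patent's specific conversion layers.
      import numpy as np

      def binarize_embedding(fr_vector, thresholds=None):
          # Threshold each FR feature element to obtain a compact binary code.
          # Zero thresholds (i.e. sign binarization) are an illustrative default.
          if thresholds is None:
              thresholds = np.zeros_like(fr_vector)
          return (fr_vector > thresholds).astype(np.uint8)

      # toy 512-dimensional embedding standing in for the embedding layer's output
      rng = np.random.default_rng(0)
      embedding = rng.standard_normal(512)
      print(binarize_embedding(embedding)[:16])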
  • Patent number: 12361714
    Abstract: An embodiment for detecting the danger of an object in close proximity to a subject is provided. The embodiment may include receiving real-time data from one or more IoT devices in a surrounding environment. The embodiment may also include detecting and classifying one or more subjects and one or more objects in an image from the one or more IoT devices. The embodiment may further include identifying one or more risk factors associated with each object. The embodiment may also include correlating the one or more risk factors associated with each object with the one or more subjects in the image. The embodiment may further include identifying relative positions of the one or more subjects and the one or more objects in the image. The embodiment may also include in response to determining a current position of at least one subject is dangerous, notifying a user of the danger.
    Type: Grant
    Filed: August 9, 2022
    Date of Patent: July 15, 2025
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Nageswara Sastry Renduchintala, Hamid Majdabadi, Su Liu, Yang Liang
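    Illustrative sketch: the correlation step of 12361714 — relating per-object risk factors to subject positions — can be approximated with a distance-threshold rule. The detection tuples, the risk-factor table, and the radius-times-risk criterion below are hypothetical; the patent does not fix a specific danger test.
      import math

      def dangerous_positions(subjects, objects, risk_factors, danger_radius=1.0):
          # Flag any subject whose distance to a risky object falls below a
          # risk-scaled radius; positions are (label, x, y) detections from IoT imagery.
          alerts = []
          for s_label, sx, sy in subjects:
              for o_label, ox, oy in objects:
                  risk = risk_factors.get(o_label, 0.0)
                  if risk > 0 and math.hypot(sx - ox, sy - oy) < danger_radius * risk:
                      alerts.append((s_label, o_label))
          return alerts

      print(dangerous_positions([("child", 0.0, 0.0)],
                                [("stove", 0.5, 0.2), ("sofa", 2.0, 1.0)],
                                {"stove": 2.0, "sofa": 0.0}))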
  • Patent number: 12361535
    Abstract: There is provided a system and method for mask inspection, comprising: obtaining a plurality of images, each representative of a respective part of the mask; generating a CD map of the mask comprising a plurality of composite values of a CD measurement of a POI respectively derived from the plurality of images, comprising, for each given image: dividing the given image into a plurality of sections; searching for the POI in the plurality of sections, giving rise to a set of sections, each with presence of at least one of the POI therein; for each section, obtaining a value of the CD measurement using a printing threshold, giving rise to a set of values of the CD measurement corresponding to the set of sections; and combining the set of values to a composite value of the CD measurement corresponding to the given image.
    Type: Grant
    Filed: October 25, 2021
    Date of Patent: July 15, 2025
    Assignee: Applied Materials Israel Ltd.
    Inventors: Ronen Madmon, Ariel Shkalim, Shani Ben Yacov
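    Illustrative sketch: the per-image flow of 12361535 — divide the image into sections, keep sections containing the pattern of interest (POI), measure the CD in each, and combine the values into one composite — can be outlined as below. The callables find_poi and measure_cd are hypothetical stand-ins for the pattern search and the threshold-based CD measurement, and combining by the median is an assumption; the patent only says the values are combined.
      import numpy as np

      def composite_cd(image, n_rows, n_cols, find_poi, measure_cd, printing_threshold):
          # Divide the image into sections, measure the CD wherever the POI is present,
          # and combine the per-section values into one composite value for this image.
          h, w = image.shape
          values = []
          for r in range(n_rows):
              for c in range(n_cols):
                  section = image[r * h // n_rows:(r + 1) * h // n_rows,
                                  c * w // n_cols:(c + 1) * w // n_cols]
                  if find_poi(section):
                      values.append(measure_cd(section, printing_threshold))
          return float(np.median(values)) if values else float("nan")

      # toy usage with dummy stand-ins for the pattern search and CD measurement
      img = np.random.default_rng(4).random((120, 120))
      print(composite_cd(img, 3, 3,
                         find_poi=lambda s: s.mean() > 0.5,
                         measure_cd=lambda s, thr: float((s > thr).mean()),
                         printing_threshold=0.5))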
  • Patent number: 12354234
    Abstract: This disclosure relates generally to a method and system for multi-modal image super-resolution. Conventional methods for multi-modal image super-resolution are performed using joint image-based filtering, deep learning and dictionary-based approaches which require large datasets for training. Embodiments of the present disclosure provide a joint optimization based transform learning framework wherein a high-resolution (HR) image of target modality is reconstructed from an HR image of guidance modality and a low-resolution (LR) image of target modality. A set of parameters, transforms, coefficients and weight matrices is learnt jointly from training data which include an HR image of guidance modality, an LR image of target modality and an HR image of target modality. The learnt set of parameters is used for reconstructing an HR image of target modality. The disclosed joint optimization transform learning framework is used in remote sensing, environment monitoring and so on.
    Type: Grant
    Filed: May 26, 2022
    Date of Patent: July 8, 2025
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Andrew Gigie, Achanna Anil Kumar, Kriti Kumar, Mariswamy Girish Chandra, Angshul Majumdar
  • Patent number: 12347127
    Abstract: The techniques described herein relate to methods, apparatus, and computer readable media configured to determine an estimated volume of an object captured by a three-dimensional (3D) point cloud. A 3D point cloud comprising a plurality of 3D points and a reference plane in spatial relation to the 3D point cloud are received. A 2D grid of bins is configured along the reference plane, wherein each bin of the 2D grid comprises a length and a width that extend along the reference plane. For each bin of the 2D grid, a number of 3D points in the bin and a height of the bin from the reference plane are determined. An estimated volume of an object captured by the 3D point cloud is determined based on the calculated number of 3D points in each bin and the height of each bin.
    Type: Grant
    Filed: May 10, 2021
    Date of Patent: July 1, 2025
    Inventor: Daniel Alejandro Moreno
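    Illustrative sketch: the bin-and-sum volume estimate of 12347127, under assumptions the patent does not fix — the reference plane is z = 0, a bin's height is the maximum point height it contains, and bins need a minimum point count to be trusted.
      import numpy as np

      def estimate_volume(points, bin_size=0.01, min_points=3):
          # Bin the cloud on a 2D grid along the reference plane (assumed z = 0),
          # take each bin's height and point count, and sum height * bin area.
          xy, z = points[:, :2], points[:, 2]
          ij = np.floor((xy - xy.min(axis=0)) / bin_size).astype(int)
          counts, heights = {}, {}
          for key, h in zip(map(tuple, ij), z):
              counts[key] = counts.get(key, 0) + 1
              heights[key] = max(heights.get(key, 0.0), h)
          bin_area = bin_size * bin_size
          return sum(h * bin_area for key, h in heights.items() if counts[key] >= min_points)

      # toy cloud: 5000 points on the top face of a 0.1 m cube resting on the plane
      rng = np.random.default_rng(1)
      pts = np.column_stack([rng.uniform(0, 0.1, (5000, 2)), np.full(5000, 0.1)])
      print(estimate_volume(pts))   # ~0.001 m^3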
  • Patent number: 12333711
    Abstract: Described herein are methods for generating and using a constrained ensemble of GANs. The constrained ensemble of GANs can be used to generate synthetic data that is (1) representative of the original data, and (2) not closely resembling the original data. An example method includes generating a constrained ensemble of GANs, where the constrained ensemble of GANs includes a plurality of ensemble members. The method also includes analyzing performance of the constrained ensemble of GANs by comparing a temporary performance metric to a baseline performance metric, and halting generation of the constrained ensemble of GANs in response to the analysis. The method also includes generating a synthetic dataset using the constrained ensemble of GANs. The synthetic dataset is sufficiently similar to the original dataset to permit data sharing for research purposes but alleviates privacy concerns due to differences in mutual information between synthetic and real data.
    Type: Grant
    Filed: August 13, 2021
    Date of Patent: June 17, 2025
    Assignee: Ohio State Innovation Foundation
    Inventors: Engin Dikici, Luciano Prevedello, Matthew Bigelow
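    Illustrative sketch: the halting logic of 12333711 — grow the ensemble member by member and stop once a temporary performance metric no longer beats the baseline — might look like the loop below. train_member and evaluate are hypothetical stand-ins for GAN training and ensemble evaluation, and the stopping rule shown is an assumed reading of the abstract.
      def build_constrained_ensemble(train_member, evaluate, baseline_metric, max_members=10):
          # Add one GAN at a time; halt once the temporary metric stops exceeding the baseline.
          ensemble = []
          for _ in range(max_members):
              ensemble.append(train_member())
              if evaluate(ensemble) <= baseline_metric:
                  break
          return ensemble

      # toy usage: "members" are just integers and the metric decays with ensemble size
      members = iter(range(10))
      print(len(build_constrained_ensemble(lambda: next(members),
                                           lambda ens: 1.0 / len(ens),
                                           baseline_metric=0.25)))   # prints 4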
  • Patent number: 12333712
    Abstract: A control device including at least one processor, wherein the processor controls an image projection unit that projects a projection image onto a projection surface of a compression member in a mammography apparatus, which irradiates a breast compressed by the compression member with radiation to capture a radiographic image, such that a range of an irradiation field of the radiation is displayed by the projection image.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: June 17, 2025
    Assignee: FUJIFILM CORPORATION
    Inventors: Shinichiro Konno, Yoshie Fujimoto, Shunsuke Kodaira
  • Patent number: 12333793
    Abstract: A method for machine-based training of a computer-implemented network for jointly detecting, tracking, and classifying at least one object in a video image sequence having a plurality of successive individual images. A combined error may be determined during the training, which results from the errors of determining the class identification vector, determining the at least one identification vector, determining the specific bounding box regression, and determining the inter-frame regression.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: June 17, 2025
    Assignee: OSRAM GMBH
    Inventors: Sikandar Amin, Bharti Munjal, Meltem Demirkus Brandlmaier, Abdul Rafey Aftab, Fabio Galasso
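    Illustrative sketch: the combined training error of 12333793 can be read as a weighted sum of four terms — class identification, (re-)identification, bounding-box regression, and inter-frame regression. The particular loss functions (cross-entropy, smooth L1) and equal weights below are assumptions; the patent only states that the four errors are combined.
      import torch
      import torch.nn.functional as F

      def combined_loss(cls_logits, cls_targets, id_logits, id_targets,
                        box_preds, box_targets, interframe_preds, interframe_targets,
                        weights=(1.0, 1.0, 1.0, 1.0)):
          # Weighted sum of the four error terms named in the abstract.
          l_cls = F.cross_entropy(cls_logits, cls_targets)                 # class identification
          l_id = F.cross_entropy(id_logits, id_targets)                    # identification (tracking)
          l_box = F.smooth_l1_loss(box_preds, box_targets)                 # bounding-box regression
          l_ifr = F.smooth_l1_loss(interframe_preds, interframe_targets)   # inter-frame regression
          w = weights
          return w[0] * l_cls + w[1] * l_id + w[2] * l_box + w[3] * l_ifr

      # toy batch of 8 detections with 5 classes and 100 identities
      torch.manual_seed(0)
      loss = combined_loss(torch.randn(8, 5), torch.randint(0, 5, (8,)),
                           torch.randn(8, 100), torch.randint(0, 100, (8,)),
                           torch.randn(8, 4), torch.randn(8, 4),
                           torch.randn(8, 4), torch.randn(8, 4))
      print(loss.item())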
  • Patent number: 12327635
    Abstract: A computer-implemented method for the further training of an artificial-intelligence evaluation algorithm that has already been trained based upon basic training data, wherein the evaluation algorithm ascertains output data describing evaluation results from input data comprising image data recorded with a respective medical imaging facility. In an embodiment, the method includes ascertaining at least one additional training data set containing training input data and training output data assigned thereto; and training the evaluation algorithm using the at least one additional training data set. The additional training data set is ascertained from the input data used during a clinical examination process with a medical imaging facility during which the already-trained evaluation algorithm was used, and from output data of the already-trained evaluation algorithm that has been at least partially correctively modified by the user.
    Type: Grant
    Filed: May 5, 2021
    Date of Patent: June 10, 2025
    Assignee: Siemens Healthineers AG
    Inventors: Frank Enders, Dorothea Roth, Michael Schrapp, Matthias Senn, Michael Suehling
  • Patent number: 12293562
    Abstract: In one aspect, an information processing apparatus includes a first acquisition module, a first extraction module, a first generation module, a second extraction module, a derivation module, and a first output control module. The first acquisition module acquires an input image to output a first image and a second image. The first extraction module extracts first characteristic point information from the first image. The first generation module generates a third image obtained by reducing a data amount of the second image. The second extraction module extracts second characteristic point information from the third image. The derivation module derives a difference between the first characteristic point information and the second characteristic point information. The first output control module outputs the third image corrected in accordance with the difference as an output image.
    Type: Grant
    Filed: August 25, 2021
    Date of Patent: May 6, 2025
    Assignee: SOCIONEXT INC.
    Inventor: Soichi Hagiwara
  • Patent number: 12288375
    Abstract: An apparatus is described which includes two or more colour displays arranged to provide piece-wise continuous illumination of a volume, and one or more cameras arranged to image the volume. The apparatus is configured to control the colour displays and the cameras to illuminate the volume with each of two or more illumination conditions. The apparatus is also configured to obtain two or more sets of images, which include sufficient information for calculation of a reflectance map and a photometric normal map of an object or subject positioned within the volume. Each set of images is obtained during illumination of the volume with one or more corresponding illumination conditions. When viewed from the volume, the apparatus only provides direct illumination of the volume from angles within a zone of a hemisphere, which is less than a hemisphere.
    Type: Grant
    Filed: October 18, 2021
    Date of Patent: April 29, 2025
    Assignee: Lumirithmic Limited
    Inventors: Abhijeet Ghosh, Gaurav Chawla, Yiming Lin, Jayanth Kannan, Ekin Ozturk
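    Illustrative sketch: the reflectance map and photometric normal map mentioned in 12288375 are commonly recovered with Lambertian photometric stereo, as below. This is the textbook formulation under known point-light directions, not necessarily the computation used with the patent's colour-display illumination.
      import numpy as np

      def photometric_normals(intensities, light_dirs):
          # Classic Lambertian photometric stereo: least-squares solve L g = I per pixel,
          # where g is the surface normal scaled by albedo and L holds unit light directions.
          h, w, n = intensities.shape
          I = intensities.reshape(-1, n).T                    # n x (h*w) observations
          L = np.asarray(light_dirs, dtype=float)             # n x 3 light directions
          G, *_ = np.linalg.lstsq(L, I, rcond=None)           # 3 x (h*w)
          albedo = np.linalg.norm(G, axis=0)
          normals = (G / np.maximum(albedo, 1e-9)).T.reshape(h, w, 3)
          return albedo.reshape(h, w), normals

      # toy usage: four images of a flat surface (normal (0, 0, 1)) under known lights
      dirs = np.array([[0, 0, 1.0], [0.3, 0, 0.95], [0, 0.3, 0.95], [-0.3, 0, 0.95]])
      dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
      imgs = np.stack([np.full((8, 8), d[2]) for d in dirs], axis=-1)
      albedo, normals = photometric_normals(imgs, dirs)
      print(normals[0, 0])   # ~ [0. 0. 1.]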
  • Patent number: 12272128
    Abstract: A method for processing image information of an imaging sensor of a vehicle in an artificial neural network (“ANN”) is disclosed. The ANN includes at least one encoder and one decoder. The ANN solves a classification task with a plurality of classes and/or a regression task, in which numerical output information quantized according to a plurality of quantization steps is provided. The ANN outputs multiple feature maps at the output interface, which encode, for image regions of the image information, either their allocation to classes or the quantized numerical output information.
    Type: Grant
    Filed: February 14, 2022
    Date of Patent: April 8, 2025
    Assignee: Conti Temic microelectronic GmbH
    Inventors: Armin Staudenmaier, Karl Matthias Nacken
  • Patent number: 12249080
    Abstract: The invention concerns a method (1) for detecting and tracking objects orbiting around the earth (for example space debris) by means of on-board processing of images acquired by a space platform (for example a satellite, a space vehicle or a space station) by one or more optical sensors, preferably one or more star trackers.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: March 11, 2025
    Assignee: Arca Dynamics Società A Responsabilità Limitata Semplificata
    Inventors: Fabio Curti, Daniele Luchena, Marco Moriani, Dario Spiller
  • Patent number: 12243193
    Abstract: The disclosure relates generally to image processing. For example, the invention relates to a method and a device for de-noising an electron microscope (EM) image. The method includes the act of selecting a patch of the EM image, wherein the patch comprises a plurality of pixels, wherein the following acts are performed on the patch: i) replacing the value of one pixel, for example of a center pixel, of the patch with the value of a different, for example randomly selected, pixel from the same EM image; ii) determining a de-noised value for the one pixel based on the values of the other pixels in the patch; and iii) replacing the value of the one pixel with the determined de-noised value.
    Type: Grant
    Filed: July 2, 2021
    Date of Patent: March 4, 2025
    Assignee: IMEC VZW
    Inventors: Bappaditya Dey, Sandip Halder, Gouri Sankar Kar, Victor M. Blanco, Senthil Srinivasan Shanmugam Vadakupudhu Palayam
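    Illustrative sketch: the three acts of 12243193 applied to one pixel — replace it with a randomly chosen pixel of the same image, estimate a de-noised value from the remaining patch pixels, then write that value back. Taking the median of the other patch pixels as the estimator is an assumption; the patent leaves the estimator open (e.g. a trained network).
      import numpy as np

      def denoise_pixel(image, row, col, patch_size=5, rng=None):
          # Act i): replace the selected pixel with a randomly chosen pixel of the same image.
          # Act ii): estimate a de-noised value from the other pixels in the patch (median here).
          # Act iii): write the de-noised value back to the selected pixel.
          rng = rng or np.random.default_rng()
          half = patch_size // 2
          r0, r1 = max(row - half, 0), min(row + half + 1, image.shape[0])
          c0, c1 = max(col - half, 0), min(col + half + 1, image.shape[1])
          patch = image[r0:r1, c0:c1].astype(float)
          rand_val = image[rng.integers(image.shape[0]), rng.integers(image.shape[1])]
          patch[row - r0, col - c0] = rand_val
          mask = np.ones(patch.shape, dtype=bool)
          mask[row - r0, col - c0] = False
          out = image.astype(float)
          out[row, col] = np.median(patch[mask])
          return out

      noisy = np.random.default_rng(2).integers(0, 255, (64, 64))
      print(denoise_pixel(noisy, 32, 32)[32, 32])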
  • Patent number: 12236569
    Abstract: A method for welding a workpiece with a vision guided welding platform. The welding platform comprises a welding tool, and a camera for guiding the movement of the welding tool from a start point to an end point. The method includes the steps of adjusting a focal length of the camera such that a focal plane of the camera is located on a surface of the workpiece and obtaining a surface image of the workpiece. The method further includes the steps of determining a current focal length of the camera, determining a corrected pixel length of a pixel in the surface image and determining the number of pixels between the start point and the end point of each movement of the welding tool. Using the corrected pixel length, a distance between the start and end points is determined and the welding tool is guided to move therebetween.
    Type: Grant
    Filed: January 21, 2022
    Date of Patent: February 25, 2025
    Assignees: TE Connectivity Solutions GmbH, Tyco Electronics (Shanghai) Co., Ltd.
    Inventors: Zongjie (Jason) Tao, Dandan (Emily) Zhang, Roberto Francisco-Yi Lu
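    Illustrative sketch: the distance computation of 12236569 reduces to scaling a nominal pixel length by the current focal length and multiplying by the pixel count between the start and end points. The pinhole-style proportionality and the numbers below are assumptions for illustration.
      def corrected_pixel_length(nominal_pixel_length_mm,
                                 reference_focal_length_mm, current_focal_length_mm):
          # Scale the calibrated pixel length by the focal-length ratio (pinhole-style assumption).
          return nominal_pixel_length_mm * (current_focal_length_mm / reference_focal_length_mm)

      def travel_distance_mm(pixels_between_points, pixel_length_mm):
          # Distance the welding tool must move between the imaged start and end points.
          return pixels_between_points * pixel_length_mm

      # toy numbers, all hypothetical: 0.05 mm/pixel calibrated at 35 mm, camera now at 38.5 mm
      plen = corrected_pixel_length(0.05, 35.0, 38.5)
      print(travel_distance_mm(240, plen))   # 13.2 mm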
  • Patent number: 12229932
    Abstract: A method of generating a defect image for deep learning and a system therefor are provided. The method and the system are intended to be used in generating training data for an artificial intelligence algorithm. More specifically, the training data are defect images required to train an algorithm that identifies a defect from a product.
    Type: Grant
    Filed: September 16, 2021
    Date of Patent: February 18, 2025
    Assignee: DOOSAN ENERBILITY CO., LTD.
    Inventors: Jung Min Lee, Jung Moon Kim
  • Patent number: 12223017
    Abstract: A method may include capturing image data associated with an object in a defined environment at one or more points in time. The method may include capturing radar data associated with the object in the defined environment at the same points in time. The method may include obtaining, by a machine learning model, the image data and the radar data associated with the object in the defined environment. The method may include pairing each image datum with a corresponding radar datum based on a chronological occurrence of the image data and the radar data. The method may include generating, by the machine learning model, a three-dimensional motion representation associated with the object that is associated with the image data and the radar data.
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: February 11, 2025
    Assignee: Rapsodo Pte. Ltd.
    Inventors: Batuhan Okur, Roshan Gopalakrishnan
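    Illustrative sketch: pairing each image datum with a radar datum "based on a chronological occurrence", as in 12223017, can be read as nearest-timestamp matching over two time-sorted streams; the patent does not specify the matching rule.
      from bisect import bisect_left

      def pair_by_time(image_data, radar_data):
          # Pair each image datum with the radar datum closest to it in time;
          # both inputs are (timestamp, payload) lists sorted by timestamp.
          radar_times = [t for t, _ in radar_data]
          pairs = []
          for t_img, img in image_data:
              i = bisect_left(radar_times, t_img)
              candidates = [j for j in (i - 1, i) if 0 <= j < len(radar_data)]
              j = min(candidates, key=lambda k: abs(radar_times[k] - t_img))
              pairs.append((img, radar_data[j][1]))
          return pairs

      images = [(0.000, "frame0"), (0.033, "frame1"), (0.066, "frame2")]
      radar = [(0.001, "sweep0"), (0.034, "sweep1"), (0.064, "sweep2")]
      print(pair_by_time(images, radar))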
  • Patent number: 12188845
    Abstract: A drive-through vehicle inspection system with a method for acquiring information from markings on tire sidewall surfaces of a moving vehicle. As the vehicle passes through the inspection system, sets of colored light sources, disposed at different relative orientations on opposite lateral sides of the vehicle, illuminate each passing wheel, enabling optical imaging systems associated with the opposite lateral sides of the inspection lane to acquire color images of the illuminated tire sidewall surfaces. Acquired color images are passed to a processing system and separated into individual red, green, and blue color channels for image processing. The processed output from each color channel is recombined by the processing system into a synthesized grayscale image highlighting and emphasizing markings present on the tire sidewall surfaces for evaluation by an OCR algorithm to retrieve tire identifying information.
    Type: Grant
    Filed: December 15, 2021
    Date of Patent: January 7, 2025
    Assignee: Hunter Engineering Company
    Inventor: David A. Voeller
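    Illustrative sketch: the channel pipeline of 12188845 — split the colour image into red, green, and blue channels, process each, and recombine into a synthesized grayscale image for OCR. Per-channel contrast stretching and equal recombination weights are assumptions; the patent only specifies channel-wise processing followed by recombination.
      import numpy as np

      def synthesize_grayscale(rgb_image, weights=(1/3, 1/3, 1/3)):
          # Split into R, G, B channels, stretch each channel's contrast independently,
          # then recombine the processed channels into one synthesized grayscale image.
          enhanced = []
          for k in range(3):
              ch = rgb_image[..., k].astype(float)
              lo, hi = np.percentile(ch, (2, 98))
              enhanced.append(np.clip((ch - lo) / max(hi - lo, 1e-6), 0.0, 1.0))
          gray = sum(w * ch for w, ch in zip(weights, enhanced))
          return (255 * gray).astype(np.uint8)

      img = np.random.default_rng(3).integers(0, 255, (480, 640, 3), dtype=np.uint8)
      print(synthesize_grayscale(img).shape)   # (480, 640)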
  • Patent number: 12165329
    Abstract: A system and method for unsupervised superpixel-driven instance segmentation of a remote sensing image are provided. The remote sensing image is divided into one or more image patches. The one or more image patches are processed to generate one or more superpixel aggregation patches based on a graph-based aggregation model, respectively. The graph-based aggregation model is configured to learn at least one of a spatial affinity or a feature affinity of a plurality of superpixels from each image patch and aggregate the plurality of superpixels based on the at least one of the spatial affinity or the feature affinity of the plurality of superpixels. The one or more superpixel aggregation patches are combined into an instance segmentation image.
    Type: Grant
    Filed: February 8, 2022
    Date of Patent: December 10, 2024
    Assignee: PING AN TECHNOLOGY (SHENZHEN) CO., LTD
    Inventors: Zhicheng Yang, Hang Zhou, Jui-Hsin Lai, Mei Han