Patents Examined by Xiao Liu
  • Patent number: 12293593
    Abstract: An object detection method is provided. The method includes: acquiring a scene image of a scene; acquiring a three-dimensional point cloud corresponding to the scene; segmenting the scene image into a plurality of sub-regions; merging the plurality of sub-regions according to the three-dimensional point cloud to generate a plurality of region proposals; and performing object detection on the plurality of region proposals to determine a target object to be detected in the scene image. An object detection device, a terminal device, and a medium are also provided.
    Type: Grant
    Filed: June 9, 2022
    Date of Patent: May 6, 2025
    Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
    Inventor: Yi Xu
  • Patent number: 12260596
    Abstract: A computer implemented method for reducing an amount of memory required to store hyperspectral images of an object includes: obtaining tensor data representing a hyperspectral image including a first portion depicting an object and a second portion depicting at least a portion of a surrounding environment where the object is located; identifying a portion of the tensor data representing the hyperspectral image that corresponds to the first portion; providing the identified portion of the tensor data representing the hyperspectral image as an input to a feature extraction model; obtaining one or more matrix structures as output by the feature extraction model based on the feature extraction model processing the identified portion of the tensor data, the one or more matrix structures representing a subset of features extracted from the identified portion of the tensor data; and storing the one or more matrix structures in a memory device.
    Type: Grant
    Filed: February 8, 2022
    Date of Patent: March 25, 2025
    Assignee: Apeel Technology, Inc.
    Inventor: Richard Pattison
  • Patent number: 12260329
    Abstract: The present technology is directed to identifying neutrophil extracellular traps (NETs) in blood. For example, the present technology provides artificial intelligence systems, architectures, and/or programs that can rapidly and/or automatically identify and/or enumerate NETs in peripheral blood smears, CBC scattergrams, and the like. The artificial intelligence architectures can be integrated into current automated imaging and/or analysis systems (e.g., automated imaging systems for performing complete blood counts (CBC)). The artificial intelligence architectures can also be integrated into another computing device, such as a mobile device.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: March 25, 2025
    Assignee: MONTEFIORE MEDICAL CENTER
    Inventors: Morayma Reyes Gil, Kenji Ikemura, Mohammad Barouqa, Henny Billett, Margarita Kushnir
  • Patent number: 12260565
    Abstract: An image processing-based foreign substance detection method in a wireless charging system and a device performing the method are disclosed. A method for detecting a foreign substance includes an operation of acquiring an image of a charging area of a wireless charging system, an operation of detecting, based on an RGB value of a frame of the image, a foreign substance in the charging area, an operation of discriminating a type of the foreign substance, and an operation of performing power control of the wireless charging system according to the type of the foreign substance.
    Type: Grant
    Filed: March 18, 2022
    Date of Patent: March 25, 2025
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Gwangzeen Ko, Jung Ick Moon, Sang-Won Kim, Seong-Min Kim, In Kui Cho, Woo Jin Byun, Je Hoon Yun
  • Patent number: 12243241
    Abstract: A method for tracking and/or characterizing multiple objects in a sequence of images. The method includes: assigning a neural network to each object to be tracked; providing a memory that is shared by all neural networks; providing a local memory for each neural network, respectively; supplying images from the sequence, and/or details of these images, to each neural network; during the processing of each image and/or image detail by one of the neural networks, generating an address vector from at least one processing product of this neural network; based on this address vector, writing at least one further processing product of the neural network into the shared memory and/or into the local memory, and/or reading out data from this shared memory and/or local memory and further processing the data by the neural network.
    Type: Grant
    Filed: March 16, 2022
    Date of Patent: March 4, 2025
    Assignee: ROBERT BOSCH GMBH
    Inventor: Cosmin Ionut Bercea
  • Patent number: 12243284
    Abstract: This application relates to an image recognition technology in the field of computer vision in the field of artificial intelligence, and provides an image classification method and apparatus. The method includes: obtaining an input feature map of a to-be-processed image; performing convolution processing on the input feature map based on M convolution kernels of a neural network, to obtain a candidate output feature map of M channels, where M is a positive integer; performing matrix transformation on the M channels of the candidate output feature map based on N matrices, to obtain an output feature map of N channels, where a quantity of channels of each of the N matrices is less than M, N is greater than M, and N is a positive integer; and classifying the to-be-processed image based on the output feature map, to obtain a classification result of the to-be-processed image.
    Type: Grant
    Filed: January 28, 2022
    Date of Patent: March 4, 2025
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Kai Han, Yunhe Wang, Han Shu, Chunjing Xu
  • Patent number: 12236640
    Abstract: Systems and methods for image dense field based view calibration are provided. In one embodiment, an input image is applied to a dense field machine learning model that generates a vertical vector dense field (VVF) and a latitude dense field (LDF) from the input image. The VVF comprises a vertical vector of a projected vanishing point direction for each of the pixels of the input image. The LDF comprises a projected latitude value for the pixels of the input image. A dense field map for the input image comprising the VVF and the LDF can be directly or indirectly used for a variety of image processing manipulations. The VVF and LDF can be optionally used to derive traditional camera calibration parameters from uncontrolled images that have undergone undocumented or unknown manipulations.
    Type: Grant
    Filed: March 28, 2022
    Date of Patent: February 25, 2025
    Assignee: Adobe Inc.
    Inventors: Jianming Zhang, Linyi Jin, Kevin Matzen, Oliver Wang, Yannick Hold-Geoffroy
  • Patent number: 12229805
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for processing an image using visual and textual information. An example apparatus includes at least one memory, instructions in the apparatus, and processor circuitry to execute the instructions to detect regions of interest corresponding to a product promotion of an input digital leaflet, extract textual features from the product promotion by applying an optical character recognition (OCR) algorithm to the product promotion and associating output text data with corresponding ones of the regions of interest, determine a search attribute corresponding to the product promotion, generate a first dataset of candidate products corresponding to the product in the product promotion by comparing the search attribute against a second dataset of products, and select a product from the first dataset of candidate products to associate with the product promotion, the product selected based on a match determination.
    Type: Grant
    Filed: December 30, 2021
    Date of Patent: February 18, 2025
    Assignee: Nielsen Consumer LLC
    Inventors: Javier Martínez Cebrián, Roberto Arroyo, David Jiménez
  • Patent number: 12211199
    Abstract: A semiconductor inspection method by an observation system includes: a step of acquiring a first pattern image showing a pattern of a semiconductor device; a step of acquiring a second pattern image showing a pattern of the semiconductor device at a resolution different from that of the first pattern image; a step of learning, by machine learning, a reconstruction process for the second pattern image using the first pattern image as training data, and reconstructing the second pattern image into a reconstructed image having a different resolution from the second pattern image based on a result of the learning; and a step of performing alignment between the first pattern image and a region of the reconstructed image calculated by the reconstruction process to have a high degree of certainty.
    Type: Grant
    Filed: April 16, 2020
    Date of Patent: January 28, 2025
    Assignee: HAMAMATSU PHOTONICS K.K.
    Inventors: Tomochika Takeshima, Takafumi Higuchi, Kazuhiro Hotta
  • Patent number: 12198398
    Abstract: Methods and systems are disclosed for performing operations for transferring motion from one real-world object to another in real-time. The operations comprise receiving a first video that includes a depiction of a first real-world object and extracting an appearance of the first real-world object from the video. The operations comprise obtaining a second video that includes a depiction of a second real-world object and extracting motion of the second real-world object from the second video. The operations comprise applying the motion of the second real-world object extracted from the second video to the appearance of the first real-world object extracted from the first video. The operations comprise generating a third video that includes a depiction of the first real-world object having the appearance of the first real-world object and the motion of the second real-world object.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: January 14, 2025
    Assignee: Snap Inc.
    Inventors: Avihay Assouline, Itamar Berger, Nir Malbin, Gal Sasson
  • Patent number: 12190549
    Abstract: There are provided an information processing device, an image processing device, an encoding device, a decoding device, an electronic apparatus, an information processing method, and a program for processing attribute information of each point of a point cloud that represents a three-dimensional object as a set of points. The attribute information is hierarchized by recursively classifying points as predictive points or reference points and deriving, for each predictive point, a difference value between its attribute information and a predictive value derived from the attribute information of the reference points. During this hierarchization, a first level is hierarchized using a first hierarchization method, and a second level, different from the first level, is hierarchized using a second hierarchization method different from the first.
    Type: Grant
    Filed: June 18, 2020
    Date of Patent: January 7, 2025
    Assignee: SONY GROUP CORPORATION
    Inventors: Satoru Kuma, Ohji Nakagami, Koji Yano, Hiroyuki Yasuda, Tsuyoshi Kato
  • Patent number: 12183102
    Abstract: Optical character recognition (OCR) based systems and methods for extracting and automatically evaluating contextual and identification information and associated metadata from an image utilizing enhanced image processing techniques and image segmentation. A unique, comprehensive integration with an account provider system and other third party systems may be utilized to automate the execution of an action associated with an online account. The system may evaluate text extracted from a captured image utilizing machine learning processing to classify an image type for the captured image, and select an optical character recognition model based on the classified image type. The system may compare a data value extracted from the recognized text for a particular data type with an associated online account data value for the particular data type to evaluate whether to automatically execute an action associated with the online account linked to the image based on the data value comparison.
    Type: Grant
    Filed: July 22, 2022
    Date of Patent: December 31, 2024
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventors: Ian Whitestone, Brian Chun-Lai So, Sourabh Mittal
  • Patent number: 12171528
    Abstract: A blood vessel wall thickness estimation method includes: obtaining behavioral information, which is numerical information about changes over time in positions of a plurality of predetermined points in a blood vessel wall, based on a video including the blood vessel wall obtained using four-dimensional angiography; generating estimation information for estimating a thickness of the blood vessel wall based on the behavioral information obtained in the obtaining; and outputting the estimation information generated in the generating. The estimation information is information in which at least one of the following is visualized: a change in displacement over time; a change in speed over time; a change in acceleration over time; a change in kinetic energy over time; a spring constant obtained from the displacement and the acceleration; and a Fourier coefficient obtained from the change in the displacement over time.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: December 24, 2024
    Assignee: OSAKA UNIVERSITY
    Inventor: Yoshie Sugiyama
  • Patent number: 12175723
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for unsupervised learning of object keypoint locations in images. In particular, a keypoint extraction machine learning model having a plurality of keypoint model parameters is trained to receive an input image and to process the input image in accordance with the keypoint model parameters to generate a plurality of keypoint locations in the input image. The machine learning model is trained using either temporal transport or spatio-temporal transport.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: December 24, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Ankush Gupta, Tejas Dattatraya Kulkarni
  • Patent number: 12175764
    Abstract: Techniques for performing deconvolution operations on data structures representing condensed sensor data are disclosed herein. Autonomous vehicle sensors can capture data in an environment that may include one or more objects. The sensor data may be processed by a convolutional neural network to generate condensed sensor data. The condensed sensor data may be processed by one or more deconvolution layers using a machine-learned upsampling transformation to generate an output data structure for improved object detection, classification, and/or other processing operations.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: December 24, 2024
    Assignee: Zoox, Inc.
    Inventors: Qian Song, Benjamin Isaac Zwiebel
  • Patent number: 12169970
    Abstract: A system monitors moveable objects using sensor data captured by one or more sensors mounted at a location on the moveable object. The system uses a machine learning based model to predict a risk score indicating a degree of risk associated with the moveable object. The system determines the action to be taken to mitigate the risk based on the risk score. The system transmits information describing the moveable object based on the sensor data to a remote monitoring system. The system may determine the amount of information transmitted, the rate at which information is transmitted, and the type of information displayed based on the risk score. The system performs dignity preserving transformations of the sensor data before transmitting or storing the data.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: December 17, 2024
    Inventor: Lily Vittayarukskul
  • Patent number: 12166995
    Abstract: An image encoding method that generates and encodes a gram matrix representing an image feature when encoding an image to be encoded includes a feature map generation step of generating a plurality of feature maps from the image to be encoded; a gram matrix generation step of generating a gram matrix through calculations between/among the feature maps; a representative vector determination step of generating a representative vector and a representative coefficient value by singular value decomposition of the gram matrix; and a vector encoding step of encoding the representative coefficient value and the representative vector.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: December 10, 2024
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shiori Sugimoto, Seishi Takamura, Atsushi Shimizu
  • Patent number: 12165287
    Abstract: Techniques for generating a synthetic computed tomography (sCT) image from a cone-beam computed tomography (CBCT) image are provided. The techniques include receiving a CBCT image of a subject; generating, using a generative model, a sCT image corresponding to the CBCT image, the generative model trained based on one or more deformable offset layers in a generative adversarial network (GAN) to process the CBCT image as an input and provide the sCT image as an output; and generating a display of the sCT image for medical analysis of the subject.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: December 10, 2024
    Assignee: Elekta, Inc.
    Inventor: Jiaofeng Xu
  • Patent number: 12165381
    Abstract: An apparatus includes an acquisition unit to acquire normal information indicating a normal direction on a surface of an object and specular reflection information regarding reflection on the object in a specular reflection direction, and a compression unit to compress the normal information based on the specular reflection information, performing a higher compression process for a lower specular reflection intensity on the object.
    Type: Grant
    Filed: December 1, 2021
    Date of Patent: December 10, 2024
    Assignee: Canon Kabushiki Kaisha
    Inventor: Midori Inaba
  • Patent number: 12154293
    Abstract: In various examples, live perception from wide-view sensors may be leveraged to detect features in an environment of a vehicle. Sensor data generated by the sensors may be adjusted to represent a virtual field of view different from an actual field of view of the sensor, and the sensor data—with or without virtual adjustment—may be applied to a stereographic projection algorithm to generate a projected image. The projected image may then be applied to a machine learning model—such as a deep neural network (DNN)—to detect and/or classify features or objects represented therein. In some examples, the machine learning model may be pre-trained on training sensor data generated by a sensor having a field of view less than the wide-view sensor such that the virtual adjustment and/or projection algorithm may update the sensor data to be suitable for accurate processing by the pre-trained machine learning model.
    Type: Grant
    Filed: November 8, 2022
    Date of Patent: November 26, 2024
    Assignee: NVIDIA Corporation
    Inventor: Karsten Patzwaldt
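The merging step in patent 12293593 above, combining 2-D image sub-regions according to 3-D point-cloud cues to form region proposals, can be sketched at a high level. This is a minimal illustration, not the patented method: it uses per-region mean depth as a hypothetical stand-in for point-cloud agreement, and the function name, adjacency format, and threshold are all assumptions.

```python
import numpy as np

def merge_regions(labels, depth, adjacency, thresh=0.2):
    """Greedy union-find merge of adjacent 2-D segments whose mean depths
    (a stand-in for 3-D point-cloud agreement) differ by less than thresh."""
    parent = {int(r): int(r) for r in np.unique(labels)}

    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]  # path compression
            r = parent[r]
        return r

    mean_depth = {r: float(depth[labels == r].mean()) for r in parent}
    for a, b in adjacency:  # pairs of touching segment labels
        ra, rb = find(a), find(b)
        if ra != rb and abs(mean_depth[ra] - mean_depth[rb]) < thresh:
            parent[rb] = ra  # fuse the two segments into one proposal
    return np.vectorize(lambda r: find(int(r)))(labels)

labels = np.array([[0, 0, 1, 1],
                   [2, 2, 3, 3]])
depth = np.array([[1.0, 1.0, 1.1, 1.1],
                  [5.0, 5.0, 5.0, 5.0]])
adjacency = [(0, 1), (0, 2), (1, 3), (2, 3)]
print(merge_regions(labels, depth, adjacency))
```

Segments 0/1 share a similar depth and merge into one proposal, as do 2/3; the depth gap between the two rows keeps them apart.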
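The channel-expansion step in patent 12243284 above, where an M-channel candidate feature map is expanded to N > M output channels by matrix transforms that each touch fewer than M channels, is concrete enough to sketch. The shapes, index subsets, and weights below are illustrative placeholders, not values from the patent.

```python
import numpy as np

def expand_channels(candidate, transforms):
    """candidate: (M, H, W) feature map from M convolution kernels.
    transforms: N pairs (channel_indices, weights), each mixing fewer
    than M candidate channels into one output channel, with N > M."""
    out = [np.tensordot(w, candidate[idx], axes=1) for idx, w in transforms]
    return np.stack(out)  # (N, H, W) output feature map

M, H, W = 4, 8, 8
candidate = np.random.default_rng(1).standard_normal((M, H, W))
# N = 6 transforms, each touching only 2 of the 4 candidate channels
transforms = [([0, 1], np.array([1.0, 0.5])),
              ([1, 2], np.array([0.3, 0.7])),
              ([2, 3], np.array([1.0, -1.0])),
              ([0, 3], np.array([0.5, 0.5])),
              ([0, 2], np.array([1.0, 1.0])),
              ([1, 3], np.array([-0.2, 0.8]))]
out = expand_channels(candidate, transforms)
print(out.shape)  # (6, 8, 8)
```

The appeal of this shape of computation is that the expensive convolution produces only M channels, while the cheap per-subset mixes supply the remaining N - M.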
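The shared-memory mechanism in patent 12243241 above, where per-object networks read and write a common memory via generated address vectors, resembles content-based attention over memory slots. The sketch below is built on that assumption; the slot count, learning rate, and softmax addressing are illustrative choices, not details from the patent.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class SharedMemory:
    """Slot memory shared by several per-object trackers; each tracker
    addresses it softly with an address vector (content-based attention)."""
    def __init__(self, slots, dim, seed=0):
        self.M = 0.01 * np.random.default_rng(seed).standard_normal((slots, dim))

    def read(self, address):
        w = softmax(self.M @ address)  # slot similarities -> attention weights
        return w @ self.M              # weighted sum of slot contents

    def write(self, address, value, lr=0.5):
        w = softmax(self.M @ address)
        self.M += lr * np.outer(w, value - w @ self.M)  # nudge slots toward value

mem = SharedMemory(slots=4, dim=3)
addr = np.ones(3)
value = np.array([1.0, -1.0, 2.0])
before = np.linalg.norm(mem.read(addr) - value)
for _ in range(5):
    mem.write(addr, value)
after = np.linalg.norm(mem.read(addr) - value)
print(before > after)  # repeated writes pull the read toward the value
```

Because reads and writes are soft, several trackers can share one memory: each address vector selects which slots a given tracker's processing products land in.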
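The encoding pipeline in patent 12166995 above (feature maps, then a Gram matrix of inter-map inner products, then singular value decomposition into representative vectors and coefficient values) is simple enough to sketch end to end. The feature maps here are random placeholders and the truncation rank `k` is an illustrative choice.

```python
import numpy as np

def gram_representatives(feature_maps, k):
    """feature_maps: (C, H, W). Build the (C, C) Gram matrix of inner
    products between flattened maps, then keep the top-k singular
    vectors and values as the compact representation to encode."""
    C = feature_maps.shape[0]
    F = feature_maps.reshape(C, -1)
    gram = F @ F.T                      # inner products between feature maps
    U, S, _ = np.linalg.svd(gram)       # singular values sorted descending
    return U[:, :k], S[:k]              # representative vectors, coefficients

fmaps = np.random.default_rng(2).standard_normal((8, 16, 16))
vectors, coeffs = gram_representatives(fmaps, k=2)
recon = (vectors * coeffs) @ vectors.T  # rank-k approximation of the Gram matrix
print(vectors.shape, coeffs.shape)  # (8, 2) (2,)
```

Encoding only the k vectors and k coefficients, rather than the full C-by-C Gram matrix, is where the compression comes from; the decoder can rebuild a rank-k approximation as shown by `recon`.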
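Patent 12171528 above lists its derived quantities explicitly (displacement, speed, acceleration, kinetic energy, a spring constant from displacement and acceleration, and Fourier coefficients of the displacement), so a sketch over one tracked wall point's trajectory is mostly bookkeeping. The unit mass and the least-squares Hooke's-law fit are assumptions for illustration.

```python
import numpy as np

def wall_kinematics(x, dt, mass=1.0):
    """x: (T,) position of one tracked wall point sampled every dt seconds."""
    disp = x - x[0]                       # change in displacement over time
    vel = np.gradient(x, dt)              # change in speed over time
    acc = np.gradient(vel, dt)            # change in acceleration over time
    kinetic = 0.5 * mass * vel**2         # change in kinetic energy over time
    # Hooke's law a = -(k/m) * d, fitted to the samples by least squares
    k = -mass * (acc @ disp) / (disp @ disp)
    fourier = np.fft.rfft(disp)           # Fourier coefficients of displacement
    return disp, vel, acc, kinetic, k, fourier

# A point oscillating as A*sin(w*t) should recover k close to m*w**2
t = np.arange(0, 2, 0.001)
w = 2 * np.pi
x = 0.5 * np.sin(w * t)
disp, vel, acc, kinetic, k, fourier = wall_kinematics(x, dt=0.001)
print(k)  # approximately (2*pi)**2, i.e. about 39.5
```

With a pure sinusoid the fitted spring constant recovers the oscillation frequency, which is the sense in which these quantities characterize wall stiffness.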