Patents Examined by Daniel C Chang
  • Patent number: 12106446
    Abstract: An image forming system for constructing a whole large image of an object is provided. The image forming system includes an interface configured to sequentially acquire images of partial areas of the object obtained by a camera, wherein two adjacent images of the partial areas include overlapping portions, wherein the sequentially acquired images correspond to a three-dimensional (3D) surface of the object, and wherein the geometrical information (geometrical parameters) of the object and an initial pose of the camera are provided.
    Type: Grant
    Filed: March 27, 2021
    Date of Patent: October 1, 2024
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Dehong Liu, Laixi Shi
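    The construction step could look roughly like the following minimal sketch, which registers each newly acquired partial image to the growing whole image using OpenCV ORB features and a planar homography; the patent's use of the object's 3D surface geometry and the camera's initial pose is not modeled here, and the function name and blending are illustrative assumptions.
    ```python
    import cv2
    import numpy as np

    def stitch_partial_image(mosaic, new_img):
        """Register a newly acquired partial image to the growing mosaic via
        ORB matches in the overlap region and a RANSAC homography, then paste
        it in with a naive overwrite blend (canvas assumed large enough)."""
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(mosaic, None)
        k2, d2 = orb.detectAndCompute(new_img, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
        src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        warped = cv2.warpPerspective(new_img, H, (mosaic.shape[1], mosaic.shape[0]))
        return np.where(warped > 0, warped, mosaic)
    ```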
  • Patent number: 12094035
    Abstract: The disclosure herein relates to methods and systems for localized smoke removal and color restoration of a real-time video. Conventional techniques apply the de-smoking process only to a single image, finding the smoky regions based on manual air-light estimation; in addition, regaining the original colors of the de-smoked image is quite challenging. The present disclosure addresses these technical problems. In the first stage, smoky and smoke-free video frames are identified from the video received in real time. In the second stage, an air-light is estimated automatically using a combined feature map, and an intermediate de-smoked video frame is generated for each smoky video frame based on the air-light using a de-smoking algorithm. In the third and final stage, a smoke-free reference video frame is used to compensate for the color distortions introduced by the de-smoking algorithm in the second stage.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: September 17, 2024
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Jayavardhana Rama Gubbi Lakshminarasimha, Karthik Seemakurthy, Vartika Sengar, Aparna Kanakatte Gurumurthy, Avik Ghose, Balamuralidhar Purushothaman, Murali Poduval, Jayeeta Saha, Srinivasan Jayaraman, Vivek Bangalore Sampathkumar
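    A minimal sketch of the second-stage de-smoking step under the standard atmospheric scattering model I = J·t + A·(1 − t), assuming the air-light A has already been estimated; the dark-channel-style transmission estimate below stands in for the patent's combined feature map, and the parameter values are illustrative.
    ```python
    import numpy as np

    def desmoke_frame(frame, airlight, omega=0.9, t_min=0.1):
        """Invert the scattering model I = J*t + A*(1 - t) for one smoky frame.
        frame: HxWx3 uint8 image; airlight: scalar or length-3 array A."""
        f = frame.astype(np.float32)
        transmission = 1.0 - omega * (f / airlight).min(axis=2)   # rough t estimate
        transmission = np.clip(transmission, t_min, 1.0)[..., None]
        dehazed = (f - airlight) / transmission + airlight        # recovered radiance J
        return np.clip(dehazed, 0, 255).astype(np.uint8)
    ```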
  • Patent number: 12084157
    Abstract: A tracking system for tracking water-surface objects includes a stereo camera on a hull, at least one memory, and at least one processor coupled to the at least one memory. The at least one processor is configured or programmed to detect at least one object based on a first image and a second image captured by a first imaging unit and a second imaging unit of the stereo camera, and set one detected object as a tracking target in a third image captured by the first imaging unit, the second imaging unit or another imaging unit. The at least one processor is further configured or programmed to track the tracking target using a temporal change in a feature of the tracking target, and use at least one object detected based on the first image and the second image during tracking to correct the tracking result.
    Type: Grant
    Filed: September 10, 2021
    Date of Patent: September 10, 2024
    Assignee: YAMAHA HATSUDOKI KABUSHIKI KAISHA
    Inventor: Takahiro Ishii
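    A minimal sketch of the correction step, where objects freshly detected from the stereo pair are used to correct a drifting feature-based track; the IoU gating, the (x1, y1, x2, y2) box format, and the threshold are illustrative assumptions.
    ```python
    def correct_track(track_box, detections, iou_thresh=0.3):
        """Replace the feature-tracked box with the best-overlapping stereo
        detection when one exists; otherwise keep the tracked box."""
        def iou(a, b):
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2, y2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0, x2 - x1) * max(0, y2 - y1)
            area_a = (a[2] - a[0]) * (a[3] - a[1])
            area_b = (b[2] - b[0]) * (b[3] - b[1])
            return inter / (area_a + area_b - inter + 1e-9)
        best = max(detections, key=lambda d: iou(track_box, d), default=None)
        if best is not None and iou(track_box, best) >= iou_thresh:
            return best        # detection corrects the drifting track
        return track_box       # no overlapping detection: keep the tracked result
    ```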
  • Patent number: 12067737
    Abstract: A method of determining macrotexture of an object is disclosed which includes obtaining a plurality of stereo images from an object by an imaging device, generating a coordinate system for each image of the plurality of stereo images, detecting one or more keypoints each having a coordinate in each image of the plurality of stereo images, wherein the coordinate system is based on a plurality of ground control points (GCPs) with a priori position knowledge of each of the plurality of GCPs, generating a sparse point cloud based on the one or more keypoints, reconstructing a 3D dense point cloud of the object based on the generated sparse point cloud and on neighboring pixels of each of the one or more keypoints and calculating the coordinates of each pixel of the 3D dense point cloud, and obtaining the macrotexture based on the reconstructed 3D dense point cloud of the object.
    Type: Grant
    Filed: May 4, 2023
    Date of Patent: August 20, 2024
    Assignee: Purdue Research Foundation
    Inventors: Jie Shan, Xiangxi Tian
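    A minimal sketch of the keypoint and sparse-point-cloud stage using SIFT matching and stereo triangulation; the GCP-based coordinate system is assumed to be baked into the two 3x4 projection matrices, and the dense reconstruction and macrotexture computation are not shown.
    ```python
    import cv2
    import numpy as np

    def sparse_point_cloud(img_left, img_right, P_left, P_right):
        """Detect and match SIFT keypoints in a stereo pair and triangulate them.
        P_left / P_right: 3x4 projection matrices in the GCP-defined frame."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img_left, None)
        k2, d2 = sift.detectAndCompute(img_right, None)
        knn = cv2.BFMatcher().knnMatch(d1, d2, k=2)
        good = [m for m, n in knn if m.distance < 0.75 * n.distance]  # Lowe ratio test
        pts1 = np.float32([k1[m.queryIdx].pt for m in good]).T        # 2xN
        pts2 = np.float32([k2[m.trainIdx].pt for m in good]).T        # 2xN
        pts4d = cv2.triangulatePoints(P_left, P_right, pts1, pts2)
        return (pts4d[:3] / pts4d[3]).T                               # Nx3 sparse cloud
    ```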
  • Patent number: 12067746
    Abstract: A method for estimating a pose of an object includes: receiving, by a processor, an observed image depicting the object from a viewpoint; computing, by the processor, an instance segmentation map identifying a class of the object depicted in the observed image; loading, by the processor, a 3-D model corresponding to the class of the object; computing, by the processor, a rendered image of the 3-D model in accordance with an initial pose estimate of the object and the viewpoint of the observed image; computing, by the processor, a plurality of dense image-to-object correspondences between the observed image of the object and the 3-D model based on the observed image and the rendered image; and computing, by the processor, the pose of the object based on the dense image-to-object correspondences.
    Type: Grant
    Filed: May 7, 2021
    Date of Patent: August 20, 2024
    Assignee: Intrinsic Innovation LLC
    Inventors: Vage Taamazyan, Guy Michael Stoppi, Bradley Craig Anderson Brown, Agastya Kalra, Achuta Kadambi, Kartik Venkataraman
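    Once dense image-to-object correspondences are available, the final pose computation can be sketched as a PnP solve seeded with the initial pose estimate; the rendering and correspondence steps are assumed to have produced the Nx3 / Nx2 arrays, and the names are illustrative.
    ```python
    import cv2
    import numpy as np

    def pose_from_correspondences(obj_points, img_points, K, rvec0, tvec0):
        """Compute the object pose from dense 2D-3D correspondences with an
        iterative PnP solve, starting from the initial pose estimate.
        obj_points: Nx3 model points, img_points: Nx2 pixels, K: 3x3 intrinsics."""
        _, rvec, tvec = cv2.solvePnP(
            obj_points.astype(np.float32), img_points.astype(np.float32), K, None,
            rvec0, tvec0, useExtrinsicGuess=True, flags=cv2.SOLVEPNP_ITERATIVE)
        return rvec, tvec   # axis-angle rotation and translation of the object
    ```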
  • Patent number: 12062152
    Abstract: According to one implementation, a system for performing re-noising and neural network (NN) based image enhancement includes a computing platform having a processing hardware and a system memory storing a software code, a noise synthesizer, and an image restoration NN. The processing hardware is configured to execute the software code to receive a denoised image component and a noise component extracted from a degraded image, to generate, using the noise synthesizer and the noise component, synthesized noise corresponding to the noise component, and to interpolate, using the noise component and the synthesized noise, an output image noise. The processing hardware is further configured to execute the software code to enhance, using the image restoration NN, the denoised image component to provide an output image component, and to re-noise the output image component, using the output image noise, to produce an enhanced output image corresponding to the degraded image.
    Type: Grant
    Filed: August 27, 2021
    Date of Patent: August 13, 2024
    Assignee: Disney Enterprises, Inc.
    Inventors: Abdelaziz Djelouah, Shinobu Hattori, Christopher Richard Schroers
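    The overall re-noising pipeline can be sketched as below, where `enhance` stands in for the image restoration NN and `synthesize` for the noise synthesizer; the linear interpolation weight is an illustrative assumption.
    ```python
    import numpy as np

    def renoise_enhanced_image(denoised, noise, enhance, synthesize, alpha=0.5):
        """Enhance the denoised component, then add back an interpolation of the
        extracted noise and synthesized noise so the enhanced output keeps the
        degraded image's noise character."""
        enhanced = enhance(denoised)                               # image restoration NN
        synthetic = synthesize(noise)                              # synthesized noise
        output_noise = alpha * noise + (1.0 - alpha) * synthetic   # interpolated noise
        return enhanced + output_noise                             # re-noised output image
    ```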
  • Patent number: 12051238
    Abstract: A computer-implemented system and method includes generating first pseudo segment data from a first augmented image and generating second pseudo segment data from a second augmented image. The first augmented image and the second augmented image are in a dataset along with other augmented images. A machine learning system is configured to generate pixel embeddings based on the dataset. The first pseudo segment data and the second pseudo segment data are used to identify a first set of segments to which a given pixel belongs with respect to the first augmented image and the second augmented image. A second set of segments is identified across the dataset; the second set of segments does not include the given pixel. A local segmentation loss is computed for the given pixel based on the corresponding pixel embedding, attracting the first set of segments while repelling the second set of segments.
    Type: Grant
    Filed: December 22, 2021
    Date of Patent: July 30, 2024
    Assignee: Robert Bosch GmbH
    Inventors: Wenbin He, Liang Gou, Liu Ren
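    A minimal sketch of a local segmentation loss for one pixel embedding, attracting prototypes of segments containing the pixel and repelling prototypes of segments that do not; the InfoNCE form, prototype representation, and temperature are illustrative assumptions.
    ```python
    import torch
    import torch.nn.functional as F

    def local_segmentation_loss(pixel_emb, pos_protos, neg_protos, temp=0.1):
        """Contrastive loss for one pixel.
        pixel_emb: (D,) embedding; pos_protos: (P, D) prototypes of segments the
        pixel belongs to; neg_protos: (N, D) prototypes of segments it does not."""
        pixel_emb = F.normalize(pixel_emb, dim=0)
        pos = F.normalize(pos_protos, dim=1) @ pixel_emb / temp   # attract these
        neg = F.normalize(neg_protos, dim=1) @ pixel_emb / temp   # repel these
        logits = torch.cat([pos, neg])
        log_prob = logits - torch.logsumexp(logits, dim=0)
        return -log_prob[: pos.numel()].mean()
    ```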
  • Patent number: 12039690
    Abstract: A method for processing a stream of input images is described. A stream of input images that are from an image sensor is received. The stream comprises an initial sequence of input images including a subject having an initial orientation. A change in an angular orientation of the image sensor while receiving the stream of input images is determined. In response to determining the change in the angular orientation, a subsequent sequence of input images of the stream of input images is processed for rotation to counteract the change in the angular orientation of the image sensor and maintain the subject in the initial orientation. The stream of input images is transmitted to one or more display devices.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: July 16, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: David Justin Rappaport, Russell I. Sanchez, Jonathan L. Worsley, Kae-Ling J. Gurr, Karlton David Powell
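    A minimal sketch of counter-rotating each subsequent frame by the measured change in the image sensor's angular orientation; the sign convention and in-plane-only rotation are simplifying assumptions.
    ```python
    import cv2

    def counteract_rotation(frame, angle_change_deg):
        """Rotate the frame opposite to the sensor's orientation change so the
        subject keeps its initial orientation on the display."""
        h, w = frame.shape[:2]
        M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), -angle_change_deg, 1.0)
        return cv2.warpAffine(frame, M, (w, h))
    ```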
  • Patent number: 12039747
    Abstract: The application relates to a polarity discrimination detection method and apparatus for multiple stacked electronic components and a device, and the method comprises the steps of: acquiring a collected image of a to-be-detected electronic component, and matching and positioning the collected image to obtain a positioning image; acquiring parameter data of a camera device, and carrying out stereo matching and image segmentation on the positioning image according to the parameter data to obtain three-dimensional coordinates; conveying the to-be-detected electronic component to a polarity detection region through a manipulator according to the three-dimensional coordinates to acquire a detection image; analyzing the detection image to obtain polarity circle coordinates; and comparing the polarity circle coordinates with polarity circle standard coordinates arranged on the polarity detection region to obtain a polarity discrimination result.
    Type: Grant
    Filed: March 20, 2024
    Date of Patent: July 16, 2024
    Assignee: GUANGDONG UNIVERSITY OF TECHNOLOGY
    Inventors: Yaohua Deng, Shengyu Lin, Xiali Liu
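    The final comparison step can be sketched as a simple distance test between the detected polarity-circle coordinates and the standard coordinates of the detection region; the pixel tolerance is an illustrative assumption.
    ```python
    import numpy as np

    def polarity_matches(circle_xy, standard_xy, tol_px=15.0):
        """Return True when the detected polarity circle lies within a pixel
        tolerance of the standard polarity-circle position."""
        dx, dy = np.asarray(circle_xy, float) - np.asarray(standard_xy, float)
        return bool(np.hypot(dx, dy) <= tol_px)
    ```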
  • Patent number: 12033392
    Abstract: Techniques are disclosed for generating a two-dimensional (2D) map of signal-to-noise ratio (SNR) values for sensor-acquired images. The techniques leverage the use of lookup tables (LUTs) to generate a transformation LUT that functions to map pixel values to SNR values. The transformation LUT may be generated by first generating an intermediate LUT that uses the operating parameters identified with the sensor to map pixel values to light level values. The light level values are then used together with an SNR model that outputs a prediction of electrons identified with a signal portion and a noise portion of images acquired by the sensor to thus map the pixel values to SNR values. The 2D map may be used to improve upon the accuracy of the classification of objects and/or scene characteristics for various applications.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: July 9, 2024
    Assignee: Mobileye Vision Technologies AG
    Inventor: Gabriel Bowers
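    A minimal sketch of building such a transformation LUT: raw pixel values are first mapped to signal electrons (light level) from sensor operating parameters, then pushed through a shot-noise plus read-noise SNR model; the specific parameters and noise model are illustrative assumptions.
    ```python
    import numpy as np

    def build_snr_lut(conv_gain_e_per_dn, black_level, full_well_e, read_noise_e, bits=12):
        """Return an array indexed by raw pixel value that gives predicted SNR."""
        pixel_values = np.arange(2 ** bits)
        signal_e = np.clip((pixel_values - black_level) * conv_gain_e_per_dn,
                           0, full_well_e)                             # light level in electrons
        snr = signal_e / np.sqrt(signal_e + read_noise_e ** 2 + 1e-9)  # shot + read noise
        return snr

    # usage: snr_map = build_snr_lut(2.0, 64, 10000, 3.0)[raw_image]  # 2D SNR map
    ```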
  • Patent number: 12020419
    Abstract: An example wear detection system receives a plurality of images from a plurality of sensors associated with a work machine. Individual sensors of the plurality of sensors have respective fields-of-view different from other sensors of the plurality of sensors. The wear detection system identifies a first region of interest and a second region of interest associated with the at least one GET. The wear detection system determines a first set of image points and a second set of image points for the at least one GET based on geometric parameters associated with the GET. The wear detection system determines a wear level or loss for the at least one GET based on the GET measurement.
    Type: Grant
    Filed: August 11, 2021
    Date of Patent: June 25, 2024
    Assignee: Caterpillar Inc.
    Inventors: Lawrence A Mianzo, Tod A Oblak, John M Plouzek, Raymond Alan Wise, Shawn Nainan Mathew, Daniel Paul Adler
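    A minimal sketch of turning the two sets of image points into a wear estimate for a ground engaging tool (GET) by comparing its measured length against the nominal length from its geometric parameters; the single-view measurement and mm-per-pixel scale are illustrative assumptions.
    ```python
    import numpy as np

    def estimate_get_wear(tip_px, base_px, px_per_mm, nominal_length_mm):
        """Measure the imaged tip-to-base length of the GET and report how much
        shorter it is than the nominal (unworn) length."""
        measured_mm = np.linalg.norm(np.asarray(tip_px, float) -
                                     np.asarray(base_px, float)) / px_per_mm
        wear_mm = max(0.0, nominal_length_mm - measured_mm)
        return wear_mm, wear_mm / nominal_length_mm   # absolute and fractional wear
    ```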
  • Patent number: 12008760
    Abstract: Systems and methods of estimating movement of anatomical structures of a new patient include learning one or more deformation models from preoperative and intraoperative imaging data of a set of other patients and estimating movement of anatomical structures of the new patient based on the one or more deformation models and preoperative imaging data of the new patient. Estimating movement of the anatomical structures may include applying deformation models to map data derived from preoperative imaging data of a new patient to obtain deformed map data for each deformation model, and determining the deformation model that best fits the intraoperative imaging data of the new patient. Applying a deformation model to map data may include applying a transformation to the deformation model and interpolating to obtain a deformed map data. Registration is computed between locations of a medical device sampled during navigation of the medical device through the new patient and the original and deformed map data.
    Type: Grant
    Filed: July 12, 2021
    Date of Patent: June 11, 2024
    Assignee: Covidien LP
    Inventors: Nicolas J. Merlet, Oren P. Weingarten, Ariel Birenbaum, Alexander Nepomniashchy, Evgeni Kopel, Guy Alexandroni
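    A minimal sketch of the model-selection step: each learned deformation model is applied to the new patient's preoperative map data, and the deformed map that best fits the device locations sampled intraoperatively is kept; the callable models and error metric are illustrative placeholders.
    ```python
    import numpy as np

    def select_deformation_model(preop_map, deformation_models, sampled_locations, fit_error):
        """Apply every deformation model to the preoperative map data and return
        the deformed map (and its index) with the lowest registration error
        against intraoperatively sampled device locations."""
        deformed_maps = [model(preop_map) for model in deformation_models]
        errors = [fit_error(deformed, sampled_locations) for deformed in deformed_maps]
        best = int(np.argmin(errors))
        return deformed_maps[best], best
    ```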
  • Patent number: 11996105
    Abstract: Disclosed are an information processing method and a terminal device. The method comprises: acquiring first information, wherein the first information is information to be processed by a terminal device; calling an operation instruction in a calculation apparatus to calculate the first information so as to obtain second information; and outputting the second information. By means of the examples in the present disclosure, a calculation apparatus of a terminal device can call an operation instruction to process the first information and output second information of a target desired by a user, thereby improving information processing efficiency. The present technical solution has the advantages of fast computation speed and high efficiency.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: May 28, 2024
    Assignee: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Tianshi Chen, Shaoli Liu, Zai Wang, Shuai Hu
  • Patent number: 11977607
    Abstract: Disclosed are a CAM-based weakly supervised object localization device and method. The device includes: a feature map extractor configured to extract a feature map of a last convolutional layer in a convolutional neural network (CNN) in a process of applying an image to the CNN; a weight vector binarization unit configured to first binarize a weight vector of a linear layer in a process of sequentially applying the feature map to a pooling layer that generates a feature vector and the linear layer that generates a class label; a feature map binarization unit configured to second binarize the feature map based on the first binarized weight vector; and a class activation map generation unit configured to generate a class activation map for object localization based on the second binarized feature map.
    Type: Grant
    Filed: November 5, 2021
    Date of Patent: May 7, 2024
    Assignee: UIF (UNIVERSITY INDUSTRY FOUNDATION), YONSEI UNIVERSITY
    Inventors: Hye Ran Byun, Sanghuk Lee, Cheolhyun Mun, Pilhyeon Lee, Jewook Lee
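    A minimal sketch of assembling a class activation map from a first-binarized weight vector and a second-binarized feature map; the thresholds and normalization are illustrative assumptions rather than the patent's exact rules.
    ```python
    import torch

    def binarized_cam(feature_map, class_weights, w_thresh=0.0, f_thresh=0.5):
        """feature_map: (C, H, W) from the last conv layer; class_weights: (C,)
        linear-layer weights for the target class. Returns an (H, W) map in [0, 1]."""
        w_bin = (class_weights > w_thresh).float()                       # first binarization
        f = feature_map / (feature_map.amax(dim=(1, 2), keepdim=True) + 1e-6)
        f_bin = (f > f_thresh).float() * w_bin[:, None, None]            # second binarization
        cam = f_bin.sum(dim=0)                                           # class activation map
        return cam / (cam.max() + 1e-6)
    ```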
  • Patent number: 11978245
    Abstract: The present disclosure discloses a method and apparatus for generating an image. A specific embodiment of the method comprises: acquiring at least two frames of facial images extracted from a target video; and inputting the at least two frames of facial images into a pre-trained generative model to generate a single facial image. The generative model updates a model parameter using a loss function in a training process, and the loss function is determined based on the probability of the generated single facial image being a real facial image and the similarity between the generated single facial image and a standard facial image. According to this embodiment, the authenticity of the single facial image generated by the generative model may be enhanced, thereby improving the quality of facial images obtained from the video.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: May 7, 2024
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Tao He, Gang Zhang, Jingtuo Liu
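    The described loss can be sketched as a weighted sum of an adversarial term (probability of the generated single facial image being judged real) and a similarity term against the standard facial image; the L1 similarity and the weight are illustrative assumptions.
    ```python
    import torch
    import torch.nn.functional as F

    def generator_loss(disc_prob_real, generated_face, standard_face, lam=10.0):
        """disc_prob_real: discriminator probability that the generated face is
        real; generated_face / standard_face: image tensors of equal shape."""
        adversarial = -torch.log(disc_prob_real + 1e-8).mean()   # look real
        similarity = F.l1_loss(generated_face, standard_face)    # match the standard face
        return adversarial + lam * similarity
    ```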
  • Patent number: 11968372
    Abstract: A system and methods for a CODEC driving a real-time light field display for multi-dimensional video streaming, interactive gaming, and other light field display applications are provided, applying a layered scene decomposition strategy. Multi-dimensional scene data is divided into a plurality of data layers of increasing depths as the distance between a given layer and the plane of the display increases. Data layers are sampled using a plenoptic sampling scheme and rendered using hybrid rendering, such as perspective and oblique rendering, to encode light fields corresponding to each data layer. The resulting compressed (layered) core representation of the multi-dimensional scene data is produced at predictable rates, reconstructed, and merged at the light field display in real time by applying view synthesis protocols, including edge adaptive interpolation, to reconstruct pixel arrays in stages (e.g. columns then rows) from reference elemental images.
    Type: Grant
    Filed: February 8, 2023
    Date of Patent: April 23, 2024
    Assignee: Avalon Holographics Inc.
    Inventors: Matthew Hamilton, Chuck Rumbolt, Donovan Benoit, Matthew Troke, Robert Lockyer
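    A minimal sketch of the layered scene decomposition idea: scene points are binned into depth layers whose thickness grows with distance from the display plane; the geometric growth of layer thickness is an illustrative choice, and the sampling and rendering of each layer are not shown.
    ```python
    import numpy as np

    def layer_scene(points_xyz, display_z=0.0, base_thickness=1.0, num_layers=4, growth=2.0):
        """Assign each scene point a depth-layer index, with layers getting
        thicker the farther they are from the display plane."""
        depth = np.abs(points_xyz[:, 2] - display_z)                     # distance to display
        edges = np.cumsum(base_thickness * growth ** np.arange(num_layers))
        return np.digitize(depth, edges)                                 # layer index per point
    ```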
  • Patent number: 11964762
    Abstract: Subject matter regards generating a 3D point cloud and registering the 3D point cloud to the surface of the Earth (sometimes called “geo-locating”). A method can include capturing, by unmanned vehicles (UVs), image data representative of respective overlapping subsections of the object, registering the overlapping subsections to each other, and geo-locating the registered overlapping subsections.
    Type: Grant
    Filed: February 9, 2021
    Date of Patent: April 23, 2024
    Assignee: Raytheon Company
    Inventors: Torsten A. Staab, Steven B. Seida, Jody D. Verret, Richard W. Ely, Stephen J. Raif
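    Registering overlapping subsections to each other (and then to surveyed ground points for geo-location) reduces, for matched points, to estimating a rigid transform; a minimal Kabsch-style sketch is shown below, with the point matching assumed done elsewhere.
    ```python
    import numpy as np

    def rigid_align(src_pts, dst_pts):
        """Estimate rotation R and translation t mapping Nx3 src_pts onto matched
        Nx3 dst_pts (least-squares, no scale), avoiding reflections."""
        mu_s, mu_d = src_pts.mean(axis=0), dst_pts.mean(axis=0)
        H = (src_pts - mu_s).T @ (dst_pts - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_d - R @ mu_s
        return R, t
    ```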
  • Patent number: 11948303
    Abstract: A method and apparatus for objective assessment of images captured from a human gastrointestinal (GI) tract are disclosed. According to this method, one or more images captured using an endoscope while the endoscope is inside the human GI tract are received. Whether there is any specific target object is checked. When one or more specific target objects are detected in the images: areas of the specific target objects in the images are determined, and an objective assessment score is derived based on the areas of the specific target objects in a substantial number of the images, where the step of detecting the specific target objects is performed using an artificial intelligence process.
    Type: Grant
    Filed: June 19, 2021
    Date of Patent: April 2, 2024
    Assignee: CapsoVision Inc.
    Inventors: Kang-Huai Wang, Chenyu Wu, Gordon C. Wilson
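    A minimal sketch of deriving an objective assessment score from per-image detected areas over a substantial number of images; the mean-coverage formulation and minimum image count are illustrative assumptions.
    ```python
    import numpy as np

    def objective_assessment_score(target_areas_px, image_area_px, min_images=100):
        """target_areas_px: detected target-object area (pixels) for each image.
        Returns the mean fraction of the frame covered by target objects."""
        if len(target_areas_px) < min_images:
            raise ValueError("not enough images for a reliable score")
        return float(np.mean(np.asarray(target_areas_px, float) / image_area_px))
    ```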
  • Patent number: 11935266
    Abstract: A camera parameter estimation apparatus: takes three sets of three-dimensional coordinates pertaining to an object and two-dimensional coordinates corresponding to the three-dimensional coordinates, and transforms a coordinate system of the three-dimensional coordinates from a world coordinate system to a local coordinate system; calculates a linear transformation matrix based on a projection transformation expression from the transformed three-dimensional coordinates to the two-dimensional coordinates, calculates a coefficient of a quartic equation pertaining to any one of depths from a camera center to each three-dimensional coordinate, and calculates each depth; calculates the rotation matrix in the local coordinate system using each depth and the linear transformation matrix; calculates a translation vector in the local coordinate system from each depth based on the projection transformation expression; and calculates a rotation matrix and a translation vector in the world coordinate system by performing
    Type: Grant
    Filed: April 24, 2019
    Date of Patent: March 19, 2024
    Assignee: NEC CORPORATION
    Inventor: Gaku Nakano
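    For the three-correspondence case, OpenCV's P3P solver (which likewise reduces to a quartic in the point depths) gives a compact way to sketch the result; it returns up to four candidate poses that must be disambiguated with extra points, and the world-to-local coordinate transformation described in the abstract is not shown here.
    ```python
    import cv2
    import numpy as np

    def camera_pose_from_three_points(world_pts, image_pts, K):
        """world_pts: 3x3 (three 3D points), image_pts: 3x2 (their pixels),
        K: 3x3 intrinsics. Returns a list of candidate (R, t) solutions."""
        _, rvecs, tvecs = cv2.solveP3P(
            world_pts.astype(np.float32).reshape(-1, 1, 3),
            image_pts.astype(np.float32).reshape(-1, 1, 2),
            K, None, flags=cv2.SOLVEPNP_P3P)
        return [(cv2.Rodrigues(r)[0], t) for r, t in zip(rvecs, tvecs)]
    ```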
  • Patent number: 11934488
    Abstract: The present disclosure provides a method and system for constructing a digital rock, and relates to the technical field of digital rocks. According to the method, a three-dimensional (3D) digital rock image that can reflect real rock information is obtained using an image scanning technology, and the image is preprocessed to obtain a digital rock training image for training a generative adversarial network (GAN). The trained GAN is stored to obtain a digital rock construction model. The stored digital rock construction model can be directly used to quickly construct a target digital rock image. This not only greatly reduces computational costs, but also reduces costs and time consumption for obtaining high-resolution sample images. In addition, the constructed target digital rock image can also reflect real rock information.
    Type: Grant
    Filed: September 10, 2020
    Date of Patent: March 19, 2024
    Assignee: China University of Petroleum (East China)
    Inventors: Yongfei Yang, Fugui Liu, Jun Yao, Huaisen Song, Kai Zhang, Lei Zhang, Hai Sun, Wenhui Song, Yuanbo Wang, Bozhao Xu
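    A minimal sketch of the preprocessing step that turns a scanned 3D digital-rock image into GAN training samples by cropping random sub-volumes; the cube size, sample count, and the GAN architecture itself (not shown) are illustrative assumptions.
    ```python
    import numpy as np

    def sample_training_cubes(ct_volume, cube=64, n=1000, seed=0):
        """ct_volume: 3D array from the image scan. Returns n random cube-shaped
        sub-volumes to use as digital rock training images."""
        rng = np.random.default_rng(seed)
        d, h, w = ct_volume.shape
        corners = rng.integers(0, [d - cube, h - cube, w - cube], size=(n, 3))
        return np.stack([ct_volume[z:z + cube, y:y + cube, x:x + cube]
                         for z, y, x in corners])
    ```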