Patents Examined by Ping Y Hsieh
  • Patent number: 11790517
    Abstract: A subtle defect detection method based on a coarse-to-fine strategy, including: (S1) acquiring data of an image to be detected via a charge-coupled device (CCD) camera; (S2) constructing a defect area location network and preprocessing the image to be detected to initially determine a defect position; (S3) constructing a defect point detection network and training it by using a defect segmentation loss function; and (S4) subjecting subtle defects in the image to be detected to quantitative extraction and segmentation via the defect point detection network.
    Type: Grant
    Filed: April 24, 2023
    Date of Patent: October 17, 2023
    Assignee: Nanjing University of Aeronautics and Astronautics
    Inventors: Jun Wang, Zhongde Shan, Shuyi Jia, Dawei Li, Yuxiang Wu
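A minimal NumPy sketch of the coarse-to-fine idea in the abstract above: a coarse pass flags candidate blocks, and a fine pass segments defect pixels inside each candidate. The block size, robust thresholds, and function names are illustrative assumptions, not the patented networks.

```python
import numpy as np

def coarse_locate(image, block=32, k=8.0):
    """Coarse stage: flag blocks containing pixels far from the global median."""
    med = np.median(image)
    mad = np.median(np.abs(image - med)) + 1e-6      # robust spread estimate
    hot = np.abs(image - med) > k * mad
    h, w = image.shape
    return [(y, x, block, block)
            for y in range(0, h, block) for x in range(0, w, block)
            if hot[y:y + block, x:x + block].any()]

def fine_segment(image, roi, k=4.0):
    """Fine stage: per-pixel defect mask restricted to one candidate region."""
    y, x, h, w = roi
    patch = image[y:y + h, x:x + w]
    med = np.median(patch)
    mad = np.median(np.abs(patch - med)) + 1e-6
    return np.abs(patch - med) > k * mad             # boolean defect mask

# Toy usage: a synthetic image with one small bright defect.
img = np.random.normal(0.5, 0.01, (256, 256))
img[100:110, 40:50] += 0.5
for roi in coarse_locate(img):
    print(roi, int(fine_segment(img, roi).sum()), "defect pixels")
```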
  • Patent number: 11790534
    Abstract: The invention discloses an attention-based joint image and feature adaptive semantic segmentation method. First, the image adaptation procedure transforms the source domain image Xs into a target-domain-like image Xs-t with an appearance similar to the target domain image Xt, to reduce the domain gap between the source domain and the target domain at the image appearance level; the feature adaptation procedure then aligns the features of Xs-t and Xt in the semantic prediction space and the image generation space, respectively, to extract domain-invariant features and reduce the domain difference between Xs-t and Xt. In addition, the present invention introduces an attention module in the feature adaptation procedure to help the feature adaptation procedure pay more attention to image regions worth attending to. Finally, the image adaptation procedure and the feature adaptation procedure are combined in an end-to-end manner.
    Type: Grant
    Filed: May 16, 2023
    Date of Patent: October 17, 2023
    Assignee: WUHAN UNIVERSITY
    Inventors: Bo Du, Juhua Liu, Qihuang Zhong, Lifang'an Xiao
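A minimal PyTorch sketch of a spatial attention module of the kind the abstract mentions, which re-weights features so the adaptation step focuses on salient regions. The layer sizes, the residual re-weighting, and the class name are illustrative assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Re-weights a feature map so adaptation attends to salient image regions."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Sequential(
            nn.Conv2d(channels, channels // 8, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 8, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat):
        attn = self.score(feat)       # (B, 1, H, W) attention map in [0, 1]
        return feat + feat * attn     # residual re-weighting keeps the original signal

# Toy usage on features of a translated source image Xs-t.
feat_s2t = torch.randn(2, 64, 32, 32)
print(SpatialAttention(64)(feat_s2t).shape)   # torch.Size([2, 64, 32, 32])
```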
  • Patent number: 11783486
    Abstract: Generating images and videos depicting a human subject wearing textually defined attire is described. An image generation system receives a two-dimensional reference image depicting a person and a textual description describing target clothing in which the person is to be depicted as wearing. To maintain a personal identity of the person, the image generation system implements a generative model, trained using both discriminator loss and perceptual quality loss, which is configured to generate images from text. In some implementations, the image generation system is configured to train the generative model to output visually realistic images depicting the human subject in the target clothing. The image generation system is further configured to apply the trained generative model to process individual frames of a reference video depicting a person and output frames depicting the person wearing textually described target clothing.
    Type: Grant
    Filed: December 16, 2021
    Date of Patent: October 10, 2023
    Assignee: Adobe Inc.
    Inventors: Viswanathan Swaminathan, Gang Wu, Akshay Malhotra
  • Patent number: 11783229
    Abstract: A method, system and computer readable medium for generating a cognitive insight comprising: receiving content element data, the content element data representing a content element, the content element comprising an element of a corpus of content; performing a cognitive learning operation on the content element data, the cognitive learning operation identifying descriptive information associated with the content element; associating a cognitive attribute with the content element using the descriptive information associated with the content element.
    Type: Grant
    Filed: January 3, 2022
    Date of Patent: October 10, 2023
    Assignee: Tecnotree Technologies, Inc.
    Inventors: Neeraj Chawla, Matthew Sanchez, Andrea M. Ricaurte, Dilum Ranatunga, Ayan Acharya, Hannah R. Lindsley
  • Patent number: 11776234
    Abstract: Example systems and methods of selection of video frames using a machine learning (ML) predictor program are disclosed. The ML predictor program may generate predicted cropping boundaries for any given input image. Training raw images, each associated with a respective set of training master images indicative of cropping characteristics for that raw image, may be input to the ML predictor program, and the ML predictor program trained to predict cropping boundaries for a raw image based on the expected cropping boundaries associated with the training master images. At runtime, the trained ML predictor program may be applied to a sequence of video image frames to determine, for each respective video image frame, a respective score corresponding to the highest statistical confidence associated with one or more subsets of cropping boundaries predicted for that video image frame. Information indicative of the respective video image frame having the highest score may be stored or recorded.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: October 3, 2023
    Assignee: Gracenote, Inc.
    Inventors: Aneesh Vartakavi, Casper Lützhøft Christensen
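A small sketch of the runtime frame-selection step described above: score each video frame by the highest confidence among its predicted crops and keep the best frame. The `predict_crops` function is a hypothetical stand-in for the trained ML predictor program.

```python
import numpy as np

def predict_crops(frame):
    """Hypothetical stand-in for the trained ML predictor: returns
    (crop_box, confidence) pairs for one video frame."""
    rng = np.random.default_rng(int(frame.sum()) % 2**32)
    return [((0, 0, 100, 100), float(rng.random())) for _ in range(3)]

def best_frame(frames):
    """Score each frame by its highest crop confidence and keep the argmax."""
    scores = [max(conf for _, conf in predict_crops(f)) for f in frames]
    best = int(np.argmax(scores))
    return best, scores[best]

video = [np.random.rand(240, 320) for _ in range(5)]
idx, score = best_frame(video)
print(f"frame {idx} selected with score {score:.3f}")
```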
  • Patent number: 11776322
    Abstract: The present application relates to the technical field of image recognition and provides a pinch gesture detection and recognition method, which is applied to an electronic device and includes: acquiring, in real time, image data of each frame in a video to be detected; performing hand location detection on the image data based on a pre-trained hand detection model, to determine a hand position in the image data; performing skeleton point recognition at the hand position based on a pre-trained skeleton point recognition model, to determine a preset number of skeleton points at the hand position; and determining whether a hand corresponding to the image data is in a pinch gesture or not according to information of a distance between the skeleton points of preset fingers.
    Type: Grant
    Filed: August 4, 2022
    Date of Patent: October 3, 2023
    Assignee: QINGDAO PICO TECHNOLOGY CO., LTD.
    Inventor: Tao Wu
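A minimal sketch of the final distance check in the abstract above, assuming the hand skeleton points are already available as 2D pixel coordinates. The landmark indices follow a common 21-point hand convention and the pinch threshold is an invented heuristic, not the patented criterion.

```python
import numpy as np

THUMB_TIP, INDEX_TIP = 4, 8           # indices in a common 21-point hand skeleton

def is_pinch(skeleton, hand_size_ref=(0, 9), ratio=0.25):
    """Pinch if the thumb-tip/index-tip distance is small relative to hand size."""
    pts = np.asarray(skeleton, dtype=float)              # (21, 2) pixel coordinates
    pinch_dist = np.linalg.norm(pts[THUMB_TIP] - pts[INDEX_TIP])
    hand_size = np.linalg.norm(pts[hand_size_ref[0]] - pts[hand_size_ref[1]])
    return pinch_dist < ratio * hand_size

# Toy usage: fabricate 21 landmarks with thumb and index tips nearly touching.
skel = np.random.rand(21, 2) * 100
skel[THUMB_TIP], skel[INDEX_TIP] = (50, 50), (52, 51)
skel[0], skel[9] = (10, 10), (90, 90)                    # wrist and middle MCP far apart
print(is_pinch(skel))                                    # True
```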
  • Patent number: 11776131
    Abstract: Systems and methods for eye image segmentation and image quality estimation are disclosed. In one aspect, after receiving an eye image, a device such as an augmented reality device can process the eye image using a convolutional neural network with a merged architecture to generate both a segmented eye image and a quality estimation of the eye image. The segmented eye image can include a background region, a sclera region, an iris region, or a pupil region. In another aspect, a convolutional neural network with a merged architecture can be trained for eye image segmentation and image quality estimation. In yet another aspect, the device can use the segmented eye image to determine eye contours such as a pupil contour and an iris contour. The device can use the eye contours to create a polar image of the iris region for computing an iris code or biometric authentication.
    Type: Grant
    Filed: August 20, 2021
    Date of Patent: October 3, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Alexey Spizhevoy, Adrian Kaehler, Vijay Badrinarayanan
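An illustrative PyTorch sketch of a merged architecture with a shared encoder feeding both a segmentation head and a quality head, as the abstract describes at a high level. The four segmentation classes (background, sclera, iris, pupil) follow the abstract; layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class MergedEyeNet(nn.Module):
    """Shared encoder with a segmentation head and an image-quality head."""
    def __init__(self, classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, classes, 1),
        )
        self.quality_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        feat = self.encoder(x)
        return self.seg_head(feat), self.quality_head(feat)

eye = torch.randn(1, 1, 64, 64)
seg_logits, quality = MergedEyeNet()(eye)
print(seg_logits.shape, float(quality))   # torch.Size([1, 4, 64, 64]) and a score in (0, 1)
```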
  • Patent number: 11774365
    Abstract: Systems and methods implement high-speed delay scanning for spectroscopic SRS imaging, characterized by scanning a first pulsed beam across a stepwise reflective surface (such as a stepwise mirror or a reflective blazed grating) in a Littrow configuration to generate near-continuous temporal delays relative to a second pulsed beam. Systems and methods also implement deep learning techniques for image restoration of spectroscopic SRS images using a trained encoder-decoder convolutional neural network (CNN), which in some embodiments may be designed as a spatial-spectral residual net (SS-ResNet) characterized by two parallel filters: a first convolution filter on the spatial domain and a second convolution filter on the spectral domain.
    Type: Grant
    Filed: January 6, 2022
    Date of Patent: October 3, 2023
    Assignee: Trustees of Boston University
    Inventors: Ji-Xin Cheng, Haonan Lin
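A brief PyTorch sketch of a residual block with two parallel filters, one convolving over the spatial dimensions and one over the spectral dimension, in the spirit of the SS-ResNet named above. Channel counts and kernel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpatialSpectralBlock(nn.Module):
    """Residual block with parallel spatial and spectral 3D convolutions."""
    def __init__(self, channels=8):
        super().__init__()
        self.spatial = nn.Conv3d(channels, channels, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.spectral = nn.Conv3d(channels, channels, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.spatial(x) + self.spectral(x))   # residual fusion

# Toy SRS stack: 8 feature channels, 16 spectral frames, 32x32 pixels.
stack = torch.randn(1, 8, 16, 32, 32)
print(SpatialSpectralBlock()(stack).shape)   # torch.Size([1, 8, 16, 32, 32])
```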
  • Patent number: 11776109
    Abstract: A layer thickness measurement system includes a support to hold a substrate, an optical sensor to capture a color image of at least a portion of the substrate, and a controller. The controller is configured to receive the color image from the optical sensor, perform a color correction on the color image to generate an adjusted color image having increased color contrast, determine, for each pixel of the adjusted color image, a coordinate of the pixel in a coordinate space of at least two dimensions including a first color channel and a second color channel from color data in the adjusted color image, and calculate a value representative of a thickness based on the coordinate of the pixel of the adjusted color image in the coordinate space.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: October 3, 2023
    Assignee: Applied Materials, Inc.
    Inventors: Nojan Motamedi, Dominic J. Benvegnu, Boguslaw A. Swedek, Martin A. Josefowicz
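A small NumPy sketch of the per-pixel mapping described above: after a stand-in color correction, each pixel's coordinate in a two-channel color space is matched to the nearest entry of a calibration table to produce a thickness value. The calibration coordinates and thicknesses are invented for illustration.

```python
import numpy as np

# Hypothetical calibration: (channel 1, channel 2) coordinates with known thicknesses (nm).
calib_coords = np.array([[0.2, 0.8], [0.5, 0.5], [0.8, 0.2]])
calib_thickness = np.array([100.0, 250.0, 400.0])

def stretch(channel):
    """Stand-in for the color-correction step: a per-channel contrast stretch."""
    lo, hi = channel.min(), channel.max()
    return (channel - lo) / (hi - lo + 1e-9)

def thickness_map(image):
    """Map each pixel's (ch1, ch2) coordinate to the nearest calibrated thickness."""
    coords = np.stack([stretch(image[..., 0]), stretch(image[..., 1])], axis=-1)   # (H, W, 2)
    d = np.linalg.norm(coords[..., None, :] - calib_coords, axis=-1)               # (H, W, 3)
    return calib_thickness[np.argmin(d, axis=-1)]                                  # (H, W) in nm

img = np.random.rand(4, 4, 3)        # toy color image of the substrate
print(thickness_map(img))
```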
  • Patent number: 11769317
    Abstract: Disclosed herein is a method of automatically obtaining training images to train a machine learning model that improves image quality. The method may comprise analyzing a plurality of patterns of data relating to a layout of a product to identify a plurality of training locations on a sample of the product to use in relation to training the machine learning model. The method may comprise obtaining a first image having a first quality for each of the plurality of training locations, and obtaining a second image having a second quality for each of the plurality of training locations, the second quality being higher than the first quality. The method may comprise using the first image and the second image to train the machine learning model.
    Type: Grant
    Filed: December 18, 2019
    Date of Patent: September 26, 2023
    Assignee: ASML Netherlands B.V.
    Inventors: Wentian Zhou, Liangjiang Yu, Teng Wang, Lingling Pu, Wei Fang
  • Patent number: 11763471
    Abstract: A method for large scene elastic semantic representation and self-supervised light field reconstruction is provided. The method includes acquiring a first depth map set corresponding to a target scene, in which the first depth map set includes a first depth map corresponding to at least one angle of view; inputting the first depth map set into a target elastic semantic reconstruction model to obtain a second depth map set, in which the second depth map set includes a second depth map corresponding to the at least one angle of view; and fusing the second depth map corresponding to the at least one angle of view to obtain a target scene point cloud corresponding to the target scene.
    Type: Grant
    Filed: April 14, 2023
    Date of Patent: September 19, 2023
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lu Fang, Jinzhi Zhang, Ruofan Tang
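A compact NumPy sketch of the final fusion step described above: each refined depth map is back-projected with pinhole intrinsics and a camera pose, and the per-view points are concatenated into one scene point cloud. The intrinsics and poses below are invented, and the reconstruction model itself is not modeled.

```python
import numpy as np

def backproject(depth, K, cam_to_world):
    """Lift one depth map to world-space 3D points using pinhole intrinsics K."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    rays = pix @ np.linalg.inv(K).T                      # camera-frame ray directions
    pts_cam = rays * depth.reshape(-1, 1)                # scale rays by depth
    pts_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=1)
    return (pts_h @ cam_to_world.T)[:, :3]               # transform to world coordinates

def fuse(depth_maps, K, poses):
    """Fuse depth maps from several angles of view into a single point cloud."""
    return np.concatenate([backproject(d, K, T) for d, T in zip(depth_maps, poses)])

K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
depths = [np.full((64, 64), 2.0), np.full((64, 64), 2.5)]
poses = [np.eye(4), np.eye(4)]
print(fuse(depths, K, poses).shape)    # (8192, 3)
```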
  • Patent number: 11763480
    Abstract: In one embodiment, a method includes receiving an image generated by a camera associated with a vehicle. The image includes a point of interest (POI) associated with a physical object. The method also includes determining a number of pixels from the POI of the image to an edge of the image. The edge of the image represents a location of the camera. The method further includes determining an offset distance from the POI to a Global Positioning System (GPS) unit associated with the vehicle using the number of pixels.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: September 19, 2023
    Assignee: BNSF RAILWAY COMPANY
    Inventors: Michael Saied Saniei, Xiaoyan Si, Siva Prasad Vysyaraju
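A minimal arithmetic sketch of the conversion described above, assuming a known image scale (meters per pixel) and a fixed camera-to-GPS lever arm; both constants are invented for illustration and would come from calibration in practice.

```python
METERS_PER_PIXEL = 0.02        # assumed image scale of the calibrated camera
CAMERA_TO_GPS_M = 1.5          # assumed fixed offset between camera and GPS antenna

def poi_offset_from_gps(pixels_poi_to_edge):
    """Offset distance from the point of interest to the vehicle's GPS unit,
    given the pixel count from the POI to the image edge (the camera location)."""
    poi_to_camera = pixels_poi_to_edge * METERS_PER_PIXEL
    return poi_to_camera + CAMERA_TO_GPS_M

print(poi_offset_from_gps(850))   # 18.5 (metres)
```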
  • Patent number: 11756219
    Abstract: A method for using an artificial neural network associated with an agent to estimate depth includes receiving, at the artificial neural network, an input image captured via a sensor associated with the agent. The method also includes upsampling, at each decoding layer of a plurality of decoding layers of the artificial neural network, decoded features associated with the input image to a resolution associated with a final output of the artificial neural network. The method further includes concatenating, at each decoding layer, the upsampled decoded features with features obtained at a convolution layer associated with a respective decoding layer. The method still further includes estimating, at a recurrent module of the artificial neural network, a depth of the input image based on receiving the concatenated upsampled decoded features from each decoding layer. The method also includes controlling an action of the agent based on the depth estimate.
    Type: Grant
    Filed: December 17, 2021
    Date of Patent: September 12, 2023
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Vitor Guizilini, Adrien David Gaidon
  • Patent number: 11756208
    Abstract: In implementations of object boundary generation, a computing device implements a boundary system to receive a mask defining a contour of an object depicted in a digital image, the mask having a lower resolution than the digital image. The boundary system maps a curve to the contour of the object and extracts strips of pixels from the digital image which are normal to points of the curve. A sample of the digital image is generated using the extracted strips of pixels which is input to a machine learning model. The machine learning model outputs a representation of a boundary of the object by processing the sample of the digital image.
    Type: Grant
    Filed: December 7, 2021
    Date of Patent: September 12, 2023
    Assignee: Adobe Inc.
    Inventors: Brian Lynn Price, Peng Zhou, Scott David Cohen, Gregg Darryl Wilensky
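A short NumPy sketch of the strip-extraction step described above: a curve is mapped to the contour, and for each curve point a strip of pixels is sampled along the local normal. Nearest-neighbor sampling and the strip width are simplifying assumptions.

```python
import numpy as np

def extract_strips(image, curve_pts, half_width=5):
    """Sample a pixel strip normal to each point of a contour curve.

    curve_pts: (N, 2) array of (row, col) points tracing the object contour.
    Returns an (N, 2*half_width + 1) array, one strip per curve point.
    """
    pts = np.asarray(curve_pts, dtype=float)
    tangents = np.gradient(pts, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True) + 1e-9
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)     # rotate 90 degrees
    offsets = np.arange(-half_width, half_width + 1)
    samples = pts[:, None, :] + offsets[None, :, None] * normals[:, None, :]
    r = np.clip(np.rint(samples[..., 0]).astype(int), 0, image.shape[0] - 1)
    c = np.clip(np.rint(samples[..., 1]).astype(int), 0, image.shape[1] - 1)
    return image[r, c]

img = np.random.rand(128, 128)
t = np.linspace(0, 2 * np.pi, 50)
circle = np.stack([64 + 30 * np.sin(t), 64 + 30 * np.cos(t)], axis=1)
print(extract_strips(img, circle).shape)   # (50, 11)
```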
  • Patent number: 11737831
    Abstract: A camera tracking system for computer assisted navigation during surgery. The camera tracking system includes a processor operative to receive streams of video frames from tracking cameras which image a plurality of physical objects arranged as a reference array. For each of the physical objects imaged in a sequence of the video frames, the processor determines a set of coordinates for the physical object over the sequence of the video frames. For each of the physical objects, the processor generates an arithmetic combination of the set of coordinates for the physical object. The processor generates an array template identifying coordinates of the physical objects based on the arithmetic combinations of the sets of coordinates for the physical objects, and tracks the pose of the physical objects of the reference array over time based on comparison of the array template to the reference array imaged in the streams of video frames.
    Type: Grant
    Filed: September 2, 2020
    Date of Patent: August 29, 2023
    Assignee: Globus Medical Inc.
    Inventors: Neil R. Crawford, Thomas Calloway
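A small NumPy sketch of one plausible reading of the template-building step above: each marker's coordinates are averaged over the frame sequence (an arithmetic combination), and the template is expressed relative to the array centroid. The averaging rule and marker layout are assumptions, not the patented procedure.

```python
import numpy as np

def build_array_template(tracks):
    """Average each physical object's coordinates over a frame sequence and
    express the result relative to the reference array's centroid.

    tracks: (num_frames, num_objects, 3) tracked 3D coordinates.
    """
    per_object = tracks.mean(axis=0)              # arithmetic combination over frames
    return per_object - per_object.mean(axis=0)   # template in array-centred coordinates

# Toy usage: 4 markers observed over 100 noisy frames.
true_markers = np.array([[0, 0, 0], [50, 0, 0], [0, 80, 0], [50, 80, 10]], dtype=float)
frames = true_markers + np.random.normal(0, 0.2, (100, 4, 3))
print(np.round(build_array_template(frames), 1))
```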
  • Patent number: 11741621
    Abstract: A method and system for detecting plane information are provided. The method includes: obtaining point cloud information of a physical environment of a user; performing an iterative regressing operation on the point cloud information to fit all plane information corresponding to the physical environment; merging all the plane information according to a preset rule to obtain a merged plane information set; performing plane segmentation on the plane information set based on a pre-trained plane segmentation model to obtain segmented plane information; and filtering the segmented plane information to determine all target plane information corresponding to the physical environment.
    Type: Grant
    Filed: July 29, 2022
    Date of Patent: August 29, 2023
    Assignee: QINGDAO PICO TECHNOLOGY CO., LTD.
    Inventor: Tao Wu
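A brief NumPy sketch of plane fitting and merging in the spirit of the abstract above: a least-squares plane is fitted to each point subset, and near-identical planes are merged under a simple preset rule. The thresholds are invented, and the learned segmentation model is not modeled.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through points; returns (unit normal n, offset d) with n.x + d = 0."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    if n[2] < 0:                     # fix the sign convention so planes are comparable
        n = -n
    return n, float(-n @ centroid)

def merge_planes(planes, angle_deg=5.0, dist=0.05):
    """Merge planes with nearly identical normals and offsets (a simple preset rule)."""
    merged = []
    for n, d in planes:
        for i, (nm, dm) in enumerate(merged):
            if n @ nm > np.cos(np.radians(angle_deg)) and abs(d - dm) < dist:
                avg_n = (n + nm) / np.linalg.norm(n + nm)
                merged[i] = (avg_n, (d + dm) / 2)
                break
        else:
            merged.append((n, d))
    return merged

# Toy usage: two noisy samplings of the same floor plane collapse to one plane.
floor = np.random.rand(200, 3)
floor[:, 2] = 0.01 * np.random.randn(200)
planes = [fit_plane(floor[:100]), fit_plane(floor[100:])]
print(len(merge_planes(planes)))   # 1
```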
  • Patent number: 11734918
    Abstract: An object model learning method includes: in an object identification model forming a convolutional neural network and a warp structure warping a feature map extracted in the convolutional neural network to a different coordinate system, preparing, in the warp structure, a warp parameter for relating a position in the different coordinate system to a position in the coordinate system before warping; and learning the warp parameter to input a captured image, in which an object is captured, to the object identification model and output a viewpoint conversion map in which the object is identified in the different coordinate system.
    Type: Grant
    Filed: November 25, 2020
    Date of Patent: August 22, 2023
    Assignee: DENSO CORPORATION
    Inventors: Kunihiko Chiba, Yusuke Sekikawa, Koichiro Suzuki
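An illustrative PyTorch sketch of a warp structure that resamples a feature map into a different coordinate system through a learnable sampling grid (the warp parameter), which relates each target-frame position to a source-frame position. The grid initialization and sizes are assumptions, not the patented parameterization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WarpStructure(nn.Module):
    """Warps a feature map to another coordinate system via a learnable sampling grid."""
    def __init__(self, out_h=16, out_w=16):
        super().__init__()
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, out_h),
                                torch.linspace(-1, 1, out_w), indexing="ij")
        init = torch.stack([xs, ys], dim=-1)                 # (out_h, out_w, 2) in [-1, 1]
        self.warp_param = nn.Parameter(init.unsqueeze(0))    # learned during training

    def forward(self, feat):
        grid = self.warp_param.expand(feat.size(0), -1, -1, -1)
        return F.grid_sample(feat, grid, align_corners=True)

feat = torch.randn(2, 32, 24, 24)         # features in the original (image) coordinate system
print(WarpStructure()(feat).shape)        # torch.Size([2, 32, 16, 16])
```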
  • Patent number: 11727051
    Abstract: One example method involves operations for receiving a search query that includes a keyword. The search query is associated with a user profile. Operations further include generating a recommendation matrix that includes a set of images based on (a) an area of interest determined from the search query and the user profile and (b) content tags associated with the images. In addition, operations include calculating a recommendation score for a candidate image included in the recommendation matrix. The recommendation score includes a weighted average of row vectors of the recommendation matrix. Further, operations involve including the candidate image in a search result for the search query based on the recommendation score. Additionally, operations include generating, for display, the search result that includes the candidate image.
    Type: Grant
    Filed: November 19, 2020
    Date of Patent: August 15, 2023
    Assignee: Adobe Inc.
    Inventors: Mansi Nagpal, Shreya Mahapatra, Sukhmeet Singh
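A tiny NumPy sketch of the scoring step described above: the candidate's recommendation score is formed from a weighted average of the recommendation matrix's row vectors. The matrix contents and weights are invented for illustration.

```python
import numpy as np

def recommendation_score(matrix, weights):
    """Weighted average of the recommendation matrix's row vectors, reduced to a scalar."""
    weights = np.asarray(weights, dtype=float)
    weighted_rows = weights[:, None] * matrix / weights.sum()
    return float(weighted_rows.sum(axis=0).mean())

# Toy usage: rows could encode interest-area and content-tag affinities for one candidate.
rec_matrix = np.array([[0.9, 0.4, 0.7],      # affinity with the area of interest
                       [0.6, 0.8, 0.5]])     # affinity with the candidate's content tags
print(round(recommendation_score(rec_matrix, weights=[0.7, 0.3]), 3))   # 0.657
```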
  • Patent number: 11727534
    Abstract: In an aspect for generating device-specific OCT images, one or more processors may be configured for receiving, at a unified domain generator, first image data corresponding to OCT image scans captured by one or more OCT devices; processing, by the unified domain generator, the first image data to generate second image data corresponding to a unified representation of the OCT image scans; determining, by a unified discriminator, third image data corresponding to a quality subset of the unified representation of the OCT image scans having a base resolution satisfying a first condition and a base noise type satisfying a second condition; and processing, using a conditional generator, the third image data to generate fourth image data corresponding to device-specific OCT image scans having a device-specific resolution satisfying a third condition and a device-specific noise type satisfying a fourth condition.
    Type: Grant
    Filed: December 8, 2020
    Date of Patent: August 15, 2023
    Assignee: International Business Machines Corporation
    Inventors: Suman Sedai, Stefan Renard Maetschke, Bhavna Josephine Antony, Hsin-Hao Yu, Rahil Garnavi
  • Patent number: 11727673
    Abstract: A visual analysis method for cable element identification includes the steps of constructing and labeling a picture data set, preparing a training data set, and training a preset identification and analysis model, such that the preset identification and analysis model achieves accurate identification of cable elements. Cable element information present in a target image is then identified by the fully trained preset identification and analysis model so as to label the target picture. In the analysis method, manufactured cable elements can be photographed, and the resulting pictures identified and analyzed using the preset identification and analysis model, such that the structural quality condition of each cable element is rapidly and comprehensively determined, possible structural defects of each cable element can be conveniently and accurately found, and cable elements of unqualified quality can be screened out in time.
    Type: Grant
    Filed: August 31, 2022
    Date of Patent: August 15, 2023
    Assignee: SOOCHOW UNIVERSITY
    Inventors: Xizhao Luo, Tingchen Wang, Xiaoxiao Wang