Patents Examined by Kathleen Y Dulaney
-
Patent number: 10824920
Abstract: The present disclosure provides a method and apparatus for recognizing video fine granularity, a computer device and a storage medium, wherein the method comprises: performing sampling processing for video to be recognized to obtain n frames of images, n being a positive integer larger than one; respectively obtaining a feature graph of each frame of image, and determining a summary feature according to the respective feature graphs; and determining a fine granularity recognition result of a target in the video according to the summary feature. The solution of the present disclosure may be applied to enhance the accuracy of the recognition result.
Type: Grant
Filed: April 18, 2018
Date of Patent: November 3, 2020
Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
Inventors: Xiao Tan, Feng Zhou, Hao Sun
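The abstract leaves the pooling step open; a minimal sketch of the sample-then-summarize pipeline, assuming uniform frame sampling and average pooling as the summary operation (the names `summarize_frames` and `feat_fn` and the toy feature extractor are illustrative, not from the patent):

```python
import numpy as np

def summarize_frames(frames, feat_fn, n=8):
    """Sample n frames, extract a feature per frame, pool into a summary."""
    # Uniformly sample n frames from the clip (n > 1 per the method).
    idx = np.linspace(0, len(frames) - 1, n).astype(int)
    feats = np.stack([feat_fn(frames[i]) for i in idx])   # (n, d)
    # One simple choice of "summary feature": average-pool across frames.
    return feats.mean(axis=0)                             # (d,)

# Toy stand-in for a CNN feature extractor: per-channel means.
toy_feat = lambda img: img.mean(axis=(0, 1))
clip = np.random.rand(30, 64, 64, 3)                      # 30-frame clip
summary = summarize_frames(clip, toy_feat)                # shape (3,)
```

The summary feature would then feed a fine-granularity classifier head.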
-
Patent number: 10813734
Abstract: Feedback data useful in prosthodontic procedures associated with the intra oral cavity is provided. First, a 3D numerical model of the target zone in the intra oral cavity is provided, and this is manipulated so as to extract particular data that may be useful in a particular procedure, for example data relating to the finish line or to the shape and size of a preparation. The relationship between this data and the procedure is then determined, for example the clearance between the preparation and the intended crown. Feedback data, indicative of this relationship, is then generated, for example whether the preparation geometry is adequate for the particular type of prosthesis.
Type: Grant
Filed: April 26, 2019
Date of Patent: October 27, 2020
Assignee: ALIGN TECHNOLOGY, INC.
Inventors: Avi Kopelman, Eldad Taub
-
Patent number: 10783622
Abstract: The present disclosure relates to training and utilizing an image exposure transformation network to generate a long-exposure image from a single short-exposure image (e.g., still image). In various embodiments, the image exposure transformation network is trained using adversarial learning, long-exposure ground truth images, and a multi-term loss function. In some embodiments, the image exposure transformation network includes an optical flow prediction network and/or an appearance guided attention network. Trained embodiments of the image exposure transformation network generate realistic long-exposure images from single short-exposure images without additional information.
Type: Grant
Filed: April 25, 2018
Date of Patent: September 22, 2020
Assignee: ADOBE INC.
Inventors: Yilin Wang, Zhe Lin, Zhaowen Wang, Xin Lu, Xiaohui Shen, Chih-Yao Hsieh
-
Patent number: 10783394
Abstract: A method, computer readable medium, and system are disclosed to generate coordinates of landmarks within images. The landmark locations may be identified on an image of a human face and used for emotion recognition, face identity verification, eye gaze tracking, pose estimation, etc. A transform is applied to input image data to produce transformed input image data. The transform is also applied to predicted coordinates for landmarks of the input image data to produce transformed predicted coordinates. A neural network model processes the transformed input image data to generate additional landmarks of the transformed input image data and additional predicted coordinates for each one of the additional landmarks. Parameters of the neural network model are updated to reduce differences between the transformed predicted coordinates and the additional predicted coordinates.
Type: Grant
Filed: June 12, 2018
Date of Patent: September 22, 2020
Assignee: NVIDIA Corporation
Inventors: Pavlo Molchanov, Stephen Walter Tyree, Jan Kautz, Sina Honari
-
Patent number: 10783393
Abstract: A method, computer readable medium, and system are disclosed for sequential multi-tasking to generate coordinates of landmarks within images. The landmark locations may be identified on an image of a human face and used for emotion recognition, face identity verification, eye gaze tracking, pose estimation, etc. A neural network model processes input image data to generate pixel-level likelihood estimates for landmarks in the input image data and a soft-argmax function computes predicted coordinates of each landmark based on the pixel-level likelihood estimates.
Type: Grant
Filed: June 12, 2018
Date of Patent: September 22, 2020
Assignee: NVIDIA Corporation
Inventors: Pavlo Molchanov, Stephen Walter Tyree, Jan Kautz, Sina Honari
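The soft-argmax named in the abstract is a standard construction: softmax the likelihood map into a probability distribution over pixels, then take the expected coordinate. A minimal NumPy sketch (the temperature `beta` is a hypothetical parameter, not taken from the patent):

```python
import numpy as np

def soft_argmax(heatmap, beta=10.0):
    """Differentiable coordinate estimate from a pixel-level likelihood map."""
    h, w = heatmap.shape
    # Softmax over all pixels turns likelihoods into a probability map
    # (subtracting the max keeps the exponentials numerically stable).
    p = np.exp(beta * (heatmap - heatmap.max()))
    p /= p.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    # Expected (x, y) under that distribution -- the "soft" argmax.
    return (p * xs).sum(), (p * ys).sum()

hm = np.zeros((64, 64))
hm[20, 45] = 5.0                 # sharp likelihood peak at x=45, y=20
x, y = soft_argmax(hm)           # ≈ (45.0, 20.0)
```

Unlike a hard argmax, the expectation is differentiable, which is what lets the coordinate regression be trained end-to-end.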
-
Patent number: 10783632
Abstract: Machine learning systems and methods are disclosed for prediction of wound healing, such as for diabetic foot ulcers or other wounds, and for assessment implementations such as segmentation of images into wound regions and non-wound regions. Systems for assessing or predicting wound healing can include a light detection element configured to collect light of at least a first wavelength reflected from a tissue region including a wound, and one or more processors configured to generate an image based on a signal from the light detection element having pixels depicting the tissue region, determine reflectance intensity values for at least a subset of the pixels, determine one or more quantitative features of the subset of pixels based on the reflectance intensity values, and generate a predicted or assessed healing parameter associated with the wound over a predetermined time interval.
Type: Grant
Filed: January 9, 2020
Date of Patent: September 22, 2020
Assignee: SPECTRAL MD, INC.
Inventors: Wensheng Fan, John Michael DiMaio, Jeffrey E. Thatcher, Peiran Quan, Faliu Yi, Kevin Plant, Ronald Baxter, Brian McCall, Zhicun Gao, Jason Dwight
-
Patent number: 10769485
Abstract: A framebuffer-less system of convolutional neural network (CNN) includes a region of interest (ROI) unit that extracts features, according to which a region of interest in an input image frame is generated; a convolutional neural network (CNN) unit that processes the region of interest of the input image frame to detect an object; and a tracking unit that compares the features extracted at different times, according to which the CNN unit selectively processes the input image frame.
Type: Grant
Filed: June 19, 2018
Date of Patent: September 8, 2020
Assignee: Himax Technologies Limited
Inventor: Der-Wei Yang
-
Patent number: 10762385
Abstract: In a computer-implemented method and associated tangible non-transitory computer-readable medium, an image of a damaged vehicle may be analyzed to generate a repair estimate. A dataset populated with digital images of damaged vehicles and associated claim data may be used to train a deep learning neural network to learn damaged vehicle image characteristics that are predictive of claim data characteristics, and a predictive similarity model may be generated. Using the predictive similarity model, one or more similarity scores may be generated for a digital image of a newly damaged vehicle, indicating its similarity to one or more digital images of damaged vehicles with known damage level, repair time, and/or repair cost. A repair estimate may be generated for the newly damaged vehicle based on the claim data associated with images that are most similar to the image of the newly damaged vehicle.
Type: Grant
Filed: June 29, 2018
Date of Patent: September 1, 2020
Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
Inventors: He Yang, Bradley A. Sliz, Carlee A. Clymer, Jennifer Malia Andrus
-
Patent number: 10762603
Abstract: Systems and methods for image noise reduction are provided. The methods may include obtaining first image data, determining a restriction or a gradient of the first image data, determining a regularization parameter for the first image data based on the restriction or the gradient, generating second image data based on the regularization parameter and the first image data, and generating a regularized image based on the second image data.
Type: Grant
Filed: May 19, 2017
Date of Patent: September 1, 2020
Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
Inventors: Stanislav Zabic, Zhicong Yu
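The abstract does not fix how the gradient maps to a regularization parameter; one common edge-adaptive choice is to weaken smoothing where gradients are strong so edges survive denoising. A sketch under that assumption (the function name, `lam0`, and the specific mapping are all illustrative):

```python
import numpy as np

def regularization_map(image, lam0=0.5, eps=1e-6):
    """Spatially varying regularization weight from the image gradient."""
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)
    # Edge-adaptive choice (an assumption -- the abstract does not fix the
    # mapping): strong gradients get less smoothing so edges are preserved.
    return lam0 / (1.0 + grad / (grad.mean() + eps))

img = np.zeros((32, 32))
img[:, 16:] = 1.0                 # vertical step edge at column 16
lam = regularization_map(img)     # small near the edge, lam0 in flat areas
```

A regularized image would then be produced by penalizing roughness with this spatially varying weight instead of a single global constant.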
-
Patent number: 10754139
Abstract: A three-dimensional position information acquiring method includes acquiring an image of a first optical image; thereafter acquiring an image of a second optical image; and performing a computation using image data of the first and second optical images. Acquisition of the image of the first optical image is based on light beams having passed through a first area. Acquisition of the image of the second optical image is based on light beams having passed through a second area. The positions of the centers of the first and second areas are both away from the optical axis of an optical system in a plane perpendicular to said optical axis. The first and second areas respectively include at least portions that do not overlap with each other. Three-dimensional position information about an observed object is acquired by the computation. The first and second areas are formed at rotationally symmetric positions.
Type: Grant
Filed: May 11, 2018
Date of Patent: August 25, 2020
Assignee: OLYMPUS CORPORATION
Inventor: Hiroshi Ishiwata
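Two off-axis, rotationally symmetric pupil areas give two slightly different viewpoints, so the depth computation reduces to stereo-style triangulation: a point's image shifts between the two captures by a disparity inversely proportional to its depth. A sketch of that core relation (the numbers and the baseline interpretation are hypothetical, not from the patent):

```python
def depth_from_disparity(disparity_px, baseline_mm, focal_px):
    """Triangulation: two viewpoints separated by a baseline see a point
    shifted by a disparity inversely proportional to its depth."""
    return focal_px * baseline_mm / disparity_px

# Hypothetical numbers: aperture centers 4 mm apart, focal length 1000 px,
# a feature shifted 8 px between the first and second images.
z = depth_from_disparity(8.0, 4.0, 1000.0)   # depth in mm
```

Scanning this over matched features in the two images would yield the three-dimensional position information the method describes.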
-
Patent number: 10755133
Abstract: A system and method for identifying line Mura defects on a display. The system is configured to generate a filtered image by preprocessing an input image of a display using at least one filter. The system then identifies line Mura candidates by converting the filtered image to a binary image, counting line components along a slope in the binary image, and marking a potential candidate location when the line components along the slope exceed a line threshold. Image patches are then generated with the candidate locations at the center of each image patch. The image patches are then classified using a machine learning classifier.
Type: Grant
Filed: April 18, 2018
Date of Patent: August 25, 2020
Assignee: Samsung Display Co., Ltd.
Inventor: Janghwan Lee
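The candidate-marking step (count binary pixels along a line of a given slope, flag locations where the count exceeds a threshold) can be sketched directly; the function name, slope parameterization, and threshold value below are illustrative assumptions, not the patent's specifics:

```python
import numpy as np

def line_candidates(binary, slope=0, line_threshold=20):
    """Count foreground pixels along each line of a given slope and flag
    starting rows whose count exceeds the line threshold."""
    h, w = binary.shape
    hits = []
    for y0 in range(h):
        # Walk the line y = y0 + slope * x, clipped to the image.
        xs = np.arange(w)
        ys = np.clip(y0 + (slope * xs).astype(int), 0, h - 1)
        if binary[ys, xs].sum() > line_threshold:
            hits.append(y0)
    return hits

img = np.zeros((40, 64), dtype=int)
img[12, :] = 1                                 # a horizontal line defect
cands = line_candidates(img, slope=0, line_threshold=30)   # [12]
```

Patches centered on the flagged locations would then go to the classifier, which filters true line Mura from false candidates.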
-
Patent number: 10748271
Abstract: There are provided a system and method of classifying defects in a specimen. The method includes: obtaining one or more defect clusters detected on a defect map of the specimen, each cluster characterized by a set of cluster attributes comprising spatial attributes including spatial density indicative of density of defects in one or more regions accommodating the cluster, each given defect cluster being detected at least based on the spatial density thereof meeting a criterion; for each cluster, applying a cluster classifier to a respective set of cluster attributes thereof to associate the cluster with one or more labels of a predefined set of labels, wherein the cluster classifier is trained using cluster training data; and identifying defects of interest (DOI) in each cluster by performing a defect filtration for each cluster using one or more filtering parameters specified in accordance with the label of the cluster.
Type: Grant
Filed: April 25, 2018
Date of Patent: August 18, 2020
Assignee: APPLIED MATERIALS ISRAEL LTD.
Inventors: Assaf Asbag, Orly Zvitia, Idan Kaizerman, Efrat Rosenman
-
Patent number: 10706505
Abstract: A system and method for generating a range image using sparse depth data is disclosed. The method includes receiving, by a controller, image data of a scene. The image data includes a first set of pixels. The method also includes receiving, by the controller, sparse depth data of the scene. The sparse depth data includes a second set of pixels, and the number of pixels in the second set is less than the number in the first set. The method also includes combining the image data and the sparse depth data into combined data. The method also includes generating a range image using the combined data.
Type: Grant
Filed: January 24, 2018
Date of Patent: July 7, 2020
Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Wei Tong, Shuqing Zeng, Upali P. Mudalige
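The abstract says the two inputs are combined but not how; a common choice is to rasterize the sparse depth samples into an extra channel stacked onto the dense image, leaving un-sampled pixels at zero. A sketch under that assumption (the `combine` function and its layout are illustrative):

```python
import numpy as np

def combine(image, sparse_points):
    """Stack a dense RGB image with a sparse depth channel; pixels with
    no depth sample stay zero."""
    h, w, _ = image.shape
    depth = np.zeros((h, w, 1), dtype=image.dtype)
    for (y, x, z) in sparse_points:       # far fewer points than pixels
        depth[y, x, 0] = z
    return np.concatenate([image, depth], axis=2)   # (h, w, 4)

rgb = np.random.rand(48, 64, 3)
combined = combine(rgb, [(10, 20, 3.5), (30, 5, 7.2)])
```

The combined tensor would then drive a densification step that fills in the full range image.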
-
Patent number: 10706527
Abstract: A correction method according to an embodiment includes illuminating an object to be inspected by using critical illumination by illumination light L11 generated by a light source 11, concentrating light from the object to be inspected illuminated by the illumination light L11 and acquiring image data of the object to be inspected by detecting the concentrated light by a first detector 23, concentrating part of the illumination light L11, and acquiring image data of a brightness distribution of the illumination light L11 by detecting the concentrated illumination light L11 by a second detector 33, and correcting the image data of the object to be inspected based on the image data of the brightness distribution.
Type: Grant
Filed: March 23, 2018
Date of Patent: July 7, 2020
Assignee: LASERTEC CORPORATION
Inventors: Tsunehito Kohyama, Haruhiko Kusunose, Kiwamu Takehisa, Hiroki Miyai, Itaru Matsugu
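Correcting an image by a separately measured brightness distribution is, in its simplest form, flat-field correction: divide the captured image by the normalized illumination map so residual variation reflects the object rather than the light source. A sketch of that standard scheme (the patent's exact correction is not specified in the abstract, so treat this as one plausible realization):

```python
import numpy as np

def correct(image, illumination):
    """Divide out the measured brightness distribution of the illumination
    so residual variation reflects the object, not the light source."""
    illum = illumination / illumination.mean()   # normalize to unit mean
    return image / np.clip(illum, 1e-6, None)    # avoid divide-by-zero

illum = np.outer(np.linspace(0.5, 1.5, 8), np.ones(8))  # uneven lighting
captured = 2.0 * illum            # flat object seen under that lighting
flat = correct(captured, illum)   # recovers the uniform object
```

Here the second detector's brightness image plays the role of `illumination`, measured from the same light after it passes the illumination path.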
-
Patent number: 10690494
Abstract: A 3D scanner system includes a scanning device capable of recording first and second data sets of a surface of an object when operating in a first configuration and a second configuration, respectively. A measurement unit is configured for measuring a distance from the scanning device to the surface. A control controls an operation of the scanning device based on the distance measured by the measurement unit, where the scanning device operates in the first configuration when the measured distance is within a first range of distances from the surface and the scanning device operates in the second configuration when the measured distance is within a second range of distances; and a data processor is configured to combine one or more first data sets and one or more second data sets to create a combined virtual 3D model of the object surface.
Type: Grant
Filed: March 5, 2019
Date of Patent: June 23, 2020
Assignee: 3SHAPE A/S
Inventors: Nikolaj Deichmann, Mike Van der Poel, Karl-Josef Hollenbeck, Rune Fisker
-
Patent number: 10679082
Abstract: Real-time facial recognition is augmented with a machine-learning process that samples pixels from images captured for the physical environmental background of a device, which captures an image of a user's face for facial authentication. The background pixel points that are present in a captured image of a user's face from a camera of the device are authenticated along with the image of the user's face. The values of the background pixel points are compared against the expected values for the background pixel points provided by the ongoing machine-learning process for the background.
Type: Grant
Filed: September 28, 2017
Date of Patent: June 9, 2020
Assignee: NCR Corporation
Inventors: Weston Lee Hecker, Nir Veltman, Yehoshua Zvi Licht
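The comparison step is essentially a tolerance check: sampled background pixel values must agree with the learned model's expected values at enough points. A minimal sketch, assuming a per-pixel tolerance and an agreement fraction (both thresholds are hypothetical; the patent does not state them):

```python
def background_matches(observed, expected, tol=12, min_fraction=0.9):
    """Compare sampled background pixel values against the values the
    learned background model expects; pass if enough points agree."""
    agree = sum(abs(o - e) <= tol for o, e in zip(observed, expected))
    return agree / len(expected) >= min_fraction

expected = [100, 150, 200, 50, 75]            # model's background samples
ok = background_matches([103, 149, 205, 48, 74], expected)   # genuine scene
spoof = background_matches([10, 20, 30, 40, 50], expected)   # wrong scene
```

A replayed or spoofed face presented against the wrong background would fail this check even if the face itself matches.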
-
Patent number: 10667530
Abstract: An automated process assesses an amount of meat remaining on a trimmed animal carcass by generating image data of the carcass and processing the image data in a computer to calculate the amount of meat remaining on the carcass. The automated process can be carried out using an equation developed by counting pixels in an area of interest in images of a plurality of reference trimmed carcasses from which the remaining meat is thereafter scraped and weighed, to produce data points used to develop the equation, which is then used to calculate the amount of meat remaining on a trimmed carcass as it proceeds down a processing line.
Type: Grant
Filed: December 20, 2017
Date of Patent: June 2, 2020
Assignee: Cryovac, LLC
Inventors: Daniel Healey, Rajeev Sinha, Lewis Webb, Keith Johnson, James Mize
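The calibration workflow (count pixels on reference carcasses, weigh the scraped meat, fit an equation, then apply it in-line) maps naturally onto a regression fit. A sketch assuming a simple linear equation; the data points are entirely hypothetical:

```python
import numpy as np

# Reference carcasses: pixels counted in the area of interest, and the
# remaining meat scraped off and weighed (hypothetical data points).
pixels = np.array([1200, 1800, 2500, 3100, 4000])
grams  = np.array([ 150,  230,  320,  410,  520])

# Develop the equation once from the reference data ...
slope, intercept = np.polyfit(pixels, grams, 1)

# ... then apply it to each new trimmed carcass on the line.
def meat_remaining(pixel_count):
    return slope * pixel_count + intercept

estimate = meat_remaining(2000)   # grams, interpolated from references
```

The patent leaves the functional form open; a higher-degree fit would follow the same pattern with a different `deg` argument to `np.polyfit`.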
-
Patent number: 10670395
Abstract: A 3D scanner system includes a scanning device capable of recording first and second data sets of a surface of an object when operating in a first configuration and a second configuration, respectively. A measurement unit is configured for measuring a distance from the scanning device to the surface. A control controls an operation of the scanning device based on the distance measured by the measurement unit, where the scanning device operates in the first configuration when the measured distance is within a first range of distances from the surface and the scanning device operates in the second configuration when the measured distance is within a second range of distances; and a data processor is configured to combine one or more first data sets and one or more second data sets to create a combined virtual 3D model of the object surface.
Type: Grant
Filed: March 7, 2017
Date of Patent: June 2, 2020
Assignee: 3SHAPE A/S
Inventors: Nikolaj Deichmann, Mike Van Der Poel, Karl-Josef Hollenbeck, Rune Fisker
-
Patent number: 10672119
Abstract: In order to provide an inspection device capable of quantitatively evaluating a pattern related to a state of a manufacturing process or performance of an element, it is assumed that an inspection device includes an image analyzing unit that analyzes a top-down image of a sample in which columnar patterns are formed at a regular interval, in which an image analyzing unit 240 includes a calculation unit 243 that obtains, as a first index, a major axis, a minor axis, an eccentricity, and an angle formed by the major axis direction of the approximated ellipse with the image horizontal axis direction, and a Cr calculation unit 248 that obtains, as a second index, a circumferential length of an outline of a columnar pattern on the sample and a value obtained by dividing a square of the circumferential length by a value obtained by multiplying an area surrounded by the outline and 4π.
Type: Grant
Filed: September 10, 2015
Date of Patent: June 2, 2020
Assignee: HITACHI HIGH-TECH CORPORATION
Inventors: Atsuko Yamaguchi, Masami Ikota, Kazuhisa Hasumi
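The second index is the classic circularity (compactness) measure: perimeter squared over 4π times area, which equals exactly 1 for a perfect circle and grows as the outline becomes elongated or ragged. A worked sketch (the function name is illustrative):

```python
import math

def second_index(perimeter, area):
    """Circumferential length squared divided by (4 * pi * area):
    exactly 1 for a perfect circle, larger for any other outline."""
    return perimeter ** 2 / (4 * math.pi * area)

r = 3.0
circle = second_index(2 * math.pi * r, math.pi * r ** 2)   # -> 1.0
square = second_index(4.0, 1.0)   # unit square: 16 / (4*pi) ≈ 1.27
```

Together with the fitted-ellipse parameters of the first index, this gives a scale-free score of how far each columnar pattern deviates from the ideal circular cross-section.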
-
Patent number: 10664961
Abstract: The present invention provides a technology that separates a low-contrast-ratio image into sublayer images, classifies each sublayer image into several categories in accordance with the characteristics of each sublayer image, and learns a transformation matrix representing a relationship between the low-contrast-ratio image and a high-contrast-ratio image for each category. In addition, the present invention provides a technology that separates an input low-contrast-ratio image into sublayer images, selects a category corresponding to each sublayer image, and applies a learned transformation matrix to generate a high-contrast-ratio image.
Type: Grant
Filed: June 27, 2017
Date of Patent: May 26, 2020
Assignees: SILICON WORKS CO., LTD., Korea Advanced Institute of Science and Technology
Inventors: Yong Woo Kim, Sang Yeon Kim, Woo Suk Ha, Mun Churl Kim, Dae Eun Kim