Patents Examined by Kathleen Y Dulaney
  • Patent number: 10970823
    Abstract: A system for video anomaly detection partitions the input video into a set of input spatio-temporal regions according to parameters of the spatio-temporal regions of the training video. These parameters indicate the number of regions in each video frame, defining the spatial dimension of each spatio-temporal region, and the number of video frames, defining the temporal dimension of each spatio-temporal region. The system then determines blurred, thresholded difference images for each of the input spatio-temporal regions to produce a set of blurred, thresholded difference images.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: April 6, 2021
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventor: Michael Jones
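The core image operation named in this abstract, a blurred, thresholded difference image, can be sketched as follows. This is an illustrative stand-in, not the patented implementation: frames are plain 2D lists of grayscale values, and the 3x3 box blur and threshold value are arbitrary choices for demonstration.

```python
def frame_difference(frame_a, frame_b):
    """Absolute per-pixel difference of two equally sized frames."""
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

def box_blur(img):
    """3x3 box blur with edge clamping."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9.0
    return out

def blurred_thresholded_difference(frame_a, frame_b, threshold=30):
    """Binary motion map: blur the difference image, then threshold it."""
    blurred = box_blur(frame_difference(frame_a, frame_b))
    return [[1 if v > threshold else 0 for v in row] for row in blurred]

prev = [[0] * 4 for _ in range(4)]
curr = [[0, 0, 0, 0], [0, 100, 100, 0], [0, 100, 100, 0], [0, 0, 0, 0]]
motion = blurred_thresholded_difference(prev, curr)
```

Blurring before thresholding suppresses isolated noisy pixels, so only spatially coherent motion survives into the binary map.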
  • Patent number: 10957045
    Abstract: Optimizations are provided for segmenting tissue objects included in an ultrasound image. Initially, raw pixel data is received. Here, each pixel corresponds to ultrasound information. This raw pixel data is processed through a first fully convolutional network to generate a first segmentation label map. This first map includes a first set of objects that have been segmented into a coarse segmentation class. Then, this first map is processed through a second fully convolutional network to generate a second segmentation label map. This second map is processed using the raw pixel data as a base reference. Further, this second map includes a second set of objects that have been segmented into a fine segmentation class. Then, a contour optimization algorithm is applied to at least one of the second set of objects in order to refine that object's contour boundary. Subsequently, that object is identified as corresponding to a lymph node.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: March 23, 2021
    Assignees: University of Notre Dame du Lac, Hong Kong Polytechnic University, Chinese University of Hong Kong
    Inventors: Danny Ziyi Chen, Yizhe Zhang, Lin Yang, Michael Tin-Cheung Ying, Anil Tejbhan Ahuja
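The coarse-to-fine cascade in this abstract can be sketched conceptually. The two "networks" below are simple thresholding stand-ins for the fully convolutional networks in the patent; the pixel values, class labels, and thresholds are illustrative assumptions.

```python
def coarse_net(pixels):
    """Stage 1: label each pixel as tissue (1) or background (0)."""
    return [[1 if p > 50 else 0 for p in row] for row in pixels]

def fine_net(pixels, coarse_map):
    """Stage 2: refine the stage-1 labels using the raw pixels as a
    base reference, splitting tissue into fine classes (2 = bright nodule)."""
    return [[2 if c == 1 and p > 120 else c
             for p, c in zip(prow, crow)]
            for prow, crow in zip(pixels, coarse_map)]

raw = [[10, 60, 200], [40, 130, 90]]
coarse = coarse_net(raw)        # coarse segmentation label map
fine = fine_net(raw, coarse)    # fine segmentation label map
```

The key structural point is that the second stage consumes both the raw pixel data and the first-stage label map, which is what lets it refine rather than re-segment from scratch.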
  • Patent number: 10922580
    Abstract: A method includes receiving, by a device, a first image of a scene and a second image of at least a portion of the scene. The method includes identifying a first plurality of features from the first image and comparing the first plurality of features to a second plurality of features from the second image to identify a common feature. The method includes determining a particular subset of pixels that corresponds to the common feature, the particular subset of pixels corresponding to a first subset of pixels of the first image and a second subset of pixels of the second image. The method also includes generating a first image quality estimate of the first image based on a comparison of a first degree of variation within the first subset of pixels and a second degree of variation within the second subset of pixels.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: February 16, 2021
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Amy Ruth Reibman, Zhu Liu, Lee Begeja, Bernard S. Renger, David Crawford Gibbon, Behzad Shahraray, Raghuraman Gopalan, Eric Zavesky
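The quality-estimation idea above can be illustrated with a minimal sketch: compare the pixel variance inside the patch of each image that covers the same matched feature. The ratio used here is one plausible way to turn two variances into a quality score; the patent does not commit to this exact formula.

```python
def variance(pixels):
    """Population variance of a flat list of pixel values."""
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def quality_estimate(patch_a, patch_b):
    """Relative sharpness of image A's patch vs image B's patch.

    A blurrier capture of the same feature has lower local variance,
    so a ratio well below 1 suggests image A is the lower-quality one."""
    return variance(patch_a) / variance(patch_b)

sharp = [10, 200, 15, 190, 12, 205]    # high-contrast patch
blurry = [95, 110, 100, 108, 98, 105]  # same feature, low contrast
score = quality_estimate(blurry, sharp)
```

Comparing variation only over the pixels both images agree correspond to the same feature avoids penalizing an image for capturing a different part of the scene.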
  • Patent number: 10922393
    Abstract: Systems and methods for iris authentication are disclosed. In one aspect, a deep neural network (DNN) with a triplet network architecture can be trained to learn an embedding (e.g., another DNN) that maps from the higher dimensional eye image space to a lower dimensional embedding space. The DNN can be trained with segmented iris images or images of the periocular region of the eye (including the eye and portions around the eye such as eyelids, eyebrows, eyelashes, and skin surrounding the eye). With the triplet network architecture, an embedding space representation (ESR) of a person's eye image can be closer to the ESRs of the person's other eye images than it is to the ESR of another person's eye image. In another aspect, to authenticate a user as an authorized user, an ESR of the user's eye image can be sufficiently close to an ESR of the authorized user's eye image.
    Type: Grant
    Filed: April 26, 2017
    Date of Patent: February 16, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Alexey Spizhevoy, Adrian Kaehler, Gary Bradski
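The triplet objective behind the architecture described above can be sketched in a few lines: the embedding of an eye image (anchor) should be closer to another image of the same eye (positive) than to a different person's eye (negative), by at least some margin. The embeddings below are precomputed toy vectors, not DNN outputs, and the margin is an arbitrary illustration.

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Zero when the positive is closer than the negative by >= margin."""
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)

anchor   = [0.1, 0.9]
positive = [0.2, 0.8]   # same identity: nearby in embedding space
negative = [0.9, 0.1]   # different identity: far away
loss = triplet_loss(anchor, positive, negative)   # 0.0: triplet satisfied
```

Minimizing this loss over many triplets is what yields the property the abstract states: a person's eye images cluster together in the embedding space, enabling a distance threshold to authenticate users.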
  • Patent number: 10909723
    Abstract: A hyperspectral imaging spectrophotometer and system, with calibration, data collection, and image processing methods designed to match human visual perception and color matching of complex colored objects.
    Type: Grant
    Filed: June 13, 2018
    Date of Patent: February 2, 2021
    Assignee: X-RITE, INCORPORATED
    Inventors: Christian Boes, Thomas Richardson, Richard John Van Andel, David Bosscher, David Salyer
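Matching human visual perception from hyperspectral data involves colorimetric integration of the kind sketched below: a measured spectral reflectance is integrated against CIE-style color matching functions to get tristimulus values. The five-sample spectra, matching-function values, and wavelength step are made up for brevity; a real instrument uses the standard CIE tables at fine wavelength increments.

```python
def tristimulus(reflectance, illuminant, cmf, step):
    """X (or Y, or Z) = sum over wavelengths of S(l) * R(l) * cmf(l) * dl."""
    return sum(s * r * c
               for s, r, c in zip(illuminant, reflectance, cmf)) * step

refl = [0.5] * 5                      # flat 50% reflectance sample
illum = [1.0] * 5                     # idealized flat illuminant
ybar = [0.0, 1.0, 2.0, 1.0, 0.0]      # toy luminance matching function
Y = tristimulus(refl, illum, ybar, 10.0)
```

Repeating this with the x-bar and z-bar functions gives the full XYZ triple, from which perceptual color differences between complex colored objects can be computed.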
  • Patent number: 10878269
    Abstract: Embodiments of the present disclosure pertain to extracting data corresponding to particular data types using neural networks. In one embodiment, a method includes receiving an image in a backend system, sending the image to an optical character recognition (OCR) component, and in accordance therewith, receiving a plurality of characters recognized in the image, sequentially processing the characters with a recurrent neural network to produce a plurality of outputs for each character, sequentially processing the plurality of outputs for each character with a masking neural network layer, and in accordance therewith, generating a first plurality of probabilities, wherein each probability corresponds to a particular character in the plurality of characters, selecting a second plurality of adjacent probabilities from the first plurality of probabilities that are above a threshold, and translating the second plurality of adjacent probabilities into output characters.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: December 29, 2020
    Assignee: SAP SE
    Inventor: Michael Stark
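The final selection step in this abstract can be sketched directly: after the masking layer assigns each recognized character a probability of belonging to the target field, keep a run of adjacent characters whose probabilities all exceed the threshold. Returning the longest such run is an assumption of this sketch; the abstract only requires adjacency.

```python
def extract_field(chars, probs, threshold=0.5):
    """Longest contiguous substring whose per-character probability
    stays above the threshold."""
    best, current = "", ""
    for ch, p in zip(chars, probs):
        if p > threshold:
            current += ch
            if len(current) > len(best):
                best = current
        else:
            current = ""
    return best

chars = "Invoice 2048 total"
probs = [0.1] * 8 + [0.9, 0.8, 0.95, 0.9] + [0.2] * 6
field = extract_field(chars, probs)
```

Because the probabilities come from a recurrent network that saw the characters in sequence, contiguous high-probability runs tend to align with complete fields rather than scattered matches.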
  • Patent number: 10878588
    Abstract: Implementations relate to detecting/replacing transient obstructions from high-elevation digital images. A digital image of a geographic area includes pixels that align spatially with respective geographic units of the geographic area. Analysis of the digital image may uncover obscured pixel(s) that align spatially with geographic unit(s) of the geographic area that are obscured by transient obstruction(s). Domain fingerprint(s) of the obscured geographic unit(s) may be determined across pixels of a corpus of digital images that align spatially with the one or more obscured geographic units. Unobscured pixel(s) of the same/different digital image may be identified that align spatially with unobscured geographic unit(s) of the geographic area. The unobscured geographic unit(s) also may have domain fingerprint(s) that match the domain fingerprint(s) of the obscured geographic unit(s).
    Type: Grant
    Filed: June 22, 2018
    Date of Patent: December 29, 2020
    Assignee: X DEVELOPMENT LLC
    Inventors: Jie Yang, Cheng-en Guo, Elliott Grant
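The replacement idea above can be sketched with toy data: for a cloud-obscured geographic unit, find an unobscured unit whose domain fingerprint matches and borrow its current pixel. Fingerprints as rounded means of a pixel history are an illustrative stand-in for the fingerprints computed across the corpus in the patent.

```python
def fingerprint(pixel_history):
    """Coarse signature of how a geographic unit looks over time."""
    return round(sum(pixel_history) / len(pixel_history), 1)

def inpaint_obscured(obscured_history, candidates):
    """Borrow the current pixel of the best fingerprint match."""
    target = fingerprint(obscured_history)
    matches = [c for c in candidates if fingerprint(c["history"]) == target]
    return matches[0]["current_pixel"] if matches else None

# A unit hidden by cloud today, whose past pixels identify its terrain type.
obscured = [0.30, 0.32, 0.28]
candidates = [
    {"history": [0.31, 0.29, 0.30], "current_pixel": 0.33},  # same terrain
    {"history": [0.80, 0.82, 0.81], "current_pixel": 0.79},  # different terrain
]
filled = inpaint_obscured(obscured, candidates)
```

The fingerprint match is what justifies the substitution: two units that behave identically across the image corpus are likely to look alike in the current frame as well.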
  • Patent number: 10854072
    Abstract: An automatic calibration system for a traffic system includes at least one position determining device, at least one image capturing device, a matching and tagging module, an image analysis module, and a calibration module. The position determining device detects a vehicle in violation of traffic rules and activates the image capturing device to capture images of the vehicle. The vehicle is matched and tagged in the images according to vehicle-related information detected by the position determining device by the matching and tagging module. The image analysis module analyzes a plurality of images selected from the images to obtain an analysis result. The analysis result is compared with the vehicle-related information by a processor. If the analysis result is different from the vehicle-related information, the position determining device is calibrated by the calibration module. If the analysis result is the same as the vehicle-related information, a calibration is not performed.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: December 1, 2020
    Inventor: Akif Ekin
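The decision logic in this abstract can be sketched directly: compare what the image analysis says about the vehicle with what the position determining device reported, and trigger calibration only on a mismatch. The record fields below are illustrative, not from the patent.

```python
def needs_calibration(detected, analyzed):
    """True when the image analysis contradicts the sensor reading."""
    return any(detected[k] != analyzed.get(k) for k in detected)

detected = {"plate": "34 AB 123", "speed_kmh": 92}   # from position device
analyzed = {"plate": "34 AB 123", "speed_kmh": 87}   # from image analysis
calibrate = needs_calibration(detected, analyzed)    # speeds disagree
```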
  • Patent number: 10846819
    Abstract: The present invention includes an apparatus and method for determining time-varying stress experienced by a structure, comprising: obtaining images that include the structure; segmenting the second and any subsequent images to include the "static" portions identified from the first image; computing with a processor the affine transformations between the first, second, and optionally subsequent images in the sequence; estimating a deformation (i.e., translation and rotation) undergone by the structure; and converting the deformation to an estimate of the structural stress by using one or more scaling functions to generate the time-varying stress experienced by the structure.
    Type: Grant
    Filed: April 12, 2018
    Date of Patent: November 24, 2020
    Assignee: Southern Methodist University
    Inventors: Dinesh Rajan, Brett Story, Joseph Camp
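The last two steps of this abstract can be sketched in simplified form: estimate a deformation from matched point pairs between frames (translation only here, for brevity), then map it to stress with a scaling function. The linear scaling below is a placeholder; the actual scaling functions would be calibrated to the structure.

```python
import math

def estimate_translation(points_before, points_after):
    """Mean displacement of tracked points between two frames."""
    n = len(points_before)
    dx = sum(b[0] - a[0] for a, b in zip(points_before, points_after)) / n
    dy = sum(b[1] - a[1] for a, b in zip(points_before, points_after)) / n
    return dx, dy

def stress_from_deformation(dx, dy, scale=3.5):
    """Placeholder scaling function: stress proportional to the
    magnitude of the estimated displacement."""
    return scale * math.hypot(dx, dy)

before = [(0.0, 0.0), (10.0, 0.0), (0.0, 5.0)]
after  = [(0.6, 0.8), (10.6, 0.8), (0.6, 5.8)]
dx, dy = estimate_translation(before, after)
stress = stress_from_deformation(dx, dy)
```

Restricting point tracking to the "static" portions segmented from the first image, as the abstract describes, keeps moving foreground objects from corrupting the deformation estimate.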
  • Patent number: 10824920
    Abstract: The present disclosure provides a method and apparatus for fine-grained video recognition, a computer device, and a storage medium. The method comprises: sampling the video to be recognized to obtain n frames of images, n being a positive integer greater than one; obtaining a feature map of each frame of image and determining a summary feature from the respective feature maps; and determining a fine-grained recognition result for a target in the video according to the summary feature. The solution of the present disclosure may be applied to enhance the accuracy of the recognition result.
    Type: Grant
    Filed: April 18, 2018
    Date of Patent: November 3, 2020
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Xiao Tan, Feng Zhou, Hao Sun
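The sampling and summary-feature steps above can be sketched with toy data. Average pooling is one common aggregation choice; the abstract leaves the method open, and the per-frame features below are toy vectors rather than CNN feature maps.

```python
def uniform_sample(frames, n):
    """Pick n frames evenly spaced across the clip."""
    step = len(frames) / n
    return [frames[int(i * step)] for i in range(n)]

def summary_feature(per_frame_features):
    """Element-wise mean of the per-frame feature vectors."""
    n = len(per_frame_features)
    return [sum(col) / n for col in zip(*per_frame_features)]

clip = list(range(100))              # 100 frame indices
sampled = uniform_sample(clip, 4)
features = [[1.0, 2.0], [3.0, 2.0], [1.0, 4.0], [3.0, 4.0]]
summary = summary_feature(features)  # one clip-level feature vector
```

Pooling per-frame features into one summary vector lets the fine-grained classifier see evidence from the whole clip instead of committing to a single frame.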
  • Patent number: 10813734
    Abstract: Feedback data useful in prosthodontic procedures associated with the intra oral cavity is provided. First, a 3D numerical model of the target zone in the intra oral cavity is provided, and this is manipulated so as to extract particular data that may be useful in a particular procedure, for example data relating to the finish line or to the shape and size of a preparation. The relationship between this data and the procedure is then determined, for example the clearance between the preparation and the intended crown. Feedback data, indicative of this relationship, is then generated, for example whether the preparation geometry is adequate for the particular type of prosthesis.
    Type: Grant
    Filed: April 26, 2019
    Date of Patent: October 27, 2020
    Assignee: ALIGN TECHNOLOGY, INC.
    Inventors: Avi Kopelman, Eldad Taub
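One feedback computation of the kind this abstract describes, the clearance check, can be sketched as follows. The 0.5 mm minimum is an illustrative figure assumed for this sketch, not a clinical recommendation from the patent.

```python
def clearance_feedback(clearances_mm, required_mm=0.5):
    """Smallest clearance measured between the preparation and the
    intended crown surface, and whether it leaves enough room."""
    worst = min(clearances_mm)
    return worst, worst >= required_mm

# Clearances sampled at several points on the 3D model (hypothetical values).
worst, adequate = clearance_feedback([1.2, 0.8, 0.4, 0.9])
```

Reporting the worst-case clearance, rather than an average, is what makes the feedback actionable: a single tight spot is enough to make a preparation inadequate for the prosthesis.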
  • Patent number: 10783393
    Abstract: A method, computer readable medium, and system are disclosed for sequential multi-tasking to generate coordinates of landmarks within images. The landmark locations may be identified on an image of a human face and used for emotion recognition, face identity verification, eye gaze tracking, pose estimation, etc. A neural network model processes input image data to generate pixel-level likelihood estimates for landmarks in the input image data and a soft-argmax function computes predicted coordinates of each landmark based on the pixel-level likelihood estimates.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: September 22, 2020
    Assignee: NVIDIA Corporation
    Inventors: Pavlo Molchanov, Stephen Walter Tyree, Jan Kautz, Sina Honari
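The soft-argmax step named in this abstract can be sketched concisely: turn a pixel-level likelihood map into a single (x, y) landmark coordinate by taking a probability-weighted average of pixel positions. The 3x3 heatmap is a toy example; real models apply this to network output scores, often with a softmax temperature parameter.

```python
import math

def soft_argmax(heatmap):
    """Expected (x, y) under the softmax of a 2D score map."""
    flat = [v for row in heatmap for v in row]
    m = max(flat)
    exps = [math.exp(v - m) for v in flat]   # numerically stable softmax
    total = sum(exps)
    w = len(heatmap[0])
    x = sum(e * (i % w) for i, e in enumerate(exps)) / total
    y = sum(e * (i // w) for i, e in enumerate(exps)) / total
    return x, y

# A symmetric peak at the center should land on (1.0, 1.0).
heatmap = [[0.0, 1.0, 0.0],
           [1.0, 5.0, 1.0],
           [0.0, 1.0, 0.0]]
x, y = soft_argmax(heatmap)
```

Unlike a hard argmax, this expectation is differentiable, which is what allows the coordinate prediction to be trained end-to-end with the likelihood estimates.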
  • Patent number: 10783632
    Abstract: Machine learning systems and methods are disclosed for prediction of wound healing, such as for diabetic foot ulcers or other wounds, and for assessment implementations such as segmentation of images into wound regions and non-wound regions. Systems for assessing or predicting wound healing can include a light detection element configured to collect light of at least a first wavelength reflected from a tissue region including a wound, and one or more processors configured to generate an image based on a signal from the light detection element having pixels depicting the tissue region, determine reflectance intensity values for at least a subset of the pixels, determine one or more quantitative features of the subset of the plurality of pixels based on the reflectance intensity values, and generate a predicted or assessed healing parameter associated with the wound over a predetermined time interval.
    Type: Grant
    Filed: January 9, 2020
    Date of Patent: September 22, 2020
    Assignee: SPECTRAL MD, INC.
    Inventors: Wensheng Fan, John Michael DiMaio, Jeffrey E. Thatcher, Peiran Quan, Faliu Yi, Kevin Plant, Ronald Baxter, Brian McCall, Zhicun Gao, Jason Dwight
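The feature step in this abstract can be sketched with toy data: from the pixels depicting the wound, compute reflectance-intensity statistics of the kind that could feed a healing-prediction model. The mean/standard-deviation pair is an illustrative assumption; the patent's quantitative features and trained predictor are not specified here.

```python
def reflectance_features(wound_pixels):
    """Mean and standard deviation of reflectance intensity over the
    subset of pixels depicting the wound."""
    n = len(wound_pixels)
    mean = sum(wound_pixels) / n
    var = sum((p - mean) ** 2 for p in wound_pixels) / n
    return mean, var ** 0.5

# Hypothetical reflectance intensities at the collected wavelength.
mean, std = reflectance_features([0.42, 0.40, 0.44, 0.38, 0.41])
```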
  • Patent number: 10783622
    Abstract: The present disclosure relates to training and utilizing an image exposure transformation network to generate a long-exposure image from a single short-exposure image (e.g., still image). In various embodiments, the image exposure transformation network is trained using adversarial learning, long-exposure ground truth images, and a multi-term loss function. In some embodiments, the image exposure transformation network includes an optical flow prediction network and/or an appearance guided attention network. Trained embodiments of the image exposure transformation network generate realistic long-exposure images from single short-exposure images without additional information.
    Type: Grant
    Filed: April 25, 2018
    Date of Patent: September 22, 2020
    Assignee: ADOBE INC.
    Inventors: Yilin Wang, Zhe Lin, Zhaowen Wang, Xin Lu, Xiaohui Shen, Chih-Yao Hsieh
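The multi-term loss function mentioned in this abstract has the general shape sketched below: a weighted sum of separately computed loss components. The component names, values, and weights are placeholders; the patent's actual terms and weights are not reproduced here.

```python
def multi_term_loss(components, weights):
    """Weighted sum of individually computed loss terms."""
    return sum(w * components[name] for name, w in weights.items())

# Hypothetical per-term values for one training batch.
components = {"pixel_l1": 0.30, "adversarial": 0.60, "perceptual": 0.10}
weights    = {"pixel_l1": 1.0,  "adversarial": 0.01, "perceptual": 0.5}
total = multi_term_loss(components, weights)
```

Combining a reconstruction term with an adversarial term is the standard way to keep generated long-exposure images both faithful to the input and realistic-looking.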
  • Patent number: 10783394
    Abstract: A method, computer readable medium, and system are disclosed to generate coordinates of landmarks within images. The landmark locations may be identified on an image of a human face and used for emotion recognition, face identity verification, eye gaze tracking, pose estimation, etc. A transform is applied to input image data to produce transformed input image data. The transform is also applied to predicted coordinates for landmarks of the input image data to produce transformed predicted coordinates. A neural network model processes the transformed input image data to generate additional landmarks of the transformed input image data and additional predicted coordinates for each one of the additional landmarks. Parameters of the neural network model are updated to reduce differences between the transformed predicted coordinates and the additional predicted coordinates.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: September 22, 2020
    Assignee: NVIDIA Corporation
    Inventors: Pavlo Molchanov, Stephen Walter Tyree, Jan Kautz, Sina Honari
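The consistency check this abstract describes can be sketched end to end: apply the same transform to the image and to the predicted landmark coordinates, re-run the predictor on the transformed image, and measure the disagreement that training would minimize. The "predictor" below is a brightest-pixel stand-in, not a neural network, so the two paths agree exactly.

```python
def predict_landmark(image):
    """Toy landmark detector: coordinates of the brightest pixel."""
    h, w = len(image), len(image[0])
    y, x = max(((r, c) for r in range(h) for c in range(w)),
               key=lambda rc: image[rc[0]][rc[1]])
    return x, y

def hflip_image(image):
    return [row[::-1] for row in image]

def hflip_coords(x, y, width):
    return width - 1 - x, y

image = [[0, 0, 9],
         [0, 1, 0]]
x, y = predict_landmark(image)
tx, ty = hflip_coords(x, y, len(image[0]))     # transform the prediction
px, py = predict_landmark(hflip_image(image))  # predict on transformed image
consistency = abs(tx - px) + abs(ty - py)      # zero when equivariant
```

Updating the network to shrink this disagreement is a form of self-supervision: it needs no extra landmark labels, only the transform itself.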
  • Patent number: 10769485
    Abstract: A framebuffer-less system of convolutional neural network (CNN) includes a region of interest (ROI) unit that extracts features, according to which a region of interest in an input image frame is generated; a convolutional neural network (CNN) unit that processes the region of interest of the input image frame to detect an object; and a tracking unit that compares the features extracted at different times, according to which the CNN unit selectively processes the input image frame.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: September 8, 2020
    Assignee: Himax Technologies Limited
    Inventor: Der-Wei Yang
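The frame-gating idea in this abstract can be sketched simply: skip the expensive CNN pass when the cheap per-frame features have barely changed since the last processed frame. The feature representation and change metric below are trivial stand-ins chosen for illustration.

```python
def feature_change(prev_features, features):
    """L1 distance between two extracted feature vectors."""
    return sum(abs(a - b) for a, b in zip(prev_features, features))

def should_run_cnn(prev_features, features, change_threshold=1.0):
    """Gate the CNN: process only when the scene changed enough."""
    return prev_features is None or \
        feature_change(prev_features, features) > change_threshold

first = should_run_cnn(None, [0.5, 0.5])            # first frame: run
static = should_run_cnn([0.5, 0.5], [0.6, 0.4])     # static scene: skip
changed = should_run_cnn([0.5, 0.5], [3.0, 0.1])    # scene changed: run
```

Gating on feature change is what lets the system operate without a framebuffer: frames that would not alter the detection result are never stored or fully processed.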
  • Patent number: 10762385
    Abstract: In a computer-implemented method and associated tangible non-transitory computer-readable medium, an image of a damaged vehicle may be analyzed to generate a repair estimate. A dataset populated with digital images of damaged vehicles and associated claim data may be used to train a deep learning neural network to learn damaged vehicle image characteristics that are predictive of claim data characteristics, and a predictive similarity model may be generated. Using the predictive similarity model, one or more similarity scores may be generated for a digital image of a newly damaged vehicle, indicating its similarity to one or more digital images of damaged vehicles with known damage level, repair time, and/or repair cost. A repair estimate may be generated for the newly damaged vehicle based on the claim data associated with images that are most similar to the image of the newly damaged vehicle.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: September 1, 2020
    Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
    Inventors: He Yang, Bradley A. Sliz, Carlee A. Clymer, Jennifer Malia Andrus
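The retrieval step of this abstract can be sketched with toy data: score a new damage image's feature vector against prior claims, then estimate repair cost from the most similar ones. The features and the inverse-distance similarity below are illustrative stand-ins for the deep-learning similarity model in the patent.

```python
def similarity(u, v):
    """Inverse-distance similarity in (0, 1]."""
    dist = sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    return 1.0 / (1.0 + dist)

def estimate_repair_cost(new_features, claims, k=2):
    """Average the repair cost of the k most similar prior claims."""
    ranked = sorted(claims,
                    key=lambda c: similarity(new_features, c["features"]),
                    reverse=True)
    top = ranked[:k]
    return sum(c["cost"] for c in top) / len(top)

claims = [
    {"features": [0.9, 0.1], "cost": 4000.0},  # severe front damage
    {"features": [0.8, 0.2], "cost": 3600.0},
    {"features": [0.1, 0.9], "cost": 500.0},   # light scratch
]
estimate = estimate_repair_cost([0.85, 0.15], claims)
```

Basing the estimate on retrieved similar claims, rather than regressing cost directly, keeps every prediction traceable to concrete prior cases with known outcomes.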
  • Patent number: 10762603
    Abstract: Systems and methods for image noise reduction are provided. The methods may include obtaining first image data, determining a restriction or a gradient of the first image data, determining a regularization parameter for the first image data based on the restriction or the gradient, generating second image data based on the regularization parameter and the first image data, and generating a regularized image based on the second image data.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: September 1, 2020
    Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
    Inventors: Stanislav Zabic, Zhicong Yu
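The adaptive-regularization idea in this abstract can be illustrated in 1D: derive the regularization weight from the local gradient, so smoothing is strong in flat regions and weak at edges, then apply one smoothing pass. The 1D signal and the inverse-gradient weighting are simplifying assumptions of this sketch.

```python
def regularization_weights(signal, strength=1.0):
    """Per-sample weight that shrinks where the central gradient is large."""
    w = [0.0] * len(signal)
    for i in range(1, len(signal) - 1):
        grad = abs(signal[i + 1] - signal[i - 1]) / 2.0
        w[i] = strength / (1.0 + grad)
    return w

def regularize(signal):
    """One edge-aware smoothing pass using the adaptive weights."""
    w = regularization_weights(signal)
    out = list(signal)
    for i in range(1, len(signal) - 1):
        local_mean = (signal[i - 1] + signal[i + 1]) / 2.0
        out[i] = (1.0 - w[i]) * signal[i] + w[i] * local_mean
    return out

noisy = [0.0, 0.2, 0.0, 0.2, 10.0, 10.2, 10.0]
smoothed = regularize(noisy)
```

The weight collapses near the jump between the two plateaus, so noise in the flat regions is averaged away while the edge itself is largely preserved.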
  • Patent number: 10755133
    Abstract: A system and method for identifying line Mura defects on a display. The system is configured to generate a filtered image by preprocessing an input image of a display using at least one filter. The system then identifies line Mura candidates by converting the filtered image to a binary image, counting line components along a slope in the binary image, and marking a potential candidate location when the line components along the slope exceed a line threshold. Image patches are then generated with the candidate locations at the center of each image patch. The image patches are then classified using a machine learning classifier.
    Type: Grant
    Filed: April 18, 2018
    Date of Patent: August 25, 2020
    Assignee: Samsung Display Co., Ltd.
    Inventor: Janghwan Lee
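The candidate-marking step of this abstract can be sketched on a binarized image: count the "on" pixels along a line of a given slope from each starting column, and flag a candidate when the count exceeds the line threshold. Slope handling here is the simplest integer-step walk; the filter preprocessing and the patch classifier are omitted.

```python
def count_line_components(binary, start_col, slope):
    """Set pixels encountered walking down the image, shifting the
    column by `slope` per row."""
    count, col = 0, start_col
    for row in binary:
        if 0 <= col < len(row) and row[col] == 1:
            count += 1
        col += slope
    return count

def mark_candidates(binary, slope, line_threshold):
    """Start columns whose slanted line has enough set pixels."""
    width = len(binary[0])
    return [c for c in range(width)
            if count_line_components(binary, c, slope) > line_threshold]

binary = [[0, 1, 0, 0],
          [0, 1, 0, 0],
          [0, 1, 0, 1],
          [0, 1, 0, 0]]
candidates = mark_candidates(binary, slope=0, line_threshold=2)
```

Centering an image patch on each flagged location, as the abstract describes, then lets the machine learning classifier distinguish true line Mura from isolated bright pixels that happened to align.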
  • Patent number: 10754139
    Abstract: A three-dimensional position information acquiring method includes acquiring an image of a first optical image; thereafter acquiring an image of a second optical image; and performing a computation using image data of the first and second optical images. Acquisition of the image of the first optical image is based on light beams having passed through a first area. Acquisition of the image of the second optical image is based on light beams having passed through a second area. The positions of the centers of the first and second areas are both away from the optical axis of an optical system in a plane perpendicular to said optical axis. The first and second areas respectively include at least portions that do not overlap with each other. Three-dimensional position information about an observed object is acquired by the computation. The first and second areas are formed at rotationally symmetric positions.
    Type: Grant
    Filed: May 11, 2018
    Date of Patent: August 25, 2020
    Assignee: OLYMPUS CORPORATION
    Inventor: Hiroshi Ishiwata
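The reason two rotationally symmetric off-axis pupil areas yield 3D position information is that the two optical images behave like a stereo pair with a small baseline: a feature's apparent shift (disparity) between the first and second images encodes its axial position via classic triangulation. The focal length and baseline values below are arbitrary illustrations, not parameters from the patent.

```python
def depth_from_disparity(focal_length_mm, baseline_mm, disparity_mm):
    """Triangulation: z = f * b / d, so nearer objects shift more
    between the two aperture views."""
    return focal_length_mm * baseline_mm / disparity_mm

z = depth_from_disparity(focal_length_mm=20.0, baseline_mm=2.0,
                         disparity_mm=0.05)
```

Because the two areas are rotationally symmetric about the optical axis, the disparity direction is known in advance, which simplifies the correspondence computation between the two acquired images.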