Patents Examined by Jonathan S Lee
  • Patent number: 11372069
    Abstract: A method for target identification for a deep brain stimulation procedure includes acquiring a set of magnetic resonance fingerprinting (MRF) data for a region of interest in a subject using a MRI system, comparing the set of MRF data to an MRF dictionary to determine at least one parameter for the MRF data for the region of interest, generating a quantitative map of the at least one parameter, segmenting a target area of the region of interest based on the MRF data, generating at least one trajectory for placement of at least one electrode in the target area of the region of interest based on the segmentation of the target area, and displaying the quantitative map and the at least one trajectory on a display.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: June 28, 2022
    Assignee: Case Western Reserve University
    Inventors: Mark A. Griswold, Cameron McIntyre
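The matching step in this abstract pairs each voxel's measured signal evolution with the closest dictionary entry. The patent does not spell out the matching rule; the Python sketch below uses normalized inner-product (correlation) matching, a common choice in the MRF literature, and the array shapes, parameter ranges, and the `match_fingerprints` helper are illustrative assumptions.

```python
import numpy as np

def match_fingerprints(signals, dictionary, params):
    """Match measured signal evolutions to a precomputed MRF dictionary.

    signals:    (n_voxels, n_timepoints) measured evolutions
    dictionary: (n_entries, n_timepoints) simulated evolutions
    params:     (n_entries, n_params) tissue parameters (e.g., T1, T2) per entry
    Returns an (n_voxels, n_params) array of matched parameters (a flattened map).
    """
    # Normalize so the inner product acts as a correlation score.
    sig = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    dic = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    scores = np.abs(sig @ dic.conj().T)        # (n_voxels, n_entries)
    best = np.argmax(scores, axis=1)           # best-matching dictionary entry per voxel
    return params[best]

# Toy usage with random data standing in for acquired MRF time courses.
rng = np.random.default_rng(0)
dictionary = rng.standard_normal((500, 1000))
params = rng.uniform([200.0, 20.0], [2000.0, 300.0], size=(500, 2))   # T1, T2 in ms
signals = dictionary[rng.integers(0, 500, size=64)] + 0.01 * rng.standard_normal((64, 1000))
quantitative_map = match_fingerprints(signals, dictionary, params)
```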
  • Patent number: 11354900
    Abstract: Techniques are described that classify content, and control whether and how the content is shared based on the classification(s). In some examples, video content may be classified based on sequential image frames of the video, and time between the sequential image frames. Audio content may be classified based on combining classifications of multiple sound events in the audio signal. The classifications may be used to control how the content is shared, such as by preventing offensive content from being shared and/or outputting recommendations or search results based on the classifications.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: June 7, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Shangwen Li, Jiantao Wu
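The audio classification described above combines classifications of multiple sound events into a content-level label. The combination rule is not given in the abstract; the sketch below assumes a duration-weighted average of per-event class probabilities, which is only one plausible reading.

```python
from collections import defaultdict

def combine_sound_events(events):
    """events: list of (class_probabilities: dict[str, float], duration_seconds: float)."""
    totals, total_duration = defaultdict(float), 0.0
    for probabilities, duration in events:
        total_duration += duration
        for label, p in probabilities.items():
            totals[label] += p * duration
    # Duration-weighted average probability per class; the highest-scoring class wins.
    scores = {label: s / total_duration for label, s in totals.items()}
    return max(scores, key=scores.get), scores

label, scores = combine_sound_events([
    ({"music": 0.7, "speech": 0.3}, 4.0),   # a long, mostly musical event
    ({"music": 0.2, "speech": 0.8}, 1.0),   # a short spoken event
])
print(label)   # -> "music"
```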
  • Patent number: 11354785
    Abstract: An image processing method and device, storage medium and electronic device for deblurring an image. The method includes obtaining an image processing instruction including an instruction to deblur a target blurred image; obtaining a target model by training an original model based on a plurality of sample images of different scales, one of the plurality of sample images being a blurred image composed of a plurality of clear images, and the obtained target model being used for deblurring the blurred image to obtain a clear image; based on the image processing instruction, using the target model to deblur the target blurred image to obtain a target clear image; and outputting the target clear image.
    Type: Grant
    Filed: July 21, 2020
    Date of Patent: June 7, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LTD
    Inventors: Xin Tao, Hong Yun Gao, Xiao Yong Shen, Yu Wing Tai, Jia Ya Jia
  • Patent number: 11348219
    Abstract: A process for the automatic evaluation of the quality of digital photographs includes software programmed to perform the following steps: converting the photograph into greyscale and calculating the intensity diagram of the converted photograph; identifying a predetermined initial intensity interval and final intensity interval of the diagram; if the initial interval and/or the final interval contains a total percentage of the photograph's pixels greater than (or greater than or equal to) a predetermined threshold, evaluating the contrast; and if the contrast of the photograph converted to greyscale is less than (or less than or equal to) a predetermined threshold, rejecting the photograph.
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: May 31, 2022
    Assignee: PhotoSi S.p.A. Unipersonale
    Inventor: Mainetti Andrea
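A minimal sketch of the described evaluation, assuming an 8-bit greyscale range, arbitrary interval and threshold values, and standard deviation as the contrast measure (the patent leaves all of these as predetermined parameters and does not name a contrast metric):

```python
import numpy as np

def evaluate_photo(image_rgb, low_band=(0, 30), high_band=(225, 255),
                   pixel_fraction_threshold=0.4, contrast_threshold=40.0):
    grey = image_rgb.mean(axis=2)                       # convert to greyscale
    hist, _ = np.histogram(grey, bins=256, range=(0, 256))
    # Fraction of pixels falling in the initial (dark) and final (bright) intensity intervals.
    fraction = (hist[low_band[0]:low_band[1] + 1].sum()
                + hist[high_band[0]:high_band[1] + 1].sum()) / grey.size
    if fraction >= pixel_fraction_threshold:
        contrast = grey.std()                           # standard deviation as a contrast proxy
        if contrast <= contrast_threshold:
            return "rejected"
    return "accepted"

dark_flat = np.full((100, 100, 3), 10, dtype=np.uint8)  # mostly dark, low-contrast photo
print(evaluate_photo(dark_flat))                        # -> "rejected"
```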
  • Patent number: 11341612
    Abstract: A method for automatic correction and generation of facial images includes storing, by a computer system, multiple constraints specifying an image quality. The computer system receives a first image of a face of a user from a computer device. The computer system determines that the first image of the face of the user violates at least one constraint of the multiple constraints specifying the image quality. Responsive to the determining that the first image of the face of the user violates the at least one constraint, the computer system generates a second image of the face of the user based on the first image of the face of the user and the at least one constraint, such that the second image of the face of the user satisfies the at least one constraint. A display device of the computer system displays the second image of the face of the user.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: May 24, 2022
    Assignee: Idemia Identity & Security USA LLC
    Inventor: Brian Martin
  • Patent number: 11335033
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing deep learning to intelligently determine compression settings for compressing a digital image. For instance, the disclosed system utilizes a neural network to generate predicted perceptual quality values for compression settings on a compression quality scale. The disclosed system fits the predicted compression distortions to a perceptual distortion characteristic curve for interpolating predicted perceptual quality values across the compression settings on the compression quality scale. Additionally, the disclosed system then performs a search over the predicted perceptual quality values for the compression settings along the compression quality scale to select a compression setting based on a perceptual quality threshold. The disclosed system generates a compressed digital image according to compression parameters for the selected compression setting.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: May 17, 2022
    Assignee: Adobe Inc.
    Inventors: Meet Patel, Mayur Hemani, Karanjeet Singh, Amit Gupta, Apoorva Gupta, Balaji Krishnamurthy
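The selection step described above picks a compression setting from interpolated perceptual-quality predictions. The sketch below substitutes simple monotone linear interpolation for the patent's perceptual distortion characteristic curve and uses made-up anchor values; it only illustrates the threshold search along the quality scale.

```python
import numpy as np

def select_compression_setting(anchor_settings, predicted_quality, threshold=0.9):
    settings = np.arange(1, 101)                          # a JPEG-style 1..100 quality scale
    quality = np.interp(settings, anchor_settings, predicted_quality)
    meets = np.nonzero(quality >= threshold)[0]
    if meets.size == 0:
        return int(settings[-1])       # nothing meets the threshold; fall back to maximum quality
    return int(settings[meets[0]])     # lowest setting whose predicted quality suffices

# Toy anchors (setting, predicted perceptual quality in [0, 1]); the values are made up.
print(select_compression_setting([10, 30, 50, 70, 90], [0.55, 0.74, 0.86, 0.93, 0.98]))
```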
  • Patent number: 11328402
    Abstract: The present invention provides an anomaly detection method and apparatus based on a neural network that can be trained on images of undamaged, normal vehicles and can detect unknown/unseen vehicle damage of stochastic types and extents from images taken in various contexts. The method and apparatus are implemented with functional units trained to perform the anomaly detection under a GCAN model, using a training dataset containing images of undamaged vehicles, intact-vehicle frame images, and augmented vehicle frame images of the vehicles.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: May 10, 2022
    Assignee: Hong Kong Applied Science and Technology Research Institute Company Limited
    Inventors: Wai Kai Arvin Tang, Pak Kan Wong
  • Patent number: 11328535
    Abstract: The present invention provides an action recognition method and a system thereof. The action recognition method comprises: capturing a 2D image and a depth image at the same time; extracting 2D information of the human skeleton points from the 2D image and correcting it; mapping the 2D information of the human skeleton points to the depth image to obtain the corresponding depth information for those points; combining the corrected 2D information of the human skeleton points and the depth information to obtain the 3D information of the human skeleton points; and finally recognizing an action from a set of 3D information of the human skeleton points over a period of time with a matching model.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: May 10, 2022
    Assignee: IONETWORKS INC.
    Inventors: Jing-Ming Guo, Po-Cheng Huang, Ting Lin, Chih-Hung Wang, Yu-Wen Wei, Yi-Hsiang Lin
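The 2D-to-3D lifting step in this abstract maps corrected skeleton points onto the depth image and combines them with the sampled depths. A minimal sketch follows, assuming aligned color and depth frames and a pinhole camera model; the intrinsics and the joint format are illustrative, not taken from the patent.

```python
import numpy as np

def lift_skeleton_to_3d(points_2d, depth_image, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """points_2d: (n_joints, 2) pixel coordinates of the human skeleton points.
    depth_image: (H, W) depth in millimetres, aligned to the 2D image.
    Returns (n_joints, 3) camera-space coordinates in millimetres."""
    joints_3d = []
    for u, v in points_2d:
        z = float(depth_image[int(round(v)), int(round(u))])   # depth sampled at the joint pixel
        x = (u - cx) * z / fx                                   # back-project with the intrinsics
        y = (v - cy) * z / fy
        joints_3d.append((x, y, z))
    return np.array(joints_3d)

depth = np.full((480, 640), 1500.0)                  # toy flat depth map at 1.5 m
joints_2d = np.array([[320.0, 240.0], [300.0, 200.0]])
print(lift_skeleton_to_3d(joints_2d, depth))
```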
  • Patent number: 11328519
    Abstract: Machine-learning models are described for detecting the signaling state of a traffic signaling unit. A system can obtain an image of the traffic signaling unit and select a model of the traffic signaling unit that identifies the position of each traffic lighting element on the unit. First and second neural network inputs are processed with a neural network to generate an estimated signaling state of the traffic signaling unit. The first neural network input can represent the image of the traffic signaling unit, and the second neural network input can represent the model of the traffic signaling unit. Using the estimated signaling state of the traffic signaling unit, the system can inform a driving decision of a vehicle.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: May 10, 2022
    Assignee: Waymo LLC
    Inventors: Edward Hsiao, Yu Ouyang, Maoqing Yao
  • Patent number: 11321573
    Abstract: The present disclosure is directed to an autonomous vehicle having a vehicle control system. The vehicle control system includes an image processing system. The image processing system receives an image that includes a plurality of image portions. The image processing system also calculates a score for each image portion. The score indicates a level of confidence that a given image portion represents an illuminated component of a traffic light. The image processing system further identifies one or more candidate portions from among the plurality of image portions. Additionally, the image processing system determines that a particular candidate portion represents an illuminated component of a traffic light using a classifier. Further, the image processing system provides instructions to control the autonomous vehicle based on the particular candidate portion representing an illuminated component of a traffic light.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: May 3, 2022
    Assignee: Waymo LLC
    Inventors: Andreas Wendel, David Ian Franklin Ferguson
  • Patent number: 11301977
    Abstract: An image inspection computing device is provided. The device includes a memory device and at least one processor. The at least one processor is configured to receive at least one sample image of a first component, wherein the at least one sample image of the first component does not include defects, store, in the memory, the at least one sample image, and receive an input image of a second component. The at least one processor is also configured to generate an encoded array based on the input image of the second component, perform a stochastic data sampling process on the encoded array, generate a decoded array, and generate a reconstructed image of the second component, derived from the stochastic data sampling process and the decoded array. The at least one processor is further configured to produce a residual image, and identify defects in the second component.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: April 12, 2022
    Assignee: General Electric Company
    Inventors: Alberto Santamaria-Pang, Yousef Al-Kofahi, Aritra Chowdhury, Shourya Sarcar, Michael John MacDonald, Peter Arjan Wassenaar, Patrick Joseph Howard, Bruce Courtney Amm, Eric Seth Moderbacher
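The final steps of this abstract form a residual image between the input and its reconstruction and flag defects from it. The sketch below abstracts the encoder, stochastic sampling, and decoder behind a stand-in `reconstruct` callable and uses an arbitrary residual threshold; both are assumptions for illustration only.

```python
import numpy as np

def detect_defects(input_image, reconstruct, residual_threshold=0.2):
    """input_image: (H, W) float array in [0, 1]; reconstruct: callable image -> image."""
    reconstruction = reconstruct(input_image)
    residual = np.abs(input_image - reconstruction)       # residual image
    defect_mask = residual > residual_threshold           # candidate defect pixels
    return residual, defect_mask

def blur_reconstruct(image):
    """Toy stand-in reconstruction: a 3x3 mean filter that smooths away sharp anomalies."""
    padded = np.pad(image, 1, mode="edge")
    return sum(padded[i:i + image.shape[0], j:j + image.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

image = np.zeros((32, 32))
image[10, 10] = 1.0                                       # one bright "defect" pixel
residual, mask = detect_defects(image, blur_reconstruct)
print(int(mask.sum()), "defect pixel(s) flagged")
```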
  • Patent number: 11301965
    Abstract: The disclosure provides methods and image processing devices for image super resolution, image enhancement, and convolutional neural network (CNN) model training. The method for image super resolution includes the following steps. An original image is received, and a feature map is extracted from the original image. The original image is segmented into original patches. Each of the original patches is classified into one of several patch clusters according to the feature map. The original patches are then processed by different pre-trained CNN models according to the patch clusters to which they belong, to obtain predicted patches. A predicted image is generated based on the predicted patches.
    Type: Grant
    Filed: December 9, 2019
    Date of Patent: April 12, 2022
    Assignee: Novatek Microelectronics Corp.
    Inventor: Yu Bai
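The dispatch logic in this abstract routes each image patch to the CNN trained for its cluster. The sketch below illustrates only that routing: the clustering rule (local gradient energy) and the per-cluster "models" (simple upscaling functions) are stand-ins, not the patent's trained CNNs.

```python
import numpy as np

def nearest(patch, scale):
    """Nearest-neighbour upscaling used as a stand-in for a trained per-cluster CNN."""
    return np.repeat(np.repeat(patch, scale, axis=0), scale, axis=1)

MODELS = {
    "smooth": nearest,
    "textured": lambda p, s: np.clip(nearest(p, s) * 1.1 - 0.05, 0.0, 1.0),  # mild contrast tweak
}

def upscale_by_cluster(image, patch_size=8, scale=2):
    h, w = image.shape
    out = np.zeros((h * scale, w * scale))
    for y in range(0, h, patch_size):
        for x in range(0, w, patch_size):
            patch = image[y:y + patch_size, x:x + patch_size]
            # Feature: local gradient energy decides the cluster (smooth vs. textured).
            cluster = "textured" if np.abs(np.diff(patch, axis=0)).mean() > 0.05 else "smooth"
            upscaled = MODELS[cluster](patch, scale)
            out[y * scale:(y + patch.shape[0]) * scale,
                x * scale:(x + patch.shape[1]) * scale] = upscaled
    return out

print(upscale_by_cluster(np.random.default_rng(2).random((32, 32))).shape)   # -> (64, 64)
```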
  • Patent number: 11294952
    Abstract: Disclosed are methods, systems, and non-transitory computer-readable medium for analysis of images including wearable items. For example, a method may include obtaining a first set of images, each of the first set of images depicting a product; obtaining a first set of labels associated with the first set of images; training an image segmentation neural network based on the first set of images and the first set of labels; obtaining a second set of images, each of the second set of images depicting a known product; obtaining a second set of labels associated with the second set of images; training an image classification neural network based on the second set of images and the second set of labels; receiving a query image depicting a product that is not yet identified; and performing image segmentation of the query image and identifying the product in the image by performing image analysis.
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: April 5, 2022
    Assignee: CaaStle, Inc.
    Inventors: Yu-Cheng Tsai, Dongming Jiang, Georgiy Goldenberg
  • Patent number: 11295170
    Abstract: Described is a system including one or more computers and one or more storage devices storing instructions that, when executed by the one or more computers, cause the one or more computers to implement a group-equivariant convolutional neural network configured to process a network input including a set of 3D points to generate an output tensor representing transformed features of the network input. The group-equivariant convolutional neural network includes a grouping layer, a switching layer, a pre-processing layer, a group-equivariant layer, a first subnetwork, an average pooling layer, a second subnetwork, an output subnetwork, and a final output layer.
    Type: Grant
    Filed: August 17, 2021
    Date of Patent: April 5, 2022
    Assignee: FPT USA Corp.
    Inventors: Thieu Ngoc Vo, Phong Xuan Nguyen, Nghia Trung Huynh, Binh Cong Phan, Tuan Anh Le
  • Patent number: 11288522
    Abstract: The present invention relates to a method of generating an overhead view image of an area. More particularly, the present invention relates to a method of generating a contextual multi-image based overhead view image of an area using ground map data and field of view image data. Various embodiments of the present technology can include methods, systems, non-transitory computer readable media, and computer programs configured to determine a ground map of the geographical area; receive a plurality of images of the geographical area; process the plurality of images to select a subset of images for generating the overhead view of the geographical area; divide the ground map into a plurality of sampling points of the geographical area; and determine a color of a plurality of patches of the overhead view image from the subset of images, each patch representing a sampling point of the geographical area.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: March 29, 2022
    Assignee: Woven Planet North America, Inc.
    Inventors: Clemens Marschner, Wilhelm Richert, Thomas Schiwietz, Darko Zikic
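The coloring step in this abstract assigns each overhead-view patch a color gathered from the images that observe its sampling point. The sketch below takes a per-point median over projected pixel colors; the `project` callable and the visibility test are simplified stand-ins, not the patent's camera model.

```python
import numpy as np

def color_overhead_patches(sampling_points, images, project):
    """sampling_points: (N, 2) ground coordinates; images: list of (H, W, 3) arrays;
    project(point, image_index) -> (row, col) pixel, or None if the point is not visible."""
    colors = np.zeros((len(sampling_points), 3))
    for i, point in enumerate(sampling_points):
        samples = []
        for k, img in enumerate(images):
            pixel = project(point, k)
            if pixel is not None:
                row, col = pixel
                samples.append(img[row, col])
        if samples:
            colors[i] = np.median(np.stack(samples), axis=0)   # robust to outlier views
    return colors

# Toy usage: two images covering a 10 m x 10 m area at one pixel per metre.
imgs = [np.full((10, 10, 3), 128.0), np.full((10, 10, 3), 132.0)]
proj = lambda p, k: (int(p[1]), int(p[0])) if 0 <= p[0] < 10 and 0 <= p[1] < 10 else None
pts = np.array([[2.5, 3.5], [50.0, 50.0]])                     # the second point is off-image
print(color_overhead_patches(pts, imgs, proj))
```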
  • Patent number: 11278357
    Abstract: Certain aspects relate to systems and techniques for navigation-assisted medical devices. Some aspects relate to correlating features of depth information generated based on captured images of an anatomical luminal network with virtual features of depth information generated based on virtual images of a virtual representation of the anatomical luminal network in order to automatically determine aspects of a roll of a medical device within the luminal network.
    Type: Grant
    Filed: December 20, 2018
    Date of Patent: March 22, 2022
    Assignee: Auris Health, Inc.
    Inventor: Ritwik Ummalaneni
  • Patent number: 11282178
    Abstract: Disclosed herein are an electronic device and an operating method thereof. The electronic device is for identifying a false image of interest attributable to reflection in an indoor environment, and may be configured to obtain a color image and a depth image, detect an area of interest indicative of at least one object in the color image, detect at least one reference surface in the depth image, compare the depth of the area of interest, detected based on the depth image, with a reference depth of the reference surface, and process the area of interest accordingly.
    Type: Grant
    Filed: January 14, 2020
    Date of Patent: March 22, 2022
    Assignee: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Yong-Hwa Park, Daehee Park, Simeneh Semegne Gulelat
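The comparison step described above decides whether a detected object is a reflection by checking its depth against the reference surface. The sketch below assumes that a region whose measured depth lies beyond the reflective surface is treated as a false image; the tolerance value and the median-depth summary are illustrative choices, not from the patent.

```python
import numpy as np

def classify_detection(depth_image, bbox, reference_depth, tolerance=50.0):
    """bbox: (row0, row1, col0, col1) of the detected area of interest;
    depths in millimetres; reference_depth: depth of the reflective reference surface."""
    r0, r1, c0, c1 = bbox
    region_depth = np.median(depth_image[r0:r1, c0:c1])
    if region_depth > reference_depth + tolerance:
        return "false image (reflection appearing behind the surface)"
    return "real object"

depth = np.full((120, 160), 2000.0)                 # a wall 2 m away
depth[40:80, 60:100] = 3200.0                       # detected object measures deeper than the wall
print(classify_detection(depth, (40, 80, 60, 100), reference_depth=2000.0))
```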
  • Patent number: 11275926
    Abstract: Disclosed are a face tracking method and device. The method includes: acquiring an initial facial image in a to-be-tracked picture; performing binarization processing on the initial facial image according to a standard range of a color parameter and the actual value of the color parameter of each pixel in the initial facial image, to obtain a binarized facial image; acquiring a position of a preset organ in the binarized facial image; and acquiring a position of a final facial image according to the position of the preset organ and the position of the initial facial image.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: March 15, 2022
    Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Minglei Chu, Hao Zhang, Lili Chen, Jian Sun, Ziqiang Guo, Guixin Yan, Jiankang Sun, Xi Li, Yinwei Chen, Ruifeng Qin, Yuanjie Lu
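The binarization step in this abstract keeps pixels whose color parameter falls within a standard range and then locates a preset organ in the result. The sketch below uses a made-up hue-like range and a simple centroid as the organ locator; both are stand-ins for the patent's predetermined values and detector.

```python
import numpy as np

def binarize_by_color(color_parameter, standard_range=(0.02, 0.10)):
    """color_parameter: (H, W) per-pixel value of the chosen color parameter in [0, 1]."""
    low, high = standard_range
    return ((color_parameter >= low) & (color_parameter <= high)).astype(np.uint8)

def organ_position(binary_mask):
    """Centroid of the foreground, a crude stand-in for the preset-organ locator."""
    rows, cols = np.nonzero(binary_mask)
    if rows.size == 0:
        return None
    return float(rows.mean()), float(cols.mean())

hue = np.full((100, 100), 0.5)
hue[30:50, 40:70] = 0.06                            # a toy skin-toned region
mask = binarize_by_color(hue)
print(organ_position(mask))                         # -> (39.5, 54.5), centroid of the region
```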
  • Patent number: 11276156
    Abstract: A camera image cleaning system of an automobile vehicle includes a camera generating a camera image of a vehicle environment. A processor having a memory executes control logic to convert the camera image into a grayscale image having multiple image pixels. A convolution equation is retrieved from the memory and solved to find derivatives of the grayscale image, defining changes of pixel intensity between neighboring ones of the multiple image pixels of the grayscale image. A magnitude and an orientation of the multiple pixels are computed by the processor and used to differentiate weak image pixels from strong image pixels.
    Type: Grant
    Filed: January 7, 2020
    Date of Patent: March 15, 2022
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Keyu Wang, Sanket Yadav
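The pipeline in this abstract computes image derivatives by convolution and then a per-pixel gradient magnitude and orientation to separate weak from strong pixels. The sketch below uses Sobel kernels and an arbitrary threshold as assumptions; the patent states only that a convolution equation yields the derivatives.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def filter2d(image, kernel):
    """Same-size sliding-window filtering (sign conventions do not affect the magnitude)."""
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + image.shape[0], j:j + image.shape[1]]
    return out

def gradient_strength(rgb_image, strong_threshold=100.0):
    grey = rgb_image.astype(float).mean(axis=2)             # greyscale conversion
    gx, gy = filter2d(grey, SOBEL_X), filter2d(grey, SOBEL_Y)
    magnitude = np.hypot(gx, gy)                            # per-pixel gradient magnitude
    orientation = np.arctan2(gy, gx)                        # per-pixel gradient orientation
    strong = magnitude >= strong_threshold                  # strong vs. weak pixel split
    return magnitude, orientation, strong

image = np.zeros((32, 32, 3))
image[:, 16:, :] = 255.0                                    # a single vertical edge
magnitude, orientation, strong = gradient_strength(image)
print(int(strong.sum()), "strong pixels along the edge")
```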
  • Patent number: 11263747
    Abstract: An example method includes generating, using a multi-scale block of a convolutional neural network (CNN), a first output image based on an optical coherence tomography (OCT) reflectance image of a retina and an OCT angiography (OCTA) image of the retina. The method further includes generating, using an encoder of the CNN, at least one second output image based on the first output image and generating, using a decoder of the CNN, a third output image based on the at least one second output image. An avascular map is generated based on the third output image. The avascular map indicates at least one avascular area of the retina depicted in the OCTA image.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: March 1, 2022
    Assignee: Oregon Health & Science University
    Inventors: Yali Jia, Yukun Guo