Patents Examined by Phuoc Tran
  • Patent number: 11568524
    Abstract: Techniques are disclosed for changing the identities of faces in images. In embodiments, a tunable model for changing facial identities in images includes an encoder, a decoder, and dense layers that generate either adaptive instance normalization (AdaIN) coefficients that control the operation of convolution layers in the decoder or the values of weights within such convolution layers, allowing the model to change the identity of a face in an image based on a user selection. A separate set of dense layers may be trained to generate AdaIN coefficients for each of a number of facial identities, and the AdaIN coefficients output by different sets of dense layers can be combined to interpolate between facial identities. Alternatively, a single set of dense layers may be trained to take as input an identity vector and output AdaIN coefficients or values of weights within convolution layers of the decoder.
    Type: Grant
    Filed: April 16, 2020
    Date of Patent: January 31, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Leonard Markus Helminger, Jacek Krzysztof Naruniec, Romann Matthew Weber, Christopher Richard Schroers
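    A minimal sketch of the AdaIN conditioning described in this abstract, assuming PyTorch; the layer sizes, the 128-dimensional identity vector, and the interpolation weights are illustrative, not the patented model:
```python
# Dense layers map an identity vector to AdaIN scale/bias coefficients that
# modulate a decoder convolution layer's feature maps (shapes are illustrative).
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    def forward(self, x, scale, bias):
        # Normalize each channel of x, then re-scale and shift per identity.
        mean = x.mean(dim=(2, 3), keepdim=True)
        std = x.std(dim=(2, 3), keepdim=True) + 1e-5
        x = (x - mean) / std
        return x * scale[:, :, None, None] + bias[:, :, None, None]

class IdentityConditionedBlock(nn.Module):
    def __init__(self, channels=64, id_dim=128):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        # Dense layers that output one (scale, bias) pair per channel.
        self.to_adain = nn.Sequential(
            nn.Linear(id_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * channels),
        )
        self.adain = AdaIN()

    def forward(self, feat, identity_vec):
        scale, bias = self.to_adain(identity_vec).chunk(2, dim=1)
        return self.adain(self.conv(feat), scale, bias)

# Interpolating between two identities by blending their identity vectors.
block = IdentityConditionedBlock()
feat = torch.randn(1, 64, 32, 32)
id_a, id_b = torch.randn(1, 128), torch.randn(1, 128)
blended = block(feat, 0.5 * id_a + 0.5 * id_b)
```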
  • Patent number: 11557114
    Abstract: Disclosed are methods, systems, and non-transitory computer-readable medium for color and pattern analysis of images including wearable items. For example, a method may include receiving an image depicting a wearable item, identifying the wearable item within the image by identifying a face of an individual wearing the wearable item or segmenting a foreground silhouette of the wearable item from background image portions of the image, determining a portion of the wearable item identified within the image as being a patch portion representative of the wearable item depicted within the image, deriving one or more patterns of the wearable item based on image analysis of the determined patch portion of the image, deriving one or more colors of the wearable item based on image analysis of the determined patch portion of the image, and transmitting information regarding the derived one or more colors and information regarding the derived one or more patterns.
    Type: Grant
    Filed: December 17, 2020
    Date of Patent: January 17, 2023
    Assignee: CaaStle, Inc.
    Inventor: Steven Sesshu Shimozaki
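    A hedged sketch of the color-derivation step only, assuming scikit-learn k-means over the pixels of an already-selected patch; the patch selection and pattern analysis are out of scope here, and the cluster count is illustrative:
```python
# Derive dominant colors of a wearable-item patch by clustering its pixels.
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(patch_rgb: np.ndarray, n_colors: int = 3):
    """patch_rgb: H x W x 3 uint8 array cropped from the wearable item."""
    pixels = patch_rgb.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=n_colors)
    order = np.argsort(counts)[::-1]                     # most frequent color first
    return km.cluster_centers_[order].astype(np.uint8), counts[order] / counts.sum()

patch = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)   # stand-in patch
colors, shares = dominant_colors(patch)
print(colors, shares)
```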
  • Patent number: 11545237
    Abstract: A classification training method for training classifiers adapted to identify specific mutations associated with different cancers, including identifying driver mutations. First cells from mutation cell lines derived from conditions having the number of driver mutations are acquired and 3D image feature data from the number of first cells is identified. 3D cell imaging data from the number of first cells and from other malignant cells is generated, where cell imaging data includes a number of first individual cell images. A second set of 3D cell imaging data is generated from a set of normal cells where the number of driver mutations are expected to occur, where the second set of cell imaging data includes second individual cell images. Supervised learning is conducted based on cell line status as ground truth to generate a classifier.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: January 3, 2023
    Assignee: VISIONGATE, INC.
    Inventors: Michael G. Meyer, Daniel J. Sussman, Rahul Katdare, Laimonas Kelbauskas, Alan C. Nelson, Randall Mastrangelo
  • Patent number: 11544823
    Abstract: Systems and methods for tone mapping of high dynamic range (HDR) images for high-quality deep learning based processing are disclosed. In one embodiment, a graphics processor includes a media pipeline to generate media requests for processing images and an execution unit to receive media requests from the media pipeline. The execution unit is configured to compute an auto-exposure scale for an image to effectively tone map the image, to scale the image with the computed auto-exposure scale, and to apply a tone mapping operator including a log function to the image and scaling the log function to generate a tone mapped image.
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: January 3, 2023
    Assignee: Intel Corporation
    Inventor: Attila Tamas Afra
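    A minimal NumPy sketch of log-based tone mapping driven by an auto-exposure scale; the geometric-mean luminance key of 0.18 and the normalization are assumptions, not Intel's GPU implementation:
```python
import numpy as np

def tone_map(hdr: np.ndarray, key: float = 0.18, eps: float = 1e-6) -> np.ndarray:
    """hdr: H x W x 3 linear HDR image. Returns values roughly in [0, 1]."""
    luminance = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    # Auto-exposure scale: map the geometric-mean luminance to the key value.
    log_avg = np.exp(np.mean(np.log(luminance + eps)))
    scale = key / (log_avg + eps)
    scaled = hdr * scale
    # Log tone-mapping operator, scaled so the brightest value maps to 1.
    return np.log1p(scaled) / np.log1p(scaled.max() + eps)

hdr = np.abs(np.random.randn(4, 4, 3)) * 10.0   # stand-in HDR data
ldr = tone_map(hdr)
```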
  • Patent number: 11537822
    Abstract: A method for classifying images of oligolayer exfoliation attempts. In some embodiments, the method includes forming a micrograph of a surface, and classifying the micrograph into one of a plurality of categories. The categories may include a first category, consisting of micrographs including at least one oligolayer flake, and a second category, consisting of micrographs including no oligolayer flakes, the classifying comprising classifying the micrograph with a neural network.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: December 27, 2022
    Assignee: RAYTHEON BBN TECHNOLOGIES CORP.
    Inventors: Kin Chung Fong, Man-Hung Siu, Zhuolin Jiang
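    A minimal sketch of the two-category classification, assuming PyTorch; the architecture and input resolution are illustrative rather than the patented network:
```python
# Classify a surface micrograph into "contains at least one oligolayer flake"
# vs. "contains no oligolayer flakes".
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                 # two categories: flake present / absent
)

micrograph = torch.randn(1, 3, 224, 224)   # stand-in micrograph
category = classifier(micrograph).argmax(dim=1)   # 0 or 1
```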
  • Patent number: 11532179
    Abstract: Methods, systems, and computer readable storage media for using image processing to develop a library of facial expressions. The system can receive digital video of at least one speaker, then execute image processing on the video to identify landmarks within facial features of the speaker. The system can also identify vectors based on the landmarks, then assign each vector to an expression, resulting in a plurality of speaker expressions. The system then scores the expressions based on similarity to one another, and creates subsets based on the similarity scores.
    Type: Grant
    Filed: June 3, 2022
    Date of Patent: December 20, 2022
    Assignee: PROF JIM INC.
    Inventors: Gandham Venkata Sai Anooj, Maria Walley, Deepak Chandra Sekar
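    An illustrative sketch of the similarity scoring and subset creation, assuming each frame's expression is already encoded as a vector of landmark-derived values; the cosine measure and the 0.95 threshold are assumptions:
```python
import numpy as np

def group_expressions(vectors: np.ndarray, threshold: float = 0.95):
    """vectors: N x D array, one expression vector per frame."""
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    similarity = unit @ unit.T                    # pairwise cosine similarity
    groups, assigned = [], set()
    for i in range(len(vectors)):
        if i in assigned:
            continue
        members = [j for j in range(len(vectors))
                   if j not in assigned and similarity[i, j] >= threshold]
        assigned.update(members)
        groups.append(members)                    # one subset of similar expressions
    return groups

expressions = np.random.rand(10, 68 * 2)          # stand-in landmark-based vectors
print(group_expressions(expressions))
```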
  • Patent number: 11526967
    Abstract: An inpainting method includes retrieving image information at an electronic device, where the image information identifies an area within an image. The method also includes retrieving, using the electronic device, semantic information including a plurality of semantic classes and a semantic class distribution for each semantic class of the plurality of semantic classes. The method further includes generating semantic codes associated with different portions of the image based on the image information and the semantic information. In addition, the method includes constructing the area within the image by generating image content based on the semantic information.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: December 13, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Wenbo Li, Hongxia Jin
  • Patent number: 11517099
    Abstract: A method for processing images includes: detecting a plurality of human face key points of a three-dimensional human face in a target image; acquiring a virtual makeup image, wherein the virtual makeup image includes a plurality of reference key points, the reference key points indicating human face key points of a two-dimensional human face; and acquiring a target image fused with the virtual makeup image by fusing the virtual makeup image and the target image with each of the reference key points in the virtual makeup image aligned with a corresponding human face key point.
    Type: Grant
    Filed: November 20, 2020
    Date of Patent: December 6, 2022
    Assignee: Beijing Dajia Internet Information Technology Co., Ltd.
    Inventors: Shanshan Wu, Paliwan Pahaerding, Bo Wang
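    A hedged sketch of the alignment-and-fusion step using OpenCV (the patent does not name a library): an affine transform estimated from the reference keypoints warps the virtual makeup layer onto the detected face keypoints before alpha blending:
```python
import cv2
import numpy as np

def fuse_makeup(target: np.ndarray, makeup_rgba: np.ndarray,
                face_kpts: np.ndarray, ref_kpts: np.ndarray) -> np.ndarray:
    """target: H x W x 3 image, makeup_rgba: h x w x 4 layer, keypoints: K x 2 arrays."""
    h, w = target.shape[:2]
    # Affine transform aligning each reference keypoint with its face keypoint.
    matrix, _ = cv2.estimateAffinePartial2D(ref_kpts.astype(np.float32),
                                            face_kpts.astype(np.float32))
    warped = cv2.warpAffine(makeup_rgba, matrix, (w, h))
    alpha = warped[..., 3:4].astype(np.float32) / 255.0
    blended = warped[..., :3].astype(np.float32) * alpha + \
              target.astype(np.float32) * (1.0 - alpha)
    return blended.astype(np.uint8)
```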
  • Patent number: 11509339
    Abstract: A method of processing a radio frequency signal includes: receiving the radio frequency signal at an antenna of a receiver device; processing, by a radio frequency front-end device, the radio frequency signal; converting, by an analog-to-digital converter, the analog signal to a digital signal; receiving, by a neural network, the digital signal; and processing, by the neural network, the digital signal to produce an output.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: November 22, 2022
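    A minimal sketch of the final stage, assuming PyTorch and digitized I/Q samples; the 1-D convolutional architecture and the eight output classes are assumptions, since the abstract does not disclose a specific network:
```python
import torch
import torch.nn as nn

rf_net = nn.Sequential(
    nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, 8),                 # e.g., eight candidate signal classes (assumed)
)

iq_samples = torch.randn(1, 2, 1024)  # stand-in digitized I/Q block from the ADC
output = rf_net(iq_samples)           # neural-network output for the digital signal
```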
  • Patent number: 11508045
    Abstract: In a computer-implemented method for generating an image processing model that generates output data defining a stylized contrast image from a microscope image, model parameters of the image processing model are adjusted by optimizing at least one objective function using training data. The training data comprises microscope images as input data and contrast images, wherein the microscope images and the contrast images are generated by different microscopy techniques. In order for the output data to define a stylized contrast image, the objective function forces a detail reduction or the contrast images are detail-reduced contrast images with a level of detail that is lower than in the microscope images and higher than in binary images.
    Type: Grant
    Filed: May 25, 2022
    Date of Patent: November 22, 2022
    Assignee: Carl Zeiss Microscopy GmbH
    Inventors: Manuel Amthor, Daniel Haase, Alexander Freytag, Christian Kungel
  • Patent number: 11501165
    Abstract: Embodiments relate to a system, program product, and method for training a contrastive neural network (CNN) in an active learning environment. A neural network is pre-trained with labeled data of a historical (first) dataset. The CNN is trained for a new (second) dataset by applying the new dataset and contrasting the new dataset against the historical dataset to extract novel patterns. Weights of a knowledge operator from the pre-trained neural network are borrowed. Features novel to the new dataset are learned, including updating weights of the knowledge operator. The borrowed knowledge operator weights are combined with the updated knowledge operator weights. The CNN is leveraged to predict one or more labels for the new dataset as output data.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: November 15, 2022
    Assignee: International Business Machines Corporation
    Inventors: Chen Lin, Hongtan Sun, John Rofrano, Maja Vukovic
  • Patent number: 11488293
    Abstract: A method for processing images is provided. The method includes: acquiring a first image by smoothing a skin region of a target object in an original image; determining a skin texture material matching with a face area of the target object; acquiring a facial texture image of the target object by rendering the skin texture material; and acquiring a second image by fusing the facial texture image with the first image.
    Type: Grant
    Filed: January 4, 2022
    Date of Patent: November 1, 2022
    Assignee: BEIJING DAJIA INTERNET INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Sainan Guo, Di Yang
  • Patent number: 11488418
    Abstract: Estimating a three-dimensional (3D) pose of an object, such as a hand or body (human, animal, robot, etc.), from a 2D image is necessary for human-computer interaction. A hand pose can be represented by a set of points in 3D space, called keypoints. Two coordinates (x,y) represent spatial displacement and a third coordinate represents a depth of every point with respect to the camera. A monocular camera is used to capture an image of the 3D pose, but does not capture depth information. A neural network architecture is configured to generate a depth value for each keypoint in the captured image, even when portions of the pose are occluded, or the orientation of the object is ambiguous. Generation of the depth values enables estimation of the 3D pose of the object.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: November 1, 2022
    Assignee: NVIDIA Corporation
    Inventors: Umar Iqbal, Pavlo Molchanov, Thomas Michael Breuel, Jan Kautz
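    An illustrative sketch, assuming PyTorch: a network that regresses, for each keypoint, 2-D image coordinates plus a depth value relative to the camera; the backbone, the 21-keypoint hand layout, and the output head are assumptions, not NVIDIA's architecture:
```python
import torch
import torch.nn as nn

K = 21  # e.g., 21 hand keypoints (assumed)

pose_net = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, K * 3),             # (x, y, depth) per keypoint
)

image = torch.randn(1, 3, 256, 256)   # monocular RGB image, no depth channel
keypoints_25d = pose_net(image).view(1, K, 3)   # last value = per-keypoint depth
```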
  • Patent number: 11481570
    Abstract: In some embodiments, a method receives a first textual description of content and converts the first textual description of content to a first image representation. The method compares a similarity between the first image representation and a second image representation for candidate metadata. The candidate metadata is associated with a second textual description of content. The method determines whether the first textual description of content is associated with the second textual description of content based on the comparison of similarity of the first image representation and the second image representation.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: October 25, 2022
    Assignee: HULU, LLC
    Inventor: Aninoy Mahapatra
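    A sketch of the comparison step only, assuming the two textual descriptions have already been converted to image-space representation vectors; the cosine measure and the 0.8 threshold are assumptions:
```python
import numpy as np

def is_associated(repr_a: np.ndarray, repr_b: np.ndarray, threshold: float = 0.8) -> bool:
    # Cosine similarity between the two image representations.
    cos = float(repr_a @ repr_b / (np.linalg.norm(repr_a) * np.linalg.norm(repr_b)))
    return cos >= threshold            # threshold value is an assumption

repr_query = np.random.rand(512)       # stand-in representation of the first description
repr_candidate = np.random.rand(512)   # stand-in representation for candidate metadata
print(is_associated(repr_query, repr_candidate))
```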
  • Patent number: 11482041
    Abstract: Methods, apparatus, and systems are provided for obfuscating facial identity in images by synthesizing a new facial image for an input image. A base face is detected from or selected for an input image. Facial images that are similar to the base face are selected and combined to create a new facial image. The new facial image is added to the input image such that the input image includes a combination of the base face and the new facial image. Where no base face is detected in the input image, a base face is selected from reference facial images based at least on pose keypoints identified in the input image. After a new facial image is generated based on the selected base face, a combination of the new facial image and the base facial image are added to the input image by aligning one or more pose keypoints.
    Type: Grant
    Filed: October 21, 2020
    Date of Patent: October 25, 2022
    Assignee: ADOBE INC.
    Inventors: Sanjeev Tagra, Sachin Soni, Ajay Jain, Ryan Rozich, Jonathan Roeder, Arshiya Aggarwal, Prasenjit Mondal
  • Patent number: 11475361
    Abstract: The present disclosure relates to computer-implemented methods, software, and systems for utilizing tools and techniques for identifying process rules for automated execution of instances of a process workflow. One example method includes extracting rules from a machine learning model for prediction of execution results of process workflow instances. Metrics defining coverage and accuracy of the rules are calculated. The rules are evaluated according to the metrics and are reduced to a first set of rules that are provided for further evaluation. A first rule from the first set of rules is determined to be incorporated into process rules defined for the process workflow at a process execution engine. The process rules associated with execution of the process workflow are updated to include the first rule and to generate a process result automatically according to the first rule when a process workflow instance complies with prerequisites defined at the first rule.
    Type: Grant
    Filed: October 10, 2019
    Date of Patent: October 18, 2022
    Assignee: SAP SE
    Inventor: Dennis Lehr
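    A hedged sketch of the coverage and accuracy metrics for one candidate rule, using pandas with illustrative field names; coverage is read here as the fraction of instances the rule applies to and accuracy as the fraction of covered instances whose recorded result matches the rule's prediction:
```python
import pandas as pd

def evaluate_rule(instances: pd.DataFrame, rule_mask: pd.Series, predicted: str):
    coverage = rule_mask.mean()                       # share of instances covered
    covered = instances[rule_mask]
    accuracy = (covered["result"] == predicted).mean() if len(covered) else 0.0
    return coverage, accuracy

instances = pd.DataFrame({
    "amount": [120, 80, 300, 40],
    "result": ["approve", "approve", "review", "approve"],
})
# Candidate rule extracted from the model: "amount < 200 -> approve"
coverage, accuracy = evaluate_rule(instances, instances["amount"] < 200, "approve")
print(coverage, accuracy)   # 0.75, 1.0
```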
  • Patent number: 11475707
    Abstract: The present disclosure provides a method for extracting a face detection image, wherein the method includes: obtaining a plurality of image frames by an image detector, performing a face detection process on each image frame to extract a face area, performing a clarity analysis on the face area of each image frame to obtain a clarity degree of a face, conducting a posture analysis on the face area of each image frame to obtain a face posture angle, generating a comprehensive evaluation index for each image frame in accordance with the clarity degree of the face and the face posture angle of each image frame, and selecting a key frame from the image frames based on the comprehensive evaluation index. In this way, the resource occupancy rate during image data processing may be reduced, and the quality of the face detection process may be improved.
    Type: Grant
    Filed: December 27, 2020
    Date of Patent: October 18, 2022
    Assignee: UBTECH ROBOTICS CORP LTD
    Inventors: Yusheng Zeng, Jianxin Pang, Youjun Xiong
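    An illustrative sketch of the comprehensive evaluation index, assuming OpenCV: clarity is measured by the variance of the Laplacian of the face area, the pose term rewards near-frontal faces, and the 0.7/0.3 weights are assumptions:
```python
import cv2
import numpy as np

def select_key_frame(face_crops, pose_angles, w_clarity=0.7, w_pose=0.3):
    """face_crops: list of BGR face areas; pose_angles: list of (yaw, pitch, roll) in degrees."""
    clarity = np.array([cv2.Laplacian(cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY),
                                      cv2.CV_64F).var() for crop in face_crops])
    clarity = clarity / (clarity.max() + 1e-6)            # normalize sharpness to [0, 1]
    pose = np.array([1.0 / (1.0 + np.abs(np.asarray(a)).sum()) for a in pose_angles])
    index = w_clarity * clarity + w_pose * pose            # comprehensive evaluation index
    return int(np.argmax(index))                           # position of the key frame
```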
  • Patent number: 11465294
    Abstract: The present invention provides an artificial intelligence device that includes a memory configured to store a plurality of pollution logs; a learning processor configured to classify the plurality of pollution logs into at least one pollution log group based on a similarity between pollution information; a map generator configured to generate an indoor area map to which the location of each of the at least one pollution log group is mapped; and a processor which determines a cleaning method for each of the at least one pollution log group and performs cleaning for each of the at least one pollution log group according to the determined cleaning method.
    Type: Grant
    Filed: September 26, 2019
    Date of Patent: October 11, 2022
    Assignee: LG ELECTRONICS INC.
    Inventor: Jonghoon Chae
  • Patent number: 11461879
    Abstract: An image processing method converts an original image represented in an original color gamut to a target image represented in a target color gamut. The image processing method comprises: A) calculating a set of primary color direction deviations between a set of original primary color directions of the original color gamut and a set of target primary color directions of the target color gamut, wherein each of the set of primary color direction deviations corresponds to a primary color; B) determining, for each pixel in the original image, a corrected color coordinate of the pixel based on a set of offsets of an original color coordinate of the pixel in the original color gamut relative to the set of original primary color directions and the set of primary color direction deviations; and C) mapping, for each pixel in the original image, the corrected color coordinate of the pixel into the target color gamut to generate a target color coordinate of the pixel in the target color gamut.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: October 4, 2022
    Assignee: MONTAGE LZ TECHNOLOGIES (CHENGDU) CO., LTD.
    Inventors: Chengqiang Liu, Zhimin Qiu, ChiaChen Chang
  • Patent number: 11455812
    Abstract: An approach for extracting non-textual data from an electronic document is disclosed. The approach includes receiving a request to extract a file and converting the file into pixels. The approach creates a pixel map of the converted file and determines one or more density clusters of the pixel map based on an image clustering method. Furthermore, the approach determines one or more coordinates of the one or more density clusters and determines one or more candidate information regions based on the one or more coordinates and the density of the one or more density clusters. Finally, the approach extracts one or more textual data based on the one or more candidate information regions and outputs the extracted one or more textual data.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: September 27, 2022
    Assignee: International Business Machines Corporation
    Inventors: Zhong Fang Yuan, Guang Qing Zhong, Tong Liu, De Shuo Kong, Yi Ming Wang
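    A rough sketch under assumptions (the abstract does not name a clustering algorithm): dark pixels of the rasterized document are clustered with scikit-learn's DBSCAN, and each cluster's bounding box becomes a candidate information region:
```python
import numpy as np
from sklearn.cluster import DBSCAN

def candidate_regions(pixel_map: np.ndarray, eps: float = 3.0, min_samples: int = 20):
    """pixel_map: H x W grayscale array; returns (row_min, col_min, row_max, col_max) boxes."""
    points = np.column_stack(np.nonzero(pixel_map < 128))   # dark-pixel coordinates
    if len(points) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    boxes = []
    for label in set(labels) - {-1}:                         # -1 marks noise points
        cluster = points[labels == label]
        boxes.append((*cluster.min(axis=0), *cluster.max(axis=0)))
    return boxes
```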