Neural Networks Patents (Class 382/156)
  • Patent number: 11170257
    Abstract: Techniques for training a machine-learning (ML) model for captioning images are disclosed. A plurality of feature vectors and a plurality of visual attention maps are generated by a visual model of the ML model based on an input image. Each of the plurality of feature vectors corresponds to a different region of the input image. A plurality of caption attention maps are generated by an attention model of the ML model based on the plurality of feature vectors. An attention penalty is calculated based on a comparison between the caption attention maps and the visual attention maps. A loss function is calculated based on the attention penalty. One or both of the visual model and the attention model are trained using the loss function.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: November 9, 2021
    Assignee: ANCESTRY.COM OPERATIONS INC.
    Inventors: Jiayun Li, Mohammad K. Ebrahimpour, Azadeh Moghtaderi, Yen-Yun Yu
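A minimal sketch of the attention-penalty idea in this abstract, using plain Python lists: the penalty grows as the caption attention maps diverge from the visual attention maps, and it is folded into the loss. The squared-difference penalty and the `weight` term are illustrative assumptions, not details from the patent.

```python
def attention_penalty(caption_maps, visual_maps):
    """Mean squared difference between paired attention maps."""
    total, count = 0.0, 0
    for cap, vis in zip(caption_maps, visual_maps):
        for c, v in zip(cap, vis):
            total += (c - v) ** 2
            count += 1
    return total / count

def total_loss(caption_loss, caption_maps, visual_maps, weight=0.1):
    # The loss function combines the base captioning loss with the penalty,
    # so training pulls the attention model toward the visual attention maps.
    return caption_loss + weight * attention_penalty(caption_maps, visual_maps)
```

When the two sets of maps agree, the penalty vanishes and only the base captioning loss remains.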
  • Patent number: 11170265
    Abstract: An image processing method for recognising characters included in an image. A first character recognition unit performs recognition of a first group of characters corresponding to a first region of the image. A measuring unit calculates a confidence measure of the first group of characters. A determination unit determines whether further recognition is to be performed based on the confidence measure. A selection unit selects a second region of the image that includes the first region, if it is determined that further recognition is to be performed. A second character recognition unit performs further recognition of a second group of characters corresponding to the second region of the image.
    Type: Grant
    Filed: February 26, 2019
    Date of Patent: November 9, 2021
    Assignee: I.R.I.S.
    Inventors: Frédéric Collet, Jordi Hautot, Michel Dauw
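The two-stage flow in this abstract can be sketched as follows. The recognizers, the confidence measure, and the region-expansion function are all stand-ins supplied by the caller, not APIs from the patent.

```python
def recognize_with_fallback(image, region, first_ocr, second_ocr,
                            confidence, expand, threshold=0.9):
    """Recognize characters in `region`; if the confidence measure falls
    below the threshold, select a second region that includes the first
    and run the second recognizer on that larger context."""
    chars = first_ocr(image, region)
    if confidence(chars) >= threshold:
        return chars
    wider = expand(region)  # second region includes the first region
    return second_ocr(image, wider)
```

A caller might pass a fast recognizer as `first_ocr` and a slower, more accurate one as `second_ocr`, so the expensive pass only runs on low-confidence regions.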
  • Patent number: 11154196
    Abstract: A deep machine-learning approach is used for medical image fusion by a medical imaging system. This one approach may be used for different applications. For a given application, the same deep learning is used but with different application-specific training data. The resulting deep-learnt classifier provides a reduced feature vector in response to input of intensities of one image and displacement vectors for patches of the one image relative to another image. The output feature vector is used to determine the deformation for medical image fusion.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: October 26, 2021
    Assignee: Siemens Healthcare GmbH
    Inventor: Li Zhang
  • Patent number: 11158059
    Abstract: Edge-loss-based image reconstruction is enabled by a method including generating a reconstructed image from a first edge image with a generator, extracting a second edge image from the reconstructed image with an edge extractor, smoothing the first edge image and the second edge image, discriminating between the reconstructed image and an original image corresponding to the first edge image with a discriminator to obtain an adversarial loss, and training the generator by using an edge loss and the adversarial loss, the edge loss being calculated from the smoothed first edge image and the smoothed second edge image.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: October 26, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jason Marc Plawinski, Daiki Kimura, Tristan Matthieu Stampfler, Subhajit Chaudhury, Asim Munawar
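The edge-loss computation can be sketched on flattened (1-D) edge maps. The moving-average smoothing, the squared-difference loss, and the `lam` weighting are simplifying assumptions for illustration only.

```python
def smooth(edge_img, k=3):
    """Moving-average smoothing of a flattened edge map (a toy stand-in
    for the smoothing step applied to both edge images)."""
    out = []
    for i in range(len(edge_img)):
        window = edge_img[max(0, i - k // 2): i + k // 2 + 1]
        out.append(sum(window) / len(window))
    return out

def edge_loss(first_edges, second_edges, k=3):
    """Loss between the smoothed first and smoothed second edge images."""
    s1, s2 = smooth(first_edges, k), smooth(second_edges, k)
    return sum((a - b) ** 2 for a, b in zip(s1, s2)) / len(s1)

def generator_loss(first_edges, second_edges, adversarial_loss, lam=1.0):
    # The generator is trained using the edge loss plus the adversarial loss.
    return lam * edge_loss(first_edges, second_edges) + adversarial_loss
```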
  • Patent number: 11138424
    Abstract: Disclosed herein are system, method, and computer program product embodiments for analyzing contextual symbol information for document processing. In an embodiment, a language model system may generate a vector grid that incorporates contextual document information. The language model system may receive a document file and identify symbols of the document file to generate a symbol grid. The language model system may also identify position parameters corresponding to each of the symbols. The language model system may then analyze the symbols using an embedding function and neighboring symbols to determine contextual vector values corresponding to each of the symbols. The language model system may then generate a vector grid mapping the contextual vector values using the position parameters. The contextual information from the vector grid may provide increased document processing accuracy as well as faster processing convergence.
    Type: Grant
    Filed: November 20, 2019
    Date of Patent: October 5, 2021
    Assignee: SAP SE
    Inventors: Timo Denk, Christian Reisswig
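One way to picture the contextual vector grid: embed each symbol, then mix it with its grid neighbours to get a contextual value at each position. The character-code embedding and the plain neighbourhood average below are toy stand-ins for the learned embedding function in the abstract.

```python
def embed(symbol):
    """Toy embedding: a hypothetical stand-in for the learned
    embedding function (here, the symbol's character code)."""
    return [float(ord(symbol))]

def contextual_vectors(symbol_grid):
    """For each symbol, average its embedding with those of its grid
    neighbours to produce a contextual vector at the same position."""
    rows, cols = len(symbol_grid), len(symbol_grid[0])
    out = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            neigh = []
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        neigh.append(embed(symbol_grid[rr][cc]))
            dim = len(neigh[0])
            out[r][c] = [sum(v[i] for v in neigh) / len(neigh)
                         for i in range(dim)]
    return out
```

The position parameters are implicit here (row and column indices); the real system maps learned contextual vectors back onto the grid using explicit position parameters.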
  • Patent number: 11138452
    Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to generate two or more stereo pairs of synthetic images and generate two or more stereo pairs of real images based on the two or more stereo pairs of synthetic images using a generative adversarial network (GAN), wherein the GAN is trained using a six-degree-of-freedom (DoF) pose determined based on the two or more pairs of real images. The instructions can further include instructions to train a deep neural network based on a sequence of real images and operate a vehicle using the deep neural network to process a sequence of video images acquired by a vehicle sensor.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: October 5, 2021
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Punarjay Chakravarty, Praveen Narayanan, Nikita Jaipuria, Gaurav Pandey
  • Patent number: 11126888
    Abstract: An object recognition method and apparatus for a deformed image are provided. The method includes: inputting an image into a preset localization network to obtain a plurality of localization parameters for the image, wherein the preset localization network comprises a preset number of convolutional layers, and wherein the plurality of localization parameters are obtained by regressing image features in a feature map that is generated from a convolution operation on the image; performing a spatial transformation on the image based on the plurality of localization parameters to obtain a corrected image; and inputting the corrected image into a preset recognition network to obtain an object classification result for the image. In the process of neural network based object recognition, the embodiment of the present application first transforms the deformed image, and then performs object recognition on the transformed image.
    Type: Grant
    Filed: June 12, 2018
    Date of Patent: September 21, 2021
    Assignee: HANGZHOU HIKVISION DIGITAL TECHNOLOGY CO., LTD
    Inventors: Yunlu Xu, Gang Zheng, Zhanzhan Cheng, Yi Niu
  • Patent number: 11126895
    Abstract: Methods and systems are provided to generate an uncorrupted version of an image given an observed image that is a corrupted version of the image. In some embodiments, a corruption mimicking (“CM”) system iteratively trains a corruption mimicking network (“CMN”) to generate corrupted images given modeled images, updates latent vectors based on differences between the corrupted images and observed images, and applies a generator to the latent vectors to generate modeled images. The training, updating, and applying are performed until modeled images that are input to the CMN result in corrupted images that approximate the observed images. Because the CMN is trained to mimic the corruption of the observed images, the final modeled images represent the uncorrupted version of the observed images.
    Type: Grant
    Filed: April 4, 2020
    Date of Patent: September 21, 2021
    Assignee: Lawrence Livermore National Security, LLC
    Inventors: Rushil Anirudh, Peer-Timo Bremer, Jayaraman Thiagarajan, Bhavya Kailkhura
  • Patent number: 11107194
    Abstract: A neural network is provided. The neural network includes 2n number of sampling units sequentially connected; and a plurality of processing units. A respective one of the plurality of processing units is between two adjacent sampling units of the 2n number of sampling units. A first sampling unit to an n-th sampling unit of the 2n number of sampling units are DeMux units. A respective one of the DeMux units is configured to rearrange pixels in a respective input image to the respective one of the DeMux units following a first scrambling rule to obtain a respective rearranged image. An (n+1)-th sampling unit to a (2n)-th sampling unit of the 2n number of sampling units are Mux units. A respective one of the Mux units is configured to combine respective m? number of input images to the respective one of the Mux units to obtain a respective combined image.
    Type: Grant
    Filed: August 19, 2019
    Date of Patent: August 31, 2021
    Assignee: BOE Technology Group Co., Ltd.
    Inventors: Hanwen Liu, Pablo Navarrete Michelini, Dan Zhu, Lijie Zhang
  • Patent number: 11106944
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that can initially train a machine-learning-logo classifier using synthetic training images and incrementally apply the machine-learning-logo classifier to identify logo images to replace the synthetic training images as training data. By incrementally applying the machine-learning-logo classifier to determine one or both of logo scores and positions for logos within candidate logo images, the disclosed systems can select logo images and corresponding annotations indicating positions for ground-truth logos. In some embodiments, the disclosed systems can further augment the iterative training of a machine-learning-logo classifier to include user curation and removal of incorrectly detected logos from candidate images, thereby avoiding the risk of model drift across training iterations.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: August 31, 2021
    Assignee: Adobe Inc.
    Inventors: Viswanathan Swaminathan, Saayan Mitra, Han Guo
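The incremental self-training loop in this abstract can be sketched as follows. The classifier, retraining routine, and curation step are caller-supplied stand-ins; the confidence threshold is an invented parameter.

```python
def incremental_training(candidates, classify, retrain, threshold=0.8,
                         rounds=3, curate=lambda kept: kept):
    """Each round: score candidate images with the current classifier,
    keep the confident detections (optionally user-curated to remove
    incorrectly detected logos, mitigating model drift), then retrain
    on the kept images. The initial `classify` is assumed to have been
    trained on synthetic images."""
    training_set = []
    for _ in range(rounds):
        kept = [(img, score) for img in candidates
                if (score := classify(img)) >= threshold]
        kept = curate(kept)  # user removes wrong detections
        training_set = [img for img, _ in kept]
        classify = retrain(training_set)
    return classify, training_set
```

Each round the real logo images selected this way replace the synthetic training data from the previous round.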
  • Patent number: 11093798
    Abstract: A method includes receiving a user object specified by a user. A similarity score is computed using a similarity function between the user object and one or more candidate objects in a database based on respective feature vectors. A first subset of the one or more candidate objects is presented to the user based on the respective computed similarity scores. First feedback is received from the user about the first subset of candidate objects. The similarity function is adjusted based on the received first feedback.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: August 17, 2021
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Francisco E. Torres, Hoda Eldardiry, Matthew Shreve, Gaurang Gavai, Chad Ramos
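A minimal sketch of adjusting a similarity function from user feedback: similarity is a weighted product over feature vectors, and feedback nudges the per-feature weights. The weighted dot product and the update rule are assumptions; the patent only requires that the similarity function be adjusted based on the feedback.

```python
def similarity(u, v, weights):
    """Weighted dot-product similarity between two feature vectors."""
    return sum(w * a * b for w, a, b in zip(weights, u, v))

def adjust_weights(weights, user_vec, candidate, liked, lr=0.1):
    """Nudge feature weights up where a liked candidate agrees with
    the user object, and down where a disliked one does."""
    sign = 1.0 if liked else -1.0
    return [w + sign * lr * a * b
            for w, a, b in zip(weights, user_vec, candidate)]
```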
  • Patent number: 11087488
    Abstract: Disclosed are methods, apparatus and systems for gesture recognition based on neural network processing. One exemplary method for identifying a gesture communicated by a subject includes receiving a plurality of images associated with the gesture, providing the plurality of images to a first 3-dimensional convolutional neural network (3D CNN) and a second 3D CNN, where the first 3D CNN is operable to produce motion information, where the second 3D CNN is operable to produce pose and color information, and where the first 3D CNN is operable to implement an optical flow algorithm to detect the gesture, fusing the motion information and the pose and color information to produce an identification of the gesture, and determining whether the identification corresponds to a singular gesture across the plurality of images using a recurrent neural network that comprises one or more long short-term memory units.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: August 10, 2021
    Assignee: AVODAH, INC.
    Inventors: Trevor Chandler, Dallas Nash, Michael Menefee
  • Patent number: 11087144
    Abstract: The present disclosure relates to systems, devices and methods for identifying objects and scenarios that have not been trained or are unidentifiable to vehicle perception sensors or vehicle assistive driving systems. Embodiments are directed to using a trained vehicle data set to identify target objects in vehicle sensor data. In one embodiment, a process is provided that includes running a scene detection operation on a vehicle to derive a vector of target object attributes of the vehicle sensor data and generating a vector representation for the scene detection operation and the attributes of the vehicle sensor data. The vector representation is compared to a familiarity vector to represent the effectiveness of the scene detection operation. In addition, the vector representation can be scored to identify one or more target objects or significant scenarios, including unidentifiable objects and/or driving scenes and scenarios, for reporting.
    Type: Grant
    Filed: October 10, 2018
    Date of Patent: August 10, 2021
    Assignee: Harman International Industries, Incorporated
    Inventors: Aaron Thompson, Honghao Tan
  • Patent number: 11080562
    Abstract: A method includes obtaining training samples that include images that depict objects and annotations of annotated key point locations for the objects. The method also includes training a machine learning model to determine estimated key point locations for the objects and key point uncertainty values for the estimated key point locations by minimizing a loss function that is based in part on a key point localization loss value that represents a difference between the annotated key point locations and the estimated key point locations and is weighted by the key point uncertainty values.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: August 3, 2021
    Assignee: Apple Inc.
    Inventors: Shreyas Saxena, Wenda Wang, Guanhang Wu, Nitish Srivastava, Dimitrios Kottas, Cuneyt Oncel Tuzel, Luciano Spinello, Ricardo da Silveira Cabral
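An uncertainty-weighted localization loss of the kind this abstract describes is often written in a heteroscedastic form: the error is divided by the predicted uncertainty, with a log term so the model cannot claim high uncertainty for free. The exact form below is one common choice, not necessarily the patent's; the patent only specifies that the localization loss is weighted by the uncertainty values.

```python
import math

def keypoint_loss(annotated, estimated, uncertainties):
    """L1 localization error down-weighted by predicted uncertainty,
    plus log(sigma) to penalize blanket high-uncertainty predictions."""
    total = 0.0
    for (ax, ay), (ex, ey), sigma in zip(annotated, estimated, uncertainties):
        err = abs(ax - ex) + abs(ay - ey)
        total += err / sigma + math.log(sigma)
    return total / len(annotated)
```

With this form, a hard-to-locate key point can reduce its penalty by predicting a larger uncertainty, but only at the cost of the log term.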
  • Patent number: 11055841
    Abstract: Systems and methods for determining the quality of concrete from construction site images are provided. For example, image data captured from a construction site using at least one image sensor may be obtained. The image data may be analyzed to identify a region of the image data depicting at least part of an object, where the object is of an object type and made, at least partly, of concrete. The image data may be further analyzed to determine a quality indication associated with the concrete. The object type of the object may be used to select a threshold. The quality indication may be compared with the selected threshold. An indication may be provided to a user based on a result of the comparison of the quality indication with the selected threshold.
    Type: Grant
    Filed: January 15, 2019
    Date of Patent: July 6, 2021
    Assignee: CONSTRU LTD
    Inventors: Michael Sasson, Ron Zass, Shalom Bellaish, Moshe Nachman
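The type-dependent threshold comparison at the end of this abstract is simple to sketch. The object types and threshold values below are invented for illustration.

```python
def quality_alert(quality_indication, object_type,
                  thresholds=None, default=0.8):
    """Select a threshold by the object's type, compare the concrete
    quality indication against it, and return True when the user
    should be alerted. Threshold values here are hypothetical."""
    if thresholds is None:
        thresholds = {"wall": 0.7, "column": 0.9}  # hypothetical per-type bars
    t = thresholds.get(object_type, default)
    return quality_indication < t
```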
  • Patent number: 11048986
    Abstract: Disclosed is a method for state decision of image data. The method for state decision of image data may include: acquiring first output data by a network function based on the image data; acquiring second output data by an algorithm having a different effect from the network function based on the image data; and deciding state information of the image data based on the first output data and the second output data.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: June 29, 2021
    Assignee: SuaLab Co., Ltd.
    Inventors: Hyeong Shin Kang, Gwang Min Kim, Kyeong Mok Byeon
  • Patent number: 11049014
    Abstract: A feature model, which calculates a feature value of an input image, is trained on a plurality of first images. First feature values corresponding one-to-one with the first images are calculated using the feature model, and feature distribution information representing a relationship between a plurality of classes and the first feature values is generated. When a detection model which determines, in an input image, each region with an object and a class to which the object belongs is trained on a plurality of second images, second feature values corresponding to regions determined within the second images by the detection model are calculated using the feature model, an evaluation value, which indicates class determination accuracy of the detection model, is modified using the feature distribution information and the second feature values, and the detection model is updated based on the modified evaluation value.
    Type: Grant
    Filed: October 9, 2019
    Date of Patent: June 29, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Kanata Suzuki, Toshio Endoh
  • Patent number: 11042802
    Abstract: Predictive analytic models are hierarchically built based on a training dataset, which includes pairs of input data and output data. First, the input data and the output data are preprocessed. A hierarchical clustering process is performed on the dataset. The hierarchical clustering process comprises level-1 input and output data clustering, level-2 input and output data clustering, and so on, up to level-K input and output data clustering, where K is an integer greater than one. A hierarchical model building process is performed. The hierarchical model building process comprises level-1 model building over level-1 clustered input and output data, level-2 model building over level-2 clustered input and output data, and so on, up to level-K model building over level-K clustered input and output data. At least one level-K predictive model is generated as the resulting built model.
    Type: Grant
    Filed: October 7, 2016
    Date of Patent: June 22, 2021
    Assignees: Global Optimal Technology, Inc., ShanDong Global Optimal Big Data Science and Tech
    Inventors: Hsiao-Dong Chiang, Bin Wang, Hartley Chiang
  • Patent number: 11037339
    Abstract: The present disclosure relates to systems and methods for reconstructing an image in an imaging system. The methods may include obtaining scan data representing an intensity distribution of energy detected at a plurality of detector elements and determining an image estimate. The methods may further include determining an objective function based on the scan data and the image estimate. The objective function may include a regularization parameter. The methods may further include iteratively updating the image estimate until the objective function satisfies a termination criterion, and for each update, updating the regularization parameter based on a gradient of an updated image estimate. The methods may further include outputting a final image based on the updated image estimate when the objective function satisfies the termination criterion.
    Type: Grant
    Filed: April 17, 2019
    Date of Patent: June 15, 2021
    Assignee: UIH AMERICA, INC.
    Inventors: Zhicong Yu, Stanislav Zabic
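A toy 1-D version of the iterative scheme in this abstract: minimize a data term plus a regularizer, and after each update shrink the regularization parameter based on the gradient of the updated image estimate. The identity forward model and the specific shrink rule are assumptions; the patent only states that the parameter is updated from that gradient.

```python
def reconstruct(scan, init, beta=0.5, step=0.2, iters=200):
    """Gradient descent on sum((x - scan)^2) + beta * sum(x^2), with
    beta shrinking as the gradient of the updated estimate shrinks
    (an assumed update rule for the regularization parameter)."""
    x = list(init)
    for _ in range(iters):
        grad = [2.0 * (xi - si) + 2.0 * beta * xi
                for xi, si in zip(x, scan)]
        x = [xi - step * gi for xi, gi in zip(x, grad)]
        grad_new = [2.0 * (xi - si) + 2.0 * beta * xi
                    for xi, si in zip(x, scan)]
        gnorm = sum(g * g for g in grad_new) ** 0.5
        beta = beta / (1.0 + gnorm)  # regularize less as updates settle
    return x, beta
```

In this toy the final image is output once the updates settle; a real system would instead stop when the objective meets its termination criterion.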
  • Patent number: 11037287
    Abstract: A method for measuring critical dimension is provided. The method includes the steps of: receiving a critical-dimension scanning electron microscopy (CD-SEM) image of a semiconductor wafer; performing an image-sharpening process and an image de-noise process on the CD-SEM image to generate a first image; performing an edge detection process on the first image to generate a second image; performing a connected-component labeling process on the second image to generate an output image; and calculating a critical-dimension information table of the semiconductor wafer according to the output image.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: June 15, 2021
    Assignee: WINBOND ELECTRONICS CORP.
    Inventors: Ching-Ya Huang, Tso-Hua Hung
  • Patent number: 11030457
    Abstract: An apparatus and method perform lane feature detection from an image according to a predetermined path geometry. An image including at least one path is received. The image may be an aerial image. Map data corresponding to the at least one path and defining the predetermined path geometry is selected. The image is modified according to the selected map data, including the predetermined path geometry. A lane feature prediction model is generated or configured based on the modified image. A subsequent image is provided to the lane feature prediction model for a prediction of at least one lane feature.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: June 8, 2021
    Assignee: HERE Global B.V.
    Inventor: Abhilshit Soni
  • Patent number: 11010932
    Abstract: An apparatus and a method for coloring line drawings are disclosed for: acquiring line drawing data; performing reduction processing on the line drawing data to a predetermined reduced size to obtain reduced line drawing data; coloring the reduced line drawing data based on a first learned model which is learned in advance using sample data; and coloring the original line drawing data, with the colored reduced data and the original line drawing data as inputs, based on a second learned model which is learned in advance.
    Type: Grant
    Filed: May 22, 2018
    Date of Patent: May 18, 2021
    Assignee: PREFERRED NETWORKS, INC.
    Inventor: Taizan Yonetsuji
  • Patent number: 11003945
    Abstract: Techniques are discussed for determining a location of a vehicle in an environment using a feature corresponding to a portion of an image representing an object in the environment which is associated with a frequently occurring object classification. For example, an image may be received and semantically segmented to associate pixels of the image with a label representing an object of an object type (e.g., extracting only those portions of the image which represent lane boundary markings). Features may then be extracted, or otherwise determined, which are limited to those portions of the image. In some examples, map data indicating a previously mapped location of a corresponding portion of the object may be used to determine a difference. The difference (or sum of differences for multiple observations) is then used to localize the vehicle with respect to the map.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: May 11, 2021
    Assignee: Zoox, Inc.
    Inventors: Derek Adams, Nathaniel Jon Kaiser, Michael Carsten Bosse
  • Patent number: 11003937
    Abstract: A system for extracting text from images comprises a processor configured to receive a digital copy of an image and identify a portion of the image, wherein the portion comprises text to be extracted. The processor further determines orientation of the portion of the image, and extracts text from the portion of the image considering the orientation of the portion of the image.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: May 11, 2021
    Assignee: Infrrd Inc
    Inventor: Akshay Uppal
  • Patent number: 11003941
    Abstract: Embodiments of the present application provide a character recognition method and device. The method includes obtaining a target image to be analyzed which contains a character (S101); inputting the target image into a pre-trained deep neural network to determine a feature map corresponding to a character region of the target image (S102); and performing character recognition on the feature map corresponding to the character region by the deep neural network to obtain the character contained in the target image (S103). The deep neural network is obtained by training with sample images, a result of labeling character regions in the sample images, and characters contained in the sample images. The method can improve the accuracy of character recognition.
    Type: Grant
    Filed: October 12, 2017
    Date of Patent: May 11, 2021
    Assignee: HANGZHOU HIKVISION DIGITAL TECHNOLOGY CO., LTD.
    Inventor: Gang Zheng
  • Patent number: 10997492
    Abstract: Aspects of the present invention are directed to computer-implemented techniques for performing data compression and conversion between data formats of varying degrees of precision, and more particularly for improving the inferencing (application) of artificial neural networks using a reduced precision (e.g., INT8) data format. Embodiments of the present invention generate candidate conversions of data output, then employ a relative measure of quality to identify the candidate conversion with the greatest accuracy (i.e., least divergence from the original higher precision values). The representation can then be used during inference to perform computations on the resulting output data.
    Type: Grant
    Filed: December 11, 2017
    Date of Patent: May 4, 2021
    Assignee: Nvidia Corporation
    Inventors: Szymon Migacz, Hao Wu, Dilip Sequeira, Ujval Kapasi, Maxim Milakov, Slawomir Kierat, Zacky Zhou, Yilin Zhang, Alex Fit-Florea
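The candidate-selection idea in this abstract can be sketched with a uniform quantization grid standing in for the reduced-precision format. Mean squared error is used below as the relative quality measure purely to keep the sketch short; Nvidia's published INT8 calibration work is usually described in terms of KL divergence between activation distributions, which this toy does not implement.

```python
def quantize(values, threshold, levels=256):
    """Clip to [-threshold, threshold], then round onto `levels`
    uniform steps (a stand-in for an INT8 grid)."""
    step = 2.0 * threshold / (levels - 1)
    out = []
    for v in values:
        v = max(-threshold, min(threshold, v))
        out.append(round(v / step) * step)
    return out

def best_candidate(values, thresholds, levels=256):
    """Generate a candidate conversion per threshold and keep the one
    that diverges least from the original high-precision values."""
    def divergence(t):
        q = quantize(values, t, levels)
        return sum((a - b) ** 2 for a, b in zip(values, q)) / len(values)
    return min(thresholds, key=divergence)
```

The trade-off the candidates explore: a small threshold resolves common small values finely but saturates outliers, while a large threshold covers the full range at coarser resolution.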
  • Patent number: 10997472
    Abstract: Methods, systems, and apparatus for classifying a new example using a comparison set of comparison examples. One method includes maintaining a comparison set, the comparison set including comparison examples and a respective label vector for each of the comparison examples, each label vector including a respective score for each label in a predetermined set of labels; receiving a new example; determining a respective attention weight for each comparison example by applying a neural network attention mechanism to the new example and to the comparison examples; and generating a respective label score for each label in the predetermined set of labels from, for each of the comparison examples, the respective attention weight for the comparison example and the respective label vector for the comparison example, in which the respective label score for each of the labels represents a likelihood that the label is a correct label for the new example.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: May 4, 2021
    Assignee: DeepMind Technologies Limited
    Inventors: Charles Blundell, Oriol Vinyals
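The attention-weighted classification in this abstract can be sketched directly: compute an attention weight per comparison example, then score each label as the weighted sum of the comparison examples' label vectors. The softmax over negative squared distance below is an assumed attention mechanism; the patent requires only that some neural network attention mechanism produce the weights.

```python
import math

def attention_weights(new_example, comparison_examples):
    """Softmax over negative squared distances to the comparison set."""
    scores = []
    for c in comparison_examples:
        d = sum((a - b) ** 2 for a, b in zip(new_example, c))
        scores.append(-d)
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def label_scores(new_example, comparison_examples, label_vectors):
    """Each label score is the attention-weighted sum of the
    comparison examples' label vectors for that label."""
    w = attention_weights(new_example, comparison_examples)
    n_labels = len(label_vectors[0])
    return [sum(wi * lv[j] for wi, lv in zip(w, label_vectors))
            for j in range(n_labels)]
```

With one-hot label vectors, the label scores form a distribution over the label set, and the highest score marks the most likely correct label for the new example.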
  • Patent number: 10990857
    Abstract: A processor-implemented object detection method is provided. The method receives an input image, generates a latent variable that indicates a feature distribution of the input image, and detects an object in the input image based on the generated latent variable.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: April 27, 2021
    Assignees: Samsung Electronics Co., Ltd., SNU R&DB Foundation
    Inventors: Bee Lim, Changhyun Kim, Kyoung Mu Lee
  • Patent number: 10990719
    Abstract: In an embodiment, agricultural intelligence computer system stores a digital model of nutrient content in soil which includes a plurality of values and expressions that define transformations of or relationships between the values and produce estimates of nutrient content values in soil. The agricultural intelligence computer receives nutrient content measurement values for a particular field at a particular time. The agricultural intelligence computer system uses the digital model of nutrient content to compute a nutrient content value for the particular field at the particular time. The agricultural intelligence computer system identifies a modeling uncertainty corresponding to the computed nutrient content value and a measurement uncertainty corresponding to the received measurement values. Based on the identified uncertainties, the modeled nutrient content value, and the received measurement values, the agricultural intelligence computer system computes an assimilated nutrient content value.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: April 27, 2021
    Assignee: The Climate Corporation
    Inventor: Wayne Tai Lee
  • Patent number: 10990768
    Abstract: A method and device are provided for translating object information and acquiring derivative information, including obtaining, based on the acquired source-object information, target-object information corresponding to the source object by translation, and outputting the target-object information. A language environment corresponding to the source object is different from a language environment corresponding to the target object. By applying the present disclosure, the range of machine-translation subjects can be expanded, the applicability of translation can be enhanced, and a user's requirements for translating objects can be met.
    Type: Grant
    Filed: April 10, 2017
    Date of Patent: April 27, 2021
    Inventors: Mei Tu, Heng Yu
  • Patent number: 10991074
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing images using an image processing neural network system. One of the systems includes a domain transformation neural network implemented by one or more computers, wherein the domain transformation neural network is configured to: receive an input image from a source domain; and process a network input comprising the input image from the source domain to generate a transformed image that is a transformation of the input image from the source domain to a target domain that is different from the source domain.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: April 27, 2021
    Assignee: Google LLC
    Inventors: Konstantinos Bousmalis, Nathan Silberman, David Martin Dohan, Dumitru Erhan, Dilip Krishnan
  • Patent number: 10984293
    Abstract: An image processing method includes: acquiring a video stream of a vehicle by a camera according to a user instruction; obtaining an image corresponding to a frame in the video stream; determining whether the image meets a predetermined criterion by inputting the image into a classification model, the classification model comprising a first convolutional neural network; in response to the image meeting the predetermined criterion, adding at least one of a target box or target segmentation information to the image by inputting the image into a target detection and segmentation model, the at least one of the target box or the target segmentation information corresponding to at least one of a vehicle part or vehicle damage of the vehicle, the target detection and segmentation model comprising a convolutional neural network; and displaying the at least one of the target box or the target segmentation information to the user.
    Type: Grant
    Filed: January 30, 2020
    Date of Patent: April 20, 2021
    Assignee: Advanced New Technologies Co., Ltd.
    Inventors: Xin Guo, Yuan Cheng, Chen Jiang, Zhihong Lu
  • Patent number: 10984247
    Abstract: An apparatus generates first context data representing a context of correction target text based on the correction target text, and corrects an error in the correction target text by inputting a character string of the correction target text, the generated first context data, and meta-information corresponding to the correction target text to a neural network that has been trained to correct an error in the correction target text by inputting a character string of text corresponding to training data, second context data representing a context of the text, and meta-information of the text.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: April 20, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Hiroto Takegawa, Yusuke Hamada, Satoru Sankoda
  • Patent number: 10977490
    Abstract: Systems and methods for analyzing image data to assess property damage are disclosed. According to certain aspects, a server may analyze segmented digital image data of a roof of a property using a convolutional neural network (CNN). The server may extract a set of features from a set of regions output by the CNN. Additionally, the server may analyze the set of features using an additional image model to generate a set of outputs indicative of a confidence level that actual hail damage is depicted in the set of regions.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: April 13, 2021
    Assignee: State Farm Mutual Automobile Insurance Company
    Inventors: Marigona Bokshi-Drotar, Jing Wan, Sandra Kane, Yuntao Li
  • Patent number: 10977783
    Abstract: The present disclosure discloses a system and a method. In an example implementation, the system and the method can receive a synthetic image at a first deep neural network and determine, via the first deep neural network, a prediction indicative of whether the synthetic image is machine-generated or is sourced from the real data distribution. The prediction can comprise a quantitative measure of the photorealism of the synthetic image.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: April 13, 2021
    Assignee: Ford Global Technologies, LLC
    Inventors: Nikita Jaipuria, Gautham Sholingar, Vidya Nariyambut Murali
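The discriminator's real-vs-synthetic prediction can double as a scalar photorealism measure. A minimal sketch, where a toy logistic model with hand-picked weights stands in for the trained deep neural network (all names and values here are illustrative placeholders, not from the patent):

```python
import math

def photorealism(features, weights, bias):
    """Score in [0, 1]: the (toy) discriminator's predicted probability that
    the image is sourced from the real data distribution rather than
    machine-generated; higher means more photorealistic."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

score = photorealism([0.8, 0.2], weights=[2.0, -1.0], bias=0.0)
```

The useful property is that the same number the discriminator optimizes during adversarial training is reused afterwards as an evaluation metric for the generator's output.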
  • Patent number: 10967824
    Abstract: An apparatus includes a first capture device, a second capture device and a processor. The first capture device may generate a first plurality of video frames corresponding to an interior view of a vehicle. The second capture device may generate a second plurality of video frames corresponding to an area outside of the vehicle. The processor may be configured to perform operations to detect objects in the video frames, detect occupants of the vehicle based on the objects detected in the first video frames, determine whether a potential collision is unavoidable based on the objects detected in the second video frames and select a reaction if the potential collision is unavoidable. The reaction may be selected to protect occupants determined to be vulnerable based on characteristics of the occupants. The characteristics may be determined by performing the operations on each of the occupants.
    Type: Grant
    Filed: April 28, 2018
    Date of Patent: April 6, 2021
    Assignee: Ambarella International LP
    Inventors: Shimon Pertsel, Patrick Martin
  • Patent number: 10964202
    Abstract: A home monitoring system according to an aspect of the present disclosure can prevent a safety accident when a child or a pet approaches a predetermined dangerous space or a home IoT device, by sensing the approach, controlling the home IoT device, and alerting the user. According to the home monitoring system of the present disclosure, one or more of an IoT device and a server may be associated with an artificial intelligence module, a robot, an AR (Augmented Reality) device, a VR (Virtual Reality) device, a device associated with 5G services, etc.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: March 30, 2021
    Assignee: LG ELECTRONICS INC.
    Inventors: Jichan Maeng, Beomoh Kim, Wonho Shin, Taehyun Kim, Jonghoon Chae
  • Patent number: 10957026
    Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate high-dynamic range lighting parameters for input low-dynamic range images. The high-dynamic range lighting parameters can be based on sky color, sky turbidity, sun color, sun shape, and sun position. Such input low-dynamic range images can be low-dynamic range panorama images or low-dynamic range standard images. Such a neural network system can apply the estimated high-dynamic range lighting parameters to objects added to the low-dynamic range images.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: March 23, 2021
    Assignee: Adobe Inc.
    Inventors: Jinsong Zhang, Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Jonathan Eisenmann, Jean-Francois Lalonde
  • Patent number: 10956780
    Abstract: A system and processing methods for refining a convolutional neural network (CNN) to capture characterizing features of different classes are disclosed. In some embodiments, the system is programmed to start with the filters in one of the last few convolutional layers of the initial CNN, which often correspond to more class-specific features, rank them to home in on the more relevant filters, and update the initial CNN by turning off the less relevant filters in that convolutional layer. The result is often a more generalized CNN, rid of filters that do not help characterize the classes.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: March 23, 2021
    Assignee: THE CLIMATE CORPORATION
    Inventors: Ying She, Wei Guan
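The filter "turn-off" step can be sketched with NumPy. This is a sketch under the assumptions that filters sit on axis 0 of the weight tensor and that relevance is any per-filter score (here, toy mean absolute weight); none of these names come from the patent:

```python
import numpy as np

def prune_filters(conv_weights, relevance, keep_ratio=0.5):
    """Rank filters in one convolutional layer by a relevance score and
    'turn off' (zero out) the least relevant ones."""
    n = conv_weights.shape[0]                 # filters along axis 0
    k = max(1, int(n * keep_ratio))
    keep = np.argsort(relevance)[::-1][:k]    # indices of top-k relevant filters
    mask = np.zeros(n, dtype=bool)
    mask[keep] = True
    pruned = conv_weights.copy()
    pruned[~mask] = 0.0                       # disable the less relevant filters
    return pruned, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))             # 8 filters, 3x3 kernels, 3 channels
relevance = np.abs(w).mean(axis=(1, 2, 3))    # toy relevance: mean |weight|
pruned, mask = prune_filters(w, relevance, keep_ratio=0.25)
```

Zeroing a filter's weights removes its contribution to every downstream activation, which is the simplest way to "turn off" a filter without changing the network's shape.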
  • Patent number: 10949684
    Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to generate a pair of synthetic stereo images and a corresponding synthetic depth map with an image synthesis engine wherein the synthetic stereo images correspond to real stereo images acquired by a stereo camera and the synthetic depth map is a three-dimensional (3D) map corresponding to a 3D scene viewed by the stereo camera and process each image of the pair of synthetic stereo images independently using a generative adversarial network (GAN) to generate a fake image, wherein the fake image corresponds to one of the synthetic stereo images.
    Type: Grant
    Filed: May 8, 2019
    Date of Patent: March 16, 2021
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Nikita Jaipuria, Gautham Sholingar, Vidya Nariyambut Murali
  • Patent number: 10949962
    Abstract: Provided are an x-ray detecting type of component counter and a method for counting components using the same. The component counter includes: an image obtaining module to obtain an image of an object with an x-ray tube and a flat detector; an inputting frame located at the front of the image obtaining module and having a guiding surface; a transferring tray to move between the image obtaining module and the inputting frame along a moving guide installed at the guiding surface; and a foreign object sensor disposed at the inputting frame to detect a foreign object; wherein the detector has a horizontal section corresponding to an investigating surface of the transferring tray.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: March 16, 2021
    Assignee: XAVIS CO., LTD
    Inventors: Hyeong-Cheol Kim, Bong-Jin Choi, Yong-Han Jang
  • Patent number: 10943672
    Abstract: The present invention relates to a web-based computer-aided method and a system for providing personalized recommendations about drug use, based on pharmacogenetic information regarding genes and genetic variants associated with metabolism and genes and genetic variants which are not associated with metabolism, and which comprises automatically generating and displaying, by means of a graphical user interface (GUI) of a dynamic webpage, the personalized recommendations, highlighting the ones associated with the highest adverse drug reactions. The present invention also relates to a computer-readable medium which contains program instructions for a computer to perform the method for providing personalized recommendations about drug use of the invention. The present invention also relates to a web-based computer-aided method and a system for generating a dynamic webpage, and a further computer-readable medium which contains program instructions for a computer to perform the method for generating a dynamic webpage.
    Type: Grant
    Filed: December 10, 2014
    Date of Patent: March 9, 2021
    Inventors: Jordi Pratsevall Garcia, David Redondo Amado, Miquel Tuson Segarra, Silvia Vilches Saez, Jordi Espadaler Mazo, Ariana Salavert Larrosa, Miquel Angel Bonachera Sierra
  • Patent number: 10935892
    Abstract: Methods and systems are provided that, in some embodiments, print and process a layer. The layer can be on a wafer or on an application panel. Thereafter, locations of the features that were actually printed and processed are measured. Based upon differences between the measured locations and the designed locations for those features, at least one distortion model is created. Each distortion model is inverted to create a corresponding correction model. When there are multiple sections, a distortion model and a correction model can be created for each section. Multiple correction models can be combined to create a global correction model.
    Type: Grant
    Filed: May 15, 2017
    Date of Patent: March 2, 2021
    Assignee: APPLIED MATERIALS, INC.
    Inventors: Tamer Coskun, Thomas L. Laidig, Jang Fung Chen
  • Patent number: 10937243
    Abstract: Systems and methods for providing a real-world object interface in virtual, augmented, and mixed reality (xR) applications. In some embodiments, an Information Handling System (IHS) may include one or more processors and a memory coupled to the one or more processors, the memory including program instructions stored thereon that, upon execution by the one or more processors, cause the IHS to: receive a video frame during execution of an xR application; instruct a user wearing a Head-Mounted Display (HMD) to perform a manipulation of a real-world object detected in the video frame; receive additional video frames; determine whether the user has performed the manipulation by tracking the object in the additional video frames; and execute an operation in response to the determination.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: March 2, 2021
    Assignee: Dell Products, L.P.
    Inventors: Daniel L. Hamlin, Yagiz Can Yildiz
  • Patent number: 10936916
    Abstract: An exemplary device for classifying an image includes a receiving unit that receives image data. The device also includes a hardware processor including a neural network architecture to extract a plurality of features from the image data, filter each feature extracted from the image data, concatenate the plurality of filtered features to form an image vector, evaluate the plurality of concatenated features in first and second layers of a plurality of fully connected layers of the neural network architecture based on an amount of deviation in the features determined at each fully connected layer, and generate a data signal based on an output of the plurality of fully connected layers. A transmitting unit sends the data signal to a peripheral or remote device.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: March 2, 2021
    Assignee: BOOZ ALLEN HAMILTON INC.
    Inventors: Arash Rahnama-Moghaddam, Andre Tai Nguyen
  • Patent number: 10926579
    Abstract: Expert system and method for computing an application procedure for painting vinyl panels a target color at a paint application site while preventing vinyl warping. A target reflectivity indicator is computed for the vinyl panels once painted. When the target color and target reflectivity indicator are met by application of the multiple paint layers, the application procedure computed identifies a pigmentable waterborne paint composition, one or more proportioned paint pigments to achieve the target color at a predictable transmissivity and a preparatory waterborne paint composition considering a corresponding predictable reflectivity. Applying the pigmentable waterborne paint composition, once pigmented with the one or more proportioned paint pigments, over the preparatory waterborne paint composition meets the target reflectivity indicator for the vinyl panels at the target color.
    Type: Grant
    Filed: June 2, 2017
    Date of Patent: February 23, 2021
    Assignee: SPRAY-NET FRANCHISES INC.
    Inventors: Carmelo Marsala, Peiman Arabi
  • Patent number: 10929561
    Abstract: A device for removal of personally identifiable data receives monitoring data acquired by a sensor. The monitoring data includes personally identifiable data relating to one or more individuals being monitored. The system processes the acquired monitoring data to remove the personally identifiable data by at least one of abstraction or redaction while the monitoring data is located on the device. The processed monitoring data, with the personally identifiable data removed, can thereby be transmitted external to the device with reduced security risk.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: February 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Donna K. Long, John B. Hesketh, LaSean T. Smith, Kenneth L. Kiemele, Evan L. Jones
  • Patent number: 10922580
    Abstract: A method includes receiving, by a device, a first image of a scene and a second image of at least a portion of the scene. The method includes identifying a first plurality of features from the first image and comparing the first plurality of features to a second plurality of features from the second image to identify a common feature. The method includes determining a particular subset of pixels that corresponds to the common feature, the particular subset of pixels corresponding to a first subset of pixels of the first image and a second subset of pixels of the second image. The method also includes generating a first image quality estimate of the first image based on a comparison of a first degree of variation within the first subset of pixels and a second degree of variation within the second subset of pixels.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: February 16, 2021
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Amy Ruth Reibman, Zhu Liu, Lee Begeja, Bernard S. Renger, David Crawford Gibbon, Behzad Shahraray, Raghuraman Gopalan, Eric Zavesky
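The core comparison — degree of variation within corresponding pixel subsets of a shared feature — can be sketched as follows. A sketch only, using standard deviation as the variation measure and a simple ratio as the quality estimate; the patent does not specify these particular choices:

```python
import numpy as np

def quality_estimate(patch_a, patch_b):
    """Estimate the relative quality of image A from the degree of variation
    in corresponding patches depicting a common feature. A blurrier or
    noisier capture of the same feature typically shows less local detail."""
    var_a = float(np.std(patch_a))
    var_b = float(np.std(patch_b))
    return var_a / (var_b + 1e-8)    # > 1: patch A retains more variation than B

# Same scene feature captured twice: one sharp patch, one low-contrast patch.
sharp = np.array([[0.0, 255.0], [255.0, 0.0]])
blurry = np.array([[100.0, 150.0], [150.0, 100.0]])
score = quality_estimate(sharp, blurry)
```

Comparing matched patches of the *same* feature, rather than whole images, keeps genuine scene differences from being mistaken for quality differences.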
  • Patent number: 10909743
    Abstract: Generating texture maps for use in rendering visual output. According to a first aspect, there is provided a method for generating textures for use in rendering visual output, the method comprising the steps of: generating, using a first hierarchical algorithm, a first texture from one or more sets of initialisation data; and selectively refining the first texture, using one or more further hierarchical algorithms, to generate one or more further textures from at least a section of the first texture and one or more sets of further initialisation data; wherein at least a section of each of the one or more further textures differs from the first texture.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: February 2, 2021
    Assignee: Magic Pony Technology Limited
    Inventors: Lucas Theis, Zehan Wang, Robert David Bishop
  • Patent number: 10898222
    Abstract: A method of segmenting images of biological specimens using adaptive classification to segment a biological specimen into different types of tissue regions. The segmentation is performed by, first, extracting features from the neighborhood of a grid of points (GPs) sampled on the whole-slide (WS) image and classifying them into different tissue types. Secondly, an adaptive classification procedure is performed where some or all of the GPs in a WS image are classified using a pre-built training database, and classification confidence scores for the GPs are generated. The classified GPs with high confidence scores are utilized to generate an adaptive training database, which is then used to re-classify the low confidence GPs. The motivation for the method is that strong variation in tissue appearance makes the classification problem more challenging, while good classification results are obtained when the training and test data originate from the same slide.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: January 26, 2021
    Assignee: VENTANA MEDICAL SYSTEMS, INC.
    Inventors: Joerg Bredno, Christophe Chefd'hotel, Ting Chen, Srinivas Chukka, Kien Nguyen
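The two-pass adaptive scheme in this abstract can be sketched as follows. Assumptions not in the patent: the base classifier returns per-point labels and confidences, and re-classification uses the high-confidence grid points as a 1-nearest-neighbour training set:

```python
import numpy as np

def adaptive_classify(features, base_classify, threshold=0.8):
    """First pass: classify every grid point (GP) with the pre-built model.
    Second pass: re-classify low-confidence GPs using the high-confidence
    GPs from the same slide as a 1-nearest-neighbour training set."""
    labels, conf = base_classify(features)
    high = conf >= threshold
    if high.any() and (~high).any():
        for i in np.where(~high)[0]:
            d = np.linalg.norm(features[high] - features[i], axis=1)
            labels[i] = labels[high][np.argmin(d)]   # nearest confident GP wins
    return labels

def toy_base(features):
    """Stand-in for the classifier built from the pre-built training database."""
    labels = (features[:, 0] > 3.0).astype(int)
    conf = np.where(np.abs(features[:, 0] - 3.0) > 1.0, 0.9, 0.3)
    return labels, conf

feats = np.array([[0.0], [0.1], [5.0], [5.1], [2.6]])
out = adaptive_classify(feats, toy_base)
```

In this toy run the ambiguous point at 2.6 is initially labelled 0 with low confidence, then flips to 1 because its nearest high-confidence neighbour (5.0) carries label 1 — the same-slide, high-confidence points override the generic model where it is unsure.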