Neural Networks Patents (Class 382/156)
  • Patent number: 11093798
    Abstract: A method includes receiving a user object specified by a user. A similarity score is computed using a similarity function between the user object and one or more candidate objects in a database based on respective feature vectors. A first subset of the one or more candidate objects is presented to the user based on the respective computed similarity scores. First feedback is received from the user about the first subset of candidate objects. The similarity function is adjusted based on the received first feedback.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: August 17, 2021
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Francisco E. Torres, Hoda Eldardiry, Matthew Shreve, Gaurang Gavai, Chad Ramos
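The loop this abstract claims — score candidates against a user object, collect feedback, adjust the similarity function — can be sketched as a feature-weighted cosine similarity whose weights are nudged by feedback. All names and the update rule below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def weighted_cosine(u, v, w):
    """Cosine similarity between feature vectors u and v with per-feature weights w."""
    uw, vw = u * w, v * w
    return float(uw @ vw / (np.linalg.norm(uw) * np.linalg.norm(vw) + 1e-12))

def rank_candidates(query, candidates, w, k=3):
    """Return indices of the top-k candidates by weighted similarity to the query."""
    scores = [weighted_cosine(query, c, w) for c in candidates]
    return np.argsort(scores)[::-1][:k]

def adjust_weights(w, query, candidate, liked, lr=0.1):
    """Illustrative feedback update: emphasize features where the query and a
    liked candidate agree; de-emphasize them for a disliked candidate.
    Assumes features are normalized to [0, 1]."""
    agreement = 1.0 - np.abs(query - candidate)
    w = w + lr * (agreement if liked else -agreement)
    return np.clip(w, 0.01, None)  # keep all weights positive

query = np.array([0.9, 0.1, 0.5])
candidates = np.array([[0.8, 0.2, 0.4], [0.1, 0.9, 0.5], [0.9, 0.1, 0.9]])
w = np.ones(3)
top = rank_candidates(query, candidates, w, k=2)          # first subset shown
w = adjust_weights(w, query, candidates[top[0]], liked=True)  # first feedback
```

Subsequent rankings then use the adjusted weights, so the similarity function drifts toward the user's notion of similarity.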
  • Patent number: 11087144
    Abstract: The present disclosure relates to systems, devices and methods for identifying objects and scenarios that have not been trained or are unidentifiable to vehicle perception sensors or vehicle assistive driving systems. Embodiments are directed to using a trained vehicle data set to identify target objects in vehicle sensor data. In one embodiment, a process is provided that includes running a scene detection operation on a vehicle to derive a vector of target object attributes from the vehicle sensor data and generating a vector representation for the scene detection operation and the attributes of the vehicle sensor data. The vector representation is compared to a familiarity vector to represent the effectiveness of the scene detection operation. In addition, the vector representation can be scored to identify one or more target objects or significant scenarios, including unidentifiable objects and/or driving scenes or scenarios, for reporting.
    Type: Grant
    Filed: October 10, 2018
    Date of Patent: August 10, 2021
    Assignee: Harman International Industries, Incorporated
    Inventors: Aaron Thompson, Honghao Tan
  • Patent number: 11087488
    Abstract: Disclosed are methods, apparatus and systems for gesture recognition based on neural network processing. One exemplary method for identifying a gesture communicated by a subject includes receiving a plurality of images associated with the gesture, providing the plurality of images to a first 3-dimensional convolutional neural network (3D CNN) and a second 3D CNN, where the first 3D CNN is operable to produce motion information, where the second 3D CNN is operable to produce pose and color information, and where the first 3D CNN is operable to implement an optical flow algorithm to detect the gesture, fusing the motion information and the pose and color information to produce an identification of the gesture, and determining whether the identification corresponds to a singular gesture across the plurality of images using a recurrent neural network that comprises one or more long short-term memory units.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: August 10, 2021
    Assignee: AVODAH, INC.
    Inventors: Trevor Chandler, Dallas Nash, Michael Menefee
  • Patent number: 11080562
    Abstract: A method includes obtaining training samples that include images that depict objects and annotations of annotated key point locations for the objects. The method also includes training a machine learning model to determine estimated key point locations for the objects and key point uncertainty values for the estimated key point locations by minimizing a loss function that is based in part on a key point localization loss value that represents a difference between the annotated key point locations and the estimated key point locations and is weighted by the key point uncertainty values.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: August 3, 2021
    Assignee: Apple Inc.
    Inventors: Shreyas Saxena, Wenda Wang, Guanhang Wu, Nitish Srivastava, Dimitrios Kottas, Cuneyt Oncel Tuzel, Luciano Spinello, Ricardo da Silveira Cabral
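A localization loss weighted by per-keypoint uncertainty, as this abstract describes, resembles a standard heteroscedastic regression loss. A minimal numpy sketch under that assumption (the exact weighting in the patent may differ):

```python
import numpy as np

def keypoint_loss(pred, annot, log_sigma):
    """Localization error weighted by predicted uncertainty.
    pred, annot: (N, 2) keypoint coordinates; log_sigma: (N,) log-uncertainties.
    Dividing by sigma down-weights keypoints the model is unsure about;
    the + log_sigma term stops sigma from growing without bound."""
    sigma = np.exp(log_sigma)
    err = np.linalg.norm(pred - annot, axis=1)   # per-keypoint distance
    return float(np.mean(err / sigma + log_sigma))

pred  = np.array([[10.0, 12.0], [30.0, 31.0]])
annot = np.array([[10.0, 10.0], [30.0, 30.0]])
loss_confident = keypoint_loss(pred, annot, np.array([0.0, 0.0]))  # sigma = 1
loss_unsure    = keypoint_loss(pred, annot, np.array([1.0, 1.0]))  # sigma = e
```

With unit uncertainty the loss reduces to the mean localization error; raising sigma trades localization accuracy against the log-penalty.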
  • Patent number: 11055841
    Abstract: Systems and methods for determining the quality of concrete from construction site images are provided. For example, image data captured from a construction site using at least one image sensor may be obtained. The image data may be analyzed to identify a region of the image data depicting at least part of an object, where the object is of an object type and made, at least partly, of concrete. The image data may be further analyzed to determine a quality indication associated with the concrete. The object type of the object may be used to select a threshold. The quality indication may be compared with the selected threshold. An indication may be provided to a user based on a result of the comparison of the quality indication with the selected threshold.
    Type: Grant
    Filed: January 15, 2019
    Date of Patent: July 6, 2021
    Assignee: CONSTRU LTD
    Inventors: Michael Sasson, Ron Zass, Shalom Bellaish, Moshe Nachman
  • Patent number: 11048986
    Abstract: Disclosed is a method for state decision of image data. The method for state decision of image data may include: acquiring first output data from a network function based on the image data; acquiring second output data from an algorithm having a different effect from the network function, based on the image data; and deciding state information of the image data based on the first output data and the second output data.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: June 29, 2021
    Assignee: SuaLab Co., Ltd.
    Inventors: Hyeong Shin Kang, Gwang Min Kim, Kyeong Mok Byeon
  • Patent number: 11049014
    Abstract: A feature model, which calculates a feature value of an input image, is trained on a plurality of first images. First feature values corresponding one-to-one with the first images are calculated using the feature model, and feature distribution information representing a relationship between a plurality of classes and the first feature values is generated. When a detection model which determines, in an input image, each region with an object and a class to which the object belongs is trained on a plurality of second images, second feature values corresponding to regions determined within the second images by the detection model are calculated using the feature model, an evaluation value, which indicates class determination accuracy of the detection model, is modified using the feature distribution information and the second feature values, and the detection model is updated based on the modified evaluation value.
    Type: Grant
    Filed: October 9, 2019
    Date of Patent: June 29, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Kanata Suzuki, Toshio Endoh
  • Patent number: 11042802
    Abstract: Predictive analytic models are hierarchically built based on a training dataset, which includes pairs of input data and output data. First, the input data and the output data are preprocessed. A hierarchical clustering process is performed on the dataset. The hierarchical clustering process comprises level-1 input and output data clustering, level-2 input and output data clustering, and so on, up to level-K input and output data clustering, where K is an integer greater than one. A hierarchical model building process is performed. The hierarchical model building process comprises level-1 model building over level-1 clustered input and output data, level-2 model building over level-2 clustered input and output data, and so on, up to level-K model building over level-K clustered input and output data. At least one level-K predictive model is generated as the resulting built model.
    Type: Grant
    Filed: October 7, 2016
    Date of Patent: June 22, 2021
    Assignees: Global Optimal Technology, Inc., ShanDong Global Optimal Big Data Science and Tech
    Inventors: Hsiao-Dong Chiang, Bin Wang, Hartley Chiang
  • Patent number: 11037339
    Abstract: The present disclosure relates to systems and methods for reconstructing an image in an imaging system. The methods may include obtaining scan data representing an intensity distribution of energy detected at a plurality of detector elements and determining an image estimate. The methods may further include determining an objective function based on the scan data and the image estimate. The objective function may include a regularization parameter. The methods may further include iteratively updating the image estimate until the objective function satisfies a termination criterion, and for each update, updating the regularization parameter based on a gradient of an updated image estimate. The methods may further include outputting a final image based on the updated image estimate when the objective function satisfies the termination criterion.
    Type: Grant
    Filed: April 17, 2019
    Date of Patent: June 15, 2021
    Assignee: UIH AMERICA, INC.
    Inventors: Zhicong Yu, Stanislav Zabic
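As a toy illustration of the abstract's iteration — update an image estimate, then re-derive the regularization parameter from a gradient of the updated estimate — here is a tiny least-squares sketch. The adaptation rule for the parameter is an assumption for illustration; the abstract does not specify it:

```python
import numpy as np

def reconstruct(A, b, iters=200, step=0.05):
    """Minimize ||Ax - b||^2 + lam * ||x||^2 by gradient descent,
    re-deriving lam each iteration from the current data-fit gradient."""
    x = np.zeros(A.shape[1])   # initial image estimate
    lam = 1.0                  # initial regularization parameter
    for _ in range(iters):
        grad_data = 2 * A.T @ (A @ x - b)   # gradient of the data-fit term
        grad = grad_data + 2 * lam * x      # add the regularization gradient
        x = x - step * grad                 # update the image estimate
        # illustrative adaptation: shrink lam as the estimate stabilizes
        lam = 0.1 / (np.linalg.norm(grad_data) + 1.0)
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([4.0, 3.0])
x = reconstruct(A, b)  # close to the least-squares solution [2, 3],
                       # biased slightly by the regularizer
```

A real reconstruction would replace A with the system's forward projector and ||x||^2 with an edge-preserving regularizer, but the update/adapt/iterate structure is the same.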
  • Patent number: 11037287
    Abstract: A method for measuring critical dimension is provided. The method includes the steps of: receiving a critical-dimension scanning electron microscopy (CD-SEM) image of a semiconductor wafer; performing an image-sharpening process and an image de-noise process on the CD-SEM image to generate a first image; performing an edge detection process on the first image to generate a second image; performing a connected-component labeling process on the second image to generate an output image; and calculating a critical-dimension information table of the semiconductor wafer according to the output image.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: June 15, 2021
    Assignee: WINBOND ELECTRONICS CORP.
    Inventors: Ching-Ya Huang, Tso-Hua Hung
  • Patent number: 11030457
    Abstract: An apparatus and method perform lane feature detection from an image according to a predetermined path geometry. An image including at least one path is received. The image may be an aerial image. Map data corresponding to the at least one path and defining the predetermined path geometry is selected. The image is modified according to the selected map data, including the predetermined path geometry. A lane feature prediction model is generated or configured based on the modified image. A subsequent image is provided to the lane feature prediction model for a prediction of at least one lane feature.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: June 8, 2021
    Assignee: HERE Global B.V.
    Inventor: Abhilshit Soni
  • Patent number: 11010932
    Abstract: An apparatus and a method for coloring a line drawing are disclosed for: acquiring line drawing data; performing reduction processing on the line drawing data to a predetermined reduced size to obtain reduced line drawing data; coloring the reduced line drawing data based on a first learned model which is learned in advance using sample data; and coloring the original line drawing data, with the colored reduced data and the original line drawing data as inputs, based on a second learned model which is learned in advance.
    Type: Grant
    Filed: May 22, 2018
    Date of Patent: May 18, 2021
    Assignee: PREFERRED NETWORKS, INC.
    Inventor: Taizan Yonetsuji
  • Patent number: 11003941
    Abstract: Embodiments of the present application provide a character recognition method and device. The method includes obtaining a target image to be analyzed which contains a character (S101); inputting the target image into a pre-trained deep neural network to determine a feature map corresponding to a character region of the target image (S102); and performing character recognition on the feature map corresponding to the character region by the deep neural network to obtain the character contained in the target image (S103). The deep neural network is obtained by training with sample images, a result of labeling character regions in the sample images, and characters contained in the sample images. The method can improve the accuracy of character recognition.
    Type: Grant
    Filed: October 12, 2017
    Date of Patent: May 11, 2021
    Assignee: HANGZHOU HIKVISION DIGITAL TECHNOLOGY CO., LTD.
    Inventor: Gang Zheng
  • Patent number: 11003945
    Abstract: Techniques are discussed for determining a location of a vehicle in an environment using a feature corresponding to a portion of an image representing an object in the environment which is associated with a frequently occurring object classification. For example, an image may be received and semantically segmented to associate pixels of the image with a label representing an object of an object type (e.g., extracting only those portions of the image which represent lane boundary markings). Features may then be extracted, or otherwise determined, which are limited to those portions of the image. In some examples, map data indicating a previously mapped location of a corresponding portion of the object may be used to determine a difference. The difference (or sum of differences for multiple observations) is then used to localize the vehicle with respect to the map.
    Type: Grant
    Filed: May 22, 2019
    Date of Patent: May 11, 2021
    Assignee: Zoox, Inc.
    Inventors: Derek Adams, Nathaniel Jon Kaiser, Michael Carsten Bosse
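The localization step above — differencing observed feature positions against their previously mapped positions — can be sketched as estimating the offset that best aligns the two point sets. This is a deliberately simplified, translation-only illustration, not the patented method:

```python
import numpy as np

def localize_offset(observed, mapped):
    """observed: (N, 2) feature positions as seen from the vehicle's current
    pose estimate; mapped: (N, 2) corresponding positions in the map.
    The least-squares translation aligning the sets is the mean difference."""
    return np.mean(mapped - observed, axis=0)

# lane-marking points as the vehicle believes it sees them, vs. the map
observed = np.array([[0.0, 1.0], [5.0, 1.1], [10.0, 0.9]])
mapped   = np.array([[0.5, 1.3], [5.5, 1.4], [10.5, 1.2]])
offset = localize_offset(observed, mapped)  # pose correction, here [0.5, 0.3]
corrected = observed + offset               # features now line up with the map
```

A full localizer would also estimate rotation and fuse many observations over time, but the core signal is this per-feature difference.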
  • Patent number: 11003937
    Abstract: A system for extracting text from images comprises a processor configured to receive a digital copy of an image and identify a portion of the image, wherein the portion comprises text to be extracted. The processor further determines the orientation of the portion of the image, and extracts text from the portion of the image considering the orientation of that portion.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: May 11, 2021
    Assignee: Infrrd Inc
    Inventor: Akshay Uppal
  • Patent number: 10997472
    Abstract: Methods, systems, and apparatus for classifying a new example using a comparison set of comparison examples. One method includes maintaining a comparison set, the comparison set including comparison examples and a respective label vector for each of the comparison examples, each label vector including a respective score for each label in a predetermined set of labels; receiving a new example; determining a respective attention weight for each comparison example by applying a neural network attention mechanism to the new example and to the comparison examples; and generating a respective label score for each label in the predetermined set of labels from, for each of the comparison examples, the respective attention weight for the comparison example and the respective label vector for the comparison example, in which the respective label score for each of the labels represents a likelihood that the label is a correct label for the new example.
    Type: Grant
    Filed: May 19, 2017
    Date of Patent: May 4, 2021
    Assignee: DeepMind Technologies Limited
    Inventors: Charles Blundell, Oriol Vinyals
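The classification rule in this abstract — attention weights over a comparison set, combined with the comparison examples' label vectors — can be sketched in numpy, with cosine-similarity attention standing in for the learned embedding and attention networks:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def classify(new_example, comparison_examples, label_vectors):
    """Label scores for a new example as an attention-weighted sum of the
    comparison examples' label vectors."""
    sims = np.array([
        c @ new_example / (np.linalg.norm(c) * np.linalg.norm(new_example))
        for c in comparison_examples
    ])
    attn = softmax(sims)                      # one weight per comparison example
    return attn @ np.asarray(label_vectors)   # (num_labels,) score vector

comparison = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.array([[1.0, 0.0], [0.0, 1.0]])   # one-hot label vectors
scores = classify(np.array([0.9, 0.1]), comparison, labels)
# the first label receives the larger score, since the new example
# is most similar to the first comparison example
```

Because the label vectors here are one-hot, the scores form a distribution over labels; the trained system replaces raw cosine similarity with a learned attention mechanism.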
  • Patent number: 10997492
    Abstract: Aspects of the present invention are directed to computer-implemented techniques for performing data compression and conversion between data formats of varying degrees of precision, and more particularly for improving the inferencing (application) of artificial neural networks using a reduced precision (e.g., INT8) data format. Embodiments of the present invention generate candidate conversions of data output, then employ a relative measure of quality to identify the candidate conversion with the greatest accuracy (i.e., least divergence from the original higher precision values). The representation can then be used during inference to perform computations on the resulting output data.
    Type: Grant
    Filed: December 11, 2017
    Date of Patent: May 4, 2021
    Assignee: Nvidia Corporation
    Inventors: Szymon Migacz, Hao Wu, Dilip Sequeira, Ujval Kapasi, Maxim Milakov, Slawomir Kierat, Zacky Zhou, Yilin Zhang, Alex Fit-Florea
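The candidate-conversion search described — generate reduced-precision conversions, then keep the one that diverges least from the original values — can be sketched as a clipping-threshold sweep scored by KL divergence. This is a simplified illustration of that style of calibration; the details of the patented method will differ:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """KL divergence between two (unnormalized) histograms."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def best_threshold(activations, candidates, bins=128):
    """Pick the clipping threshold whose INT8-style quantization best
    preserves the activation distribution (least KL divergence)."""
    best, best_div = None, np.inf
    for t in candidates:
        clipped = np.clip(activations, -t, t)
        scale = t / 127.0                       # 256-level symmetric quantization
        deq = np.round(clipped / scale) * scale  # quantize, then dequantize
        hist_p, edges = np.histogram(activations, bins=bins)
        hist_q, _ = np.histogram(deq, bins=edges)
        div = kl_divergence(hist_p.astype(float), hist_q.astype(float))
        if div < best_div:
            best, best_div = t, div
    return best

rng = np.random.default_rng(0)
acts = rng.normal(0.0, 1.0, 10000)  # activations mostly within [-4, 4]
t = best_threshold(acts, candidates=[1.0, 4.0, 100.0])
# 1.0 clips too aggressively, 100.0 wastes INT8 resolution; 4.0 wins
```

A too-small threshold destroys the tails; a too-large one spreads 256 levels over empty range. The divergence measure penalizes both, selecting the middle candidate.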
  • Patent number: 10990719
    Abstract: In an embodiment, agricultural intelligence computer system stores a digital model of nutrient content in soil which includes a plurality of values and expressions that define transformations of or relationships between the values and produce estimates of nutrient content values in soil. The agricultural intelligence computer receives nutrient content measurement values for a particular field at a particular time. The agricultural intelligence computer system uses the digital model of nutrient content to compute a nutrient content value for the particular field at the particular time. The agricultural intelligence computer system identifies a modeling uncertainty corresponding to the computed nutrient content value and a measurement uncertainty corresponding to the received measurement values. Based on the identified uncertainties, the modeled nutrient content value, and the received measurement values, the agricultural intelligence computer system computes an assimilated nutrient content value.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: April 27, 2021
    Assignee: The Climate Corporation
    Inventor: Wayne Tai Lee
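Combining a modeled value and a measured value, each with its own uncertainty, is classically done by inverse-variance weighting. A sketch of that standard approach (the patent's exact assimilation formula is not given in the abstract, and the example numbers are made up):

```python
def assimilate(modeled, model_sigma, measured, meas_sigma):
    """Inverse-variance weighted combination: the more certain source
    (smaller sigma) pulls the assimilated value toward itself."""
    w_model = 1.0 / model_sigma**2
    w_meas = 1.0 / meas_sigma**2
    value = (w_model * modeled + w_meas * measured) / (w_model + w_meas)
    sigma = (w_model + w_meas) ** -0.5   # combined uncertainty
    return value, sigma

# hypothetical field: modeled nitrogen 40 (sigma 10) vs. measured 60 (sigma 5)
value, sigma = assimilate(40.0, 10.0, 60.0, 5.0)
# the measurement is more certain, so the result lands nearer 60,
# and the combined uncertainty is smaller than either input's
```

Note the assimilated uncertainty is always below the smaller of the two input uncertainties, which is why combining the model with even a noisy measurement helps.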
  • Patent number: 10990768
    Abstract: A method and device are provided for translating object information and acquiring derivative information, including obtaining, based on the acquired source-object information, target-object information corresponding to the source object by translation, and outputting the target-object information. A language environment corresponding to the source object is different from a language environment corresponding to the target object. By applying the present disclosure, the range of machine translation subjects can be expanded, the applicability of translation can be enhanced, and a user's requirements for translation of objects can be met.
    Type: Grant
    Filed: April 10, 2017
    Date of Patent: April 27, 2021
    Inventors: Mei Tu, Heng Yu
  • Patent number: 10990857
    Abstract: A processor-implemented object detection method is provided. The method receives an input image, generates a latent variable that indicates a feature distribution of the input image, and detects an object in the input image based on the generated latent variable.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: April 27, 2021
    Assignees: Samsung Electronics Co., Ltd., SNU R&DB Foundation
    Inventors: Bee Lim, Changhyun Kim, Kyoung Mu Lee
  • Patent number: 10991074
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing images using an image processing neural network system. One of the systems includes a domain transformation neural network implemented by one or more computers, wherein the domain transformation neural network is configured to: receive an input image from a source domain; and process a network input comprising the input image from the source domain to generate a transformed image that is a transformation of the input image from the source domain to a target domain that is different from the source domain.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: April 27, 2021
    Assignee: Google LLC
    Inventors: Konstantinos Bousmalis, Nathan Silberman, David Martin Dohan, Dumitru Erhan, Dilip Krishnan
  • Patent number: 10984293
    Abstract: An image processing method includes: acquiring a video stream of a vehicle by a camera according to a user instruction; obtaining an image corresponding to a frame in the video stream; determining whether the image meets a predetermined criterion by inputting the image into a classification model, the classification model comprising a first convolutional neural network; in response to the image meeting the predetermined criterion, adding at least one of a target box or target segmentation information to the image by inputting the image into a target detection and segmentation model, the at least one of the target box or the target segmentation information corresponding to at least one of a vehicle part or vehicle damage of the vehicle, the target detection and segmentation model comprising a convolutional neural network; and displaying the at least one of the target box or the target segmentation information to the user.
    Type: Grant
    Filed: January 30, 2020
    Date of Patent: April 20, 2021
    Assignee: Advanced New Technologies Co., Ltd.
    Inventors: Xin Guo, Yuan Cheng, Chen Jiang, Zhihong Lu
  • Patent number: 10984247
    Abstract: An apparatus generates first context data representing a context of correction target text based on the correction target text, and corrects an error in the correction target text by inputting a character string of the correction target text, the generated first context data, and meta-information corresponding to the correction target text to a neural network that has been trained to correct an error in the correction target text by inputting a character string of text corresponding to training data, second context data representing a context of the text, and meta-information of the text.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: April 20, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Hiroto Takegawa, Yusuke Hamada, Satoru Sankoda
  • Patent number: 10977490
    Abstract: Systems and methods for analyzing image data to assess property damage are disclosed. According to certain aspects, a server may analyze segmented digital image data of a roof of a property using a convolutional neural network (CNN). The server may extract a set of features from a set of regions output by the CNN. Additionally, the server may analyze the set of features using an additional image model to generate a set of outputs indicative of a confidence level that actual hail damage is depicted in the set of regions.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: April 13, 2021
    Assignee: State Farm Mutual Automobile Insurance Company
    Inventors: Marigona Bokshi-Drotar, Jing Wan, Sandra Kane, Yuntao Li
  • Patent number: 10977783
    Abstract: The present disclosure discloses a system and a method. In an example implementation, the system and the method can receive a synthetic image at a first deep neural network, and determine, via the first deep neural network, a prediction indicative of whether the synthetic image is machine-generated or is sourced from the real data distribution. The prediction can comprise a quantitative measure of photorealism of the synthetic image.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: April 13, 2021
    Assignee: Ford Global Technologies, LLC
    Inventors: Nikita Jaipuria, Gautham Sholingar, Vidya Nariyambut Murali
  • Patent number: 10967824
    Abstract: An apparatus includes a first capture device, a second capture device and a processor. The first capture device may generate a first plurality of video frames corresponding to an interior view of a vehicle. The second capture device may generate a second plurality of video frames corresponding to an area outside of the vehicle. The processor may be configured to perform operations to detect objects in the video frames, detect occupants of the vehicle based on the objects detected in the first video frames, determine whether a potential collision is unavoidable based on the objects detected in the second video frames and select a reaction if the potential collision is unavoidable. The reaction may be selected to protect occupants determined to be vulnerable based on characteristics of the occupants. The characteristics may be determined by performing the operations on each of the occupants.
    Type: Grant
    Filed: April 28, 2018
    Date of Patent: April 6, 2021
    Assignee: Ambarella International LP
    Inventors: Shimon Pertsel, Patrick Martin
  • Patent number: 10964202
    Abstract: The present disclosure relates to a home monitoring system and, more particularly, to a home monitoring system that can prevent a safety accident when a child or a pet approaches a predetermined dangerous space or a home IoT device, by sensing the approach, controlling the home IoT device, and alerting the user. According to the home monitoring system of the present disclosure, one or more of an IoT device and a server of the present disclosure may be associated with an artificial intelligence module, a robot, an AR (Augmented Reality) device, a VR (Virtual Reality) device, a device associated with 5G services, etc.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: March 30, 2021
    Assignee: LG ELECTRONICS INC.
    Inventors: Jichan Maeng, Beomoh Kim, Wonho Shin, Taehyun Kim, Jonghoon Chae
  • Patent number: 10956780
    Abstract: A system and processing methods for refining a convolutional neural network (CNN) to capture characterizing features of different classes are disclosed. In some embodiments, the system is programmed to start with the filters in one of the last few convolutional layers of the initial CNN, which often correspond to more class-specific features, rank them to home in on the more relevant filters, and update the initial CNN by turning off the less relevant filters in that one convolutional layer. The result is often a more generalized CNN that is rid of certain filters that do not help characterize the classes.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: March 23, 2021
    Assignee: THE CLIMATE CORPORATION
    Inventors: Ying She, Wei Guan
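Ranking the filters of a late convolutional layer and turning off the least relevant ones can be sketched with a simple relevance score — mean absolute activation here, chosen for illustration; the patent's ranking criterion may differ:

```python
import numpy as np

def rank_filters(activations):
    """activations: (num_samples, num_filters, H, W) feature maps from one
    convolutional layer. Score each filter by its mean absolute activation."""
    return np.mean(np.abs(activations), axis=(0, 2, 3))

def filter_mask(scores, keep_fraction=0.5):
    """Boolean mask keeping the top fraction of filters; the rest are
    'turned off' (their outputs zeroed) in the updated network."""
    k = max(1, int(len(scores) * keep_fraction))
    threshold = np.sort(scores)[::-1][k - 1]   # k-th largest score
    return scores >= threshold

rng = np.random.default_rng(1)
acts = rng.normal(0, 1, (8, 4, 5, 5))      # 8 samples, 4 filters, 5x5 maps
acts[:, 2] *= 0.01                         # filter 2 barely activates
mask = filter_mask(rank_filters(acts))     # keeps 2 of the 4 filters
pruned = acts * mask[None, :, None, None]  # turned-off filters output zeros
```

In a real network the mask would be applied to the layer's weights (or outputs) before fine-tuning, so later layers stop depending on the pruned filters.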
  • Patent number: 10957026
    Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate high-dynamic range lighting parameters for input low-dynamic range images. The high-dynamic range lighting parameters can be based on sky color, sky turbidity, sun color, sun shape, and sun position. Such input low-dynamic range images can be low-dynamic range panorama images or low-dynamic range standard images. Such a neural network system can apply the estimated high-dynamic range lighting parameters to objects added to the low-dynamic range images.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: March 23, 2021
    Assignee: Adobe Inc.
    Inventors: Jinsong Zhang, Kalyan K. Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Jonathan Eisenmann, Jean-Francois Lalonde
  • Patent number: 10949684
    Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to generate a pair of synthetic stereo images and a corresponding synthetic depth map with an image synthesis engine wherein the synthetic stereo images correspond to real stereo images acquired by a stereo camera and the synthetic depth map is a three-dimensional (3D) map corresponding to a 3D scene viewed by the stereo camera and process each image of the pair of synthetic stereo images independently using a generative adversarial network (GAN) to generate a fake image, wherein the fake image corresponds to one of the synthetic stereo images.
    Type: Grant
    Filed: May 8, 2019
    Date of Patent: March 16, 2021
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Nikita Jaipuria, Gautham Sholingar, Vidya Nariyambut Murali
  • Patent number: 10949962
    Abstract: Provided is an x-ray detection type component counter and a method for counting components using the same. The component counter includes: an image obtaining module to obtain an image of an object with an x-ray tube and a flat detector; an inputting frame located at the front of the image obtaining module and having a guiding surface; a transferring tray to move between the image obtaining module and the inputting frame along a moving guide installed at the guiding surface; and a foreign object sensor disposed at the inputting frame to detect a foreign object; wherein the detector has a horizontal section corresponding to an inspection surface of the transferring tray.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: March 16, 2021
    Assignee: XAVIS CO., LTD
    Inventors: Hyeong-Cheol Kim, Bong-Jin Choi, Yong-Han Jang
  • Patent number: 10943672
    Abstract: The present invention relates to a web-based computer-aided method and a system for providing personalized recommendations about drug use, based on pharmacogenetic information regarding genes and genetic variants associated with metabolism and genes and genetic variants which are not associated with metabolism, and which comprises automatically generating and displaying, by means of a graphical user interface (GUI) of a dynamic webpage, the personalized recommendations, highlighting the ones associated with the highest adverse drug reactions. The present invention also relates to a computer-readable medium which contains program instructions for a computer to perform the method for providing personalized recommendations about drug use of the invention. The present invention also relates to a web-based computer-aided method and a system for generating a dynamic webpage, and a further computer-readable medium which contains program instructions for a computer to perform the method for generating a dynamic webpage.
    Type: Grant
    Filed: December 10, 2014
    Date of Patent: March 9, 2021
    Inventors: Jordi Pratsevall Garcia, David Redondo Amado, Miquel Tuson Segarra, Silvia Vilches Saez, Jordi Espadaler Mazo, Ariana Salavert Larrosa, Miquel Angel Bonachera Sierra
  • Patent number: 10937243
    Abstract: Systems and methods for providing a real-world object interface in virtual, augmented, and mixed reality (xR) applications. In some embodiments, an Information Handling System (IHS) may include one or more processors and a memory coupled to the one or more processors, the memory including program instructions stored thereon that, upon execution by the one or more processors, cause the IHS to: receive a video frame during execution of an xR application; instruct a user wearing a Head-Mounted Display (HMD) to perform a manipulation of a real-world object detected in the video frame; receive additional video frames; determine whether the user has performed the manipulation by tracking the object in the additional video frames; and execute an operation in response to the determination.
    Type: Grant
    Filed: August 29, 2018
    Date of Patent: March 2, 2021
    Assignee: Dell Products, L.P.
    Inventors: Daniel L. Hamlin, Yagiz Can Yildiz
  • Patent number: 10936916
    Abstract: An exemplary device for classifying an image includes a receiving unit that receives image data. The device also includes a hardware processor including a neural network architecture to extract a plurality of features from the image data, filter each feature extracted from the image data, concatenate the plurality of filtered features to form an image vector, evaluate the plurality of concatenated features in first and second layers of a plurality of fully connected layers of the neural network architecture based on an amount of deviation in the features determined at each fully connected layer, and generate a data signal based on an output of the plurality of fully connected layers. A transmitting unit sends the data signal to a peripheral or remote device.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: March 2, 2021
    Assignee: BOOZ ALLEN HAMILTON INC.
    Inventors: Arash Rahnama-Moghaddam, Andre Tai Nguyen
  • Patent number: 10935892
    Abstract: Methods and systems are provided that, in some embodiments, print and process a layer. The layer can be on a wafer or on an application panel. Thereafter, locations of the features that were actually printed and processed are measured. Based upon differences between the measured locations and the designed locations for those features, at least one distortion model is created. Each distortion model is inverted to create a corresponding correction model. When there are multiple sections, a distortion model and a correction model can be created for each section. Multiple correction models can be combined to create a global correction model.
    Type: Grant
    Filed: May 15, 2017
    Date of Patent: March 2, 2021
    Assignee: APPLIED MATERIALS, INC.
    Inventors: Tamer Coskun, Thomas L. Laidig, Jang Fung Chen
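The measure, model, invert workflow in this abstract can be sketched for the simplest case: a 1-D linear distortion x' = a*x + b fit by least squares and inverted into a correction model. The linear form and the example coordinates are illustrative assumptions; the patent covers general distortion models and per-section/global combinations.

```python
def fit_linear(designed, measured):
    """Least-squares fit of the distortion model measured = a*designed + b."""
    n = len(designed)
    mx = sum(designed) / n
    my = sum(measured) / n
    a = sum((x - mx) * (y - my) for x, y in zip(designed, measured)) / \
        sum((x - mx) ** 2 for x in designed)
    return a, my - a * mx

def invert_linear(a, b):
    """Invert x' = a*x + b into the correction x = (x' - b) / a."""
    return 1.0 / a, -b / a

designed = [0.0, 1.0, 2.0, 3.0]
measured = [0.1, 1.2, 2.3, 3.4]          # distortion: scale 1.1, offset 0.1
a, b = fit_linear(designed, measured)    # distortion model
ca, cb = invert_linear(a, b)             # correction model
corrected = [ca * m + cb for m in measured]
```

Applying the correction model to the measured locations recovers the designed locations (up to floating-point error).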
  • Patent number: 10929561
    Abstract: A device for removal of personally identifiable data receives monitoring data acquired by a sensor. The monitoring data includes personally identifiable data relating to one or more individuals being monitored. The device processes the acquired monitoring data to remove the personally identifiable data by at least one of abstraction or redaction while the monitoring data is located on the device. The processed monitoring data, with the personally identifiable data removed, can thereby be transmitted external to the device with reduced security risk.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: February 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Donna K. Long, John B. Hesketh, LaSean T. Smith, Kenneth L. Kiemele, Evan L. Jones
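A minimal sketch of the on-device scrubbing idea, assuming a dictionary-style record: direct identifiers are redacted (dropped) and a precise value is abstracted into a coarser range before anything leaves the device. The field names are hypothetical.

```python
def scrub(record):
    """Remove PII on the device: redact identifiers outright,
    abstract precise values into coarse ranges."""
    out = dict(record)
    for key in ("name", "face_crop"):        # redaction: drop the field
        out.pop(key, None)
    if "age" in out:                         # abstraction: coarsen to a decade
        out["age_range"] = f"{out.pop('age') // 10 * 10}s"
    return out

raw = {"name": "Alice", "age": 34, "heart_rate": 72}
safe = scrub(raw)                            # only `safe` would be transmitted
```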
  • Patent number: 10926579
    Abstract: Expert system and method for computing an application procedure for painting vinyl panels a target color at a paint application site while preventing vinyl warping. A target reflectivity indicator is computed for the vinyl panels once painted. When the target color and target reflectivity indicator are met by application of the multiple paint layers, the application procedure computed identifies a pigmentable waterborne paint composition, one or more proportioned paint pigments to achieve the target color at a predictable transmissivity and a preparatory waterborne paint composition considering a corresponding predictable reflectivity. Applying the pigmentable waterborne paint composition, once pigmented with the one or more proportioned paint pigments, over the preparatory waterborne paint composition meets the target reflectivity indicator for the vinyl panels at the target color.
    Type: Grant
    Filed: June 2, 2017
    Date of Patent: February 23, 2021
    Assignee: SPRAY-NET FRANCHISES INC.
    Inventors: Carmelo Marsala, Peiman Arabi
  • Patent number: 10922580
    Abstract: A method includes receiving, by a device, a first image of a scene and a second image of at least a portion of the scene. The method includes identifying a first plurality of features from the first image and comparing the first plurality of features to a second plurality of features from the second image to identify a common feature. The method includes determining a particular subset of pixels that corresponds to the common feature, the particular subset of pixels corresponding to a first subset of pixels of the first image and a second subset of pixels of the second image. The method also includes generating a first image quality estimate of the first image based on a comparison of a first degree of variation within the first subset of pixels and a second degree of variation within the second subset of pixels.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: February 16, 2021
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Amy Ruth Reibman, Zhu Liu, Lee Begeja, Bernard S. Renger, David Crawford Gibbon, Behzad Shahraray, Raghuraman Gopalan, Eric Zavesky
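The final step in this abstract, comparing the degrees of variation of the two pixel subsets covering the common feature, can be sketched with sample variance as the variation measure (an assumption; the patent does not commit to a specific statistic). A blurred copy of the same patch shows less variation than a sharp one.

```python
def variance(pixels):
    m = sum(pixels) / len(pixels)
    return sum((p - m) ** 2 for p in pixels) / len(pixels)

def quality_ratio(subset_a, subset_b):
    """< 1 suggests image A is smoother (e.g. blurrier) than image B
    over the shared feature; > 1 the reverse."""
    return variance(subset_a) / variance(subset_b)

sharp   = [10, 200, 15, 190, 12, 205]   # high-contrast pixels of the feature
blurred = [95, 110, 98, 108, 99, 107]   # same feature after smoothing
```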
  • Patent number: 10909743
    Abstract: Generating texture maps for use in rendering visual output. According to a first aspect, there is provided a method for generating textures for use in rendering visual output, the method comprising the steps of: generating, using a first hierarchical algorithm, a first texture from one or more sets of initialisation data; and selectively refining the first texture, using one or more further hierarchical algorithms, to generate one or more further textures from at least a section of the first texture and one or more sets of further initialisation data; wherein at least a section of each of the one or more further textures differs from the first texture.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: February 2, 2021
    Assignee: Magic Pony Technology Limited
    Inventors: Lucas Theis, Zehan Wang, Robert David Bishop
  • Patent number: 10898222
    Abstract: A method of segmenting images of biological specimens uses adaptive classification to segment a biological specimen into different types of tissue regions. The segmentation is performed by first extracting features from the neighborhood of a grid of points (GPs) sampled on the whole-slide (WS) image and classifying them into different tissue types. Second, an adaptive classification procedure is performed in which some or all of the GPs in a WS image are classified using a pre-built training database, and classification confidence scores for the GPs are generated. The classified GPs with high confidence scores are utilized to generate an adaptive training database, which is then used to re-classify the low-confidence GPs. The motivation for the method is that the strong variation in tissue appearance makes the classification problem challenging, while good classification results are obtained when the training and test data originate from the same slide.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: January 26, 2021
    Assignee: VENTANA MEDICAL SYSTEMS, INC.
    Inventors: Joerg Bredno, Christophe Chefd'hotel, Ting Chen, Srinivas Chukka, Kien Nguyen
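A hedged sketch of the adaptive loop: classify every grid point against the pre-built training set, keep the high-confidence results as a slide-specific adaptive training set, and re-classify the low-confidence points against it. A 1-nearest-neighbour rule over a scalar feature stands in for the unspecified classifier, and the confidence function and threshold are made up.

```python
def nn_classify(x, train):
    """Nearest-neighbour label plus a made-up confidence that decays
    with distance to the nearest training example."""
    dist, label = min((abs(x - tx), tl) for tx, tl in train)
    return label, 1.0 / (1.0 + dist)

def adaptive_classify(points, prebuilt, threshold=0.5):
    # First pass: classify every grid point with the pre-built database.
    first = [(x, *nn_classify(x, prebuilt)) for x in points]
    # High-confidence results become the slide-specific training set.
    adaptive = [(x, lab) for x, lab, conf in first if conf >= threshold]
    # Second pass: low-confidence points are re-classified against it
    # (assumes at least one point cleared the threshold).
    return [lab if conf >= threshold else nn_classify(x, adaptive)[0]
            for x, lab, conf in first]
```

With `prebuilt = [(0.0, "stroma"), (10.0, "tumour")]`, the ambiguous point 6.0 is first labelled "tumour" at low confidence, then revised to "stroma" because only stroma-like points made it into the adaptive set.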
  • Patent number: 10891537
    Abstract: This application discloses a convolutional neural network-based image processing method and image processing apparatus in the artificial intelligence field. The method may include: receiving an input image; preprocessing the input image to obtain preprocessed image information; and performing convolution on the image information using a convolutional neural network, and outputting a convolution result. In embodiments of this application, the image processing apparatus may store primary convolution kernels of convolution layers, and before performing convolution using the convolution layers, generate secondary convolution kernels using the primary convolution kernels of the convolution layers.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: January 12, 2021
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yunhe Wang, Chunjing Xu, Kai Han
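The storage idea can be sketched as follows: only the primary kernel is stored, and secondary kernels are derived from it just before convolution, here by elementwise binary masks (one plausible derivation; the patent leaves the exact transform to the embodiment).

```python
def derive(primary, mask):
    """Secondary kernel = primary kernel under an elementwise mask."""
    return [[p * m for p, m in zip(pr, mr)] for pr, mr in zip(primary, mask)]

def conv2d_valid(img, k):
    """Plain 'valid' 2-D convolution (cross-correlation, as in most
    deep-learning frameworks)."""
    kh, kw = len(k), len(k[0])
    return [[sum(img[i + a][j + b] * k[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

primary = [[1, 0], [0, 1]]                       # the only stored kernel
masks = [[[1, 1], [1, 1]], [[1, 0], [0, 0]]]     # derivation masks
kernels = [derive(primary, m) for m in masks]    # secondary kernels, built on the fly
img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
outs = [conv2d_valid(img, k) for k in kernels]
```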
  • Patent number: 10885660
    Abstract: The present disclosure provides an object detection method, an object detection device, an object detection system and a storage medium. The object detection method includes: acquiring an image to be processed; and inputting the image to be processed into a neural network to obtain a feature map outputted by the neural network. The feature map includes position channels and attribute channels; the position channels include at least one group of candidate position information respectively corresponding to at least one candidate position of at least one prediction object in the image to be processed; and the attribute channels include at least one group of candidate attribute information respectively corresponding to the at least one candidate position.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: January 5, 2021
    Assignee: BEIJING KUANGSHI TECHNOLOGY CO., LTD.
    Inventors: Shuchang Zhou, Yi Yang, Peiqin Sun
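Reading the network output as described amounts to splitting the feature map's channels, at each spatial location, into position channels and attribute channels. The channel counts below (four position values x, y, w, h plus two attributes) are hypothetical.

```python
def split_channels(feature_map, n_position):
    """feature_map: per-channel values at one spatial location."""
    return feature_map[:n_position], feature_map[n_position:]

# one candidate: 4 position channels (x, y, w, h) + 2 attribute channels
fmap = [0.5, 0.5, 0.2, 0.3, 0.9, 0.1]
position, attributes = split_channels(fmap, 4)
```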
  • Patent number: 10850693
    Abstract: An apparatus includes a capture device and a processor. The capture device may be configured to generate a plurality of video frames corresponding to users of a vehicle. The processor may be configured to perform operations to detect objects in the video frames, detect users of the vehicle based on the objects detected in the video frames, determine a comfort profile for the users and select a reaction to adjust vehicle components according to the comfort profile of the detected users. The comfort profile may be determined in response to characteristics of the users. The characteristics of the users may be determined by performing the operations on each of the users.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: December 1, 2020
    Assignee: Ambarella International LP
    Inventors: Shimon Pertsel, Patrick Martin
  • Patent number: 10846523
    Abstract: Embodiments of the present disclosure include a method that obtains a digital image. The method includes extracting a word block from the digital image. The method includes processing the word block by evaluating a value of the word block against a dictionary. The method includes outputting a prediction equal to a common word in the dictionary when a confidence factor is greater than a predetermined threshold. The method includes processing the word block and assigning a descriptor to the word block corresponding to a property of the word block. The method includes processing the word block using the descriptor to prioritize evaluation of the word block. The method includes concatenating a first output and a second output. The method includes predicting a value of the word block.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: November 24, 2020
    Assignee: KODAK ALARIS INC.
    Inventors: Felipe Petroski Such, Raymond Ptucha, Frank Brockler, Paul Hutkowski
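The dictionary step can be sketched as: score the recognised word block against a lexicon and emit the closest word only when the confidence clears a threshold. Edit-distance-based confidence is an assumption here; the patent's confidence factor comes from a learned model.

```python
def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[-1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def predict(block, dictionary, threshold=0.75):
    """Return the closest dictionary word if confidence clears the
    threshold, else None (deferring to the method's later stages)."""
    word = min(dictionary, key=lambda w: edit_distance(block, w))
    conf = 1 - edit_distance(block, word) / max(len(block), len(word))
    return word if conf >= threshold else None
```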
  • Patent number: 10832128
    Abstract: A transfer learning apparatus includes a transfer target data evaluator and an output layer adjuster. The transfer target data evaluator inputs a plurality of labeled transfer target data items each assigned a label of a corresponding evaluation item from among one or more evaluation items to a neural network apparatus having been trained by using a plurality of labeled transfer source data items and including in an output layer output units, the number of which is larger than or equal to the number of evaluation items, and obtains evaluation values output from the respective output units. The output layer adjuster preferentially assigns, to each of the one or more evaluation items, an output unit from which the evaluation value having the smallest difference from the label of the evaluation item is obtained with a higher frequency, as an output unit that outputs the evaluation value of the evaluation item.
    Type: Grant
    Filed: January 17, 2016
    Date of Patent: November 10, 2020
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Yoshihide Sawada, Kazuki Kozuka
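A rough sketch of the output-layer adjustment: run the labelled transfer-target data through the pre-trained network, count for each evaluation item how often each output unit lands closest to that item's label, and assign items to units by that frequency. The greedy assignment order and the toy data shapes are assumptions.

```python
def assign_units(outputs, labels):
    """outputs: per-sample output vectors from the pre-trained network;
    labels: per-sample dicts {evaluation item: label value}.
    Returns {evaluation item: output unit index}."""
    items = list(labels[0])
    n_units = len(outputs[0])
    counts = {it: [0] * n_units for it in items}
    for vec, lab in zip(outputs, labels):
        for it in items:
            # which unit's output is closest to this item's label?
            best = min(range(n_units), key=lambda u: abs(vec[u] - lab[it]))
            counts[it][best] += 1
    assignment, taken = {}, set()
    for it in items:                      # greedy: most frequent unit first
        order = sorted(range(n_units), key=lambda u: -counts[it][u])
        assignment[it] = next(u for u in order if u not in taken)
        taken.add(assignment[it])
    return assignment
```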
  • Patent number: 10832123
    Abstract: The present invention relates to artificial neural networks, for example, deep neural networks. In particular, the present invention relates to a compression method for deep neural networks with proper use of mask and the device thereof. More specifically, the present invention relates to how to compress dense neural networks into sparse neural networks while maintaining or even improving the accuracy of the neural networks after compression.
    Type: Grant
    Filed: December 26, 2016
    Date of Patent: November 10, 2020
    Assignee: XILINX TECHNOLOGY BEIJING LIMITED
    Inventors: Shijie Sun, Song Han, Xin Li, Yi Shan
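Mask-based compression of a dense layer can be sketched with the usual magnitude heuristic: zero out the smallest-magnitude weights and keep a binary mask so that only surviving connections carry values (and, in the full method, receive training updates). The threshold rule is a common stand-in; the patented procedure also involves retraining to recover accuracy.

```python
def make_mask(weights, sparsity):
    """Binary mask that zeroes the `sparsity` fraction of weights with
    the smallest magnitudes."""
    k = int(len(weights) * sparsity)
    prune = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:k])
    return [0 if i in prune else 1 for i in range(len(weights))]

def apply_mask(weights, mask):
    """Sparse weights: only connections with mask == 1 survive."""
    return [w * m for w, m in zip(weights, mask)]
```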
  • Patent number: 10825205
    Abstract: Provided is an artificial intelligence (AI) decoding apparatus includes: a memory storing one or more instructions; and a processor configured to execute the one or more instructions stored in the memory, the processor is configured to: obtain AI data related to AI down-scaling an original image to a first image; obtain image data corresponding to an encoding result on the first image; obtain a second image corresponding to the first image by performing a decoding on the image data; obtain deep neural network (DNN) setting information among a plurality of DNN setting information from the AI data; and obtain, by an up-scaling DNN, a third image by performing the AI up-scaling on the second image, the up-scaling DNN being configured with the obtained DNN setting information, wherein the plurality of DNN setting information comprises a parameter used in the up-scaling DNN, the parameter being obtained through joint training of the up-scaling DNN and a down-scaling DNN, and wherein the down-scaling DNN is used to
    Type: Grant
    Filed: February 18, 2020
    Date of Patent: November 3, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jaehwan Kim, Jongseok Lee, Sunyoung Jeon, Kwangpyo Choi, Minseok Choi, Quockhanh Dinh, Youngo Park
  • Patent number: 10825203
    Abstract: Provided is an artificial intelligence (AI) decoding apparatus includes: a memory storing one or more instructions; and a processor configured to execute the one or more instructions stored in the memory, the processor is configured to: obtain AI data related to AI down-scaling an original image to a first image; obtain image data corresponding to an encoding result on the first image; obtain a second image corresponding to the first image by performing a decoding on the image data; obtain deep neural network (DNN) setting information among a plurality of DNN setting information from the AI data; and obtain, by an up-scaling DNN, a third image by performing the AI up-scaling on the second image, the up-scaling DNN being configured with the obtained DNN setting information, wherein the plurality of DNN setting information comprises a parameter used in the up-scaling DNN, the parameter being obtained through joint training of the up-scaling DNN and a down-scaling DNN, and wherein the down-scaling DNN is used to
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: November 3, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jaehwan Kim, Jongseok Lee, Sunyoung Jeon, Kwangpyo Choi, Minseok Choi, Quockhanh Dinh, Youngo Park
  • Patent number: 10818016
    Abstract: Systems and methods for predictive/reconstructive visual object tracking are disclosed. The visual object tracking has advanced abilities to track objects in scenes, which can have a variety of applications as discussed in this disclosure. In some exemplary implementations, a visual system can comprise a plurality of associative memory units, wherein each associative memory unit has a plurality of layers. The associative memory units can be communicatively coupled to each other in a hierarchical structure, wherein data in associative memory units in higher levels of the hierarchical structure are more abstract than lower associative memory units. The associative memory units can communicate to one another supplying contextual data.
    Type: Grant
    Filed: March 19, 2019
    Date of Patent: October 27, 2020
    Assignee: Brain Corporation
    Inventors: Filip Piekniewski, Micah Richert, Dimitry Fisher, Patryk Laurent, Csaba Petre
  • Patent number: 10810744
    Abstract: An image processing device includes an image acquisition means that acquires a target image, which is an image to be processed, an extraction means that extracts a plurality of partial regions from the target image by clustering based on specified color similarity of pixel values, a generation means that generates a plurality of composite images, each of which is composed of one or more partial regions out of the plurality of partial regions, a calculation means that calculates, for each of the composite images, a score indicating a likelihood that a shape formed by the partial region constituting the composite image is a shape of an object to be extracted, and an output means that outputs processing target region information specifying a composite image with the highest score as an object region where the object is shown in the target image.
    Type: Grant
    Filed: May 27, 2016
    Date of Patent: October 20, 2020
    Assignee: Rakuten, Inc.
    Inventor: Yeongnam Chae
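The selection step reduces to enumerating composites (non-empty subsets of the partial regions) and keeping the one whose combined shape scores highest. The scorer below is a hypothetical stand-in; in the method the score comes from a likelihood model of the target object's shape.

```python
from itertools import combinations

def best_composite(regions, score):
    """Enumerate every non-empty composite of partial regions and return
    the highest-scoring one."""
    candidates = [c for r in range(1, len(regions) + 1)
                  for c in combinations(regions, r)]
    return max(candidates, key=score)

# Hypothetical scorer: object-like regions help, background hurts.
score = lambda c: sum(1 for r in c if r != "bg") - 2 * ("bg" in c)
```

Exhaustive enumeration is exponential in the number of partial regions, so a practical implementation would prune or beam-search the candidate composites.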