Neural Networks Patents (Class 382/156)
  • Patent number: 10803565
    Abstract: An example apparatus for imaging in low-light environments includes a raw sensor data receiver to receive raw sensor data from an imaging sensor. The apparatus also includes a convolutional neural network trained to generate an illuminated image based on the received raw sensor data. The convolutional neural network is trained based on images captured by a sensor similar to the imaging sensor.
    Type: Grant
    Filed: July 10, 2018
    Date of Patent: October 13, 2020
    Assignee: Intel Corporation
    Inventors: Chen Chen, Qifeng Chen, Vladlen Koltun
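A common first step in low-light CNN pipelines like the one this abstract describes is packing the raw Bayer mosaic into per-color channels before the network sees it. A minimal sketch, assuming an RGGB mosaic layout (the patent does not specify one):

```python
import numpy as np

def pack_raw(bayer):
    """Pack a Bayer-pattern raw frame (H, W) into 4 half-resolution
    channels (H/2, W/2, 4), assuming an RGGB layout."""
    h, w = bayer.shape
    return np.stack([bayer[0:h:2, 0:w:2],   # R
                     bayer[0:h:2, 1:w:2],   # G1
                     bayer[1:h:2, 0:w:2],   # G2
                     bayer[1:h:2, 1:w:2]],  # B
                    axis=-1)

# Toy 4x4 raw frame; real sensor data would be much larger.
raw = np.arange(16, dtype=np.float32).reshape(4, 4)
packed = pack_raw(raw)   # shape (2, 2, 4), ready for a CNN input layer
```

The trained network then maps this packed tensor to an illuminated RGB image; the packing itself is just an assumed preprocessing convention.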
  • Patent number: 10796690
    Abstract: Conversational image editing and enhancement techniques are described. For example, an indication of a digital image is received from a user. Aesthetic attribute scores for multiple aesthetic attributes of the image are generated. A computing device then conducts a natural language conversation with the user to edit the digital image. The computing device receives inputs from the user to refine the digital image as the natural language conversation progresses. The computing device generates natural language suggestions to edit the digital image based on the aesthetic attribute scores as part of the natural language conversation. The computing device provides feedback to the user that includes edits to the digital image based on the series of inputs. The computing device also includes as feedback natural language outputs indicating options for additional edits to the digital image based on the series of inputs and the previous edits to the digital image.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: October 6, 2020
    Assignee: Adobe Inc.
    Inventors: Frieder Ludwig Anton Ganz, Walter Wei-Tuh Chang
  • Patent number: 10785903
    Abstract: Described is a system for determining crop residue fraction. The system includes a color video camera mounted on a mobile platform for generating a two-dimensional (2D) color video image of a scene in front or behind the mobile platform. In operation, the system separates the 2D color video image into three separate one-dimensional (1D) mixture signals for red, green, and blue channels. The three 1D mixture signals are then separated into pure 1D component signals using blind source separation. The 1D component signals are thresholded and converted to 2D binary, pixel-level abundance maps, which can then be integrated to allow the system to determine a total component fractional abundance of crop in the scene. Finally, the system can control a mobile platform, such as a harvesting machine, based on the total component fractional abundance of crop in the scene.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: September 29, 2020
    Assignee: HRL Laboratories, LLC
    Inventor: Yuri Owechko
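The final steps of this abstract, thresholding a separated 1-D component signal into a 2-D binary abundance map and integrating it into a fractional abundance, can be sketched as follows; the threshold value, signal, and image size are illustrative assumptions:

```python
import numpy as np

# A pure 1-D component signal, as produced by blind source separation.
h, w = 2, 4
component = np.array([0.9, 0.1, 0.8, 0.2, 0.05, 0.7, 0.6, 0.1])

# Threshold and reshape into a 2-D pixel-level binary abundance map.
binary_map = (component > 0.5).astype(np.uint8).reshape(h, w)

# Integrate the map to get the component's fractional abundance.
fraction = binary_map.mean()
```

In the described system this fraction (e.g., of crop vs. residue) would then drive control of the mobile platform.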
  • Patent number: 10783398
    Abstract: A method for receiving an image query from a user via a client device is provided. The method includes determining a user personalized data based on a prior user history, generating a synthetic image with a generative tool, based on the image query and the user personalized data, and evaluating a similarity between the synthetic image and a real image in an image database with a discriminative tool. The method also includes providing the synthetic image to the user for selection and storing a user response to the synthetic image in the prior user history. A system and a non-transitory, computer readable medium storing instructions to cause the system to perform the above method are also disclosed.
    Type: Grant
    Filed: October 22, 2018
    Date of Patent: September 22, 2020
    Assignee: Shutterstock, Inc.
    Inventors: Michael Steven Ranzinger, Nicholas Alexander Lineback
  • Patent number: 10778619
    Abstract: A computer-implemented method is described. The method includes a computing system receiving an item of digital content from a user device. The computing system generates one or more labels that indicate attributes of the item of digital content. The computing system also generates one or more conversational replies to the item of digital content based on the one or more labels that indicate attributes of the item of digital content. The method also includes the computing system selecting a conversational reply from among the one or more conversational replies and providing the conversational reply for output to the user device.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: September 15, 2020
    Assignee: GOOGLE LLC
    Inventors: Ibrahim Badr, Aayush Kumar, Goekhan Hasan Bakir, Nils Grimsmo, Bianca Madalina Buisman
  • Patent number: 10776666
    Abstract: An apparatus for diagnosis of a medical image includes a storage having a predetermined size, the storage being configured to store sample frames sampled from among received frames which are received from a medical imaging device; a frame collector configured to, once a reference frame is determined, collect one or more sample frames stored in the storage; and a diagnosis component configured to provide a diagnosis for the reference frame based on diagnostic results associated with the one or more collected sample frames.
    Type: Grant
    Filed: November 2, 2015
    Date of Patent: September 15, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ki Yong Lee, Seung Woo Ryu
  • Patent number: 10769198
    Abstract: Disclosed are methods, systems, and non-transitory computer-readable medium for analysis of images including wearable items. For example, a method may include obtaining a first set of images, each of the first set of images depicting a product; obtaining a first set of labels associated with the first set of images; training an image segmentation neural network based on the first set of images and the first set of labels; obtaining a second set of images, each of the second set of images depicting a known product; obtaining a second set of labels associated with the second set of images; training an image classification neural network based on the second set of images and the second set of labels; receiving a query image depicting a product that is not yet identified; and performing image segmentation of the query image and identifying the product in the image by performing image analysis.
    Type: Grant
    Filed: February 6, 2020
    Date of Patent: September 8, 2020
    Assignee: CAASTLE, INC.
    Inventors: Yu-Cheng Tsai, Dongming Jiang, Georgiy Goldenberg
  • Patent number: 10755171
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for hiding information using neural networks. One of the methods includes maintaining data mapping each of a plurality of classes to a respective piece of information that may potentially be hidden in a received data item; receiving a new data item; receiving data identifying a first piece of information to be hidden in the new data item; and modifying the new data item to generate a modified data item that, when processed by a neural network configured to classify input data items belonging to one of the plurality of classes, is classified by the neural network as belonging to a first class of the plurality of classes that is mapped to the first piece of information in the maintained data.
    Type: Grant
    Filed: July 6, 2016
    Date of Patent: August 25, 2020
    Assignee: Google LLC
    Inventors: Matthew Sharifi, Alexander Mordvintsev
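The core idea above, modifying a data item until a classifier assigns it the class mapped to the hidden information, can be sketched with a linear stand-in for the neural network. The classifier, step size, and class-to-message mapping are all assumptions for illustration, not the patent's method:

```python
import numpy as np

def embed_message(x, W, target, steps=100, lr=0.5):
    """Modify data item x until a linear stand-in 'network'
    (logits = W @ x) classifies it as `target`, the class mapped to
    the piece of information being hidden."""
    x = x.copy()
    for _ in range(steps):
        if np.argmax(W @ x) == target:
            break
        x += lr * W[target]   # gradient of the target-class logit w.r.t. x
    return x

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))    # 4 classes, each mapped to a message
x = rng.standard_normal(8)         # the received data item
x_mod = embed_message(x, W, target=2)   # now decodes to message #2
```

A receiver holding the same mapping recovers the hidden information simply by classifying the modified item.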
  • Patent number: 10748069
    Abstract: The inventive concepts herein relate to performing block retrieval on a block to be processed of a urine sediment image. The method comprises: using a plurality of decision trees to perform block retrieval on the block to be processed, wherein each of the plurality of decision trees comprises judgment nodes and leaf nodes, each judgment node using a block retrieval feature in a block retrieval feature set to judge the block and route it to a leaf node, where a block retrieval result is formed, and wherein at least two of the decision trees differ in their structures and/or in the judgments their judgment nodes perform using the block retrieval features; and integrating the block retrieval results of the plurality of decision trees to form a final block retrieval result.
    Type: Grant
    Filed: April 30, 2015
    Date of Patent: August 18, 2020
    Assignee: Siemens Healthcare Diagnostics Inc.
    Inventors: Tian Shen, Juan Xu, XiaoFan Zhang
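The structure the abstract describes, judgment nodes routing a block to a leaf, with results integrated across differing trees, can be sketched with trees as nested dictionaries and majority-vote integration. The features, thresholds, and voting rule are illustrative assumptions (the patent does not fix the integration rule):

```python
from collections import Counter

def classify(tree, feats):
    """Route a block's features through judgment nodes to a leaf."""
    while isinstance(tree, dict):                     # judgment node
        branch = "lo" if feats[tree["feat"]] < tree["thr"] else "hi"
        tree = tree[branch]
    return tree                                       # leaf: retrieval result

# Three decision trees differing in structure / judgments (illustrative).
trees = [
    {"feat": "edge_density", "thr": 0.3, "lo": "background", "hi": "cell"},
    {"feat": "mean_gray",    "thr": 0.5, "lo": "cell",       "hi": "background"},
    {"feat": "edge_density", "thr": 0.5, "lo": "background", "hi": "cell"},
]
feats = {"edge_density": 0.6, "mean_gray": 0.2}

# Integrate the per-tree results into a final block retrieval result.
final = Counter(classify(t, feats) for t in trees).most_common(1)[0][0]
```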
  • Patent number: 10748013
    Abstract: A method and an apparatus for detecting a road lane are provided. The method includes acquiring a current road image of a road around a vehicle and inputting the current road image into a deep learning model and detecting a road lane region in the current road image based on a result outputted from the deep learning model. The deep learning model includes a first model device and a second model device. The first model device includes at least one first model subdevice which includes a convolutional neural network and a first recurrent neural network, and the second model device includes at least one second model subdevice which includes a deconvolution neural network and a second recurrent neural network.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: August 18, 2020
    Assignee: Neusoft Corporation
    Inventors: Huan Tian, Jun Hu, Shuai Cheng, Wei Liu
  • Patent number: 10740865
    Abstract: A convolution neural network (CNN)-based image processing method and apparatus are provided. The CNN-based image processing method includes identifying whether values of pixels of each of feature maps having a plurality of channels at a first layer are zero, and storing information regarding a result of identifying whether the values of the pixels are zero; writing image feature information of the feature maps at the first layer to an external memory; reading information regarding pixels having values which are not zero among the written image feature information from the external memory based on the information regarding the result of identifying whether the values of the pixels are zero; and performing a feature map operation at a second layer using the read image feature information of the feature maps.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: August 11, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Won-jae Lee, Yong-seok Choi, Min-soo Kim
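A sketch of the zero-skipping idea above, assuming the nonzero values are stored compactly alongside a per-pixel zero map (the patent's actual external-memory layout is more involved):

```python
import numpy as np

# A toy first-layer feature map; most activations are zero after ReLU,
# which is the sparsity the zero map exploits.
fmap = np.array([[0., 3., 0.],
                 [2., 0., 0.],
                 [0., 0., 7.]])

# Step 1: identify whether each pixel's value is zero and store the result.
nonzero_mask = fmap != 0

# Step 2: "write" feature information to external memory -- here only the
# nonzero values are kept.
stored_values = fmap[nonzero_mask]

# Step 3: read back only the pixels whose values are not zero and rebuild
# the map for the second layer's feature-map operation.
restored = np.zeros_like(fmap)
restored[nonzero_mask] = stored_values
```

The payoff is that memory reads scale with the number of nonzero activations rather than the full feature-map size.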
  • Patent number: 10740593
    Abstract: A method for face recognition by using a multiple patch combination based on a deep neural network is provided. The method includes steps of: a face-recognizing device, (a) if a face image with a 1-st size is acquired, inputting the face image into a feature extraction network, to allow the feature extraction network to generate a feature map by applying convolution operation to the face image with the 1-st size, and to generate multiple features by applying sliding-pooling operation to the feature map, wherein the feature extraction network has been learned to extract a feature using a face image for training having a 2-nd size and wherein the 2-nd size is smaller than the 1-st size; and (b) inputting the multiple features into a learned neural aggregation network, to allow the neural aggregation network to aggregate the multiple features and to output an optimal feature for the face recognition.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 11, 2020
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10733484
    Abstract: An approach is provided for dynamic adaptation of an in-vehicle feature detector. The approach involves embedding a feature detection model, precomputed weights for the feature detection model, or a combination thereof in a data layer of map data representing a geographic area from which a training data set was collected to generate the feature detection model, the precomputed weights, or a combination thereof. The approach also involves deploying the feature detection model, the precomputed weights, or a combination thereof to adapt an in-vehicle feature detector based on determining that the in-vehicle feature detector is in the geographic area, plans to travel in the geographic area, or a combination thereof. The in-vehicle feature detector can then use the feature detection model, the precomputed weights, or a combination thereof to process sensor data collected while in the geographic area to detect one or more features.
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: August 4, 2020
    Assignee: HERE Global B.V.
    Inventors: Vladimir Shestak, Stephen O'Hara, Nicholas Dronen
  • Patent number: 10713294
    Abstract: A generative adversarial network (GAN)-based method for learning cross-domain relations is disclosed. A provided architecture includes two coupled GANs: a first GAN learning a translation of images from domain A to domain B, and a second GAN learning a translation of images from domain B to domain A. A loop formed by the first GAN and the second GAN causes sample images to be reconstructed into an original domain after being translated into a target domain. Therefore, loss functions representing reconstruction losses of the images may be used to train generative models.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: July 14, 2020
    Assignee: SK TELECOM CO., LTD.
    Inventors: Taek Soo Kim, Moon Su Cha, Ji Won Kim
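The reconstruction loss that the two-GAN loop produces can be illustrated with plain linear maps standing in for the trained generators; the maps and sample below are toy assumptions:

```python
import numpy as np

# Two "generators" as linear maps: G_ab translates domain A -> B and
# G_ba translates domain B -> A.
G_ab = np.array([[0.0, 1.0],
                 [1.0, 0.0]])       # toy translation: swap coordinates
G_ba_good = G_ab.T                  # a perfect inverse translation
G_ba_bad = np.eye(2)                # a translation that breaks the loop

a = np.array([2.0, -1.0])           # a sample image from domain A

def recon_loss(g_ab, g_ba, sample):
    """Squared reconstruction error of the A -> B -> A round trip."""
    return float(np.sum((sample - g_ba @ (g_ab @ sample)) ** 2))

loss_good = recon_loss(G_ab, G_ba_good, a)   # cycle-consistent pair
loss_bad = recon_loss(G_ab, G_ba_bad, a)     # inconsistent pair
```

Training drives this loss toward zero, which is what couples the two translations into a consistent cross-domain relation.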
  • Patent number: 10713569
    Abstract: System, methods, and other embodiments described herein relate to improving the generation of realistic images. In one embodiment, a method includes acquiring a synthetic image including identified labels of simulated components within the synthetic image. The synthetic image is a simulated visualization and the identified labels distinguish between the components within the synthetic image. The method includes computing, from the simulated components, translated components that visually approximate real instances of the simulated components by using a generative module comprised of neural networks that are configured to separately generate the translated components. The method includes blending the translated components together to produce a new image from the simulated components of the synthetic image.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: July 14, 2020
    Assignee: Toyota Research Institute, Inc.
    Inventors: German Ros Sanchez, Adrien D. Gaidon, Kuan-Hui Lee, Jie Li
  • Patent number: 10713754
    Abstract: Remote distribution of multiple neural network models to various client devices over a network can be implemented by identifying a native neural network and remotely converting the native neural network to a target neural network based on a given client device operating environment. The native neural network can be configured for execution using efficient parameters, and the target neural network can use less efficient but more precise parameters.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: July 14, 2020
    Assignee: Snap Inc.
    Inventors: Guohui Wang, Sumant Milind Hanumante, Ning Xu, Yuncheng Li
  • Patent number: 10713522
    Abstract: A method for analyzing an image to generate a plurality of output features includes receiving input features of the image and performing Fourier transforms on each input feature. Kernels having coefficients of a plurality of trained features are received and on-the-fly Fourier transforms (OTF-FTs) are performed on the coefficients in the kernels. The output of each Fourier transform and each OTF-FT are multiplied together to generate a plurality of products and each of the products are added to produce one sum for each output feature. Two-dimensional inverse Fourier transforms are performed on each sum.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: July 14, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Mihir Narendra Mody, Manu Mathew, Chaitanya Satish Ghone
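The transform / multiply / sum / inverse pipeline described above maps directly onto a few FFT calls. A sketch with numpy, using circular convolution (boundary handling and the hardware's on-the-fly kernel transforms are simplified away):

```python
import numpy as np

def fft_conv_layer(inputs, kernels):
    """Convolve each input feature with each kernel via 2-D FFTs,
    summing over input channels to produce one sum per output feature,
    then applying a 2-D inverse FFT.
    inputs:  (n_in, H, W) feature maps
    kernels: (n_out, n_in, kh, kw) trained coefficients"""
    n_out = kernels.shape[0]
    h, w = inputs.shape[1:]
    out = np.zeros((n_out, h, w))
    for o in range(n_out):
        acc = np.zeros((h, w), dtype=complex)
        for i, feat in enumerate(inputs):
            # FFT of the input feature times FFT of the (zero-padded) kernel
            acc += np.fft.fft2(feat) * np.fft.fft2(kernels[o, i], s=(h, w))
        out[o] = np.fft.ifft2(acc).real   # inverse transform of the sum
    return out

rng = np.random.default_rng(1)
x = rng.standard_normal((2, 8, 8))        # 2 input features
k = rng.standard_normal((3, 2, 3, 3))     # 3 output features
y = fft_conv_layer(x, k)                  # shape (3, 8, 8)
```

Because pointwise products in the frequency domain replace spatial sliding windows, the cost per output feature drops from O(H·W·kh·kw) toward O(H·W·log(H·W)).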
  • Patent number: 10706534
    Abstract: The invention provides a method and device for creating a model for classifying a data point in imaging data representing measured intensities, the method comprising: training a model using a first labelled set of imaging data points; determining at least one first image part in the first labelled set which the model incorrectly classifies; generating second image parts similar to the at least one first image part; and further training the model using the second image parts. Preferably, the imaging data points and the second image parts comprise 3D data points.
    Type: Grant
    Filed: July 26, 2017
    Date of Patent: July 7, 2020
    Inventors: Scott Anderson Middlebrooks, Henricus Wilhelm van der Heijden, Adrianus Cornelis Koopman
  • Patent number: 10692243
    Abstract: In one embodiment, a system may access an image and generate a feature map for the image using a neural network. The system may identify regions of interest in the feature map. Regional feature maps may be generated for the regions of interest, respectively. Each of the regional feature maps has a first, a second, and a third dimension. The system may generate a first combined regional feature map by combining the regional feature maps. The combined regional feature map has a first, a second, and a third dimension. The system may generate a second combined regional feature map by processing the first combined regional feature map using one or more convolutional layers. The system may generate, for each of the regions of interest, information associated with an object instance based on a portion of the second combined regional feature map associated with that region of interest.
    Type: Grant
    Filed: May 4, 2018
    Date of Patent: June 23, 2020
    Assignee: Facebook, Inc.
    Inventors: Peter Vajda, Peizhao Zhang, Fei Yang, Yanghan Wang
  • Patent number: 10685279
    Abstract: Systems and methods include obtaining a set of events, each event in the set of events comprising a time-stamped portion of raw machine data, the raw machine data produced by one or more components within an information technology or security environment and reflecting activity within the information technology or security environment. Thereafter, a first neural network is used to automatically identify variable text to extract as a field from the set of events. An indication of the variable text is provided as a field extraction recommendation, for example, to a user device for presentation to a user.
    Type: Grant
    Filed: January 31, 2017
    Date of Patent: June 16, 2020
    Assignee: SPLUNK Inc.
    Inventors: Adam Jamison Oliner, Nghi Huu Nguyen, Jacob Leverich, Zidong Yang
  • Patent number: 10685235
    Abstract: A method can include classifying, using a compressed and specialized convolutional neural network (CNN), an object of a video frame into classes, clustering the object based on a distance of a feature vector of the object to a feature vector of a centroid object of the cluster, storing top-k classes, a centroid identification, and a cluster identification, in response to receiving a query for objects of class X from a specific video stream, retrieving image data for each centroid of each cluster that includes the class X as one of the top-k classes, classifying, using a ground truth CNN (GT-CNN), the retrieved image data for each centroid, and for each centroid determined to be classified as a member of the class X providing image data for each object in each cluster associated with the centroid.
    Type: Grant
    Filed: May 4, 2018
    Date of Patent: June 16, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ganesh Ananthanarayanan, Paramvir Bahl, Peter Bodik, Tsuwang Hsieh, Matthai Philipose
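The query path in the abstract, retrieve every cluster whose centroid lists class X among its stored top-k classes, then return the cluster members, can be sketched with a plain dictionary index. The cluster contents below are made up for illustration:

```python
# Index built at ingest time: per cluster, the centroid's top-k classes
# (from the compressed CNN) and the member objects of the cluster.
clusters = {
    0: {"topk": ["car", "truck"],   "members": ["f1_obj3", "f7_obj1"]},
    1: {"topk": ["person", "bike"], "members": ["f2_obj2"]},
    2: {"topk": ["truck", "bus"],   "members": ["f9_obj5"]},
}

def query(cls):
    """Return members of every cluster whose centroid's top-k
    classes include `cls` (the GT-CNN re-check is omitted here)."""
    hits = []
    for c in clusters.values():
        if cls in c["topk"]:
            hits.extend(c["members"])
    return hits

trucks = query("truck")
```

The point of the design is that only centroids (not every frame) need re-classification by the expensive ground-truth CNN at query time.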
  • Patent number: 10664963
    Abstract: Artistic styles extracted from one or more source images may be applied to one or more target images, e.g., in the form of stylized images and/or stylized video sequences. The extracted artistic style may be stored as a plurality of layers in a neural network, which neural network may be further optimized, e.g., via the fusion of various elements of the network's architectures. An optimized network architecture may be determined for each processing environment in which the network will be applied. The artistic style may be applied to the obtained images and/or video sequence of images using various optimization methods, such as the use of scalars to control the resolution of the unstylized and stylized images, temporal consistency constraints, as well as the use of dynamically adjustable or selectable versions of Deep Neural Networks (DNN) that are responsive to system performance parameters, such as available processing resources and thermal capacity.
    Type: Grant
    Filed: July 11, 2018
    Date of Patent: May 26, 2020
    Assignee: Apple Inc.
    Inventors: Francesco Rossi, Xiaohuan C. Wang, Bartlomiej W. Rymkowski, Xiaojin Shi, Marco Zuliani, Alexey Marinichev
  • Patent number: 10659484
    Abstract: In one embodiment, a centralized controller maintains a plurality of hierarchical behavioral modules of a behavioral model, and distributes initial behavioral modules to data plane entities to cause them to apply the initial behavioral modules to data plane traffic. The centralized controller may then receive data from a particular data plane entity based on its having applied the initial behavioral modules to its data plane traffic. The centralized controller then distributes subsequent behavioral modules to the particular data plane entity to cause it to apply the subsequent behavioral modules to the data plane traffic, the subsequent behavioral modules selected based on the previously received data from the particular data plane entity. The centralized controller may then iteratively receive data from the particular data plane entity and distribute subsequently selected behavioral modules until an attack determination is made on the data plane traffic of the particular data plane entity.
    Type: Grant
    Filed: February 19, 2018
    Date of Patent: May 19, 2020
    Assignee: Cisco Technology, Inc.
    Inventors: Saman Taghavi Zargar, Subharthi Paul, Prashanth Patil, Jayaraman Iyer, Hari Shankar
  • Patent number: 10643382
    Abstract: Convolutional Neural Networks are applied to object meshes to allow three-dimensional objects to be analyzed. In one example, a method includes performing convolutions on a mesh, wherein the mesh represents a three-dimensional object of an image, the mesh having a plurality of vertices and a plurality of edges between the vertices, performing pooling on the convolutions of an edge of a mesh, and applying fully connected and loss layers to the pooled convolutions to provide metadata about the three-dimensional object.
    Type: Grant
    Filed: April 4, 2017
    Date of Patent: May 5, 2020
    Assignee: Intel Corporation
    Inventors: Shahar Fleishman, Mark Kliger
  • Patent number: 10643320
    Abstract: Systems and methods for generating photorealistic images include training a generative adversarial network (GAN) model by jointly learning a first generator, a first discriminator, and a set of predictors through an iterative process of optimizing a minimax objective. The first discriminator learns to determine a synthetic-to-real image from a real image. The first generator learns to generate the synthetic-to-real image from a synthetic image such that the first discriminator determines the synthetic-to-real image is real. The set of predictors learn to predict at least one of a semantic segmentation labeled data and a privileged information from the synthetic-to-real image based on at least one of a known semantic segmentation labeled data and a known privileged information corresponding to the synthetic image. Once trained, the GAN model may generate one or more photorealistic images using the trained GAN model.
    Type: Grant
    Filed: February 12, 2018
    Date of Patent: May 5, 2020
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Kuan-Hui Lee, German Ros, Adrien D. Gaidon, Jie Li
  • Patent number: 10635918
    Abstract: A method for managing a smart database which stores facial images for face recognition is provided. The method includes steps of: a managing device (a) counting specific facial images corresponding to a specific person in the smart database where new facial images are continuously stored, and determining whether a first counted value, representing a count of the specific facial images, satisfies a first set value; and (b) if the first counted value satisfies the first set value, inputting the specific facial images into a neural aggregation network, to generate quality scores of the specific facial images by aggregation of the specific facial images, and, if a second counted value, representing a count of specific quality scores among the quality scores from a highest during counting thereof, satisfies a second set value, deleting part of the specific facial images, corresponding to the uncounted quality scores, from the smart database.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: April 28, 2020
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10635891
    Abstract: A system to recognize objects in an image includes an object detection network that outputs a first hierarchical-calculated feature for a detected object. A face alignment regression network determines a regression loss for alignment parameters based on the first hierarchical-calculated feature. A detection box regression network determines a regression loss for detected boxes based on the first hierarchical-calculated feature. The object detection network further includes a weighted loss generator to generate a weighted loss for the first hierarchical-calculated feature, the regression loss for the alignment parameters and the regression loss of the detected boxes. A backpropagator backpropagates the generated weighted loss.
    Type: Grant
    Filed: June 30, 2018
    Date of Patent: April 28, 2020
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Mostafa El-Khamy, Arvind Yedla, Marcel Nassar, Jungwon Lee
  • Patent number: 10635788
    Abstract: A method for learning an obfuscation network used for concealing original data is provided. The method includes steps of: a learning device instructing the obfuscation network to obfuscate inputted training data, inputting the obfuscated training data into a learning network, and allowing the learning network to apply a network operation to the obfuscated training data and thus to generate 1-st characteristic information, and allowing the learning network to apply a network operation to the inputted training data and thus to generate 2-nd characteristic information, and learning the obfuscation network such that an error is minimized, calculated by referring to part of an error acquired by referring to the 1-st and the 2-nd characteristic information, and an error acquired by referring to a task specific output and its corresponding ground truth, and such that an error is maximized, calculated by referring to the training data and the obfuscated training data.
    Type: Grant
    Filed: July 17, 2019
    Date of Patent: April 28, 2020
    Assignee: DEEPING SOURCE INC.
    Inventor: Tae Hoon Kim
  • Patent number: 10628919
    Abstract: An image segmentation method for performing image segmentation through a neural network implemented by an image segmentation apparatus is provided. The image segmentation method includes the steps outlined below. An input image is down-sampled to generate down-sampled images. Previous convolution neural network (CNN) data having a first resolution is received and up-sampled to generate up-sampled previous CNN data having a second resolution. A current down-sampled image of the down-sampled images having the second resolution and the up-sampled previous CNN data are received. Convolution is performed according to the up-sampled previous CNN data and the current down-sampled image to generate a current image segmentation result.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: April 21, 2020
    Assignee: HTC Corporation
    Inventors: Cheng-Hsien Lin, Po-Chuan Cho, Hung-Yi Yang
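The data flow this abstract outlines, up-sampling the previous lower-resolution CNN data to match the current down-sampled image, then feeding both into the next convolution, can be sketched as follows; nearest-neighbor up-sampling and the toy sizes are assumptions:

```python
import numpy as np

# Previous CNN data at the first (lower) resolution.
prev_cnn = np.array([[1., 2.],
                     [3., 4.]])

# Up-sample 2x (nearest-neighbor) to the second resolution.
upsampled = prev_cnn.repeat(2, axis=0).repeat(2, axis=1)

# The current down-sampled image at the same second resolution.
current = np.ones((4, 4))

# Stack the two as channels: the input to the current convolution step.
conv_input = np.stack([current, upsampled])
```

Each stage thus refines the coarser segmentation from the stage before it instead of segmenting the full-resolution image in one pass.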
  • Patent number: 10621697
    Abstract: Embodiments relate to a super-resolution engine that converts a lower resolution input image into a higher resolution output image. The super-resolution engine includes a directional scaler, an enhancement processor, a feature detection processor, a blending logic circuit, and a neural network. The directional scaler generates directionally scaled image data by upscaling the input image. The enhancement processor generates enhanced image data by applying an example-based enhancement, a peaking filter, or some other type of non-neural network image processing scheme to the directionally scaled image data. The feature detection processor determines features indicating properties of portions of the directionally scaled image data. The neural network generates residual values defining differences between a target result of the super-resolution enhancement and the directionally scaled image data. The blending logic circuit blends the enhanced image data with the residual values according to the features.
    Type: Grant
    Filed: August 6, 2018
    Date of Patent: April 14, 2020
    Assignee: Apple Inc.
    Inventors: Jim Chen Chou, Chenge Li, Yun Gong
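The blending step above, combining the enhancement processor's output with the neural network's residual values according to detected features, can be sketched with a per-pixel weighted sum. The feature-derived weights are an illustrative choice; the patent leaves the exact blend to the blending logic circuit:

```python
import numpy as np

enhanced  = np.array([[10., 20.],
                      [30., 40.]])   # non-NN enhanced (peaking-filtered) pixels
residual  = np.array([[ 1., -2.],
                      [ 0.5, 4.]])   # NN-predicted corrections toward the target
edge_feat = np.array([[0.0, 1.0],
                      [0.5, 1.0]])   # detected features: 0 = flat, 1 = strong edge

# Apply the residual most strongly where the feature detector found detail.
output = enhanced + edge_feat * residual
```

Flat regions thus keep the cheap enhancement while detailed regions get the network's correction.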
  • Patent number: 10621477
    Abstract: A convolutional neural network system for detecting at least one object in at least one image. The system includes a plurality of object detectors, corresponding to a predetermined image window size in the at least one image. Each object detector is associated with a respective down-sampling ratio with respect to the at least one image. Each object detector includes a respective convolutional neural network and an object classifier coupled with the convolutional neural network. The respective convolutional neural network includes a plurality of convolution layers. The object classifier classifies objects in the image according to the results from the convolutional neural network. Object detectors associated with the same respective down-sampling ratio define at least one group of object detectors. Object detectors in a group of object detectors being associated with common convolution layers.
    Type: Grant
    Filed: March 1, 2018
    Date of Patent: April 14, 2020
    Assignee: Ramot at Tel Aviv University Ltd.
    Inventors: Lior Wolf, Assaf Mushinsky
  • Patent number: 10621473
    Abstract: A method for updating an object detecting system to detect objects with untrained classes in real-time is provided. The method includes steps of: (a) the object detecting system, if at least one input image is acquired, instructing a recognizer included therein to generate a specific feature map, and to generate a specific query vector; (b) the object detecting system instructing a similarity determining unit (i) to compare the specific query vector to data vectors, to thereby calculate each of first similarity scores between the specific query vector and each of the data vectors, and (ii) to add a specific partial image to an unknown image DB, if a specific first similarity score is smaller than a first threshold value; (c) the object detecting system, if specific class information is acquired, instructing a short-term update unit to generate a specific short-term update vector, and update the feature fingerprint DB.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: April 14, 2020
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
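    Step (b) of the method above, thresholding similarity scores to route unrecognized crops into the unknown-image DB, can be sketched as follows. The metric, threshold value, and function names are assumptions for illustration; the patent does not specify cosine similarity.

    ```python
    import numpy as np

    def first_similarity_scores(query, data_vectors):
        """Cosine similarity (one plausible metric) between the specific
        query vector and each feature-fingerprint data vector."""
        q = query / np.linalg.norm(query)
        d = data_vectors / np.linalg.norm(data_vectors, axis=1, keepdims=True)
        return d @ q

    def route_partial_image(query, data_vectors, partial_image,
                            unknown_db, first_threshold=0.8):
        """If the best first-similarity score falls below the threshold,
        the partial image is added to the unknown-image DB to await class
        information from a later short-term update."""
        scores = first_similarity_scores(query, data_vectors)
        if scores.max() < first_threshold:
            unknown_db.append(partial_image)
            return None                      # untrained class
        return int(scores.argmax())          # index of the best known match
    ```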
  • Patent number: 10614582
    Abstract: Methods, apparatus, systems, and articles of manufacture for logo recognition in images and videos are disclosed. An example method to detect a specific brand in images and video streams comprises accepting luminance images at a scale Sx in an x direction and a different scale Sy in a y direction in a neural network, and training the neural network with a set of training images for detected features associated with the specific brand.
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: April 7, 2020
    Assignee: Gracenote, Inc.
    Inventors: Jose Pio Pereira, Kyle Brocklehurst, Sunil Suresh Kulkarni, Peter Wendt
  • Patent number: 10607359
    Abstract: A system, method, and apparatus to detect bio-mechanical geometry in a scene using machine vision. The invention provides accurate and dynamic data collection using machine learning and vision coupled with augmented reality to continually improve the process with each experience. It does not rely upon external sensors or manual input.
    Type: Grant
    Filed: March 14, 2018
    Date of Patent: March 31, 2020
    Inventors: Richard Anthony De Los Santos, Krissie Marie Littman, Kimberly Kate Ferlic, Randolph James Ferlic
  • Patent number: 10607119
    Abstract: Methods and systems for detecting and classifying defects on a specimen are provided. One system includes one or more components executed by one or more computer subsystems. The one or more components include a neural network configured for detecting defects on a specimen and classifying the defects detected on the specimen. The neural network includes a first portion configured for determining features of images of the specimen generated by an imaging subsystem. The neural network also includes a second portion configured for detecting defects on the specimen based on the determined features of the images and classifying the defects detected on the specimen based on the determined features of the images.
    Type: Grant
    Filed: September 6, 2017
    Date of Patent: March 31, 2020
    Assignee: KLA-Tencor Corp.
    Inventors: Li He, Mohan Mahadevan, Sankar Venkataraman, Huajun Ying, Hedong Yang
  • Patent number: 10586331
    Abstract: Provided are a diagnosis assisting device, an image processing method in the diagnosis assisting device, and a non-transitory storage medium storing a program, which facilitate grasping differences in a diseased area to provide highly precise diagnosis assistance. According to the image processing method in a diagnosis assisting device that diagnoses lesions from a picked-up image, a reference image corresponding to a known first picked-up image relating to lesions is registered in a database, and when diagnosis assistance is performed by comparing a query image corresponding to an unknown second picked-up image relating to lesions with the reference image in the database, an additional reference image is created from the reference image by geometric transformation, or an additional query image is created from the query image by geometric transformation.
    Type: Grant
    Filed: July 5, 2017
    Date of Patent: March 10, 2020
    Assignee: CASIO COMPUTER CO., LTD.
    Inventor: Kazuhisa Matsunaga
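    The geometric-transformation step above can be illustrated minimally: generate transformed copies of the reference image and keep the closest match, so an orientation difference alone does not degrade the comparison. Rotations and mean absolute difference are stand-ins chosen for this sketch; the patent does not restrict the transformations or the distance measure.

    ```python
    import numpy as np

    def rotations(image):
        """A minimal stand-in for the geometric transformations: the four
        90-degree rotations of an image."""
        return [np.rot90(image, k) for k in range(4)]

    def best_match_distance(query, reference):
        """Compare the query against every transformed copy of the
        reference and keep the closest."""
        return min(float(np.abs(query - r).mean()) for r in rotations(reference))
    ```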
  • Patent number: 10587779
    Abstract: A reproduction color profile correction system includes a differential colorimetric value calculator and a colorimetric value corrector. The calculator computes a differential colorimetric value, the difference between a predicted colorimetric value and a measured colorimetric value: the predicted colorimetric value is the colorimetric value of a predicted reproduction color corresponding to a process ink combination, and the measured colorimetric value is obtained by measuring a reproduction color in a color chart corresponding to the process ink combination. The corrector uses the differential colorimetric value to correct a predicted colorimetric value of a reproduction color reproduced by the ink combination.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: March 10, 2020
    Assignee: TOPPAN PRINTING CO., LTD.
    Inventor: Takaya Tanaka
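    The calculator/corrector arithmetic described above reduces to a difference and a shift in a colorimetric space. A minimal sketch, assuming Lab coordinates and using the CIE76 color difference only to quantify the gap (the patent specifies neither):

    ```python
    import numpy as np

    def differential_colorimetric_value(predicted_lab, measured_lab):
        """Differential colorimetric value: measured minus predicted Lab
        values for the same process ink combination."""
        return np.asarray(measured_lab) - np.asarray(predicted_lab)

    def correct_predicted_value(predicted_lab, diff):
        """Shift the profile's prediction by the differential so it
        matches what the press actually reproduced."""
        return np.asarray(predicted_lab) + diff

    def delta_e76(lab1, lab2):
        """CIE76 color difference between two Lab values."""
        return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))
    ```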
  • Patent number: 10580149
    Abstract: Devices, systems and methods are disclosed for performing image processing at a camera-level. For example, a camera service may run on top of a camera hardware abstraction layer (HAL) and may be configured to perform image processing such as applying a blurring algorithm, applying a color filter and/or other video effects. An application may pass metadata to the camera service via an application programming interface (API) and the camera service may use the metadata to determine parameters for the image processing. The camera service may apply the blurring algorithm for a first period of time before transitioning to unblurred image data over a second period of time.
    Type: Grant
    Filed: June 26, 2017
    Date of Patent: March 3, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Guhan Lakshminarayanan, Alexander Bruce Gentles, Naushirwan Navroze Patuck, Christopher Hong-Wen Tserng, Vinod Kancharla Prasad, Reto Koradi, Kunal Patel
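    The two-period blur behavior in the abstract above, full blur for a first period, then a transition to unblurred data over a second period, can be sketched as a time-varying blend weight. The linear fade and the scalar pixel blend are assumptions for illustration.

    ```python
    def blur_weight(t, blur_duration, transition_duration):
        """Blur weight in [0, 1]: fully blurred for the first period,
        then a linear fade to unblurred image data over the second."""
        if t < blur_duration:
            return 1.0
        fade = (t - blur_duration) / transition_duration
        return max(0.0, 1.0 - fade)

    def blend_frame(sharp, blurred, w):
        """Per-frame mix of blurred and sharp pixel values."""
        return w * blurred + (1.0 - w) * sharp
    ```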
  • Patent number: 10580203
    Abstract: According to one embodiment, a method includes identifying a scene to be rendered, creating a plurality of light scattering tables within the scene, performing a computation of light extinction and light in-scattering within participating media of the scene, utilizing the plurality of light scattering tables, and during a ray tracing of the scene, determining a homogeneous scattering coefficient for spatially homogeneous media of the scene, and applying to the spatially homogeneous media of the scene one of the plurality of light scattering tables, where each of the plurality of light scattering tables corresponds to a single homogeneous scattering coefficient.
    Type: Grant
    Filed: February 25, 2019
    Date of Patent: March 3, 2020
    Assignee: Activision Publishing, Inc.
    Inventors: Peter-Pike Sloan, Adrien Dubouchet, Derek Nowrouzezahrai
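    The idea of one table per homogeneous scattering coefficient can be sketched with a Beer-Lambert transmittance table: precompute exp(-sigma * d) over distance once, then do cheap lookups during ray traversal. Table size, range, and nearest-entry lookup are assumptions for this sketch.

    ```python
    import math

    def build_scattering_table(sigma, max_dist=10.0, n=64):
        """One table per homogeneous scattering coefficient: Beer-Lambert
        transmittance exp(-sigma * d) tabulated over distance."""
        step = max_dist / (n - 1)
        return [math.exp(-sigma * i * step) for i in range(n)]

    def lookup_transmittance(table, dist, max_dist=10.0):
        """Nearest-entry lookup used during ray traversal in place of a
        per-sample exp() evaluation."""
        i = min(len(table) - 1, round(dist / max_dist * (len(table) - 1)))
        return table[i]
    ```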
  • Patent number: 10558893
    Abstract: Methods and systems are provided for end-to-end text recognition in digitized documents of handwritten characters over multiple lines without explicit line segmentation. An image is received. Based on the image, one or more feature maps are determined. Each of the one or more feature maps include one or more feature vectors. Based at least in part on the one or more feature maps, one or more scalar scores are determined. Based on the one or more scalar scores, one or more attention weights are determined. By applying the one or more attention weights to each of the one or more feature vectors, one or more image summary vectors are determined. Based at least in part on the one or more image summary vectors, one or more handwritten characters are determined.
    Type: Grant
    Filed: May 24, 2019
    Date of Patent: February 11, 2020
    Assignee: A2IA S.A.S.
    Inventor: Theodore Damien Christian Bluche
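    The pipeline above, feature vectors to scalar scores to attention weights to an image summary vector, is a standard soft-attention pattern and can be sketched as below. The scoring function here is a toy placeholder; the patent's scorer is learned.

    ```python
    import numpy as np

    def image_summary(feature_map):
        """feature_map: (N, D), N feature vectors from the encoder.
        A scalar score per vector is turned into softmax attention
        weights, whose weighted sum of the feature vectors is the
        image summary vector."""
        scores = feature_map.sum(axis=1)            # toy scalar score per vector
        w = np.exp(scores - scores.max())
        w /= w.sum()                                # attention weights
        return w, w @ feature_map                   # weights, summary vector
    ```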
  • Patent number: 10558750
    Abstract: The technology disclosed presents a novel spatial attention model that uses current hidden state information of a decoder long short-term memory (LSTM) to guide attention and to extract spatial image features for use in image captioning. The technology disclosed also presents a novel adaptive attention model for image captioning that mixes visual information from a convolutional neural network (CNN) and linguistic information from an LSTM. At each timestep, the adaptive attention model automatically decides how heavily to rely on the image, as opposed to the linguistic model, to emit the next caption word. The technology disclosed further adds a new auxiliary sentinel gate to an LSTM architecture and produces a sentinel LSTM (Sn-LSTM). The sentinel gate produces a visual sentinel at each timestep, which is an additional representation, derived from the LSTM's memory, of long and short term visual and linguistic information.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: February 11, 2020
    Assignee: salesforce.com, inc.
    Inventors: Jiasen Lu, Caiming Xiong, Richard Socher
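    The adaptive step above, deciding how heavily to rely on the visual sentinel versus the attended CNN features, can be sketched as a softmax over the k spatial scores plus one sentinel score; the sentinel's weight plays the role of the mixing coefficient beta. This is a simplified sketch of the mechanism, with hand-supplied scores instead of the learned scoring networks.

    ```python
    import numpy as np

    def adaptive_context(region_feats, region_scores, sentinel, sentinel_score):
        """Softmax over k spatial scores plus one sentinel score.  beta,
        the sentinel's weight, is how heavily the model leans on
        linguistic memory rather than the attended CNN features."""
        scores = np.append(region_scores, sentinel_score)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        beta = w[-1]
        context = w[:-1] @ region_feats + beta * sentinel
        return beta, context
    ```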
  • Patent number: 10546242
    Abstract: A method includes determining object class probabilities of pixels in a first input image by examining the first input image in a forward propagation direction through layers of artificial neurons of an artificial neural network. The object class probabilities indicate likelihoods that the pixels represent different types of objects in the first input image.
    Type: Grant
    Filed: April 24, 2017
    Date of Patent: January 28, 2020
    Assignee: GENERAL ELECTRIC COMPANY
    Inventors: Arpit Jain, Swaminathan Sankaranarayanan, David Scott Diwinsky, Ser Nam Lim, Kari Thompson
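    Per-pixel object class probabilities from a forward pass are conventionally obtained with a softmax over the class axis of the network's output, sketched here on raw logits (the patent does not name the normalization):

    ```python
    import numpy as np

    def pixel_class_probabilities(logits):
        """logits: (H, W, C) raw network outputs from the forward pass.
        Softmax over the class axis yields, for every pixel, the
        likelihood that it represents each object type."""
        e = np.exp(logits - logits.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    ```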
  • Patent number: 10546231
    Abstract: Synthesizing a neural network from a plurality of component neural networks is disclosed. The method comprises mapping each component network to a respective graph, where each node is first labelled in accordance with the structure of the corresponding layer of the component network and the node's distance from one of a given input or output. The graphs for the component networks are merged into a single merged graph by merging nodes from the component network graphs that have the same first structural label. Each node of the merged graph is second labelled in accordance with the structure of the corresponding layer of the component network and the node's distance from the other of the given input or output. The merged graph is contracted by merging nodes of the merged graph having the same second structural label, and the contracted-merged graph is mapped to a synthesized neural network.
    Type: Grant
    Filed: January 23, 2017
    Date of Patent: January 28, 2020
    Assignee: FotoNation Limited
    Inventors: Shabab Bazrafkan, Joe Lemley
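    The merge steps above both reduce to collapsing nodes that share a structural label. A minimal sketch, where a label is assumed to be a (layer_type, distance) pair and payloads track which component layers a merged node absorbed:

    ```python
    def merge_by_label(labelled_nodes):
        """labelled_nodes: (label, payload) pairs, where a label might be
        a (layer_type, distance_from_input) tuple.  Nodes sharing a
        structural label collapse into a single merged node."""
        merged = {}
        for label, payload in labelled_nodes:
            merged.setdefault(label, []).append(payload)
        return merged
    ```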
  • Patent number: 10540587
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a convolutional neural network (CNN). The system includes a plurality of workers, wherein each worker is configured to maintain a respective replica of each of the convolutional layers of the CNN and a respective disjoint partition of each of the fully-connected layers of the CNN, wherein each replica of a convolutional layer includes all of the nodes in the convolutional layer, and wherein each disjoint partition of a fully-connected layer includes a portion of the nodes of the fully-connected layer.
    Type: Grant
    Filed: April 10, 2015
    Date of Patent: January 21, 2020
    Assignee: Google LLC
    Inventor: Alexander Krizhevsky
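    The partitioning of fully-connected layers described above can be sketched with a weight matrix split by output node: each worker holds one slice, computes its part of the output on the replicated activations, and the slices concatenate to the full result. Column-wise contiguous slicing is an assumption of this sketch.

    ```python
    import numpy as np

    def partition_fc(weight, n_workers):
        """Disjoint partition of a fully-connected layer: each worker
        holds a slice of the output nodes (columns of the weight)."""
        return np.array_split(weight, n_workers, axis=1)

    def fc_forward(x, partitions):
        """Each worker multiplies the (replicated) activations by its own
        slice; concatenating recovers the full layer output."""
        return np.concatenate([x @ w for w in partitions])
    ```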
  • Patent number: 10535133
    Abstract: A computer-implemented method, system and non-transitory computer readable storage medium for classifying a region of interest of a subject, including receiving imaging data comprising at least one image element, the imaging data comprising the region of interest of the subject; providing a plurality of atlases, each of the plurality of atlases having a candidate region that corresponds to the region of interest of the imaging data, each of the plurality of atlases having at least one image element with associated location and property information; co-registering the plurality of atlases to the imaging data, using at least one processor; assigning a probability to generate a labeling parameter for the region of interest, the probability being associated with each atlas; and classifying the region of interest of the subject based on the assigning.
    Type: Grant
    Filed: January 20, 2015
    Date of Patent: January 14, 2020
    Assignee: The Johns Hopkins University
    Inventors: Michael I. Miller, Susumu Mori, Xiaoying Tang
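    The final classification step above amounts to fusing the per-atlas probabilities into one label for the region. A minimal weighted-vote sketch (label names and the winner-take-all rule are assumptions, not the patent's exact fusion):

    ```python
    def classify_region(atlas_labels, atlas_probabilities):
        """Each co-registered atlas votes for its candidate-region label
        with its assigned probability; the region takes the label with
        the greatest total weight."""
        totals = {}
        for label, p in zip(atlas_labels, atlas_probabilities):
            totals[label] = totals.get(label, 0.0) + p
        return max(totals, key=totals.get)
    ```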
  • Patent number: 10510153
    Abstract: Devices, systems and methods are disclosed for performing image processing at a camera-level. For example, a camera service may run on top of a camera hardware abstraction layer (HAL) and may be configured to perform image processing such as applying a blurring algorithm, applying a color filter and/or other video effects. An application may pass metadata to the camera service via an application programming interface (API) and the camera service may use the metadata to determine parameters for the image processing. The camera service may apply the blurring algorithm for a first period of time before transitioning to unblurred image data over a second period of time.
    Type: Grant
    Filed: June 26, 2017
    Date of Patent: December 17, 2019
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Alexander Bruce Gentles, Naushirwan Navroze Patuck, Christopher Hong-Wen Tserng, Vinod Kancharla Prasad, Reto Koradi, Guhan Lakshminarayanan, Kunal Patel
  • Patent number: 10489688
    Abstract: Techniques and systems are described to determine personalized digital image aesthetics in a digital medium environment. In one example, a personalized offset is generated to adapt a generic model for digital image aesthetics. A generic model, once trained, is used to generate training aesthetics scores from a personal training data set that corresponds to an entity, e.g., a particular user, group of users, and so on. The image aesthetics system then generates residual scores (e.g., offsets) as a difference between the training aesthetics score and the personal aesthetics score for the personal training digital images. The image aesthetics system then employs machine learning to train a personalized model to predict the residual scores as a personalized offset using the residual scores and personal training digital images.
    Type: Grant
    Filed: July 24, 2017
    Date of Patent: November 26, 2019
    Assignee: Adobe Inc.
    Inventors: Xiaohui Shen, Zhe Lin, Radomir Mech, Jian Ren
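    The residual construction above is simple arithmetic: the personalized model is trained to predict the offset between a user's score and the generic model's score, and inference adds that offset back. A sketch, with the score scale assumed for illustration:

    ```python
    import numpy as np

    def residual_targets(generic_scores, personal_scores):
        """Training targets for the personalized model: the offset
        between the entity's score and the generic model's score."""
        return np.asarray(personal_scores) - np.asarray(generic_scores)

    def personalized_score(generic_score, predicted_residual):
        """At inference, the generic aesthetics score is shifted by the
        predicted per-user offset."""
        return generic_score + predicted_residual
    ```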
  • Patent number: 10482331
    Abstract: Methods and systems are provided for detecting an object. In one embodiment, a method includes: receiving, by a processor, image data from an image sensor; receiving, by a processor, radar data from a radar system; processing, by the processor, the image data from the image sensor and the radar data from the radar system using a deep learning method; and detecting, by the processor, an object based on the processing.
    Type: Grant
    Filed: November 16, 2016
    Date of Patent: November 19, 2019
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventor: Shuqing Zeng
  • Patent number: 10467507
    Abstract: An image quality assessment solution analyzes image quality and the correlation of an image to the description of its associated item. The quality assessment may assign a quality score to the image based on the composition of the image and/or its correlation with the item description. The score may be based on a model that is trained to analyze images using a learning model. Based on the image score, a correlation score, or other scores, the user may be given feedback on how to improve an image. A service provider offering this service may use the score to influence recommendation results that use the images.
    Type: Grant
    Filed: April 19, 2017
    Date of Patent: November 5, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Xiang Hao, Yi Sun
  • Patent number: 10467508
    Abstract: Font recognition and similarity determination techniques and systems are described. In a first example, localization techniques are described to train a model using machine learning (e.g., a convolutional neural network) using training images. The model is then used to localize text in a subsequently received image, and may do so automatically and without user intervention, e.g., without specifying any of the edges of a bounding box. In a second example, a deep neural network is directly learned as an embedding function of a model that is usable to determine font similarity. In a third example, techniques are described that leverage attributes described in metadata associated with fonts as part of font recognition and similarity determinations.
    Type: Grant
    Filed: April 25, 2018
    Date of Patent: November 5, 2019
    Assignee: Adobe Inc.
    Inventors: Zhaowen Wang, Luoqi Liu, Hailin Jin