Network Learning Techniques (e.g., Back Propagation) Patents (Class 382/157)
  • Patent number: 10387753
    Abstract: A method for learning parameters of a CNN for image recognition is provided to be used for hardware optimization which satisfies KPI. The method includes steps of: a learning device (1) instructing a first transposing layer or a pooling layer to generate an integrated feature map by concatenating each of pixels, per each of ROIs, in corresponding locations on pooled ROI feature maps; and (2) (i) instructing a second transposing layer or a classifying layer to divide an adjusted feature map, whose volume is adjusted from the integrated feature map, by each of the pixels, and instructing the classifying layer to generate object information on the ROIs, and (ii) backpropagating object losses. The size of a chip can be decreased because convolution operations and fully connected layer operations are performed by the same processor. Accordingly, there are advantages such as no need to build additional lines in a semiconductor manufacturing process.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: August 20, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10380788
    Abstract: The innovation describes and discloses systems and methods related to deep neural networks employing machine learning to detect item 2D landmark points from a single image, such as those of an image of a face, and to estimate their 3D coordinates and shape rapidly and accurately. The system also provides for mapping by a feed-forward neural network that defines two criteria, one to learn to detect important shape landmark points on the image and another to recover their depth information. An aspect of the innovation may utilize camera models in a data augmentation approach that aids machine learning of a complex, non-linear mapping function. Other augmentation approaches are also considered.
    Type: Grant
    Filed: October 12, 2017
    Date of Patent: August 13, 2019
    Assignee: OHIO STATE INNOVATION FOUNDATION
    Inventor: Aleix Martinez
  • Patent number: 10380480
    Abstract: In an example embodiment, for each of one or more input documents: a first value is determined for the first metric for a first transformation of the input document by passing the first transformation to a first Deep Convolutional Neural Network (DCNN), a second transformation of the input document is determined by passing the input document to a second DCNN, the second transformation of the input document is passed to the first DCNN, obtaining a second value for the first metric for the second transformation of the input document, the first and second transformations being of the first transformation type, and a difference between the first value and the second value for the input document is determined. Then it is determined whether to change the system over from the first DCNN to the second DCNN based on the difference between the first value and the second value.
    Type: Grant
    Filed: May 31, 2016
    Date of Patent: August 13, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Uri Merhav, Dan Shacham
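    The switch-over decision above can be sketched as a simple metric comparison. This is a hypothetical illustration, not the patented method: the function name, the per-document metric values, and the tolerance are all assumed, and the metric is taken to be higher-is-better.

```python
def should_switch(first_values, second_values, tolerance=0.01):
    """Decide whether to change the system over from the first DCNN
    to the second.

    first_values / second_values: per-document values of the first metric
    obtained with the first and second transformations (hypothetical inputs).
    """
    # switch when, on average, the second DCNN's transformations score
    # no worse than the first's by more than the tolerance
    diffs = [a - b for a, b in zip(first_values, second_values)]
    mean_diff = sum(diffs) / len(diffs)
    return mean_diff <= tolerance
```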
  • Patent number: 10373027
    Abstract: A method for acquiring a sample image for label-inspecting among auto-labeled images for learning a deep learning network, optimizing sampling processes for manual labeling, and reducing annotation costs is provided. The method includes steps of: a sample image acquiring device, generating a first and a second image, instructing convolutional layers to generate a first and a second feature map, instructing pooling layers to generate a first and a second pooled feature map, and generating concatenated feature maps; instructing a deep learning classifier to acquire the concatenated feature maps, to thereby generate class information; and calculating probabilities of abnormal class elements in an abnormal class group, determining whether the auto-labeled image is a difficult image, and selecting the auto-labeled image as the sample image for label-inspecting. Further, the method can be performed by using a robust algorithm with multiple transform pairs.
    Type: Grant
    Filed: January 30, 2019
    Date of Patent: August 6, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10366429
    Abstract: Disclosed herein are methods for providing browser payment request application programming interface for simplifying a payment process on a site. The method includes presenting, on a graphical user interface managed by a browser, a presentation, the presentation being received from a site over a network, receiving, via the user interface and from a user, an interaction with the presentation, receiving, at the browser and via a browser payment request application programming interface that manages communication of data between the site and the browser for processing a payment, a request from the site for payment data for the user and transmitting, to the site and via the browser payment request application programming interface, the payment data, wherein the payment data can be used to process a payment.
    Type: Grant
    Filed: February 9, 2016
    Date of Patent: July 30, 2019
    Assignee: MONTICELLO ENTERPRISES LLC
    Inventors: Thomas M. Isaacson, Ryan C. Durham
  • Patent number: 10353271
    Abstract: A depth estimation method for a monocular image based on a multi-scale CNN and a continuous CRF is disclosed in this invention. A CRF module is adopted to calculate a unary potential energy according to the output depth map of a DCNN, and the pairwise sparse potential energy according to input RGB images. MAP (maximum a posteriori estimation) algorithm is used to infer the optimized depth map at last. The present invention integrates optimization theories of the multi-scale CNN with that of the continuous CRF. High accuracy and a clear contour are both achieved in the estimated depth map; the depth estimated by the present invention has a high resolution and detailed contour information can be kept for all objects in the scene, which provides better visual effects.
    Type: Grant
    Filed: December 14, 2016
    Date of Patent: July 16, 2019
    Assignee: ZHEJIANG GONGSHANG UNIVERSITY
    Inventors: Xun Wang, Leqing Zhu, Huiyan Wang
  • Patent number: 10346680
    Abstract: An imaging apparatus is provided. The imaging apparatus includes an imager configured to record an external image, a region determiner configured to divide a recorded previous image frame and a recorded current image frame into pluralities of regions, calculate a moving direction and a distance of each of the plurality of regions in the current image frame corresponding to each of the plurality of regions in the previous image frame, and determine a background and an object by applying a preset background model and object model based on the calculated moving direction and distance, and a posture determiner configured to determine body parts of the determined object based on a body part model, and determine a posture of the object by combining the determined body parts.
    Type: Grant
    Filed: April 10, 2014
    Date of Patent: July 9, 2019
    Assignees: SAMSUNG ELECTRONICS CO., LTD., POSTECH ACADEMY-INDUSTRY FOUNDATION
    Inventors: Tae-gyu Lim, Bo-hyung Han, Joon-hee Han, Seung-hoon Hong, Han-tak Kwak, Sung-bum Park, Woo-sung Shim
  • Patent number: 10346693
    Abstract: A method of attention-based lane detection without post-processing by using a lane mask is provided. The method includes steps of: a learning device instructing a CNN to acquire a final feature map which has been generated by applying convolution operations to an image, a segmentation score map, and an embedded feature map which have been generated by using the final feature map; instructing a lane masking layer to recognize lane candidates, generate the lane mask, and generate a masked feature map; instructing a convolutional layer to generate a lane feature map; instructing a first FC layer to generate a softmax score map and a second FC layer to generate lane parameters; and backpropagating loss values outputted from a multinomial logistic loss layer and a line fitting loss layer, to thereby learn parameters of the FC layers, and the convolutional layer. Thus, lanes at distance can be detected more accurately.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: July 9, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10325352
    Abstract: There is provided a method for transforming convolutional layers of a CNN including m convolutional blocks to optimize CNN parameter quantization to be used for mobile devices, compact networks, and the like with high precision via hardware optimization. The method includes steps of: a computing device (a) generating k-th quantization loss values by referring to k-th initial weights of a k-th initial convolutional layer included in a k-th convolutional block, a (k-1)-th feature map outputted from the (k-1)-th convolutional block, and each of k-th scaling parameters; (b) determining each of k-th optimized scaling parameters by referring to the k-th quantization loss values; (c) generating a k-th scaling layer and a k-th inverse scaling layer by referring to the k-th optimized scaling parameters; and (d) transforming the k-th initial convolutional layer into a k-th integrated convolutional layer by using the k-th scaling layer and the (k-1)-th inverse scaling layer.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: June 18, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
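    The scaling-parameter search of steps (a)-(b) can be illustrated with a toy quantizer. This is a sketch, not the patented procedure: the fixed-step clipping quantizer (standing in for the hardware's integer grid) and the small candidate set are both assumptions.

```python
import numpy as np

def quantize(x, step=0.05, n_bits=8):
    # fixed-step quantizer with clipping, standing in for the hardware grid
    q = np.clip(np.round(x / step), -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
    return q * step

def best_scaling(weights, candidates=(0.25, 0.5, 1.0, 2.0, 4.0)):
    # pick the scaling parameter whose scale -> quantize -> inverse-scale
    # round trip distorts the weights the least (the "quantization loss")
    best_s, best_loss = None, float("inf")
    for s in candidates:
        recovered = quantize(weights * s) / s
        loss = float(np.mean((recovered - weights) ** 2))
        if loss < best_loss:
            best_s, best_loss = s, loss
    return best_s
```

    Folding the chosen scaling layer and the previous block's inverse scaling layer into the convolution weights would then yield the integrated convolutional layer of step (d).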
  • Patent number: 10297029
    Abstract: A system and method involving performing image classification on the image according to a position of a subject in the image; selecting, from a plurality of subject position templates, a subject position template for the image according to a result of the image classification, wherein each of the plurality of subject position templates is associated with a pre-defined position parameter, and each of the plurality of subject position templates is configured with a weight distribution field according to the pre-defined position parameter, the weight distribution field representing a probability that each pixel in the image belongs to a foreground or a background; and performing image segmentation according to the weight distribution field in the selected subject position template to segment the subject from the image.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: May 21, 2019
    Assignee: ALIBABA GROUP HOLDING LIMITED
    Inventor: Hailue Lin
  • Patent number: 10290085
    Abstract: Image hole filling that accounts for global structure and local texture. One exemplary technique involves using both a content neural network and a texture neural network. The content neural network is trained to encode image features based on non-hole image portions and decode the image features to fill holes. The texture neural network is trained to extract image patch features that represent texture. The exemplary technique receives an input image that has a hole and uses the two neural networks to fill the hole and provide a result image. This is accomplished by selecting pixel values for the hole based on a content constraint that uses the content neural network to account for global structure and a texture constraint that uses the texture neural network to account for local texture. For example, the pixel values can be selected by optimizing a loss function that implements the constraints.
    Type: Grant
    Filed: December 14, 2016
    Date of Patent: May 14, 2019
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xin Lu, Chao Yang
  • Patent number: 10282660
    Abstract: Methods and apparatus are provided for identifying environmental stimuli in an artificial nervous system using both spiking onset and spike counting. One example method of operating an artificial nervous system generally includes receiving a stimulus; generating, at an artificial neuron, a spike train of two or more spikes based at least in part on the stimulus; identifying the stimulus based at least in part on an onset of the spike train; and checking the identified stimulus based at least in part on a rate of the spikes in the spike train. In this manner, certain aspects of the present disclosure may respond with short response latencies and may also maintain accuracy by allowing for error correction.
    Type: Grant
    Filed: May 16, 2014
    Date of Patent: May 7, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Victor Hokkiu Chan, Ryan Michael Carey
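    The two-stage identify-then-check logic above can be sketched in a few lines. Everything here is illustrative: the onset-to-label mapping, the expected firing rates, the window, and the tolerance are assumed placeholders, not values from the disclosure.

```python
def identify_stimulus(spike_times, onset_map, expected_rate, window=1.0, rate_tol=0.5):
    """Fast guess from spike onset, then error-check with the spike count."""
    if not spike_times:
        return None
    label = onset_map(spike_times[0])      # short-latency identification from onset
    rate = len(spike_times) / window       # spike-counting check over the window
    if abs(rate - expected_rate[label]) <= rate_tol:
        return label
    return None                            # onset guess rejected by the rate check
```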
  • Patent number: 10268950
    Abstract: A disclosed face detection system (and method) is based on a structure of a convolutional neural network (CNN). One aspect concerns a method for automatically training a CNN for face detection. The training is performed such that a balanced number of face images and non-face images is used for training, by deriving additional face images from the face images. The training is also performed by adaptively changing the number of trainings of a stage according to automatic stopping criteria. Another aspect concerns a system for performing image detection by integrating data at different scales (i.e., different image extents) for better use of data in each scale. The system may include CNNs automatically trained using the method disclosed herein.
    Type: Grant
    Filed: November 15, 2014
    Date of Patent: April 23, 2019
    Assignee: BEIJING KUANGSHI TECHNOLOGY CO., LTD.
    Inventors: Qi Yin, Zhimin Cao, Kai Jia
  • Patent number: 10262237
    Abstract: Technologies for multi-scale object detection include a computing device including a multi-layer convolution network and a multi-scale region proposal network (RPN). The multi-layer convolution network generates a convolution map based on an input image. The multi-scale RPN includes multiple RPN layers, each with a different receptive field size. Each RPN layer generates region proposals based on the convolution map. The computing device may include a multi-scale object classifier that includes multiple region of interest (ROI) pooling layers and multiple associated fully connected (FC) layers. Each ROI pooling layer has a different output size, and each FC layer may be trained for an object scale based on the output size of the associated ROI pooling layer. Each ROI pooling layer may generate pooled ROIs based on the region proposals and each FC layer may generate object classification vectors based on the pooled ROIs. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 8, 2016
    Date of Patent: April 16, 2019
    Assignee: Intel Corporation
    Inventors: Byungseok Roh, Kye-Hyeon Kim, Sanghoon Hong, Minje Park, Yeongjae Cheon
  • Patent number: 10255910
    Abstract: Deep Neural Networks (DNN) are time shifted relative to one another and trained. The time-shifted networks may then be combined to improve recognition accuracy. The approach is based on an automatic speech recognition (ASR) system using DNN and using time shifted features. Initially, a regular ASR model is trained to produce a first trained DNN. Then a top layer (e.g., SoftMax layer) and the last hidden layer (e.g., Sigmoid) are fine-tuned with same data set but with a feature window left- and right-shifted to create respective second and third left-shifted and right-shifted DNNs. From these three DNN networks, four combination networks may be generated: left- and right-shifted, left-shifted and centered, centered and right-shifted, and left-shifted, centered, and right-shifted. The centered networks are used to perform the initial (first-pass) ASR. Then the other six networks are used to perform rescoring.
    Type: Grant
    Filed: September 18, 2017
    Date of Patent: April 9, 2019
    Assignee: AppTek, Inc.
    Inventors: Mudar Yaghi, Hassan Sawaf, Jinato Jiang
  • Patent number: 10255663
    Abstract: According to one embodiment, an image processing device includes a storage and an image processor. The storage stores therein an input image. The image processor segments the input image into a plurality of regions by using a first convolutional neural network (CNN), generates a first image by converting pixel values of pixels in a first region included in the regions into a first value, and performs image processing on the first image by using a second CNN to generate a second image.
    Type: Grant
    Filed: August 15, 2017
    Date of Patent: April 9, 2019
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Junji Otsuka, Takuma Yamamoto, Nao Mishima
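    The first stage of this pipeline (segment with one CNN, then overwrite a region's pixels with a single value before the second CNN) can be sketched as a mask fill. The function name and fill value are illustrative assumptions.

```python
import numpy as np

def preprocess_region(image, mask, fill_value=0):
    """Replace the pixels of a segmented region with one value,
    producing the 'first image' handed to the second CNN."""
    out = image.copy()          # keep the stored input image intact
    out[mask] = fill_value      # convert the first region's pixel values
    return out
```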
  • Patent number: 10241583
    Abstract: Embodiments of the present disclosure provide techniques and configurations for an apparatus to determine a command to the apparatus, based on vibration patterns. In one instance, the apparatus may include a body with at least one surface to receive one or more user inputs; at least one sensor disposed to be in contact with the body to detect vibration manifested by the surface in response to the user input, and generate a signal indicative of the vibration detected; and a controller coupled with the sensor to process the vibration-indicative signal, to identify a vibration pattern, and to determine a command based at least in part on the vibration pattern and on a result of processing the signal. The command may be provided to interact with, operate, or control the apparatus. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: March 26, 2019
    Assignee: Intel Corporation
    Inventors: Paulo Lopez Meyer, Hector Alfonso Cordourier Maruri, Julio Cesar Zamora Esquivel, Alejandro Ibarra Von Borstel, Jose Rodrigo Camacho Perez, Jorge Carlos Romero Aragon
  • Patent number: 10223614
    Abstract: A learning method for detecting at least one lane based on a convolutional neural network (CNN) is provided. The learning method includes steps of: (a) a learning device obtaining encoded feature maps, and information on lane candidate pixels in an input image; (b) the learning device classifying a first part of the lane candidate pixels, whose probability scores are not smaller than a predetermined threshold, as strong line pixels, and classifying a second part of the lane candidate pixels, whose probability scores are less than the threshold but not less than another predetermined threshold, as weak line pixels; and (c) the learning device, if distances between the weak line pixels and the strong line pixels are less than a predetermined distance, classifying the weak line pixels as pixels of additional strong lines, and determining that the pixels of the strong lines and the additional strong lines correspond to pixels of the lane.
    Type: Grant
    Filed: September 4, 2018
    Date of Patent: March 5, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
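    Steps (b) and (c) amount to two-threshold (hysteresis-style) classification: high-confidence pixels are strong line pixels, mid-confidence pixels are weak line pixels, and weak pixels near strong ones are promoted. A minimal sketch with assumed thresholds and an assumed distance rule:

```python
import math

def classify_lane_pixels(scores, coords, high=0.8, low=0.4, max_dist=2.0):
    """Return the indices of pixels accepted as lane pixels."""
    strong = [i for i, s in enumerate(scores) if s >= high]
    weak = [i for i, s in enumerate(scores) if low <= s < high]
    promoted = []
    for w in weak:
        # a weak line pixel close to any strong line pixel becomes strong
        if any(math.dist(coords[w], coords[s]) < max_dist for s in strong):
            promoted.append(w)
    return sorted(strong + promoted)
```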
  • Patent number: 10210430
    Abstract: A method for extracting hierarchical features from data defined on a geometric domain is provided. The method includes applying on said data at least an intrinsic convolution layer, including the steps of applying a patch operator to extract a local representation of the input data around a point on the geometric domain and outputting the correlation of a patch resulting from the extraction with a plurality of templates. A system to implement the method is also described.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: February 19, 2019
    Assignee: Fabula AI Limited
    Inventors: Michael Bronstein, Davide Boscaini, Federico Monti
  • Patent number: 10198629
    Abstract: A method for cropping photo images captured by a user from an image of a page of a photo album is described. Corners in the page image are detected using a corner detection algorithm or by detecting intersections of line segments (and their extensions) in the image using edge, corner, or line detection techniques. Pairs of the detected corners are used to define all potential quads, which are then qualified according to various criteria. A correlation matrix is generated for each potential pair of the qualified quads, and candidate quads are selected based on the Eigenvector of the correlation matrix. The content of the selected quads is checked using a salience map that may be based on a trained neural network, and the resulting photo images are extracted as individual files for further handling or manipulation by the user.
    Type: Grant
    Filed: March 18, 2018
    Date of Patent: February 5, 2019
    Assignee: PHOTOMYNE LTD.
    Inventors: Yair Segalovitz, Omer Shoor, Yaron Lipman, Nir Tzemah, Natalie Verter
  • Patent number: 10152676
    Abstract: Features are disclosed for distributing the training of models over multiple computing nodes (e.g., servers or other computing devices). Each computing device may include a separate copy of the model to be trained, and a subset of the training data to be used. A computing device may determine updates for parameters of the model based on processing of a portion of the training data. A portion of those updates may be selected for application to the model and synchronization with other computing devices. In some embodiments, the portion of the updates is selected based on a threshold value. Other computing devices can apply the received portion of the updates such that the copy of the model being trained in each individual computing device may be substantially synchronized, even though each computing device may be using a different subset of training data to train the model.
    Type: Grant
    Filed: November 22, 2013
    Date of Patent: December 11, 2018
    Assignee: Amazon Technologies, Inc.
    Inventor: Nikko Strom
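    The threshold-based selection of updates to synchronize can be sketched as follows; the dict-of-scalars representation and the residual bookkeeping are simplifying assumptions (real model updates are tensors).

```python
def select_updates(gradients, threshold=0.1):
    """Split parameter updates into a part to synchronize and a local residual."""
    shared, residual = {}, {}
    for name, grad in gradients.items():
        if abs(grad) >= threshold:
            shared[name] = grad      # large enough to send to other nodes
        else:
            residual[name] = grad    # kept locally for a later round
    return shared, residual
```

    Because every node applies the same shared portion, the per-node copies of the model stay substantially synchronized even though each node trains on a different subset of the data.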
  • Patent number: 10152673
    Abstract: Recurrent neural networks are powerful tools for handling incomplete data problems in machine learning thanks to their significant generative capabilities. However, the computational demand for algorithms to work in real-time applications requires specialized hardware and software solutions. We disclose a method for adding recurrent processing capabilities into a feedforward network without sacrificing much computational efficiency. We assume a mixture model and generate samples of the last hidden layer according to the class decisions of the output layer, modify the hidden layer activity using the samples, and propagate to lower layers. For an incomplete data problem, the iterative procedure emulates a feedforward-feedback loop, filling in the missing hidden layer activity with meaningful representations.
    Type: Grant
    Filed: June 21, 2013
    Date of Patent: December 11, 2018
    Assignee: ASELSAN ELEKTRONIK SANAYI VE TICARET ANONIM SIRKETI
    Inventors: Ozgur Yilmaz, Huseyin Ozkan
  • Patent number: 10135723
    Abstract: A method (and system) for supervised network clustering includes receiving and reading node labels from a plurality of nodes on a network, as executed by a processor on a computer having access to the network, the network defined as a group of entities interconnected by links. The node labels are used to define densities associated with the nodes. Node components are extracted from the network, based on using thresholds on densities. Smaller components having a size below a user-defined threshold are merged.
    Type: Grant
    Filed: September 11, 2012
    Date of Patent: November 20, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Charu C. Aggarwal
  • Patent number: 10133964
    Abstract: A method for training a system for reconstructing a magnetic resonance image includes: under-sampling image data from each of a plurality of fully-sampled images; and inputting the under-sampled image data to a multi-scale neural network comprising sequentially connected layers. Each layer has an input for receiving input image data and an output for outputting reconstructed image data. Each layer performs a process comprising: decomposing the array of input image data; applying a thresholding function to the decomposed image data, to form a shrunk data, the thresholding function outputting a value asymptotically approaching one when the thresholding function receives an input having a magnitude greater than a first value, reconstructing the shrunk data for combining with a reconstructed image data output by another one of the layers to form updated reconstructed image data, and machine-learning at least one parameter of the decomposing, the thresholding function, or the reconstructing.
    Type: Grant
    Filed: March 28, 2017
    Date of Patent: November 20, 2018
    Assignee: Siemens Healthcare GmbH
    Inventors: Yi Guo, Boris Mailhe, Xiao Chen, Mariappan S. Nadar
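    The described thresholding function, near zero for small-magnitude inputs and asymptotically approaching one above the first value, behaves like a smooth gate on the decomposed image data. One plausible realization is a sigmoid gain; the exact functional form and constants here are assumptions, not the patented function.

```python
import math

def shrink_gain(x, first_value=1.0, sharpness=4.0):
    """Smooth gain: ~0 well below |x| = first_value, approaches 1 above it."""
    return 1.0 / (1.0 + math.exp(-sharpness * (abs(x) - first_value)))
```

    Multiplying each decomposed coefficient by such a gain would form the "shrunk data" that is then reconstructed and combined with another layer's output.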
  • Patent number: 10127659
    Abstract: Methods and apparatus for improved deep learning for image acquisition are provided. An imaging system configuration apparatus includes a training learning device including a first processor to implement a first deep learning network (DLN) to learn a first set of imaging system configuration parameters based on a first set of inputs from a plurality of prior image acquisitions to configure at least one imaging system for image acquisition, the training learning device to receive and process feedback including operational data from the plurality of image acquisitions by the at least one imaging system. The example apparatus includes a deployed learning device including a second processor to implement a second DLN, the second DLN generated from the first DLN of the training learning device, the deployed learning device configured to provide a second imaging system configuration parameter to the imaging system in response to receiving a second input for image acquisition.
    Type: Grant
    Filed: November 23, 2016
    Date of Patent: November 13, 2018
    Assignee: General Electric Company
    Inventors: Jiang Hsieh, Gopal Avinash, Saad Sirohey
  • Patent number: 10102448
    Abstract: An image recognition (IR) computing device is provided herein, the IR computing device configured to receive a search request from a user computing device instructing the IR computing device to conduct a search for a subject clothing item pictured in a subject image, the search request including the subject image. The IR computing device is further configured to analyze the subject image and compare a plurality of vendor images to the subject image. The IR computing device may employ an object recognition component to analyze and compare the images. The IR computing device is further configured to generate a list of potential matches to the subject image, transmit the list of potential matches to the user computing device for display to a user within a clothing match app, and receive an indication from the user computing device of whether the search was successful or unsuccessful.
    Type: Grant
    Filed: October 16, 2015
    Date of Patent: October 16, 2018
    Assignee: EHDP Studios, LLC
    Inventors: Elizabeth Hill, Douglas Peterson
  • Patent number: 10089577
    Abstract: In an example, a circuit of a neural network implemented in an integrated circuit (IC) includes a layer of hardware neurons, the layer including a plurality of inputs, a plurality of outputs, a plurality of weights, and a plurality of threshold values, each of the hardware neurons including: a logic circuit having inputs that receive first logic signals from at least a portion of the plurality of inputs and outputs that supply second logic signals corresponding to an exclusive NOR (XNOR) of the first logic signals and at least a portion of the plurality of weights; a counter circuit having inputs that receive the second logic signals and an output that supplies a count signal indicative of the number of the second logic signals having a predefined logic state; and a compare circuit having an input that receives the count signal and an output that supplies a logic signal having a logic state indicative of a comparison between the count signal and a threshold value of the plurality of threshold values; wherein
    Type: Grant
    Filed: August 5, 2016
    Date of Patent: October 2, 2018
    Assignee: XILINX, INC.
    Inventors: Yaman Umuroglu, Michaela Blott
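    The XNOR-popcount-threshold datapath of each hardware neuron can be modeled in software; the bit-list representation below is an illustrative stand-in for the logic, counter, and compare circuits.

```python
def binarized_neuron(inputs, weights, threshold):
    """XNOR the input bits with the weight bits, count agreements,
    and fire when the popcount reaches the threshold."""
    assert len(inputs) == len(weights)
    # XNOR of two bits is 1 exactly when they agree
    popcount = sum(1 for x, w in zip(inputs, weights) if x == w)
    return 1 if popcount >= threshold else 0
```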
  • Patent number: 9959615
    Abstract: A system and method for detecting pulmonary embolisms in a subject's vasculature are provided. In some aspects, the method includes acquiring a set of images representing a vasculature of the subject, and analyzing the set of images to identify pulmonary embolism candidates associated with the vasculature. The method also includes generating, for identified pulmonary embolism candidates, image patches based on a vessel-aligned image representation, and applying a set of convolutional neural networks to the generated image patches to identify pulmonary embolisms. The method further includes generating a report indicating identified pulmonary embolisms.
    Type: Grant
    Filed: July 1, 2016
    Date of Patent: May 1, 2018
    Assignee: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Jianming Liang, Nima Tajbakhsh
  • Patent number: 9916666
    Abstract: An image processing apparatus includes: an imaging distance estimating unit configured to estimate an imaging distance to a subject shown in an image; an examination region setting unit configured to set an examination region in the image such that an index indicating a spread of a distribution of imaging distances to the subject shown in the examination region is within a given range; and an abnormal structure identifying unit configured to identify whether or not a microstructure of the subject shown in the examination region is abnormal, by using texture feature data that enables identification of an abnormality in the microstructure of the subject shown in the examination region, the texture feature data being specified according to the examination region.
    Type: Grant
    Filed: December 9, 2014
    Date of Patent: March 13, 2018
    Assignee: OLYMPUS CORPORATION
    Inventors: Yamato Kanda, Makoto Kitamura, Takashi Kono, Masashi Hirota, Toshiya Kamiyama
  • Patent number: 9916531
    Abstract: An apparatus is described herein. The apparatus comprises an accumulator, a controller, and a convolutional neural network. The accumulator is to accumulate a plurality of values within a predetermined bit width. The controller is to determine a parameter quantization and a data quantization. The convolutional neural network is adapted to the data quantization, wherein a quantization point is selected based on the parameter quantization, data quantization, and accumulator bit width.
    Type: Grant
    Filed: June 22, 2017
    Date of Patent: March 13, 2018
    Assignee: Intel Corporation
    Inventors: Zoran Zivkovic, Barry de Bruin
  • Patent number: 9892344
    Abstract: Tasks such as object classification from image data can take advantage of a deep learning process using convolutional neural networks. These networks can include a convolutional layer followed by an activation layer, or activation unit, among other potential layers. Improved accuracy can be obtained by using a generalized linear unit (GLU) as an activation unit in such a network, where a GLU is linear for both positive and negative inputs, and is defined by a positive slope, a negative slope, and a bias. These parameters can be learned for each channel or a block of channels, and stacking those types of activation units can further improve accuracy.
    Type: Grant
    Filed: November 30, 2015
    Date of Patent: February 13, 2018
    Assignee: A9.COM, INC.
    Inventors: Son Dinh Tran, Raghavan Manmatha
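    The generalized linear unit described in the abstract above is piecewise linear with a learned positive slope, negative slope, and bias. A minimal sketch (assuming per-tensor rather than the patent's per-channel parameters):

    ```python
    import numpy as np

    def glu_activation(x, pos_slope, neg_slope, bias):
        """Generalized linear unit: linear for both positive and negative
        inputs, with separate slopes plus a bias. In the patent these are
        learnable per channel; here they are scalars broadcast over x."""
        return np.where(x >= 0, pos_slope * x, neg_slope * x) + bias
    ```

    ReLU is the special case pos_slope=1, neg_slope=0, bias=0, and leaky ReLU sets neg_slope to a small positive value, which makes the GLU a strict generalization of both.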
  • Patent number: 9875531
    Abstract: The present invention relates to medical image viewing in relation with navigation in X-ray imaging. In order to provide improved X-ray images, for example for cardiac procedures, allowing a facilitated perception while ensuring that increased details are visible, a medical image viewing device (10) for navigation in X-ray imaging is provided that comprises an image data providing unit (12), a processing unit (14), and a display unit (16). The image data providing unit is configured to provide an angiographic image of a region of interest of an object. The processing unit is configured to identify a suppression area for partial bone suppression within the angiographic image, and to identify and locally suppress predetermined bone structures in the angiographic image in the suppression area, and to generate a partly-bone-suppressed image. Further, the display unit is configured to display the partly-bone-suppressed image.
    Type: Grant
    Filed: October 3, 2013
    Date of Patent: January 23, 2018
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Andre Goossen, Raoul Florent, Claire Levrier, Jens Von Berg
  • Patent number: 9852006
    Abstract: Embodiments of the invention relate to a neural network circuit comprising a memory block for maintaining neuronal data for multiple neurons, a scheduler for maintaining incoming firing events targeting the neurons, and a computational logic unit for updating the neuronal data for the neurons by processing the firing events. The network circuit further comprises at least one permutation logic unit enabling data exchange between the computational logic unit and at least one of the memory block and the scheduler. The network circuit further comprises a controller for controlling the computational logic unit, the memory block, the scheduler, and each permutation logic unit.
    Type: Grant
    Filed: March 28, 2014
    Date of Patent: December 26, 2017
    Assignee: International Business Machines Corporation
    Inventors: Filipp A. Akopyan, Rodrigo Alvarez-Icaza Rivera, John V. Arthur, Andrew S. Cassidy, Bryan L. Jackson, Paul A. Merolla, Dharmendra S. Modha, Jun Sawada
  • Patent number: 9800780
    Abstract: [Object] To obtain more useful images when images taken using a fisheye lens are used without being remapped. [Solution] Provided is an image processing device including: an image acquisition unit that acquires taken images taken in chronological succession via a fisheye lens; a vector acquisition unit that acquires motion vectors from the taken images; and a point detection unit that detects a point of origin or a point of convergence of the motion vectors.
    Type: Grant
    Filed: March 28, 2014
    Date of Patent: October 24, 2017
    Assignee: SONY CORPORATION
    Inventor: Kazuteru Matsumoto
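    A point of origin or convergence of motion vectors, as in the abstract above, can be estimated as the point nearest (in least squares) to all the lines the vectors define. The sketch below is one standard way to do this; it is not the patent's actual algorithm.

    ```python
    import numpy as np

    def convergence_point(points, directions):
        """Least-squares point nearest to all lines p_i + t * d_i,
        a common estimator for the focus of expansion of a motion
        vector field. Solves sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) p_i."""
        A = np.zeros((2, 2))
        b = np.zeros(2)
        for p, d in zip(points, directions):
            d = np.asarray(d, float)
            d = d / np.linalg.norm(d)
            M = np.eye(2) - np.outer(d, d)   # projector orthogonal to d
            A += M
            b += M @ np.asarray(p, float)
        return np.linalg.solve(A, b)
    ```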
  • Patent number: 9760807
    Abstract: A method and apparatus for automatically performing medical image analysis tasks using deep image-to-image network (DI2IN) learning. An input medical image of a patient is received. An output image that provides a result of a target medical image analysis task on the input medical image is automatically generated using a trained deep image-to-image network (DI2IN). The trained DI2IN uses a conditional random field (CRF) energy function to estimate the output image based on the input medical image and uses a trained deep learning network to model unary and pairwise terms of the CRF energy function. The DI2IN may be trained on an image with multiple resolutions. The input image may be split into multiple parts and a separate DI2IN may be trained for each part. Furthermore, the multi-scale and multi-part schemes can be combined to train a multi-scale multi-part DI2IN.
    Type: Grant
    Filed: December 16, 2016
    Date of Patent: September 12, 2017
    Assignee: Siemens Healthcare GmbH
    Inventors: S. Kevin Zhou, Dorin Comaniciu, Bogdan Georgescu, Yefeng Zheng, David Liu, Daguang Xu
  • Patent number: 9595117
    Abstract: A vessel suppression process is performed on RGB image data. A display of capillary vessels is suppressed by the vessel suppression process. After the vessel suppression process, tone of the RGB image data is reversed. Thereby, a suppressed-and-reversed image is produced. Even after the tone reversal, the capillary vessels do not interfere with observation of a ductal structure in the suppressed-and-reversed image, because the display of the capillary vessels is suppressed. In the suppressed-and-reversed image, the ductal structure is darker than a mucous membrane due to the tone reversal, so that the color of the ductal structure is close to that of indigo.
    Type: Grant
    Filed: February 27, 2014
    Date of Patent: March 14, 2017
    Assignee: FUJIFILM Corporation
    Inventor: Masahiro Kubo
  • Patent number: 9581726
    Abstract: A system and method for determination of importance of attributes among a plurality of attribute importance models incorporating a segmented attribute kerneling (SAK) method of attribute importance determination. The method permits operation of multiple attribute importance algorithms simultaneously, finds the intersecting subset of important attributes across the multiple techniques, and then outputs a consolidated ranked set. In addition, the method identifies and presents a ranked subset of the attributes excluded from the union.
    Type: Grant
    Filed: December 5, 2013
    Date of Patent: February 28, 2017
    Assignee: LANDMARK GRAPHICS CORPORATION
    Inventors: Keshava P Rangarajan, Serkan Dursun, Amit Kumar Singh
  • Patent number: 9519049
    Abstract: Methods and apparatus to receive radar pulses, process the received pulses using a weighted finite state machine to learn a model of an unknown emitter generating the received radar pulses, estimate a state/function of the unknown emitter from the received radar pulses using the learned model, and predict the next state/function of the unknown emitter from the received radar pulses by applying maximum likelihood estimation.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: December 13, 2016
    Assignee: Raytheon Company
    Inventors: Shubha Kadambe, Alexander Niechayev, Ted Y. Lumanlan
  • Patent number: 9430697
    Abstract: The present invention provides a face recognition method. The method includes obtaining a plurality of training face images which belong to a plurality of face classes and obtaining a plurality of training dictionaries corresponding to the training face images. A face class includes one or more training face images. The training dictionaries include a plurality of deep feature matrices. The method further includes obtaining an input face image. The input face image is partitioned into a plurality of blocks, whose corresponding deep feature vectors are extracted using a deep learning network. A collaborative representation model is applied to represent the deep feature vectors with the training dictionaries and representation vectors. A summation of errors for all blocks corresponding to a face class is computed as a residual error for the face class. The input face image is classified by selecting the face class that yields a minimum residual error.
    Type: Grant
    Filed: July 3, 2015
    Date of Patent: August 30, 2016
    Assignee: TCL RESEARCH AMERICA INC.
    Inventors: Michael Iliadis, Armin Kappeler, Haohong Wang
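    The minimum-residual classification step in the abstract above can be sketched with a ridge-regression collaborative representation per class. This is a simplification (one feature vector, per-class solve) of the patent's block-wise scheme; the regularizer and names are assumptions.

    ```python
    import numpy as np

    def classify_by_residual(feature, class_dicts, lam=0.01):
        """Represent `feature` over each class dictionary (columns = atoms)
        via regularized least squares, then return the class whose
        reconstruction has the smallest residual error."""
        best_class, best_err = None, float("inf")
        for label, D in class_dicts.items():
            # Ridge coefficients: (D^T D + lam I)^-1 D^T y
            G = D.T @ D + lam * np.eye(D.shape[1])
            alpha = np.linalg.solve(G, D.T @ feature)
            err = np.linalg.norm(feature - D @ alpha)
            if err < best_err:
                best_class, best_err = label, err
        return best_class
    ```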
  • Patent number: 9424489
    Abstract: Novel methods and systems for automated data analysis are disclosed. Data can be automatically analyzed to determine features in different applications, such as visual field analysis and comparisons. Anomalies between groups of objects may be detected through clustering of objects.
    Type: Grant
    Filed: July 28, 2015
    Date of Patent: August 23, 2016
    Assignee: CALIFORNIA INSTITUTE OF TECHNOLOGY
    Inventor: Wolfgang Fink
  • Patent number: 9418303
    Abstract: A traffic sign recognition method analyzes and classifies image data of a sensor in an information processing unit. The image data is analyzed to select an image portion judged to contain a traffic sign of a particular sign class. A class-specific feature is identified in the image portion. A modified image portion is created, in which the class-specific feature has been shifted to a center of the modified image portion. Then the modified image portion is evaluated by a learning-based algorithm to recognize the traffic sign of the particular sign class.
    Type: Grant
    Filed: September 24, 2010
    Date of Patent: August 16, 2016
    Assignee: Conti Temic microelectronic GmbH
    Inventor: Matthias Zobel
  • Patent number: 9367742
    Abstract: An object monitoring apparatus includes: an image receiver to receive at least one frame of captured images; an edge image generator to generate an edge image by detecting edges of objects appearing in the frame; a reference image generator to generate a reference image by detecting a part corresponding to a background in the frame to thereby define the detected part as a background edge; a candidate object extractor to extract one or more candidate object pixels by comparing the edge image with the reference image, and to extract a candidate object by grouping the extracted candidate object pixels into the candidate object; and an object-of-interest determiner to determine whether the candidate object is an object-of-interest based on a size of the candidate object and a duration time of detection of the candidate object.
    Type: Grant
    Filed: February 27, 2015
    Date of Patent: June 14, 2016
    Assignee: SK TELECOM CO., LTD.
    Inventors: Hee-yul Lee, Young-gwan Jo
  • Patent number: 9361534
    Abstract: An image recognition apparatus determines whether an image of a pedestrian is captured in a frame of video data captured by a vehicle mounted camera. A pre-processing unit determines a detection block from within a frame, and cuts out block image data corresponding to the detection block from the frame. Block data with a predetermined size that is smaller than the size of the detection block is created from the block image data. A neuro calculation unit executes neuro calculation on the block data, and calculates an output synapse. A post-processing unit determines whether a pedestrian exists within the detection block on the basis of the output synapse. When a pedestrian is detected, the post-processing unit creates result data, which is obtained by superimposing the detection block within which the pedestrian was detected onto the frame.
    Type: Grant
    Filed: July 25, 2012
    Date of Patent: June 7, 2016
    Assignee: MegaChips Corporation
    Inventors: Yusuke Mizuno, Daisuke Togo
  • Patent number: 9299022
    Abstract: Apparatus and methods for an extensible robotic device with artificial intelligence and receptive to training controls. In one implementation, a modular robotic system that allows a user to fully select the architecture and capability set of their robotic device is disclosed. The user may add/remove modules as their respective functions are required/obviated. In addition, the artificial intelligence is based on a neuronal network (e.g., spiking neural network), and a behavioral control structure that allows a user to train a robotic device in a manner conceptually similar to the mode in which one goes about training a domesticated animal such as a dog or cat (e.g., a positive/negative feedback training paradigm) is used. The trainable behavior control structure is based on the artificial neural network, which simulates the neural/synaptic activity of the brain of a living organism.
    Type: Grant
    Filed: August 26, 2014
    Date of Patent: March 29, 2016
    Assignee: QUALCOMM TECHNOLOGIES INC.
    Inventors: Marius Buibas, Charles Wheeler Sweet, III, Mark S. Caskey, Jeffrey Alexander Levin
  • Patent number: 9256795
    Abstract: Various embodiments enable the identification of semi-structured text entities in an image. The identification of the text entities is a relatively simple problem when the text is stored in a computer and free of errors, but much more challenging if the source is the output of an optical character recognition (OCR) engine from a natural scene image. Accordingly, output from an OCR engine is analyzed to isolate a character string indicative of a text entity. Each character of the string is then assigned to a character class to produce a character class string and the text entity of the string is identified based in part on a pattern of the character class string.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: February 9, 2016
    Assignee: A9.com, Inc.
    Inventors: Douglas Ryan Gray, Xiaofan Lin, Arnab Sanat Kumar Dhua, Yu Lou
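    The character-class-string idea in the abstract above can be sketched in a few lines: map each character to a coarse class symbol, then match the class string against entity patterns. The class alphabet and the phone-number pattern below are illustrative assumptions, not the patent's.

    ```python
    import re

    def char_class_string(s):
        """Map each character to a class symbol: 'd' digit, 'u' uppercase,
        'l' lowercase, 's' separator/other. Entity patterns are then
        matched against this class string instead of the raw text."""
        def cls(c):
            if c.isdigit():
                return "d"
            if c.isupper():
                return "u"
            if c.islower():
                return "l"
            return "s"
        return "".join(cls(c) for c in s)

    # Hypothetical example pattern: a US-style phone number as a class string.
    PHONE = re.compile(r"d{3}sd{3}sd{4}")

    def looks_like_phone(s):
        return PHONE.fullmatch(char_class_string(s)) is not None
    ```

    Matching on class strings makes the pattern robust to OCR confusions within a class (e.g., one digit misread as another still matches).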
  • Patent number: 9202144
    Abstract: Systems and methods are disclosed for detecting an object in an image by determining convolutional neural network responses on the image; mapping the responses back to their spatial locations in the image; and densely extracting shift-invariant activations of the convolutional neural network to produce dense features for the image.
    Type: Grant
    Filed: October 17, 2014
    Date of Patent: December 1, 2015
    Assignee: NEC Laboratories America, Inc.
    Inventors: Xiaoyu Wang, Yuanqing Lin, Will Zou, Miao Sun
  • Patent number: 9177550
    Abstract: Various technologies described herein pertain to conservatively adapting a deep neural network (DNN) in a recognition system for a particular user or context. A DNN is employed to output a probability distribution over models of context-dependent units responsive to receipt of captured user input. The DNN is adapted for a particular user based upon the captured user input, wherein the adaption is undertaken conservatively such that a deviation between outputs of the adapted DNN and the unadapted DNN is constrained.
    Type: Grant
    Filed: March 6, 2013
    Date of Patent: November 3, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Dong Yu, Kaisheng Yao, Hang Su, Gang Li, Frank Seide
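    The conservative adaptation in the abstract above constrains how far the adapted DNN's outputs may deviate from the unadapted DNN's. A minimal sketch of one way to realize such a constraint is a penalized update step; the L2-toward-original penalty below is a stand-in for the output-deviation constraint, and all names are illustrative.

    ```python
    import numpy as np

    def conservative_update(w_adapted, grad, w_original, lr=0.01, rho=0.5):
        """One gradient step on: loss(w) + (rho/2) * ||w - w_original||^2.
        The second term pulls the adapted parameters back toward the
        unadapted ones, keeping the adaptation conservative."""
        total_grad = grad + rho * (w_adapted - w_original)
        return w_adapted - lr * total_grad
    ```

    With rho=0 this reduces to plain SGD; larger rho trades fit to the user's data for closeness to the speaker-independent model.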
  • Patent number: 9177246
    Abstract: Apparatus and methods for an extensible robotic device with artificial intelligence and receptive to training controls. In one implementation, a modular robotic system that allows a user to fully select the architecture and capability set of their robotic device is disclosed. The user may add/remove modules as their respective functions are required/obviated. In addition, the artificial intelligence is based on a neuronal network (e.g., spiking neural network), and a behavioral control structure that allows a user to train a robotic device in a manner conceptually similar to the mode in which one goes about training a domesticated animal such as a dog or cat (e.g., a positive/negative feedback training paradigm) is used. The trainable behavior control structure is based on the artificial neural network, which simulates the neural/synaptic activity of the brain of a living organism.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: November 3, 2015
    Assignee: QUALCOMM TECHNOLOGIES INC.
    Inventors: Marius Buibas, Charles Wheeler Sweet, III, Mark S. Caskey, Jeffrey Alexander Levin
  • Patent number: 9158976
    Abstract: Local models learned from anomaly detection are used to rank detected anomalies. The local models include image feature values extracted from an image field of video image data with respect to different predefined spatial and temporal local units, wherein anomaly results are determined by failures to fit the local models applied by the anomaly detection module. Image feature values extracted from the image field local units associated with anomaly results are normalized, and image feature values extracted from the image field local units are clustered. Weights for anomaly results are learned as a function of the relations of the normalized extracted image feature values to the clustered image feature values. The normalized values are multiplied by the learned weights to generate ranking values to rank the anomalies.
    Type: Grant
    Filed: May 18, 2011
    Date of Patent: October 13, 2015
    Assignee: International Business Machines Corporation
    Inventors: Ankur Datta, Balamanohar Paluri, Sharathchandra U. Pankanti, Yun Zhai
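    The ranking step in the abstract above (normalize feature values, multiply by learned weights, sort) can be sketched directly. The min-max normalization below is an assumption; the abstract does not fix the scheme.

    ```python
    import numpy as np

    def rank_anomalies(features, weights):
        """Rank anomaly results: min-max normalize each feature column,
        multiply by learned weights to get a score per anomaly, and
        return indices sorted best (highest score) first."""
        F = np.asarray(features, float)
        lo, hi = F.min(axis=0), F.max(axis=0)
        span = np.where(hi > lo, hi - lo, 1.0)   # avoid divide-by-zero
        normed = (F - lo) / span
        scores = normed @ np.asarray(weights, float)
        order = np.argsort(-scores)              # indices, best first
        return order, scores
    ```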
  • Patent number: 9058536
    Abstract: Various embodiments enable a computing device to capture multiple images (or video) of text and provide at least a portion of the same to a recognizer to separately recognize text from each image. Each of the recognized outputs will typically include one or more text strings for each image. Substrings common to each of the one or more text strings are computed and compared to each text string within each image to determine an alignment consensus for each substring within the text. A template string is generated that includes each common substring in a position corresponding to a determined alignment for a respective substring. A character frequency vote is then applied to unresolved portions and the final text string is determined by filling the unresolved spaces with the character having the highest occurrence rate for a respective space.
    Type: Grant
    Filed: September 26, 2012
    Date of Patent: June 16, 2015
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Chang Yuan, Geoffrey Scott Heller, Louis L. LeGrand, III, Daniel Bibireata
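    The character frequency vote described in the abstract above can be sketched as a positional majority vote over already-aligned recognizer outputs. The alignment itself (the common-substring template step) is the harder part the abstract covers; here it is assumed done and all strings have equal length.

    ```python
    from collections import Counter

    def consensus_text(aligned_strings):
        """Resolve each position by the character with the highest
        occurrence rate across the aligned OCR outputs. Ties go to the
        character encountered first (Counter preserves insertion order)."""
        out = []
        for i in range(len(aligned_strings[0])):
            votes = Counter(s[i] for s in aligned_strings)
            out.append(votes.most_common(1)[0][0])
        return "".join(out)
    ```

    For example, three noisy reads "hell0", "he1lo", "hello" vote their way back to "hello", since each error appears in only one of the three reads.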