Patents Examined by Vincent Rudolph
  • Patent number: 10325183
    Abstract: An improved system and method for digital image classification is provided. A host computer having a processor is coupled to a memory storing thereon one or more reference feature data. A graphics processing unit (GPU) having a processor is coupled to the host computer and is configured to obtain, from the host computer, feature data corresponding to the digital image; to access, from the memory, the one or more reference feature data; and to determine a semi-metric distance, based on a Poisson-Binomial distribution, between the feature data and the one or more reference feature data. The host computer is configured to classify the digital image using the determined semi-metric distance.
    Type: Grant
    Filed: September 15, 2015
    Date of Patent: June 18, 2019
    Assignee: Temasek Life Sciences Laboratory Limited
    Inventors: Muthukaruppan Swaminathan, Tobias Sjoblom, Ian Cheong, Obdulio Piloto
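As an illustration of the distance computation described in the abstract of 10325183, here is a minimal Python sketch. It assumes the normalized per-dimension differences between feature vectors act as the success probabilities of a Poisson-Binomial distribution (a sum of independent, non-identical Bernoulli trials) and derives the distance from that distribution's mean and variance; the function name and the exact normalization are illustrative, not taken from the patent.

```python
import numpy as np

def pb_semimetric(x, y):
    """Illustrative semi-metric between two feature vectors.

    Each normalized per-dimension difference p_i is treated as the
    success probability of an independent Bernoulli trial; the distance
    is built from the mean and variance of the resulting
    Poisson-Binomial distribution (the sum of those trials).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    p = np.abs(x - y) / (np.abs(x) + np.abs(y) + 1e-12)  # p_i in [0, 1]
    mean = p.sum()                    # E[PB]   = sum of p_i
    var = (p * (1.0 - p)).sum()       # Var[PB] = sum of p_i (1 - p_i)
    # mean - var = sum of p_i^2: symmetric, zero iff x == y, but the
    # triangle inequality need not hold -- hence "semi-metric".
    return (mean - var) / len(p)

# Classification: assign the label of the nearest reference feature vector.
refs = {"class_a": np.random.rand(128), "class_b": np.random.rand(128)}
query = np.random.rand(128)
label = min(refs, key=lambda k: pb_semimetric(query, refs[k]))
```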
  • Patent number: 10318842
    Abstract: A learning method for learning parameters of a convolutional neural network (CNN) by using multiple video frames is provided. The learning method includes steps of: (a) a learning device applying at least one convolutional operation to a (t-k)-th input image corresponding to a (t-k)-th frame and applying at least one convolutional operation to a t-th input image corresponding to a t-th frame following the (t-k)-th frame, to thereby obtain a (t-k)-th feature map corresponding to the (t-k)-th frame and a t-th feature map corresponding to the t-th frame; (b) the learning device calculating a first loss by referring to at least one distance value between each of the pixels in the (t-k)-th feature map and each of the pixels in the t-th feature map; and (c) the learning device backpropagating the first loss to thereby optimize at least one parameter of the CNN.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: June 11, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
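The three steps of 10318842's learning method map directly onto a few lines of PyTorch. The toy CNN, frame sizes, and squared-distance loss below are placeholders; the patent only requires some per-pixel distance between the two feature maps.

```python
import torch
import torch.nn as nn

# Toy CNN standing in for the patent's convolutional layers.
cnn = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 16, 3, padding=1))
optimizer = torch.optim.SGD(cnn.parameters(), lr=1e-3)

frame_t_minus_k = torch.rand(1, 3, 64, 64)   # (t-k)-th input image
frame_t = torch.rand(1, 3, 64, 64)           # t-th input image

# (a) one feature map per frame from the shared convolutional layers
feat_a = cnn(frame_t_minus_k)
feat_b = cnn(frame_t)

# (b) first loss from per-pixel distances between the two feature maps
first_loss = (feat_a - feat_b).pow(2).mean()

# (c) backpropagate the first loss to optimize the CNN's parameters
optimizer.zero_grad()
first_loss.backward()
optimizer.step()
```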
  • Patent number: 10319094
    Abstract: A system for analyzing images of objects such as vehicles. According to certain aspects, the system includes a user interface device configured to capture a set of images depicting a target vehicle, and transfer the set of images to a server that stores a set of base image models. The server analyzes the set of images using a base image model corresponding to the target vehicle, a set of correlational filters, and a set of convolutional neural networks (CNNs) to determine a set of changes to the target vehicle as depicted in the set of images. The server further transmits, to the user interface device, information indicative of the set of changes for a user to view or otherwise access.
    Type: Grant
    Filed: May 20, 2016
    Date of Patent: June 11, 2019
    Assignee: CCC INFORMATION SERVICES INC.
    Inventors: Ke Chen, John L. Haller, Athinodoros S. Georghiades, Takeo Kanade
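A hedged sketch of how the "set of correlational filters" in 10319094 might flag candidate regions of change: FFT-based cross-correlation of a learned filter template against a crop of the target vehicle, with the response peak marking the best match. The shapes and normalization are illustrative only.

```python
import numpy as np

def correlate(image, template):
    """FFT-based cross-correlation of a filter template with an image;
    the peak of the response map marks the best match."""
    f_img = np.fft.fft2(image)
    f_tpl = np.fft.fft2(template, s=image.shape)   # zero-pad the template
    return np.fft.ifft2(f_img * np.conj(f_tpl)).real

image = np.random.rand(128, 128)    # grayscale crop of the target vehicle
template = np.random.rand(16, 16)   # one learned correlation filter
response = correlate(image, template)
peak = np.unravel_index(np.argmax(response), response.shape)
# A strong, localized peak suggests the filtered pattern (e.g., an
# undamaged panel) is present; a weak response flags a candidate change
# for the CNN stage to examine.
```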
  • Patent number: 10319101
    Abstract: Various embodiments are directed to deriving spatial attributes for imaged objects utilizing three-dimensional (3D) information. A server may obtain 3D survey data about an object from a pre-existing source. The server may then receive image data describing the object from a user device. The server may then utilize range imagery techniques to build a 3D point cloud from imagery in a pixel space. The server may then utilize horizontal positioning to place the 3D point cloud in proximity to the 3D survey data. The server may then fit the 3D survey data to the 3D point cloud. Finally, the server may record measurements and absolute locations of interest from the 3D point cloud and send them to the user device.
    Type: Grant
    Filed: June 27, 2016
    Date of Patent: June 11, 2019
    Assignee: Quantum Spatial, Inc.
    Inventors: David Brandt, Andrew Wakefield, Logan McConnell, Anand Iyer
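The fitting step in 10319101 — placing and fitting 3D survey data to the image-derived point cloud — is essentially rigid registration. Below is a self-contained Kabsch (SVD) alignment sketch assuming known point correspondences; a real pipeline would iterate correspondences as in ICP.

```python
import numpy as np

def rigid_fit(source, target):
    """Least-squares rigid alignment (Kabsch) of one 3D point set to
    another, a stand-in for fitting the 3D survey data to the
    image-derived point cloud."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

cloud = np.random.rand(500, 3)                      # 3D point cloud from imagery
t_true = np.array([1.0, 2.0, 0.5])
survey = cloud + t_true                             # pre-existing 3D survey data
R, t = rigid_fit(cloud, survey)                     # recovers identity R, t_true
```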
  • Patent number: 10321132
    Abstract: Systems and methods for detecting motion in compressed video are provided. Some methods can include parsing a stream of compressed video, obtaining macroblock size information from the parsed stream, computing factors derived from the macroblock size information, computing adaptive threshold values derived from relative frame characteristics of the compressed video, comparing the factors derived from the macroblock size information with the adaptive threshold values, and detecting motion based upon the comparing when at least one of the factors exceeds at least one of the adaptive threshold values. In some embodiments, detecting the motion can include performing spatio-temporal filtering on macroblocks in which the motion is detected or performing spatio-temporal filtering on at least one non-motion macroblock.
    Type: Grant
    Filed: August 4, 2016
    Date of Patent: June 11, 2019
    Assignee: Honeywell International Inc.
    Inventors: Yadhunandan Us, Gurumurthy Swaminathan, Kwong Wing Au
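A compact sketch of the comparison at the heart of 10321132: per-macroblock bit counts parsed from the stream yield size factors, an adaptive threshold is derived from the frame's own statistics, and motion is declared where a factor exceeds the threshold. The specific factor and threshold formulas are illustrative, and the spatio-temporal filtering step is omitted.

```python
import numpy as np

def detect_motion(mb_sizes, k=2.0):
    """Flag macroblocks whose encoded size stands out from the frame.

    mb_sizes: 2D array of per-macroblock bit counts parsed from the
    compressed stream. Factor and threshold are illustrative choices.
    """
    factors = mb_sizes / (mb_sizes.mean() + 1e-9)        # size-based factors
    threshold = 1.0 + k * mb_sizes.std() / (mb_sizes.mean() + 1e-9)
    return factors > threshold                           # compare and detect

sizes = np.random.poisson(40, size=(30, 40)).astype(float)
sizes[10:14, 20:25] += 200           # a moving region costs more bits
print(detect_motion(sizes).sum(), "macroblocks flagged")
```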
  • Patent number: 10303977
    Abstract: According to exemplary methods of training a convolutional neural network, input images are received into a computerized device having an image processor. The image processor evaluates the input images using first convolutional layers. The number of first convolutional layers is based on a first size for the input images. Each layer of the first convolutional layers receives layer input signals comprising features of the input images and generates layer output signals that include signals from the input images and ones of the layer output signals from previous layers within the first convolutional layers. Responsive to an input image being a second size larger than the first size, additional convolutional layers are added to the convolutional neural network. The number of additional convolutional layers is based on the second size in relation to the first size. The additional convolutional layers are initialized using weights from the first convolutional layers.
    Type: Grant
    Filed: June 28, 2016
    Date of Patent: May 28, 2019
    Assignee: Conduent Business Services, LLC
    Inventors: Safwan R. Wshah, Beilei Xu, Orhan Bulan, Jess R. Gentner, Peter Paul
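The growth rule in 10303977 can be pictured in PyTorch: when the input is larger than the size the base network was designed for, append extra convolutional layers and initialize them from the weights of the existing ones. The one-extra-layer-per-halving rule below is an illustrative reading of "based on the second size in relation to the first size".

```python
import torch.nn as nn

def grow_network(layers, input_size, base_size):
    """Append extra conv layers for inputs larger than the base size,
    seeding their weights from the existing layers."""
    extra, size = 0, input_size
    while size > base_size:          # one extra layer per halving (illustrative)
        size //= 2
        extra += 1
    n_base = len(layers)
    for i in range(extra):
        donor = layers[i % n_base]   # reuse an early layer as the donor
        clone = nn.Conv2d(donor.in_channels, donor.out_channels,
                          donor.kernel_size, padding=donor.padding)
        clone.load_state_dict(donor.state_dict())   # initialize from its weights
        layers.append(clone)
    return layers

base = nn.ModuleList([nn.Conv2d(16, 16, 3, padding=1) for _ in range(4)])
grown = grow_network(base, input_size=512, base_size=128)
print(len(grown))   # 4 base layers + 2 added for the larger input
```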
  • Patent number: 10303980
    Abstract: A method for learning parameters of a CNN capable of detecting obstacles in a training image is provided. The method includes steps of: a learning device (a) receiving the training image and instructing convolutional layers to generate encoded feature maps from the training image; (b) instructing deconvolutional layers to generate decoded feature maps; (c) supposing that each cell of a grid with rows and columns is generated by dividing a decoded feature map with respect to the direction of the rows and the columns, concatenating the features of the rows per column in the direction of the channel, to generate a reshaped feature map; (d) calculating a loss by referring to the reshaped feature map and its GT image, in which each row marks the GT position of the nearest obstacle on each column, searched upward from the lowest cell of that column; and (e) backpropagating the loss.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: May 28, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
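Step (c) of 10303980 — concatenating the features of all rows of each column along the channel direction — is a single tensor reshape. The sketch below adds an illustrative 1x1-conv head and cross-entropy loss against the per-column nearest-obstacle row, standing in for steps (d) and (e); the patent's exact layers may differ.

```python
import torch
import torch.nn.functional as F

# Decoded feature map: batch, channels, grid rows, grid columns.
decoded = torch.rand(2, 8, 16, 40)
N, C, H, W = decoded.shape

# Concatenate the features of all rows of each column along the channel
# axis: one C*H-dimensional descriptor per column (the reshaped map).
reshaped = decoded.reshape(N, C * H, 1, W)

# A 1x1 conv scores, for every column, which of the H rows holds the
# nearest obstacle; cross-entropy against the per-column GT row index
# gives the loss to backpropagate.
head = torch.nn.Conv2d(C * H, H, kernel_size=1)
scores = head(reshaped).squeeze(2)        # (N, H, W): row scores per column
gt_rows = torch.randint(0, H, (N, W))     # nearest-obstacle row per column
loss = F.cross_entropy(scores, gt_rows)
loss.backward()
```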
  • Patent number: 10303960
    Abstract: An image processing device includes a contrast correction unit, a color analysis unit, an object detection unit, and an object relation analysis unit. The contrast correction unit creates a contrast-corrected image by correcting the contrast of an input image captured by a vehicle camera. The color analysis unit creates a color-corrected image by correcting the colors of the input image. The object detection unit detects a main sign included in the input image based on the contrast-corrected image and detects an auxiliary sign included in the input image based on the color-corrected image. The object relation analysis unit recognizes a traffic sign as a combination of a main sign and an auxiliary sign by associating the two with each other based on the positional relationship, in the input image, between the main and auxiliary signs detected by the object detection unit.
    Type: Grant
    Filed: January 18, 2017
    Date of Patent: May 28, 2019
    Assignee: HITACHI, LTD.
    Inventors: Hideharu Hattori, Yoshifumi Izumi
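One plausible concrete reading of 10303960's two correction paths, using OpenCV: CLAHE on the luminance channel as the contrast correction feeding main-sign detection, gray-world white balance as the color correction feeding auxiliary-sign detection, and a positional test pairing an auxiliary sign mounted just below a main sign. All thresholds are illustrative.

```python
import cv2
import numpy as np

frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # vehicle-camera input

# Contrast-corrected image: CLAHE on the luminance channel.
l, a, b = cv2.split(cv2.cvtColor(frame, cv2.COLOR_BGR2LAB))
l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
contrast_img = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

# Color-corrected image: gray-world white balance.
means = frame.reshape(-1, 3).mean(axis=0)
color_img = np.clip(frame * (means.mean() / means), 0, 255).astype(np.uint8)

def associate(main_box, aux_box, max_gap=40):
    """Pair a main sign with an auxiliary sign mounted just below it,
    based only on the boxes' positional relationship in the image."""
    mx, my, mw, mh = main_box
    ax, ay, aw, ah = aux_box
    aligned = abs((mx + mw / 2) - (ax + aw / 2)) < mw   # shared vertical axis
    below = 0 <= ay - (my + mh) < max_gap               # small vertical gap
    return aligned and below

print(associate((100, 50, 60, 60), (105, 115, 50, 25)))  # True: one traffic sign
```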
  • Patent number: 10304248
    Abstract: Provided is a method of providing an augmented reality interaction service, the method including: generating reference coordinates based on a 3-dimensional (3D) image including depth information obtained through a camera; segmenting a region corresponding to a pre-set object from the 3D image including the depth information obtained through the camera, based on depth information of the pre-set object and color space conversion; segmenting a sub-object having a motion component from the pre-set object in the segmented region, and detecting a feature point by modeling the sub-object and a palm region linked to the sub-object based on a pre-set algorithm; and controlling a 3D object for use of an augmented reality service by estimating a posture of the sub-object based on joint information of the pre-set object provided through a certain user interface (UI).
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: May 28, 2019
    Assignee: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Woon Tack Woo, Tae Jin Ha
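The segmentation step of 10304248 combines a depth gate with a color gate after color space conversion. A minimal OpenCV sketch, assuming the pre-set object is a hand and using an illustrative depth range and a common YCrCb skin-tone range:

```python
import cv2
import numpy as np

# Synthetic RGB-D input standing in for the camera's 3D image.
color = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
depth = np.random.uniform(300, 2000, (240, 320)).astype(np.float32)  # millimeters

# Depth gate: keep points within the hand's assumed interaction range.
depth_mask = cv2.inRange(depth, 300, 800)

# Color gate after color-space conversion: skin tones separate more
# cleanly in YCrCb than in RGB (range values are a common heuristic).
ycrcb = cv2.cvtColor(color, cv2.COLOR_BGR2YCrCb)
skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

# Region corresponding to the pre-set object = where both gates agree.
hand_mask = cv2.bitwise_and(depth_mask, skin_mask)
```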
  • Patent number: 10297010
    Abstract: A method for reducing grid line artifacts in an X-ray image is disclosed, which includes acquiring an X-ray image by scanning an object, wherein the X-ray image comprises grid line artifacts; decomposing the X-ray image into a high frequency image and a low frequency image, wherein the high frequency image comprises the grid line artifacts; filtering the high frequency image to reduce the grid line artifacts in the high frequency image so as to obtain a filtered high frequency image; and combining the filtered high frequency image with the low frequency image to reconstruct an output image. A system adopting the above method is also disclosed.
    Type: Grant
    Filed: March 15, 2017
    Date of Patent: May 21, 2019
    Assignee: GENERAL ELECTRIC COMPANY
    Inventors: Zhaoxia Zhang, Kun Tao, Hao Lai, Ming Yan, Han Kang
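The pipeline of 10297010 in a few NumPy/SciPy lines: split the X-ray image into low- and high-frequency parts, filter the high-frequency part to suppress the periodic grid lines, and recombine. The Gaussian decomposition and the directional median filter are illustrative stand-ins for the patent's filters.

```python
import numpy as np
from scipy import ndimage

xray = np.random.rand(256, 256)                 # acquired X-ray image
xray += 0.2 * np.sin(np.arange(256) * 2.0)      # grid line artifacts (vertical stripes)

# Decompose: low-frequency image by smoothing; the high-frequency
# remainder carries the grid lines.
low = ndimage.gaussian_filter(xray, sigma=4)
high = xray - low

# Filter the high-frequency image: a median taken across the stripe
# direction suppresses the periodic grid lines.
high_filtered = ndimage.median_filter(high, size=(1, 9))

# Recombine to reconstruct the output image.
output = low + high_filtered
```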
  • Patent number: 10285762
    Abstract: A computer implemented method for assessing an arterio-venous malformation (AVM) may include, for example, receiving a patient-specific model of a portion of an anatomy of a patient; using a computer processor to analyze the patient-specific model for identifying one or more blood vessels associated with the AVM, in the patient-specific model; and estimating a risk of an undesirable outcome caused by the AVM, by performing computer simulations of blood flow through the one or more blood vessels associated with the AVM in the patient-specific model.
    Type: Grant
    Filed: May 11, 2018
    Date of Patent: May 14, 2019
    Assignee: HeartFlow, Inc.
    Inventors: Sethuraman Sankaran, Christopher K. Zarins, Leo Grady
  • Patent number: 10290373
    Abstract: A patient couch with a control system and a method for controlling the patient couch are provided. The control system is based on a three-dimensional (3D) camera for recording first 3D images of a recording area. The 3D camera is attached to the patient couch. An image processing unit for identifying at least one first control gesture in the first 3D images allows safe and fast control of the x-ray device. To carry out the control, the control system includes a control unit with a motor for controlling a movement of the patient couch based on the first control gesture.
    Type: Grant
    Filed: May 6, 2015
    Date of Patent: May 14, 2019
    Assignee: Siemens Aktiengesellschaft
    Inventors: Anja Jäger, Robert Kagermeier
  • Patent number: 10282606
    Abstract: In an example embodiment, a web page is obtained using a web page address stored in a first record and is parsed to extract one or more images from the web page along with a first plurality of features for each of the one or more images from the web page. Information about each image of the web page and the extracted first plurality of features for the web page are input into a supervised machine learning classifier to calculate a logo confidence score for each image of the web page, the logo confidence score indicating the probability that the image is an organization logo. In response to a particular image in the web page having a logo confidence score transgressing a first threshold, the particular image is injected into an organization logo field of the first record.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: May 7, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Songtao Guo, Christopher Matthew Degiere, Jingjing Huang, Aarti Kumar, Alex Ching Lai, Xian Li
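The scoring-and-injection logic of 10282606 is a standard supervised-classification loop. In the sketch below, the feature set, the logistic-regression classifier, and the 0.9 threshold are all illustrative; the patent only requires some supervised machine learning classifier producing a logo confidence score per image.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features extracted per image of a web page (illustrative set: width,
# height, aspect ratio, distance from page top, filename-contains-"logo").
X_train = np.random.rand(200, 5)
y_train = np.random.randint(0, 2, 200)            # 1 = organization logo

clf = LogisticRegression().fit(X_train, y_train)  # supervised classifier

def maybe_inject_logo(record, images, features, threshold=0.9):
    """Score each image of the page; inject the best one into the
    record's logo field if its confidence transgresses the threshold."""
    scores = clf.predict_proba(features)[:, 1]    # logo confidence scores
    best = scores.argmax()
    if scores[best] > threshold:
        record["organization_logo"] = images[best]
    return record

record = {"web_page_address": "https://example.com"}
record = maybe_inject_logo(record, ["img1.png", "img2.png"], np.random.rand(2, 5))
```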
  • Patent number: 10282840
    Abstract: An image reporting method is provided. The image reporting method comprises the steps of retrieving an image representation of a sample structure from an image source; mapping a generic structure to the sample structure, the generic structure being related to the sample structure; determining a region of interest within the sample structure based on content of the image representation of the sample structure; providing a focused set of representations of diagnostic knowledge which is contextually appropriate to the region of interest and prompting the user to select at least one diagnostic finding from the focused set of knowledge representations or to enter free-form text; and generating a diagnostic report based on the selections and free-form text entries.
    Type: Grant
    Filed: November 30, 2013
    Date of Patent: May 7, 2019
    Inventors: Armin Moehrle, Crandon F Clark
  • Patent number: 10275667
    Abstract: A learning method of a CNN capable of detecting one or more lanes using a lane model is provided. The method includes steps of: a learning device (a) acquiring information on the lanes from at least one image data set, wherein the information on the lanes is represented by respective sets of coordinates of pixels on the lanes; (b) calculating one or more function parameters of a lane modeling function of each of the lanes by using the coordinates of the pixels on the lanes; and (c) performing processes of classifying the function parameters into K cluster groups by using a clustering algorithm, assigning each of one or more cluster IDs to each of the cluster groups, and generating a cluster ID GT vector representing GT information on probabilities of being the cluster IDs corresponding to types of the lanes.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: April 30, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
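Steps (b) and (c) of 10275667 in NumPy/scikit-learn terms: fit a lane modeling function to each lane's pixel coordinates, cluster the fitted parameters into K groups, and turn each lane's cluster ID into a one-hot GT vector. The quadratic model and K=4 are illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def make_lane():
    """One lane as a set of pixel coordinates (synthetic stand-in)."""
    x = np.linspace(0, 100, 50)
    y = 0.01 * x**2 + 0.5 * x + np.random.randn(50)
    return np.column_stack([x, y])

lanes = [make_lane() for _ in range(60)]   # (a) information on the lanes

# (b) function parameters of a quadratic lane modeling function per lane
params = np.array([np.polyfit(l[:, 0], l[:, 1], deg=2) for l in lanes])

# (c) classify the parameters into K cluster groups, assign cluster IDs,
# and build one-hot cluster ID GT vectors for the lanes' types.
K = 4
ids = KMeans(n_clusters=K, n_init=10).fit_predict(params)
gt_vectors = np.eye(K)[ids]
```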
  • Patent number: 10262214
    Abstract: A learning method of a CNN for detecting lanes is provided. The method includes steps of: a learning device (a) instructing convolutional layers to generate feature maps by applying convolution operations to an input image from an image data set; (b) instructing an FC layer to generate an estimated result vector of cluster ID classifications of the lanes by feeding a specific feature map among the feature maps into the FC layer; and (c) instructing a loss layer to generate a classification loss by referring to the estimated result vector and a cluster ID GT vector, and backpropagate the classification loss, to optimize device parameters of the CNN; wherein the cluster ID GT vector is GT information on probabilities of being cluster IDs per each of cluster groups assigned to function parameters of a lane modeling function by clustering the function parameters based on information on the lanes.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: April 16, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10262429
    Abstract: The marker has thereon a plane indicator 2 and one three-dimensional indicator 3. Based on the location and height of the three-dimensional indicator 3 relative to the plane indicator 2, as well as the estimated position of the camera relative to the marker, the center positions of the upper and lower faces of the three-dimensional indicator 3 in the camera image are estimated. The estimated center position of the upper face of the three-dimensional indicator 3 is compared with the position detected from the image. When the error between them is equal to or greater than a predetermined value, a rotational transformation is carried out based on the estimated posture.
    Type: Grant
    Filed: September 8, 2014
    Date of Patent: April 16, 2019
    Assignee: NATIONAL INSTITUTE OF ADVANCED INDUSTRIAL SCIENCE AND TECHNOLOGY
    Inventor: Hideyuki Tanaka
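The consistency check in 10262429 can be sketched with OpenCV's projection routine: project where the 3D indicator's face centers should appear under the pose estimated from the plane indicator, then compare against the detected position. All numeric values below are placeholders.

```python
import cv2
import numpy as np

# Pose of the marker estimated from the plane indicator (rvec, tvec),
# plus camera intrinsics; all values here are placeholders.
rvec = np.array([[0.1], [0.2], [0.0]])
tvec = np.array([[0.0], [0.0], [50.0]])
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist = np.zeros(5)

# Known location and height of the 3D indicator on the marker: centers
# of its lower and upper faces in marker coordinates (units arbitrary).
indicator_pts = np.array([[2.0, 3.0, 0.0],     # lower face center
                          [2.0, 3.0, 1.5]])    # upper face center

# Project where the two centers *should* appear given the estimated pose.
projected, _ = cv2.projectPoints(indicator_pts, rvec, tvec, K, dist)
expected_upper = projected[1, 0]

# Compare with the upper-face center actually detected in the image; a
# large error means the planar pose is ambiguous and, per the patent,
# triggers a corrective rotational transformation.
detected_upper = np.array([352.0, 291.0])
error = np.linalg.norm(expected_upper - detected_upper)
needs_rotation_fix = error >= 5.0              # threshold is illustrative
```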
  • Patent number: 10255503
    Abstract: Disclosed is a method for generating movie recommendations based on automatic extraction of features from multimedia content. The extracted features are visual features representing mise-en-scène characteristics of the movie, defined on the basis of Applied Media Aesthetic theory; these features are then fed to a content-based recommendation algorithm to generate personalized recommendations.
    Type: Grant
    Filed: September 27, 2016
    Date of Patent: April 9, 2019
    Assignee: POLITECNICO DI MILANO
    Inventors: Paolo Cremonesi, Mehdi Elahi, Yashar Deldjoo
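A minimal sketch of the recommendation step in 10255503, assuming a handful of hypothetical mise-en-scène descriptors per movie (average shot length, color variance, motion magnitude, lighting key) and cosine similarity as the content-based ranking function:

```python
import numpy as np

# Mise-en-scène style descriptors extracted per movie (illustrative).
movies = {"M1": np.array([4.2, 0.30, 0.8, 0.6]),
          "M2": np.array([2.1, 0.55, 1.9, 0.3]),
          "M3": np.array([4.0, 0.28, 0.7, 0.7])}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def recommend(liked, catalog, n=2):
    """Content-based step: rank unseen movies by visual-style similarity
    to a liked movie, with no ratings or textual metadata needed."""
    scores = {title: cosine(catalog[liked], feats)
              for title, feats in catalog.items() if title != liked}
    return sorted(scores, key=scores.get, reverse=True)[:n]

print(recommend("M1", movies))    # ['M3', 'M2'] -- M3 is stylistically closest
```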
  • Patent number: 10255549
    Abstract: In an approach to managing images and captions, one or more computer processors receive one or more captured images including one or more subjects. The one or more computer processors identify the one or more subjects from the first image. The one or more computer processors identify the context of the first image of the one or more captured images containing the one or more subjects. The one or more computer processors analyze one or more social networking histories and relationships associated with the one or more subjects using recognition techniques. The one or more computer processors create one or more captions associated with the first image of the one or more captured images based on the social networking histories and relationships of the one or more subjects and the identified context of the first image of the one or more captured images containing the one or more subjects.
    Type: Grant
    Filed: January 27, 2017
    Date of Patent: April 9, 2019
    Assignee: International Business Machines Corporation
    Inventors: Kuntal Dey, Seema Nagar, Srikanth G. Tamilselvam, Enara C. Vijil
  • Patent number: 10235571
    Abstract: The present invention provides a method for video matting via sparse and low-rank representation. The method first selects frames that represent the video's characteristics as keyframes, then trains a dictionary from the known pixels in the keyframes. Next, it obtains a reconstruction coefficient satisfying low-rank, sparse and non-negative constraints according to the dictionary, sets the non-local relationship matrix between the pixels in the input video according to the reconstruction coefficient, and sets the Laplace matrix between multiple frames. A video alpha matte of the input video is then obtained from the α values of the known pixels of the input video, the α values of the sample points in the dictionary, the non-local relationship matrix and the Laplace matrix. Finally, a foreground object is extracted from the input video according to the video alpha matte, improving the quality of the extracted foreground object.
    Type: Grant
    Filed: June 14, 2016
    Date of Patent: March 19, 2019
    Assignee: BEIHANG UNIVERSITY
    Inventors: Xiaowu Chen, Dongqing Zou, Guangying Cao, Xiaogang Wang
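A heavily simplified sketch of the reconstruction step in 10235571: express an unknown pixel non-negatively over the dictionary of sample points and blend the sample points' α values with the resulting coefficients. The hard sparsity threshold below is a crude stand-in for the patent's joint low-rank, sparse and non-negative constraint, and the non-local/Laplacian terms are omitted.

```python
import numpy as np
from scipy.optimize import nnls

# Dictionary trained from the known pixels of the keyframes: each
# column is one sample point's feature vector (e.g., color + position).
D = np.random.rand(5, 40)

def reconstruct(pixel, D, sparsity=5):
    """Non-negative reconstruction of an unknown pixel over the
    dictionary's sample points, keeping only the largest weights as a
    crude sparsity surrogate."""
    w, _ = nnls(D, pixel)                 # non-negative least squares
    mask = np.zeros_like(w, dtype=bool)
    mask[np.argsort(w)[-sparsity:]] = True
    w[~mask] = 0.0
    return w / (w.sum() + 1e-12)          # normalized reconstruction coefficient

pixel = np.random.rand(5)
w = reconstruct(pixel, D)

# The pixel's α value is then the same blend of the sample points' α values.
sample_alphas = np.random.rand(40)
alpha = w @ sample_alphas
```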