Patents Examined by Menatoallah Youssef
  • Patent number: 10108879
    Abstract: The present disclosure includes techniques for selecting a candidate presentation style for individual documents for inclusion in an aggregate training data set for a document type that may be used to train an OCR processing engine prior to identifying text in an image of a document of the document type. In one embodiment, text input corresponding to a text sample in a document is received, and an image of the text sample in the document is received. For each of a plurality of candidate presentation styles, an OCR processing engine is trained using a training data set corresponding to the given candidate presentation style, and the OCR processing engine is used, as trained, to identify text in the received image. The OCR processing results for each candidate presentation style are compared to the received text input. A candidate presentation style for the document is selected based on the comparisons.
    Type: Grant
    Filed: September 21, 2016
    Date of Patent: October 23, 2018
    Assignee: Intuit Inc.
    Inventors: Eugene Krivopaltsev, Sreeneel K. Maddika, Vijay S. Yellapragada
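The style-selection loop in the abstract above reduces to: run the OCR engine trained for each candidate style on the same image, compare each result against the known text input, and keep the best-matching style. Below is a minimal Python sketch of that loop; `run_ocr(image, style)` is a hypothetical stand-in for an OCR engine already trained on the data set of a given candidate presentation style, and edit distance is an assumed comparison metric, not necessarily the one claimed.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings (simple dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]


def select_presentation_style(image, ground_truth_text, candidate_styles, run_ocr):
    """Pick the candidate style whose style-trained OCR output best matches the text input."""
    scores = {style: edit_distance(run_ocr(image, style), ground_truth_text)
              for style in candidate_styles}
    return min(scores, key=scores.get)
```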
  • Patent number: 10068132
    Abstract: Vehicles and other items often have corresponding documentation, such as registration cards, that includes a significant amount of informative textual information that can be used in identifying the item. Traditional OCR may be unsuccessful when dealing with non-cooperative images. Accordingly, features such as dewarping, text alignment, and line identification and removal may aid in OCR of non-cooperative images. Dewarping involves determining curvature of a document depicted in an image and processing the image to dewarp the image of the document to make it more accurately conform to the ideal of a cooperative image. Text alignment involves determining an actual alignment of depicted text, even when the depicted text is not aligned with depicted visual cues. Line identification and removal involves identifying portions of the image that depict lines and removing those lines prior to OCR processing of the image.
    Type: Grant
    Filed: May 25, 2016
    Date of Patent: September 4, 2018
    Assignee: eBay Inc.
    Inventors: Braddock Gaskill, Robinson Piramuthu
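Of the three aids named in the abstract above, line identification and removal is the easiest to illustrate. The sketch below uses standard OpenCV morphology to find and erase long horizontal rules before OCR; it is a common generic approach, not the patented pipeline, and the kernel length is an assumed parameter.

```python
import cv2
import numpy as np


def remove_horizontal_lines(gray: np.ndarray, min_line_len: int = 40) -> np.ndarray:
    """Suppress long horizontal rules in a grayscale document image prior to OCR."""
    # Binarize so text and lines are white on a black background.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
    # Opening with a wide, flat kernel keeps only long horizontal structures (the lines).
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (min_line_len, 1))
    lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Erase the detected line pixels and restore the original polarity.
    cleaned = cv2.bitwise_and(binary, cv2.bitwise_not(lines))
    return cv2.bitwise_not(cleaned)
```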
  • Patent number: 10055836
    Abstract: A system and method for automated contrast arrival detection in temporally phased images or datasets of tissues effectively determines contrast arrival in regions that are substantially free of arteries. A plurality of tissue voxels in a plurality of temporally phased images are identified as a function of voxel enhancement characteristics associated with discrete tissue voxels. A processor/process computes average enhancement characteristics from the plurality of identified tissue voxels. The average enhancement characteristics are compared with predetermined average enhancement characteristics associated with contrast media arrival phases. Contrast media arrival phases in the temporally phased images are provided based on the comparison.
    Type: Grant
    Filed: September 26, 2014
    Date of Patent: August 21, 2018
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventor: Naira Muradyan
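The core computation described above is: pick tissue-like voxels, average their enhancement curves over the temporal phases, and match the averaged curve against predetermined curves for each contrast-arrival phase. A simplified numpy sketch follows; the tissue-selection rule and thresholds are assumptions, and `phases` is assumed to be a (time, z, y, x) array.

```python
import numpy as np


def detect_contrast_arrival_phase(phases: np.ndarray, reference_curves: np.ndarray,
                                  tissue_threshold: float = 0.1) -> int:
    """Return the index of the reference enhancement curve closest to the mean tissue curve."""
    baseline = phases[0].astype(float)
    # Relative enhancement of every voxel at every temporal phase.
    enhancement = (phases.astype(float) - baseline) / (baseline + 1e-6)
    # Keep voxels with modest, tissue-like enhancement; strongly enhancing voxels
    # (arteries) are excluded so the region is substantially free of arteries.
    final = enhancement[-1]
    tissue_mask = (final > tissue_threshold) & (final < 5 * tissue_threshold)
    mean_curve = enhancement[:, tissue_mask].mean(axis=1)
    # Compare against predetermined average curves, one per contrast-arrival phase.
    distances = np.linalg.norm(reference_curves - mean_curve, axis=1)
    return int(np.argmin(distances))
```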
  • Patent number: 10049258
    Abstract: A method of preprocessing an image including biological information is disclosed, in which an image preprocessor may set an edge line in an input image including biological information, calculate an energy value corresponding to the edge line, and adaptively crop the input image based on the energy value.
    Type: Grant
    Filed: February 22, 2016
    Date of Patent: August 14, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Kyuhong Kim, Wonjun Kim, Youngsung Kim, Sungjoo Suh
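As a rough illustration of energy-based cropping, the sketch below sums gradient energy along rows and columns and trims border regions whose energy falls below a fraction of the peak; the energy definition and threshold are assumptions rather than the claimed method.

```python
import numpy as np


def adaptive_crop(image: np.ndarray, keep_ratio: float = 0.05) -> np.ndarray:
    """Crop away border rows/columns whose gradient energy is far below the peak."""
    gy, gx = np.gradient(image.astype(float))
    energy = np.hypot(gx, gy)
    if energy.max() == 0:                      # flat image: nothing to crop against
        return image
    row_energy, col_energy = energy.sum(axis=1), energy.sum(axis=0)
    rows = np.where(row_energy > keep_ratio * row_energy.max())[0]
    cols = np.where(col_energy > keep_ratio * col_energy.max())[0]
    return image[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
```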
  • Patent number: 10032092
    Abstract: Techniques are described to generate improved training data for pixel labeling. To generate training data, objects are displayed, e.g., iteratively, in a user interface by a computing device. The objects are taken from a structured object representation associated with a respective one of a plurality of images. The structured object representation defines a hierarchical relationship of the objects within the respective image. Inputs are then received that originate through user interaction with the user interface. The inputs label respective ones of the iteratively displayed objects, e.g., as text, a graphical element, background, foreground, and so forth. A model is trained by the computing device using machine learning.
    Type: Grant
    Filed: February 2, 2016
    Date of Patent: July 24, 2018
    Assignee: ADOBE SYSTEMS INCORPORATED
    Inventors: Aaron P. Hertzmann, Saining Xie, Bryan C. Russell
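One way to picture the training-data side of the abstract above: walk the hierarchical object representation, ask for a label per object, and paint each object's region into a per-pixel label mask. The node layout (`bbox`, `children`) and the label set below are assumptions made purely for illustration.

```python
import numpy as np

LABELS = {"background": 0, "text": 1, "graphic": 2, "foreground": 3}


def rasterize_labels(root: dict, image_shape: tuple, get_label) -> np.ndarray:
    """Build a per-pixel label mask from a hierarchical object tree and user-provided labels."""
    mask = np.zeros(image_shape, dtype=np.uint8)
    stack = [root]
    while stack:
        node = stack.pop()
        x0, y0, x1, y1 = node["bbox"]                  # object extent within the image
        mask[y0:y1, x0:x1] = LABELS[get_label(node)]   # label originates from user interaction
        stack.extend(node.get("children", []))         # children are painted after (over) their parents
    return mask
```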
  • Patent number: 10032073
    Abstract: Detecting an aspect ratio of an image captured with a smartphone includes detecting at least one convex quadrangle of arbitrary shape on the image and generating a plurality of additional convex quadrangles having vertices in a pre-determined vicinity of vertices of the quadrangle on the image. A linear projective mapping matrix is generated for mapping each of the quadrangle and the plurality of additional quadrangles onto a unit square. A plurality of estimated focal lengths of the camera of the smartphone is determined according to matrices corresponding to the linear projective mappings onto a unit square of the quadrangle and each of the plurality of additional quadrangles. The quadrangle is used to determine the aspect ratio of the image in response to a range of the plurality of estimated focal lengths including a true value of the focal length of the camera of the smartphone.
    Type: Grant
    Filed: June 24, 2016
    Date of Patent: July 24, 2018
    Assignee: EVERNOTE CORPORATION
    Inventors: Ilya Buryak, Eugene Livshitz, Alexander Pashintsev, Boris Gorbatov
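The geometry behind the abstract above is the textbook single-view construction: a homography from a unit square to the detected quadrangle yields a focal-length estimate from the orthogonality of the rectangle's sides, and back-projected side lengths yield the aspect ratio. The sketch below shows only that construction (principal point assumed at the image centre, degenerate fronto-parallel views not handled); it is not the patent's perturbation-and-range test.

```python
import cv2
import numpy as np


def focal_and_aspect(quad: np.ndarray, image_size: tuple):
    """quad: 4x2 corners (TL, TR, BR, BL); image_size: (width, height) in pixels."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    unit_square = np.float32([[0, 0], [1, 0], [1, 1], [0, 1]])
    centred = np.float32(quad) - np.float32([cx, cy])      # principal point moved to the origin
    # Homography mapping the unit square onto the quadrangle.
    H = cv2.getPerspectiveTransform(unit_square, centred)
    h1, h2 = H[:, 0], H[:, 1]
    # Orthogonality of the rectangle's two side directions constrains the focal length.
    f_sq = -(h1[0] * h2[0] + h1[1] * h2[1]) / (h1[2] * h2[2])
    f = float(np.sqrt(f_sq)) if f_sq > 0 else float("nan")
    # Ratio of back-projected side lengths approximates the document's aspect ratio.
    k_inv = np.diag([1.0 / f, 1.0 / f, 1.0])
    aspect = float(np.linalg.norm(k_inv @ h1) / np.linalg.norm(k_inv @ h2))
    return f, aspect
```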
  • Patent number: 10026013
    Abstract: A clustering method with a two-stage local binary pattern includes generating a gradient direction value according to a center sub-block and neighbor sub-blocks of a patch of an image; quantizing the gradient direction value, thereby generating a quantized gradient direction value; generating a gradient magnitude value according to the gradient direction value; quantizing the gradient magnitude value, thereby generating a quantized gradient magnitude value; concatenating the quantized gradient direction value and the quantized gradient magnitude value to generate a two-stage local binary pattern (2SLBP) value; and performing clustering of super-resolution imaging by using the 2SLBP value as an index.
    Type: Grant
    Filed: September 21, 2016
    Date of Patent: July 17, 2018
    Assignees: NCKU Research and Development Foundation, Himax Technologies Limited
    Inventors: Ming-Der Shieh, Fang-Kai Hsu, Chun-Wei Chen, Der-Wei Yang
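Schematically, the 2SLBP index described above is two quantized codes packed together: a few bits for gradient direction and a few for gradient magnitude. The sketch below computes such an index for a small patch; the bit widths and quantization rules are illustrative assumptions, not the claimed definitions.

```python
import numpy as np


def two_stage_lbp_index(patch: np.ndarray, dir_bits: int = 3, mag_bits: int = 2) -> int:
    """Form a clustering index from quantized gradient direction and magnitude at a patch centre."""
    gy, gx = np.gradient(patch.astype(float))
    cy, cx = patch.shape[0] // 2, patch.shape[1] // 2
    direction = np.arctan2(gy[cy, cx], gx[cy, cx]) % (2 * np.pi)
    magnitude = np.hypot(gx[cy, cx], gy[cy, cx])
    # Stage 1: quantize the gradient direction into 2**dir_bits sectors.
    q_dir = int(direction / (2 * np.pi) * (2 ** dir_bits)) % (2 ** dir_bits)
    # Stage 2: quantize the gradient magnitude into 2**mag_bits levels (relative to the patch).
    max_mag = np.hypot(gx, gy).max() + 1e-6
    q_mag = min(int(magnitude / max_mag * (2 ** mag_bits)), 2 ** mag_bits - 1)
    # Concatenate the two quantized values into the final 2SLBP index.
    return (q_dir << mag_bits) | q_mag
```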
  • Patent number: 10021364
    Abstract: A method of building a stereoscopic model with Kalman filtering (KF) is provided. The method entails capturing images of the environment with a sensing device to build the stereoscopic model and then correcting a static object and a dynamic object in the environmental images with Kalman filtering to enhance the accuracy of the stereoscopic model. In the prior art, the accuracy of simultaneous localization and mapping (SLAM) is greatly reduced in the event of increased system variation, increased complexity, or an enlarged field of operation. The method overcomes this drawback of the prior art.
    Type: Grant
    Filed: December 7, 2016
    Date of Patent: July 10, 2018
    Assignee: CHUNG YUAN CHRISTIAN UNIVERSITY
    Inventors: Po-Ting Lin, Shu-Ping Lin, Wei-Hao Lu
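For readers unfamiliar with the correction step named above, a generic linear Kalman filter predict/correct cycle looks like the sketch below. The state model (whatever describes a static or dynamic object) is supplied by the caller; nothing here is specific to the patented stereoscopic-model pipeline.

```python
import numpy as np


def kalman_step(x, P, z, F, Q, H, R):
    """One predict/correct cycle: x is the state estimate, P its covariance, z a new measurement."""
    # Predict the state forward with the motion model F and process noise Q.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Correct the prediction with the measurement.
    y = z - H @ x_pred                        # innovation
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new
```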
  • Patent number: 10013630
    Abstract: Various embodiments provide methods and systems for detecting one or more segments of an image that are related to a particular object in the image (e.g., a logo or trademark) and extracting at least one feature point, each of which is represented by one feature point descriptor, based at least upon a contour curvature of the one or more segments. The at least one feature point descriptor can be converted into one or more codewords to generate a codeword database. A discriminative codebook can then be generated based upon the codeword database and utilized to detect objects and/or features in a query image.
    Type: Grant
    Filed: September 26, 2014
    Date of Patent: July 3, 2018
    Assignee: A9.com, Inc.
    Inventor: William Brendel
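The codeword/codebook step above is conventionally done by clustering descriptors and then mapping each query descriptor to its nearest cluster centre. The sketch below uses plain k-means as a stand-in for whatever clustering the patent actually claims.

```python
import numpy as np


def build_codebook(descriptors: np.ndarray, k: int, iters: int = 20, seed: int = 0) -> np.ndarray:
    """Cluster feature-point descriptors into k codewords with plain k-means."""
    rng = np.random.default_rng(seed)
    codebook = descriptors[rng.choice(len(descriptors), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign every descriptor to its nearest codeword.
        dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
        assignment = dists.argmin(axis=1)
        # Move each codeword to the mean of its assigned descriptors.
        for j in range(k):
            if np.any(assignment == j):
                codebook[j] = descriptors[assignment == j].mean(axis=0)
    return codebook


def quantize(descriptor: np.ndarray, codebook: np.ndarray) -> int:
    """Codeword index of a single descriptor (its nearest codebook entry)."""
    return int(np.linalg.norm(codebook - descriptor, axis=1).argmin())
```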
  • Patent number: 10013612
    Abstract: A system for analyzing scene traits in an object recognition ingestion ecosystem is presented. In some embodiments, a trait analysis engine analyzes a digital representation of a scene to derive one or more features. The features are compiled into sets of similar features with respect to a feature space. The engine attempts to discover which traits of the scene (e.g., temperature, lighting, gravity, etc.) can be used to distinguish the features for purposes of object recognition. When such distinguishing traits are found, an object recognition database is populated with object information, possibly indexed according to the similar features and their corresponding distinguishing traits.
    Type: Grant
    Filed: March 16, 2017
    Date of Patent: July 3, 2018
    Assignee: Nant Holdings IP, LLC
    Inventors: David McKinnon, John Wiacek, Jeremi Sudol, Kamil Wnuk, Bing Song
  • Patent number: 10013602
    Abstract: A feature vector extraction device includes a cell learning unit setting a plurality of cells representing a position and range for counting a feature vector of a target on the basis of a plurality of images containing a target for learning use. A normalizer selects two feature points from among three or more feature points which are set in an image and represent the target for learning use, and normalizes a size and direction of each of the feature points. A feature point calculator calculates a mean position and a variation from the relevant mean position for each of the normalized feature points other than the two selected feature points. A cell decision unit decides a position of each of the cells on the basis of the mean position and decides a size of each of the cells on the basis of the variation.
    Type: Grant
    Filed: September 8, 2015
    Date of Patent: July 3, 2018
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventor: Nobuhiro Nonogaki
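The cell-placement idea above can be pictured as: normalize every training shape by two anchor points, then let each remaining feature point's mean position set a cell's centre and its spread set the cell's size. Which two points act as anchors, and the use of a similarity normalization, are assumptions in the numpy sketch below.

```python
import numpy as np


def place_cells(point_sets: np.ndarray, anchor_a: int = 0, anchor_b: int = 1):
    """point_sets: (n_images, n_points, 2) feature points. Returns cell centres and sizes."""
    normalized = []
    for pts in point_sets.astype(float):
        origin = pts[anchor_a]
        baseline = pts[anchor_b] - origin
        scale = np.linalg.norm(baseline)
        angle = np.arctan2(baseline[1], baseline[0])
        rot = np.array([[np.cos(-angle), -np.sin(-angle)],
                        [np.sin(-angle),  np.cos(-angle)]])
        # Translate, rotate and scale so the two anchors land at (0, 0) and (1, 0).
        normalized.append((pts - origin) @ rot.T / scale)
    normalized = np.stack(normalized)
    centres = normalized.mean(axis=0)             # mean position of each feature point
    sizes = normalized.std(axis=0).max(axis=1)    # variation sets each cell's size
    return centres, sizes
```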
  • Patent number: 10002307
    Abstract: The disclosure includes a system and method for classifying conditions of a data stream of object information. An image recognition application receives an image and identifies a plurality of objects from the image. The image recognition application generates a data stream including information about the plurality of objects. The image recognition application generates a score based on the information about the plurality of objects, determines a condition from the data stream based on the score, and generates a suggestion based on the condition. The image recognition application further provides the suggestion to a user.
    Type: Grant
    Filed: May 25, 2016
    Date of Patent: June 19, 2018
    Assignee: Ricoh Co., Ltd.
    Inventors: Jonathan Zaremski, Michael Griffin
  • Patent number: 10002419
    Abstract: A method for computing image-derived biomarkers includes receiving image data defining a three-dimensional image volume representative of an anatomical region of interest. Features characterizing local variations of intensity in the image data using an intensity model are identified. The features are used to perform one or more modeling computations directly on the image data to derive information related to a biomarker of interest.
    Type: Grant
    Filed: March 5, 2015
    Date of Patent: June 19, 2018
    Assignee: Siemens Healthcare GmbH
    Inventors: Saikiran Rapaka, Puneet Sharma, Atilla Peter Kiraly
  • Patent number: 9978153
    Abstract: An image of a test receptacle is received, which includes a color of a reaction between a test substance and at least one reagent in the test receptacle, an alignment code having test receptacle identification information, and at least one color calibration block. An array of RGB values for each pixel in the image is collected. A captured color of the color calibration block is evaluated. An offset is determined for the captured color in the color calibration block if the evaluation of the color calibration block in the image determines that the captured color deviates from a baseline color. The offset is applied to each pixel in the image to correct the captured image. A colorimetric analysis is performed on the reaction between the test substance and the at least one reagent.
    Type: Grant
    Filed: May 17, 2016
    Date of Patent: May 22, 2018
    Assignee: DETECTACHEM LLC
    Inventors: Travis Kisner, Derek Roosken, Brendon Tower
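The calibration step above amounts to: measure the calibration block's captured colour, compute its offset from the known baseline colour, and apply that offset to every pixel before colorimetric analysis. A simplified sketch, assuming the block's location in the image is already known from the alignment code:

```python
import numpy as np


def colour_correct(image: np.ndarray, block_region: tuple, baseline_rgb) -> np.ndarray:
    """image: HxWx3 uint8; block_region: (row_slice, col_slice) covering the calibration block."""
    captured = image[block_region].reshape(-1, 3).mean(axis=0)
    offset = np.asarray(baseline_rgb, dtype=float) - captured   # deviation from the baseline colour
    # Apply the per-channel offset to every pixel, clipping back to valid intensities.
    corrected = np.clip(image.astype(float) + offset, 0, 255)
    return corrected.astype(np.uint8)
```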
  • Patent number: 9971959
    Abstract: In one embodiment of the present invention, a graphics processing unit (GPU) is configured to detect an object in an image using a random forest classifier that includes multiple, identically structured decision trees. Notably, the application of each of the decision trees is independent of the application of the other decision trees. In operation, the GPU partitions the image into subsets of pixels, and associates an execution thread with each of the pixels in the subset of pixels. The GPU then causes each of the execution threads to apply the random forest classifier to the associated pixel, thereby determining a likelihood that the pixel corresponds to the object. Advantageously, such a distributed approach to object detection more fully leverages the parallel architecture of the parallel processing unit (PPU) than conventional approaches. In particular, the PPU performs object detection more efficiently using the random forest classifier than using a cascaded classifier.
    Type: Grant
    Filed: September 17, 2013
    Date of Patent: May 15, 2018
    Assignee: NVIDIA Corporation
    Inventors: Mateusz Jerzy Baranowski, Shalini Gupta, Elif Albuz
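The key point of the abstract above is that identically structured trees let every pixel be classified independently, one thread per pixel. The sketch below mimics that structure in numpy rather than CUDA: complete binary trees are stored as flat arrays and all pixels descend them in lockstep. The array layouts here are assumptions for illustration.

```python
import numpy as np


def apply_forest(features: np.ndarray, feat_idx: np.ndarray, thresh: np.ndarray,
                 leaf_prob: np.ndarray) -> np.ndarray:
    """
    features:  (n_pixels, n_features) per-pixel feature vectors.
    feat_idx, thresh: (n_trees, n_internal) split definitions of complete trees (heap layout).
    leaf_prob: (n_trees, n_leaves) per-leaf probability that a pixel belongs to the object.
    Returns the forest-averaged object probability for every pixel.
    """
    n_trees, n_internal = feat_idx.shape
    depth = int(np.log2(n_internal + 1))
    n_pixels = len(features)
    votes = np.zeros((n_trees, n_pixels))
    for t in range(n_trees):
        node = np.zeros(n_pixels, dtype=int)             # every pixel starts at the root
        for _ in range(depth):
            go_right = features[np.arange(n_pixels), feat_idx[t, node]] > thresh[t, node]
            node = 2 * node + 1 + go_right               # heap-indexed descent, all pixels at once
        votes[t] = leaf_prob[t, node - n_internal]       # leaves follow the internal nodes
    return votes.mean(axis=0)
```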
  • Patent number: 9959602
    Abstract: An image processing device and a radiography apparatus each include a pixel selection unit configured to select pixels of an image based on pixel values of pixels of the image obtained by capturing an image of a subject, and a subtraction processing unit configured to subtract, from the image, a line artifact that is extracted using a profile in a predetermined direction and is based on the pixels selected by the pixel selection unit.
    Type: Grant
    Filed: March 12, 2014
    Date of Patent: May 1, 2018
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Tsuyoshi Kobayashi
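A rough numpy analogue of the subtraction described above: build a 1-D profile in a fixed direction from a selected subset of pixels, isolate its line-like component, and subtract it from the image. The pixel-selection rule (values below a per-column percentile) and the smoothing window are assumptions.

```python
import numpy as np


def subtract_column_artifact(image: np.ndarray, percentile: float = 50.0, window: int = 31) -> np.ndarray:
    """Estimate and remove a vertical line artifact from a 2-D radiograph."""
    img = image.astype(float)
    # Select flatter pixels in each column so strong anatomy does not bias the profile.
    threshold = np.percentile(img, percentile, axis=0, keepdims=True)
    selected = np.where(img <= threshold, img, np.nan)
    # Column-wise profile estimated from the selected pixels only.
    profile = np.nanmean(selected, axis=0)
    # Keep only the narrow, line-like deviation from the slowly varying background.
    smooth = np.convolve(profile, np.ones(window) / window, mode="same")
    return img - (profile - smooth)
```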
  • Patent number: 9953222
    Abstract: A computer-implemented method for selecting representative frames for videos is provided. The method includes receiving a video and identifying a set of features for each of the frames of the video. The features include frame-based features and semantic features; the semantic features identify likelihoods of semantic concepts being present as content in the frames of the video. A set of video segments for the video is subsequently generated. Each video segment includes a chronological subset of frames from the video, and each frame is associated with at least one of the semantic features. The method generates a score for each frame of the subset of frames for each video segment based at least on the semantic features, and selects a representative frame for each video segment based on the scores of the frames in the video segment. The representative frame represents and summarizes the video segment.
    Type: Grant
    Filed: September 8, 2015
    Date of Patent: April 24, 2018
    Assignee: Google LLC
    Inventors: Sanketh Shetty, Tomas Izo, Min-Hsuan Tsai, Sudheendra Vijayanarasimhan, Apostol Natsev, Sami Abu-El-Haija, George Dan Toderici, Susanna Ricco, Balakrishnan Varadarajan, Nicola Muscettola, WeiHsin Gu, Weilong Yang, Nitin Khandelwal, Phuong Le
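The selection step above reduces to scoring each frame in a segment from its semantic-concept likelihoods and keeping the highest-scoring frame. A condensed sketch, in which the scoring rule (a weighted sum of concept likelihoods) is an assumed placeholder:

```python
import numpy as np


def representative_frames(segments, concept_weights: np.ndarray):
    """segments: iterable of (frame_indices, semantic_scores), semantic_scores shaped (n_frames, n_concepts)."""
    chosen = []
    for frame_indices, semantic_scores in segments:
        frame_scores = semantic_scores @ concept_weights        # one score per frame in the segment
        chosen.append(frame_indices[int(np.argmax(frame_scores))])
    return chosen
```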
  • Patent number: 9940548
    Abstract: An image recognition method includes: receiving an image; acquiring processing result information including values of processing results of convolution processing at positions of a plurality of pixels that constitute the image by performing the convolution processing on the image by using different convolution filters; determining 1 feature for each of the positions of the plurality of pixels on the basis of the values of the processing results of the convolution processing at the positions of the plurality of pixels included in the processing result information and outputting the determined feature for each of the positions of the plurality of pixels; performing recognition processing on the basis of the determined feature for each of the positions of the plurality of pixels; and outputting recognition processing result information obtained by performing the recognition processing.
    Type: Grant
    Filed: February 22, 2016
    Date of Patent: April 10, 2018
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Yasunori Ishii, Sotaro Tsukizawa, Reiko Hagawa
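The front end described above can be pictured as: convolve the image with several filters, then keep a single feature per pixel position derived from all the filter responses. In the sketch below that single feature is the index of the strongest response, which is an assumption, not the claimed rule.

```python
import numpy as np
from scipy.ndimage import convolve


def per_pixel_feature(image: np.ndarray, filters) -> np.ndarray:
    """Return, for each pixel position, the index of the convolution filter with the largest response."""
    responses = np.stack([convolve(image.astype(float), k, mode="reflect") for k in filters])
    return responses.argmax(axis=0)     # one discrete feature per pixel, fed to recognition
```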
  • Patent number: 9922268
    Abstract: According to one embodiment, an image interpretation report creating apparatus, which creates an image interpretation report that includes a finding and is associated with a key image, includes a key image selecting unit, a position detecting unit, a first local structure information generating unit, and a display. The key image selecting unit selects a sub-image as the key image from among a plurality of sub-images constituting a medical image. The position detecting unit detects a position of a characteristic local structure in a human body from the medical image. The first local structure information generating unit identifies a local structure in the key image or in a vicinity of the key image and generates information on the identified local structure as first local structure information. The display displays the first local structure information as a candidate to be entered in an entry field for the finding.
    Type: Grant
    Filed: March 4, 2015
    Date of Patent: March 20, 2018
    Assignee: Toshiba Medical Systems Corporation
    Inventors: Taisuke Iwamura, Kousuke Sakaue, Masashi Yoshida, Shigeyuki Ishii, Satoshi Ikeda, Hitoshi Yamagata, Takashi Masuzawa, Naoki Sugiyama, Muneyasu Kazuno, Yosuke Okubo, Hiroyuki Yamasaki, Jun Kawakami, Takashi Kondo, Guang Yi Ong
  • Patent number: 9922414
    Abstract: In order to reduce the amount of time it takes to collect images of defects, this defect inspection device is provided with the following: a read-out unit that reads out positions of defects in a semiconductor wafer that have already been detected; a first imaging unit that takes, at a first magnification, a reference image of a chip other than the chip where one of the read-out defects is; a second imaging unit that takes, at the first magnification, a first defect image that contains the read-out defect; a defect-position identification unit that identifies the position of the defect in the first defect image taken by the second imaging unit by comparing said first defect image with the reference image taken by the first imaging unit; a third imaging unit that, on the basis of the identified defect position, takes a second defect image at a second magnification that is higher than the first magnification; a rearrangement unit that rearranges the read-out defects in an order corresponding to a path that goes …
    Type: Grant
    Filed: January 27, 2014
    Date of Patent: March 20, 2018
    Assignee: HITACHI HIGH-TECHNOLOGIES CORPORATION
    Inventors: Yuji Takagi, Minoru Harada, Masashi Sakamoto, Takehiro Hirai
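The defect-localization comparison described in the abstract above (finding the defect by comparing the low-magnification defect image with a reference image of another chip) can be reduced, ignoring alignment and noise handling, to finding where the two images differ most:

```python
import numpy as np


def locate_defect(defect_image: np.ndarray, reference_image: np.ndarray):
    """Return (row, col) of the pixel where the defect image differs most from the reference."""
    diff = np.abs(defect_image.astype(float) - reference_image.astype(float))
    return np.unravel_index(int(np.argmax(diff)), diff.shape)
```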