Patents Examined by Xuemei Chen
  • Patent number: 10217030
    Abstract: A computer-implemented method and a system are proposed. According to the method, in response to receiving a character, a first representation of the character is generated by performing word embedding processing on the character. The first representation is related to context of the character. A second representation of the character is generated by performing convolutional neural network (CNN) processing on the character. The second representation is related to a hieroglyphic feature of the character. A label for the character is determined by performing recurrent neural network (RNN) processing on the first representation and the second representation. The label indicates an attribute of the character related to the context.
    Type: Grant
    Filed: June 14, 2017
    Date of Patent: February 26, 2019
    Assignee: International Business Machines Corporation
    Inventors: Dongxu Duan, Jian Min Jiang, Zhong Su, Li Zhang, Shi Wan Zhao
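    Sketch: a minimal PyTorch approximation (not the patented implementation) of the two-representation idea: a character embedding for context, a glyph-image CNN for the hieroglyphic feature, and an RNN tagger over both. The module sizes, the bidirectional LSTM, and the 32x32 glyph images are assumptions.
      import torch
      import torch.nn as nn

      class CharLabeler(nn.Module):
          def __init__(self, vocab_size, embed_dim=64, cnn_dim=32, hidden=128, num_labels=10):
              super().__init__()
              self.embed = nn.Embedding(vocab_size, embed_dim)   # first representation: context embedding
              self.glyph_cnn = nn.Sequential(                    # second representation: glyph-shape feature
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                  nn.Flatten(), nn.Linear(32, cnn_dim))
              self.rnn = nn.LSTM(embed_dim + cnn_dim, hidden, batch_first=True, bidirectional=True)
              self.out = nn.Linear(2 * hidden, num_labels)

          def forward(self, char_ids, glyph_images):
              # char_ids: (batch, seq); glyph_images: (batch, seq, 1, 32, 32)
              b, s = char_ids.shape
              ctx = self.embed(char_ids)
              glyph = self.glyph_cnn(glyph_images.view(b * s, 1, 32, 32)).view(b, s, -1)
              feats, _ = self.rnn(torch.cat([ctx, glyph], dim=-1))
              return self.out(feats)                             # per-character label scores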
  • Patent number: 10204290
    Abstract: A decision tree and normalized reclassification are used to classify defects. Defect review sampling and normalization can be used for accurate Pareto ranking and defect source analysis. A defect review system, such as a broadband plasma tool, and a controller can be used to bin defects using the decision tree based on defect attributes and design attributes. Class codes are assigned to at least some of the defects in each bin. Normalized reclassification assigns a class code to any unclassified defects in a bin. Additional decision trees can be used if any bin has more than one class code after normalized reclassification.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: February 12, 2019
    Assignee: KLA-Tencor Corporation
    Inventor: Poh Boon Yong
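    Sketch: a toy Python reading (not the patented flow) of bin-then-normalize classification: defects are binned by a small attribute decision tree, reviewed defects carry class codes, and unclassified defects in each bin inherit codes in proportion to the reviewed distribution. The attribute rules, thresholds, and proportional draw are assumptions.
      from collections import Counter, defaultdict
      import random

      def bin_defect(defect):
          # stand-in decision tree over defect/design attributes
          if defect["size"] > 1.0:
              return "large"
          return "dense_pattern" if defect["pattern_density"] > 0.5 else "sparse_pattern"

      def normalized_reclassification(defects):
          bins = defaultdict(list)
          for d in defects:
              bins[bin_defect(d)].append(d)
          for members in bins.values():
              reviewed = [d["class_code"] for d in members if d.get("class_code")]
              if not reviewed:
                  continue                                   # no reviewed defects in this bin
              codes, weights = zip(*Counter(reviewed).items())
              for d in members:
                  if not d.get("class_code"):
                      d["class_code"] = random.choices(codes, weights=weights)[0]
          return defects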
  • Patent number: 10204289
    Abstract: A computer-implemented method and a system are proposed. According to the method, in response to receiving a character, a first representation of the character is generated by performing word embedding processing on the character. The first representation is related to context of the character. A second representation of the character is generated by performing convolutional neural network (CNN) processing on the character. The second representation is related to a hieroglyphic feature of the character. A label for the character is determined by performing recurrent neural network (RNN) processing on the first representation and the second representation. The label indicates an attribute of the character related to the context.
    Type: Grant
    Filed: December 20, 2017
    Date of Patent: February 12, 2019
    Assignee: International Business Machines Corporation
    Inventors: Dongxu Duan, Jian Min Jiang, Zhong Su, Li Zhang, Shi Wan Zhao
  • Patent number: 10185888
    Abstract: An image processing system and method for determining an intrinsic color component of one or more objects present in a sequence of frames, for use in rendering the object(s), is described. At least some of the frames of the sequence are to be used as lighting keyframes. A lighting estimate for a lighting keyframe A of the sequence of frames is obtained. An initial lighting estimate for a lighting keyframe B of the sequence of frames is determined. A refined lighting estimate for the lighting keyframe B is determined based on: (i) the initial lighting estimate for the lighting keyframe B, and (ii) the lighting estimate for the lighting keyframe A. The refined lighting estimate for the lighting keyframe B is used to separate image values representing the object(s) in the lighting keyframe B into an intrinsic color component and a shading component, for use in rendering the object(s).
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: January 22, 2019
    Assignee: Imagination Technologies Limited
    Inventors: James Imber, Adrian Hilton, Jean-Yves Guillemaut
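    Sketch: a minimal numpy illustration (not the patented estimator) of the refinement and separation steps: keyframe B's initial lighting estimate is blended toward keyframe A's, and image values are split under an assumed multiplicative model image = albedo x shading. The blend weight and the multiplicative model are assumptions.
      import numpy as np

      def refine_lighting(initial_B, lighting_A, blend=0.5):
          # pull keyframe B's initial estimate toward keyframe A's estimate
          return blend * initial_B + (1.0 - blend) * lighting_A

      def separate_intrinsic(image, shading, eps=1e-6):
          # image, shading: H x W x 3 arrays; shading derived from the refined lighting estimate
          albedo = image / np.maximum(shading, eps)          # intrinsic colour component
          return albedo, shading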
  • Patent number: 10181183
    Abstract: An image processing system and method for determining an intrinsic color component of one or more objects for use in rendering the object(s) is described herein. One or more input images are received, each representing a view of the object(s), wherein values of each of the input image(s) are separable into intrinsic color estimates and corresponding shading estimates. A depth image represents depths of the object(s). Coarse intrinsic color estimates are determined using the input image(s). The intrinsic color component is determined by applying bilateral filtering to the coarse intrinsic color estimates using bilateral filtering guidance terms based on depth values derived from the depth image.
    Type: Grant
    Filed: October 25, 2016
    Date of Patent: January 15, 2019
    Assignee: Imagination Technologies Limited
    Inventors: James Imber, Adrian Hilton, Jean-Yves Guillemaut
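    Sketch: a plain numpy cross-bilateral filter (not the patented filter) in which the range term is guided by depth values, applied to coarse intrinsic-colour estimates; the window radius and sigmas are assumptions.
      import numpy as np

      def depth_guided_bilateral(coarse_albedo, depth, radius=3, sigma_s=2.0, sigma_d=0.1):
          # coarse_albedo: H x W x 3, depth: H x W
          h, w, _ = coarse_albedo.shape
          out = np.zeros_like(coarse_albedo, dtype=float)
          ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
          spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
          pad_a = np.pad(coarse_albedo, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
          pad_d = np.pad(depth, radius, mode="edge")
          for y in range(h):
              for x in range(w):
                  patch_a = pad_a[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                  patch_d = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                  guidance = np.exp(-((patch_d - depth[y, x]) ** 2) / (2 * sigma_d ** 2))
                  wgt = spatial * guidance
                  out[y, x] = (wgt[..., None] * patch_a).sum(axis=(0, 1)) / wgt.sum()
          return out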
  • Patent number: 10176409
    Abstract: Embodiments of the present disclosure disclose an image character recognition model generation method and apparatus, and a vertically-oriented character image recognition method and apparatus. The image character recognition model generation method includes: generating a rotated line character training sample, wherein the rotated line character training sample includes a rotated line character image and an expected character recognition result corresponding to the rotated line character image, and there is a difference of 90 degrees between character units in the rotated line character image and character units in a standard line character image; and training a set neural network by using the rotated line character training sample, to generate an image character recognition model.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: January 8, 2019
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Shufu Xie, Hang Xiao
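    Sketch: a minimal Python illustration (not the patented generator) of building rotated line samples from standard horizontal line images; the rotation direction and the downstream recognizer are assumptions.
      import numpy as np

      def make_rotated_samples(line_images, labels):
          # line_images: H x W arrays of standard (horizontal) text lines; rotating by
          # 90 degrees offsets each character unit by 90 degrees from the standard line
          return [(np.rot90(img, k=1), lab) for img, lab in zip(line_images, labels)]

      # the (rotated_image, expected_text) pairs are then used to train the set
      # neural network in the same way as standard line samples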
  • Patent number: 10157451
    Abstract: A technique for removing fog from an image more appropriately has been called for. An image processing system is provided, including: a luminance evaluation value deriving unit that derives a luminance evaluation value of an at least partial region of an image; a saturation evaluation value deriving unit that derives a saturation evaluation value of the at least partial region of the image; a contrast evaluation value deriving unit that derives a contrast evaluation value of the at least partial region of the image; and a haze depth estimating unit that derives a haze depth estimate value of the image based on the luminance evaluation value, the saturation evaluation value, and the contrast evaluation value.
    Type: Grant
    Filed: December 7, 2016
    Date of Patent: December 18, 2018
    Assignee: EIZO Corporation
    Inventors: Masafumi Higashi, Takashi Nakamae, Reo Aoki
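    Sketch: a numpy illustration (not the patented estimator) of combining luminance, saturation, and contrast evaluation values into a haze-depth estimate; the per-value formulas and the linear weights are assumptions.
      import numpy as np

      def haze_depth_estimate(region, w_lum=0.5, w_sat=0.3, w_con=0.2):
          # region: H x W x 3 RGB array scaled to [0, 1]
          lum = region.mean(axis=2)
          luminance_eval = lum.mean()                  # hazy regions tend to be bright
          saturation_eval = 1.0 - (region.max(axis=2) - region.min(axis=2)).mean()  # haze washes out saturation
          contrast_eval = 1.0 - lum.std()              # haze lowers contrast
          return w_lum * luminance_eval + w_sat * saturation_eval + w_con * contrast_eval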
  • Patent number: 10157446
    Abstract: An image processing system and method for determining an intrinsic color component of one or more objects present in a sequence of frames, for use in rendering the object(s), are described. Some of the frames of the sequence are to be used as lighting keyframes. A lighting estimate for a lighting keyframe A of the sequence of frames is determined. A lighting estimate for a lighting keyframe B of the sequence of frames is determined. A lighting estimate for an intermediate frame positioned between the lighting keyframes A and B in the sequence is determined by interpolating between the lighting estimates determined for the lighting keyframes A and B of the sequence. The determined lighting estimate for the intermediate frame is used to separate image values representing the object(s) in the intermediate frame into an intrinsic color component and a shading component, for use in rendering the object(s).
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: December 18, 2018
    Assignee: Imagination Technologies Limited
    Inventors: James Imber, Adrian Hilton, Jean-Yves Guillemaut
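    Sketch: a numpy illustration (not the patented method) of linearly interpolating lighting estimates between keyframes A and B for an intermediate frame, followed by the same multiplicative albedo/shading split sketched for patent 10185888 above; linear interpolation over frame index is an assumption.
      import numpy as np

      def interpolate_lighting(lighting_A, lighting_B, frame_idx, idx_A, idx_B):
          t = (frame_idx - idx_A) / float(idx_B - idx_A)    # temporal position of the intermediate frame
          return (1.0 - t) * lighting_A + t * lighting_B

      def split_image_values(image, shading, eps=1e-6):
          # shading derived from the interpolated lighting estimate; assumes image = albedo * shading
          return image / np.maximum(shading, eps), shading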
  • Patent number: 10152650
    Abstract: A trademark retrieval method, comprising: establishing a sample trademark library and establishing a correlation between sample trademarks and division data for figurative element codes of known pending or registered figurative trademarks; extracting and processing image feature information about the sample trademarks, and establishing a correlation between the sample trademarks and the extracted image feature information; extracting image feature information about a trademark to be retrieved; carrying out matching retrieval by taking the image feature information as a retrieval condition, and finding out a sample trademark reaching a pre-determined similarity degree, and a sample trademark with the highest similarity degree and a corresponding figurative element code; acquiring and confirming a figurative element code of the trademark to be retrieved; taking the figurative element code as a retrieval condition to carry out matching retrieval, and finding out a matching sample trademark; collecting a result r
    Type: Grant
    Filed: March 8, 2016
    Date of Patent: December 11, 2018
    Inventor: Qing Xu
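    Sketch: a small Python illustration (not the patented retrieval pipeline) of the two-stage matching idea: rank sample trademarks by image-feature similarity, then keep those sharing a figurative element code with the trademark to be retrieved; cosine similarity and the threshold are assumptions.
      import numpy as np

      def retrieve(query_feature, query_codes, samples, threshold=0.8):
          # samples: dicts with a "feature" vector and a set of figurative element "codes"
          hits = []
          for s in samples:
              sim = np.dot(query_feature, s["feature"]) / (
                  np.linalg.norm(query_feature) * np.linalg.norm(s["feature"]) + 1e-9)
              if sim >= threshold and set(query_codes) & set(s["codes"]):
                  hits.append((sim, s))
          return sorted(hits, key=lambda t: t[0], reverse=True)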
  • Patent number: 10148972
    Abstract: In an example embodiment, a received JPEG image compression format image includes one or more minimum coded units (MCUs). Each MCU is decoded using an image compression format decoder. Each decoded MCU is then split into multiple decoded subblocks. Each decoded subblock can then be encoded into texture compression format using a texture compression format encoder. Each encoded texture compression format subblock can then be passed to a graphical processing unit (GPU) for processing.
    Type: Grant
    Filed: January 8, 2016
    Date of Patent: December 4, 2018
    Assignee: Futurewei Technologies, Inc.
    Inventors: Jeff Moguillansky, Anthony Mazzola
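    Sketch: a Python outline (not the patented pipeline) of the MCU-to-texture flow; decode_mcu, encode_texture_block, and upload_to_gpu are hypothetical placeholders for a JPEG decoder, a texture-format encoder, and a GPU hand-off, not a real library API.
      import numpy as np

      def split_into_subblocks(pixels, size=4):
          # pixels: H x W x C array decoded from one MCU; yields size x size tiles
          h, w, _ = pixels.shape
          for y in range(0, h - h % size, size):
              for x in range(0, w - w % size, size):
                  yield pixels[y:y + size, x:x + size]

      def jpeg_to_gpu_texture(jpeg_mcus):
          for mcu in jpeg_mcus:
              pixels = decode_mcu(mcu)                       # placeholder: image-format decoder
              for block in split_into_subblocks(pixels):
                  upload_to_gpu(encode_texture_block(block)) # placeholders: texture encoder + GPU upload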
  • Patent number: 10147025
    Abstract: A system and technique for visual indicator status recognition includes an imaging system configured to capture a first optical data of an indicator, the imaging system configured to verify indicator code indicia is derivable from the first optical data. Responsive to verifying that the indicator code indicia is derivable from the first optical data, the imaging system is configured to capture a second optical data of the indicator. The system and technique also includes a detection module executable by a processing unit to analyze the second optical data and derive status information of the indicator.
    Type: Grant
    Filed: July 17, 2013
    Date of Patent: December 4, 2018
    Assignee: ShockWatch, Inc.
    Inventors: Clinton A. Branch, Angela K. Kerr, Kevin M. Kohleriter
  • Patent number: 10140728
    Abstract: An encoder includes a processor and a memory coupled thereto. A digital image to be encoded is stored in the memory. The digital image includes an array of pixels, with each pixel having an RGB color value associated therewith. Image filtering is performed on the digital image and includes calculating an RGB Euclidean geometric distance between a current pixel and a prior pixel, comparing the calculated RGB Euclidean geometric distance to a threshold, and changing the RGB color value of the current pixel to the same RGB color value as the prior pixel when the calculated RGB Euclidean geometric distance is less than the threshold. Run length encoding is performed on the filtered digital image.
    Type: Grant
    Filed: August 11, 2016
    Date of Patent: November 27, 2018
    Assignee: Citrix Systems, Inc.
    Inventor: Muhammad Dawood
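    Sketch: a numpy illustration (not the patented encoder) of the filter-then-RLE idea: a pixel is snapped to the prior pixel's colour when their RGB Euclidean distance is below a threshold, and the filtered scanline is run-length encoded; the threshold value is an assumption.
      import numpy as np

      def filter_scanline(row, threshold=10.0):
          # row: W x 3 array of RGB values
          out = row.astype(float).copy()
          for x in range(1, out.shape[0]):
              if np.linalg.norm(out[x] - out[x - 1]) < threshold:
                  out[x] = out[x - 1]                        # reuse the prior pixel's colour
          return out

      def run_length_encode(row):
          runs, run_value, run_len = [], tuple(row[0]), 1
          for px in row[1:]:
              if tuple(px) == run_value:
                  run_len += 1
              else:
                  runs.append((run_value, run_len))
                  run_value, run_len = tuple(px), 1
          runs.append((run_value, run_len))
          return runs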
  • Patent number: 10135999
    Abstract: A method and a system for digitization of a document are disclosed. The document is scanned to generate an electronic document. One or more characters in a first set of portions of the electronic document are identified, based on a character recognition technique. Each portion in the first set of portions is classified into one or more groups based on at least a status of identification of the corresponding one or more characters. Further, one or more tasks are created for each of the one or more groups. The one or more tasks are transmitted to one or more crowdworkers, based at least on the respective types of the one or more tasks. Further, a response for each of the one or more tasks is received. Based on the received response, a digitized document is generated.
    Type: Grant
    Filed: October 18, 2016
    Date of Patent: November 20, 2018
    Assignee: CONDUENT BUSINESS SERVICES, LLC
    Inventors: Chithralekha Balamurugan, Meera Sampath, Rebecca Taylor, Leslie Stone
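    Sketch: a small Python illustration (not the patented workflow) of grouping OCR portions by identification status before task creation; the confidence threshold and group names are assumptions.
      def group_portions(portions, threshold=0.9):
          # portions: dicts with OCR "text" and "confidence"; recognised portions go to a
          # verification task group, unrecognised ones to a transcription task group
          groups = {"verify": [], "transcribe": []}
          for p in portions:
              key = "verify" if p.get("text") and p["confidence"] >= threshold else "transcribe"
              groups[key].append(p)
          return groups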
  • Patent number: 10127456
    Abstract: An information processing apparatus is provided. A designation of a position in a wide-range captured image, captured by an image capture apparatus having an optical system for capturing such images, is received. A corrected partial region image is generated by performing a distortion correction, which reduces distortion due to the optical system of the image capture apparatus, on a partial image corresponding to the position designated on the wide-range captured image. A passage detection line, for detecting a passage of a moving object, is set on the corrected partial region image in accordance with a designation by a user.
    Type: Grant
    Filed: March 23, 2016
    Date of Patent: November 13, 2018
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Tsuyoshi Morofuji
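    Sketch: a small Python illustration (not the patented detector) of checking a passage against the user-set detection line on the corrected image: a passage is reported when consecutive track points of a moving object lie on opposite sides of the line segment and the motion crosses it. Tracking and distortion correction are assumed to have been done elsewhere.
      def _side(p, a, b):
          # sign of the cross product: which side of segment a-b the point p lies on
          return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

      def passage_detected(prev_pt, curr_pt, line_a, line_b):
          return (_side(prev_pt, line_a, line_b) * _side(curr_pt, line_a, line_b) < 0 and
                  _side(line_a, prev_pt, curr_pt) * _side(line_b, prev_pt, curr_pt) < 0)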
  • Patent number: 10117727
    Abstract: A method for 3-D cephalometric analysis acquires reconstructed volume image data from a computed tomographic scan of a patient's head. The acquired volume image data is displayed simultaneously from at least a first 2-D view and a second 2-D view. For an anatomical feature of the head, an operator instruction positions a reference mark corresponding to the feature on either the first or the second displayed 2-D view, and the reference mark is displayed on each of the at least first and second displayed 2-D views. In at least the first and second displayed 2-D views, one or more connecting lines are displayed between two or more of the positioned reference marks. One or more cephalometric parameters are derived according to the positioned reference marks, and the derived parameters are displayed.
    Type: Grant
    Filed: December 4, 2017
    Date of Patent: November 6, 2018
    Assignee: Carestream Dental Technology Topco Limited
    Inventors: Shoupu Chen, Jean-Marc Inglese, Lawrence A. Ray, Jacques Treil
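    Sketch: a numpy illustration (not from the patent) of deriving one cephalometric parameter from positioned reference marks, here the angle at a landmark B formed by landmarks A and C; the choice of landmarks and parameter is an assumption.
      import numpy as np

      def angle_at(b, a, c):
          # a, b, c: 3-D reference-mark positions, e.g. (x, y, z) tuples
          u = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
          v = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
          cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
          return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))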
  • Patent number: 10104387
    Abstract: In an example embodiment, a received JPEG image compression format image includes one or more minimum coded units (MCUs). Each MCU is decoded using an image compression format decoder. Each decoded MCU is then split into multiple decoded subblocks. Each decoded subblock can then be encoded into texture compression format using a texture compression format encoder. Each encoded texture compression format subblock can then be passed to a graphical processing unit (GPU) for processing.
    Type: Grant
    Filed: January 8, 2016
    Date of Patent: October 16, 2018
    Assignee: Futurewei Technologies, Inc.
    Inventors: Jeff Moguillansky, Anthony Mazzola
  • Patent number: 10102614
    Abstract: Fog removal is enabled even in a densely foggy image. A fog density calculating unit 11 calculates the fog density of a given input image by using a separated illumination light component. A reflectance component fog removing unit 13 performs fog removal on a reflectance component calculated by a reflectance calculating unit. An illumination light component fog removing unit 14 performs fog removal on the separated illumination light component. Here, the degree of fog removal by the reflectance component fog removing unit 13 is higher than the degree of fog removal by the illumination light component fog removing unit 14. Thereby, the degree of fog removal of the reflectance component can be raised without significantly raising the level of fog removal of the illumination light component.
    Type: Grant
    Filed: December 7, 2016
    Date of Patent: October 16, 2018
    Assignee: EIZO Corporation
    Inventor: Takashi Nakamae
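    Sketch: a numpy/scipy illustration (not the patented units) of treating the two components differently: a low-pass illumination estimate and its reflectance ratio are each contrast-stretched, with a stronger gain on the reflectance; the Gaussian illumination estimate and the gain values are assumptions.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def defog(image, gain_reflectance=1.8, gain_illumination=1.2):
          # image: H x W grayscale array in (0, 1]
          illumination = gaussian_filter(image, sigma=15) + 1e-6   # separated illumination light component
          reflectance = image / illumination
          # fog flattens contrast; amplify deviations from the mean, reflectance harder
          r = reflectance.mean() + gain_reflectance * (reflectance - reflectance.mean())
          l = illumination.mean() + gain_illumination * (illumination - illumination.mean())
          return np.clip(r * l, 0.0, 1.0)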
  • Patent number: 10089713
    Abstract: A method for registering images aligns a fixed image with a corresponding moving image. A target image and a reference image are selected. The reference and target images are relatively low-res representations of the fixed and moving images respectively. A current image distortion map is determined that defines distortions between the target and reference images. A residual image is determined by comparing the reference image and a warped target image derived from the target image based on the current image distortion map. A residual image distortion map is determined based on transform coefficients of cosine functions fitted to the residual image. The coefficients are determined by applying a DCT to a signal formed by the residual image and image gradients of the warped target or reference image. The current image distortion map is combined with the residual image distortion map to align the fixed image and the moving image.
    Type: Grant
    Filed: March 22, 2016
    Date of Patent: October 2, 2018
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Peter Alleine Fletcher
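    Sketch: a loose numpy/scipy approximation (not the patented estimator) of a residual distortion map: a brightness-constancy shift estimate is formed from the residual image and the warped target's gradient, then smoothed by keeping only low-frequency DCT coefficients; the shift model and the number of kept coefficients are assumptions.
      import numpy as np
      from scipy.fft import dctn, idctn

      def residual_distortion_x(reference, warped_target, keep=8, eps=1e-3):
          residual = reference - warped_target
          gy, gx = np.gradient(warped_target)
          raw_shift = residual * gx / (gx ** 2 + eps)      # per-pixel shift estimate along x
          coeffs = dctn(raw_shift, norm="ortho")
          mask = np.zeros_like(coeffs)
          mask[:keep, :keep] = 1.0                         # keep low-frequency cosine terms
          return idctn(coeffs * mask, norm="ortho")        # smooth x-component of the distortion map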
  • Patent number: 10074029
    Abstract: A detecting unit detects subjects from an image photographed by a first camera and an image photographed by a second camera different from the first camera. A deciding unit decides whether a first subject photographed by the first camera and a second subject photographed by the second camera are the same subject, and a generating unit generates color correction information based on information indicating the color of plural sets of subjects decided to be the same subject by the deciding unit. Thus, a difference in color between the plural cameras can be reduced even when the photographing ranges of the plural cameras do not overlap.
    Type: Grant
    Filed: January 13, 2016
    Date of Patent: September 11, 2018
    Assignee: Canon Kabushiki Kaisha
    Inventor: Hiroshi Tojo
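    Sketch: a numpy illustration (not the patented generator) of fitting colour correction information from subject pairs decided to be the same person: a per-channel linear map from camera 2 colours to camera 1 colours via least squares; the linear model is an assumption.
      import numpy as np

      def fit_color_correction(colors_cam1, colors_cam2):
          # colors_camN: N x 3 arrays, one mean RGB per matched subject
          gains, offsets = [], []
          for ch in range(3):
              A = np.stack([colors_cam2[:, ch], np.ones(len(colors_cam2))], axis=1)
              g, o = np.linalg.lstsq(A, colors_cam1[:, ch], rcond=None)[0]
              gains.append(g)
              offsets.append(o)
          return np.array(gains), np.array(offsets)

      def correct(image_cam2, gains, offsets):
          return np.clip(image_cam2 * gains + offsets, 0, 255)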
  • Patent number: 10074163
    Abstract: An image correction method and an image correction apparatus are provided, the image correction method including: an identifying step of identifying each pixel in an image as a foreground pixel or a background pixel; a background filling step of estimating the brightness of the background corresponding to a foreground pixel, based on the brightness and brightness gradient of background pixels adjacent to the foreground pixel, to fill the background located in the position of the foreground pixel and thereby obtain a background illumination map of the image from the filled backgrounds together with the background pixels; and a correcting step of correcting the image based on the brightness of each pixel in the image and the background illumination map. A non-uniform illumination image can be corrected effectively.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: September 11, 2018
    Assignee: FUJITSU LIMITED
    Inventors: Jile Jiao, Wei Fan, Jun Sun
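    Sketch: a numpy illustration (not the patented correction) of the correcting step: each pixel is normalised by the estimated background illumination at that position; the target white level of 255 is an assumption.
      import numpy as np

      def correct_illumination(gray_image, background_map, eps=1e-6):
          # gray_image, background_map: H x W arrays; background_map holds the estimated
          # background brightness at every pixel, including filled foreground positions
          corrected = gray_image.astype(float) / (background_map.astype(float) + eps) * 255.0
          return np.clip(corrected, 0, 255).astype(np.uint8)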