Patents Examined by Tsung-Yin Tsai
-
Patent number: 11087872
Abstract: A medical scan annotator system is operable to select a medical scan for transmission via a network to a first client device and a second client device for display via an interactive interface, and annotation data is received from the first client device and the second client device in response. Annotation similarity data is generated by comparing the first annotation data to the second annotation data, and consensus annotation data is generated based on the first annotation data and the second annotation data in response to the annotation similarity data indicating that the difference between the first annotation data and the second annotation data compares favorably to an annotation discrepancy threshold. The consensus annotation data is mapped to the medical scan in a medical scan database.
Type: Grant
Filed: December 11, 2019
Date of Patent: August 10, 2021
Assignee: Enlitic, Inc.
Inventors: Devon Bernard, Kevin Lyman, Li Yao, Brian Basham, Ben Covington
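The consensus step described above can be illustrated with a minimal sketch. This is not the patented implementation; the box representation, the IoU-based similarity measure, and names such as `box_iou` and `iou_threshold` are assumptions for illustration only.

```python
def box_iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def consensus_annotation(box_a, box_b, iou_threshold=0.5):
    """Return a consensus box when the two annotators agree closely enough,
    otherwise None (the scan would presumably go back for further review)."""
    if box_iou(box_a, box_b) < iou_threshold:
        return None
    # one simple consensus rule: average the two annotations coordinate-wise
    return tuple((p + q) / 2 for p, q in zip(box_a, box_b))
```

Two nearly coincident annotations yield a merged box; disjoint annotations fail the discrepancy check and produce no consensus.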
-
Patent number: 11080535
Abstract: A facility inspection system prevents a normal part from being detected as an abnormal part because of an alignment deviation caused by the presence or absence of a moving object when detecting abnormal parts in the surrounding environment of a vehicle moving on a track. The system includes a photographing device, a storage device, a separation unit, an alignment unit, and an extraction unit. The photographing device photographs the surrounding environment of the moving vehicle. The storage device stores a reference alignment point cloud and a reference difference-extraction point cloud for each position on the track. The separation unit separates the alignment point cloud from a three-dimensional point cloud. The alignment unit aligns the reference alignment point cloud with the alignment point cloud and outputs alignment information. The extraction unit extracts a difference between the three-dimensional point cloud deformed based on the alignment information and the reference difference-extraction point cloud.
Type: Grant
Filed: December 1, 2017
Date of Patent: August 3, 2021
Assignee: HITACHI HIGH-TECH FINE SYSTEMS CORPORATION
Inventors: Nobuhiro Chihara, Masahiko Honda, Toshihide Kishi, Masashi Shinbo, Kiyotake Horie
-
Patent number: 11074479
Abstract: It is desirable to train a detection model accurately. Provided is a computer-implemented method including acquiring an input image; acquiring an annotated image designating a region of interest in the input image; inputting the input image to a detection model that generates an output image showing a target region from the input image; calculating an error between the output image and the annotated image, using a loss function that weights an error inside the region of interest more heavily than an error outside the region of interest; and updating the detection model in a manner to reduce the error.
Type: Grant
Filed: March 28, 2019
Date of Patent: July 27, 2021
Assignee: International Business Machines Corporation
Inventors: Takuya Goto, Hiroki Nakano, Masaharu Sakamoto
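The weighted loss function above can be sketched concretely. This is an illustration under assumptions, not the claimed method: binary cross-entropy is one plausible choice of pixel loss, and the `roi_weight` parameter is hypothetical.

```python
import numpy as np

def roi_weighted_bce(output, annotated, roi_mask, roi_weight=5.0, eps=1e-7):
    """Pixel-wise binary cross-entropy in which errors inside the region of
    interest (roi_mask == 1) count roi_weight times more than errors outside."""
    output = np.clip(output, eps, 1 - eps)          # numerical safety
    bce = -(annotated * np.log(output) + (1 - annotated) * np.log(1 - output))
    weights = np.where(roi_mask == 1, roi_weight, 1.0)
    return float(np.mean(weights * bce))
```

With identical per-pixel errors, a prediction error inside the region of interest yields a proportionally larger loss, so gradient updates focus the model on the annotated region.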
-
Patent number: 11074472
Abstract: Methods, systems, and apparatus for detecting and recognizing graphical character representations in image data using symmetrically-located blank areas are disclosed herein. An example disclosed method includes detecting blank areas in image data; identifying, using a processor, a symmetrically-located pair of the blank areas; and designating an area of the image data between the symmetrically-located pair of the blank areas as a candidate region for an image processing function.
Type: Grant
Filed: November 14, 2017
Date of Patent: July 27, 2021
Assignee: Symbol Technologies, LLC
Inventors: Mingxi Zhao, Yan Zhang, Kevin J. O'Connell
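The symmetric-blank-pair idea above can be sketched on a single pixel row. This is a simplified illustration, not the disclosed method; run detection on a 1-D scan line, the mirror-symmetry test, and the `tol` tolerance are all assumptions.

```python
def blank_runs(row, blank_value=255):
    """Return (start, end) index pairs of maximal runs of blank pixels."""
    runs, start = [], None
    for i, v in enumerate(row):
        if v == blank_value and start is None:
            start = i
        elif v != blank_value and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(row)))
    return runs

def candidate_region(row, blank_value=255, tol=1):
    """Find a pair of blank runs mirrored about the row centre and return
    the span between them as a candidate region, or None if no pair exists."""
    centre = len(row) / 2
    runs = blank_runs(row, blank_value)
    for i, a in enumerate(runs):
        for b in runs[i + 1:]:
            mid_a = (a[0] + a[1]) / 2
            mid_b = (b[0] + b[1]) / 2
            # the two run midpoints should sit at equal distances from centre
            if abs((centre - mid_a) - (mid_b - centre)) <= tol:
                return (a[1], b[0])  # the area between the pair
    return None
```

For a barcode-like row with blank quiet zones on both sides, the area between the symmetric pair is handed to the downstream image processing function.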
-
Patent number: 11074683
Abstract: An image inspection apparatus includes an image reader that reads an original image formed on a recording material based on a print job and generates a read image, and a hardware processor that analyzes the read image and performs an image inspection. The hardware processor: acquires the read image from the image reader, detects an edge from the read image, and excludes a region near the edge from the target of the image inspection; performs a predetermined filter process on the read image after the exclusion process to generate a first reference image; compares the read image after the exclusion process with the first reference image to generate a first comparison image; and binarizes the first comparison image using a predetermined threshold to detect points where a specific abnormality has occurred, and outputs a detection result.
Type: Grant
Filed: August 21, 2019
Date of Patent: July 27, 2021
Assignee: Konica Minolta, Inc.
Inventor: Makoto Ikeda
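The exclude / filter / compare / binarize pipeline above can be sketched with NumPy. This is a rough stand-in, not Konica Minolta's implementation: a 3x3 mean filter plays the role of the "predetermined filter process", and `edge_margin` and `threshold` are hypothetical parameters.

```python
import numpy as np

def mean_filter3(img):
    """3x3 mean filter -- a stand-in for the patent's predetermined filter."""
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def inspect(read_img, edge_margin=1, threshold=30):
    """Exclude an edge margin, build a smoothed reference image, compare the
    read image against it, and binarize the difference to flag abnormalities."""
    core = read_img[edge_margin:-edge_margin, edge_margin:-edge_margin]
    reference = mean_filter3(core)                      # first reference image
    comparison = np.abs(core.astype(float) - reference) # first comparison image
    defects = comparison > threshold                    # binarization
    return np.argwhere(defects)  # coordinates of suspected abnormal points
```

A single bright speck on an otherwise uniform page survives the comparison and binarization, while its smoothed neighborhood does not.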
-
Patent number: 11068753
Abstract: A computer-implemented method, a computing system, and a computer program product for generating new items compatible with given items may use data associated with a plurality of images and random noise data associated with a random noise image to train an adversarial network including a series of generator networks and a series of discriminator networks corresponding to the series of generator networks by modifying, using a loss function of the adversarial network that depends on a compatibility of the images, one or more parameters of the series of generator networks. The series of generator networks may generate a generated image associated with a generated item different from the given items.
Type: Grant
Filed: June 13, 2019
Date of Patent: July 20, 2021
Assignee: Visa International Service Association
Inventors: Ablaikhan Akhazhanov, Maryam Moosaei, Hao Yang
-
Patent number: 11069047
Abstract: An image processing method implemented by a computing device is described herein. The method includes acquiring an image to be processed and a target image style, the image to be processed being an image of a second resolution level, and inputting the image to be processed and the target style into a trained image processing neural network for image processing to obtain a target image of the target style, the target image being an image of a first resolution level. The resolution of an image of the first resolution level is higher than that of an image of the second resolution level.
Type: Grant
Filed: June 20, 2019
Date of Patent: July 20, 2021
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
Inventors: Hanwen Liu, Pablo Navarrete Michelini, Lijie Zhang, Dan Zhu
-
Patent number: 11069066
Abstract: Techniques described herein address the issue of an inadequate view of areas of a crop rectangle while a user is cropping an image. The inadequate view may be due to the user zooming, panning, or rotating the image such that some or all of the crop rectangle is no longer within view in the graphical user interface. Zoom-loupes provide a magnified view of the corners and user-selected points on the edge of the crop rectangle, helping the user set the crop rectangle area precisely. Additionally, a second crop rectangle can be generated when the entire first/original crop rectangle is unavailable because it is outside the view in the graphical user interface. The user may then use the second crop rectangle to complete pixel-perfect cropping.
Type: Grant
Filed: August 21, 2019
Date of Patent: July 20, 2021
Assignee: Adobe Inc.
Inventor: Anant Gilra
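One plausible way to derive the second crop rectangle above is to intersect the original crop rectangle with the visible viewport. This is a guess at the geometry, not Adobe's implementation; the `visible_crop` name and rectangle convention are assumptions.

```python
def visible_crop(crop, view):
    """Intersect the original crop rectangle with the visible viewport.
    Rectangles are (x1, y1, x2, y2). Returns the second, fully visible
    crop rectangle, or None when the original is entirely out of view."""
    x1, y1 = max(crop[0], view[0]), max(crop[1], view[1])
    x2, y2 = min(crop[2], view[2]), min(crop[3], view[3])
    if x1 >= x2 or y1 >= y2:
        return None  # no visible portion of the crop rectangle remains
    return (x1, y1, x2, y2)
```

When the user pans so that only part of the crop rectangle stays on screen, the intersection is exactly the portion they can still manipulate.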
-
Patent number: 11068695
Abstract: An image processing device includes an image processing unit that performs image processing on an observed image in which a cell is imaged, and an image processing method selector that is configured to determine an observed-image processing method for analyzing the imaged cell on the basis of information of a processed image obtained through image processing of the image processing unit.
Type: Grant
Filed: September 7, 2018
Date of Patent: July 20, 2021
Assignee: NIKON CORPORATION
Inventors: Hiroaki Kii, Yasujiro Kiyota, Takayuki Uozumi, Yoichi Yamazaki
-
Patent number: 11069068
Abstract: The aim is to cut out an image area corresponding to a document with high accuracy from an image obtained by scanning the document, without forcing a user to perform complicated work and without causing a feeling of discomfort when performing multi-crop processing. To this end, position information on the document is first acquired by detecting an edge component from a first image obtained by setting a first white reference value high. Then, an image area corresponding to the document is cut out, based on the acquired position information, from a second image obtained by setting a second white reference value lower than the first white reference value.
Type: Grant
Filed: May 14, 2019
Date of Patent: July 20, 2021
Assignee: CANON KABUSHIKI KAISHA
Inventor: Koya Shimamura
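The two-threshold idea above can be sketched with NumPy: locate the document boundary using an aggressive white reference (so the platen background drops out cleanly), then take the actual crop from an image produced with a gentler reference that preserves faint content. This is an interpretation, not Canon's method; the threshold values and the `multi_crop` name are assumptions.

```python
import numpy as np

def multi_crop(scan, high_white=250, low_white=230):
    """Find the document bounding box under a high white reference, then
    cut that box out of a second image built with a lower white reference."""
    first = scan < high_white            # first image: only clear non-white survives
    ys, xs = np.nonzero(first)
    if ys.size == 0:
        return None                      # no document detected
    y1, y2 = ys.min(), ys.max() + 1
    x1, x2 = xs.min(), xs.max() + 1
    # second image: a lower white reference keeps lighter document content
    second = np.where(scan < low_white, scan, 255)
    return second[y1:y2, x1:x2]
```

The first pass provides only position information; the pixels the user actually receives come from the second, lower-reference image.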
-
Patent number: 11062470
Abstract: A depth estimating apparatus operated by at least one processor includes: a database which stores a photographed first color image, a training thermal image geometrically aligned with the first color image, and a second color image simultaneously photographed with the first color image as a training image set; and a training apparatus which trains a neural network in an unsupervised manner to output a chromaticity image and a binocular disparity image from the training thermal image. The training apparatus generates an estimated first color image from the second color image, the chromaticity image, and the binocular disparity image, and trains the neural network to minimize a difference between the estimated first color image and the photographed first color image.
Type: Grant
Filed: August 24, 2017
Date of Patent: July 13, 2021
Assignee: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
Inventors: In-So Kweon, Yukyung Choi, Nam Il Kim, Soonmin Hwang
-
Patent number: 11058387
Abstract: A radiographic apparatus comprises a generating unit that generates a composite image using a plurality of radiographic images obtained by a plurality of radiation detecting apparatuses through a single radiation irradiation by a radiation generating unit, a determining unit that determines a region to be analyzed so as to eliminate an overlap in an overlapping portion arising from the spatial placement of the plurality of radiation detecting apparatuses, and an obtaining unit that obtains an area dose for the composite image by obtaining an area dose for the region to be analyzed determined by the determining unit.
Type: Grant
Filed: April 24, 2019
Date of Patent: July 13, 2021
Assignee: CANON KABUSHIKI KAISHA
Inventor: Yusuke Niibe
-
Patent number: 11062171
Abstract: A data capturing method is provided, which obtains a current image of a target software window according to a handle of the target software window, captures at least one data image from the current image according to at least one target capture area so as to obtain at least one corresponding character image and at least one corresponding representative character from a character image database according to the at least one data image, and outputs the at least one representative character corresponding to the at least one data image, such that the data capturing performed on the target software window is not affected by the occlusion of other software windows, thereby improving the efficiency of data capturing.
Type: Grant
Filed: June 5, 2019
Date of Patent: July 13, 2021
Assignee: WISTRON CORP.
Inventor: Gen Kai Wu
-
Patent number: 11055514
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program, and a method for synthesizing a realistic image with a new expression of a face in an input image by receiving an input image comprising a face having a first expression; obtaining a target expression for the face; and extracting a texture of the face and a shape of the face. The program and method further generate, based on the extracted texture of the face, a target texture corresponding to the obtained target expression using a first machine learning technique; generate, based on the extracted shape of the face, a target shape corresponding to the obtained target expression using a second machine learning technique; and combine the generated target texture and generated target shape into an output image comprising the face having a second expression corresponding to the obtained target expression.
Type: Grant
Filed: December 14, 2018
Date of Patent: July 6, 2021
Assignee: Snap Inc.
Inventors: Chen Cao, Sergey Tulyakov, Zhenglin Geng
-
Patent number: 11043000
Abstract: Provided are a measuring method and apparatus for a damaged part of a vehicle. The method includes: acquiring an image to be processed of a vehicle; acquiring the damaged part of the vehicle in the image to be processed according to the image to be processed; acquiring first position information of key points in the image to be processed according to the image to be processed; determining a transformation relation between the image to be processed and a first fitting plane according to the key points included in the image to be processed and the first position information, where the first fitting plane is a fitting plane determined according to the key points included in the image to be processed on the 3D model; acquiring a projection area of the damaged part in the first fitting plane according to the transformation relation; and measuring the projection area to acquire a measuring result.
Type: Grant
Filed: September 10, 2019
Date of Patent: June 22, 2021
Assignee: BAIDU ONLINE NETWORK TECHNOLOGY CO., LTD.
Inventors: Yongfeng Zhong, Xiao Tan, Feng Zhou, Hao Sun, Errui Ding
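The projection-and-measurement step above can be sketched under the assumption that the transformation relation is a planar homography: map the damaged-part outline into the fitting plane and measure the projected polygon's area with the shoelace formula. The 3x3 homography form and both function names are assumptions for illustration.

```python
import numpy as np

def project_points(H, pts):
    """Apply a 3x3 homography H to an Nx2 array of image points."""
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    mapped = homo @ H.T
    return mapped[:, :2] / mapped[:, 2:3]            # back to Cartesian

def polygon_area(pts):
    """Shoelace formula for the area of a simple polygon (Nx2 vertices)."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
```

Under the identity homography the measured area equals the image-space area; a homography that doubles both axes quadruples it, as expected for a planar projection.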
-
Patent number: 11030471
Abstract: This application provides a text detection method, including: obtaining, by a computer device, an image; inputting the image into a neural network, and outputting a target feature matrix; inputting the target feature matrix into a fully connected layer, the fully connected layer mapping each element of the target feature matrix to a predicted subregion corresponding to the image according to a preset anchor; and obtaining text feature information of the predicted subregions, connecting the predicted subregions into corresponding predicted text lines according to the text feature information by using a text clustering algorithm, and determining a text area corresponding to the image.
Type: Grant
Filed: September 16, 2019
Date of Patent: June 8, 2021
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventor: Ming Liu
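The step of connecting predicted subregions into text lines can be sketched as a simple greedy clustering: chain boxes left to right when they are horizontally close and vertically overlapping. This is one plausible clustering rule, not the patented algorithm; `max_gap` and `min_v_overlap` are hypothetical parameters.

```python
def connect_text_lines(boxes, max_gap=20, min_v_overlap=0.5):
    """Greedily chain subregion boxes (x1, y1, x2, y2) into text lines and
    return one merged bounding box per line."""
    boxes = sorted(boxes)            # left-to-right (then top-to-bottom) order
    lines = []
    for box in boxes:
        for line in lines:
            last = line[-1]
            gap = box[0] - last[2]                         # horizontal gap
            overlap = min(box[3], last[3]) - max(box[1], last[1])
            height = min(box[3] - box[1], last[3] - last[1])
            if 0 <= gap <= max_gap and overlap >= min_v_overlap * height:
                line.append(box)     # close enough: extend this text line
                break
        else:
            lines.append([box])      # start a new text line
    return [(l[0][0], min(b[1] for b in l), l[-1][2], max(b[3] for b in l))
            for l in lines]
```

Two adjacent subregions on the same baseline merge into one text-line box, while a subregion far below starts a new line.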
-
Method and apparatus for occlusion detection on target object, electronic device, and storage medium
Patent number: 11030481
Abstract: A method for occlusion detection on a target object is provided. The method includes: determining, based on a pixel value of each pixel in a target image, first positions of a first feature and second positions of a second feature in the target image. The first feature is an outer contour feature of a target object in the target image, and the second feature is a feature of an interfering subobject in the target object. The method also includes: determining, based on the first positions, an image region including the target object; dividing, based on the second positions, the image region into at least two detection regions; and determining, according to a pixel value of a target detection region, whether the target detection region meets a preset unoccluded condition, to determine whether the target object is occluded. The target detection region is any one of the at least two detection regions.
Type: Grant
Filed: August 20, 2019
Date of Patent: June 8, 2021
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Cheng Yi, Bin Li
-
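The region-splitting test in the occlusion-detection patent above (11030481) can be sketched very roughly: split the target-object region at the interfering subobject's position and check each detection region against a simple unoccluded condition. The mean-intensity-range condition and the `lo`/`hi` bounds are purely illustrative assumptions.

```python
import numpy as np

def occluded_regions(region, split_row, lo=40, hi=220):
    """Split the target-object region at split_row (the interfering
    subobject's position) and report, per detection region, whether it
    fails a simple unoccluded condition: mean intensity within [lo, hi]."""
    parts = {"upper": region[:split_row], "lower": region[split_row:]}
    return {name: not (lo <= part.mean() <= hi) for name, part in parts.items()}
```

A region whose lower half is nearly black (for example, a hand covering the lower face) fails the condition for that detection region only, flagging a partial occlusion.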
Patent number: 11024050
Abstract: A system for linking information of a point of interest to a position within an image of a location may include: a portable device structured to determine a position of the point of interest in the image when the portable device is present within the location depicted by the image; an accessory operably coupled to the portable device and comprising a tool structured to provide information related to the point of interest; a processor operably coupled to the portable device and configured to create a data structure linking the information with the position of the point of interest; and a storage structured to store the data structure.
Type: Grant
Filed: November 5, 2018
Date of Patent: June 1, 2021
Assignee: FARO TECHNOLOGIES, INC.
Inventors: Muhammad Umair Tahir, Oliver Zweigle
-
Patent number: 11017553
Abstract: An information processing system of the present invention includes: a specifying unit configured to specify, based on a shot image, position information representing a position of a moving object present in the shot image and identification information for identifying a section of an area where the moving object is located; and a transmitting unit configured to transmit the position information and the identification information to the outside in association with each other. The moving object includes an estimating unit configured to estimate a position of the moving object based on the identification information and the position information that have been transmitted.
Type: Grant
Filed: May 29, 2019
Date of Patent: May 25, 2021
Assignee: NEC CORPORATION
Inventor: Yasuyuki Nasu
-
Patent number: 11010907
Abstract: Techniques to train a model with machine learning and use the trained model to select a bounding box that represents an object are described. For example, a system may implement various techniques to generate multiple bounding boxes for an object in an environment. Each bounding box may be slightly different based on the technique and data used. To select a bounding box that most closely represents an object (or is best used for tracking the object), a model may be trained. The model may be trained by processing sensor data that has been annotated with bounding boxes that represent ground truth bounding boxes. The model may be implemented to select a most appropriate bounding box for a situation (e.g., a given velocity, acceleration, distance, location, etc.). The selected bounding box may be used to track an object, generate a trajectory, or otherwise control a vehicle.
Type: Grant
Filed: November 27, 2018
Date of Patent: May 18, 2021
Assignee: Zoox, Inc.
Inventor: Carden Taylor Bagwell
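One building block of the training setup above is deriving a selection target from annotated data: among the candidate boxes produced by different techniques, the one with the highest overlap against the ground-truth annotation is the label the selection model should learn to pick. This sketch shows only that labeling step under the assumption of IoU as the overlap measure; it is not Zoox's method.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def best_candidate(candidates, ground_truth):
    """Pick the candidate box with the highest IoU against the ground-truth
    box -- a plausible way to derive training targets for a selection model."""
    return max(candidates, key=lambda c: iou(c, ground_truth))
```

At inference time the model would of course score candidates from features (velocity, distance, and so on) rather than from ground truth, which is unavailable.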