Patents Examined by Chan S. Park
-
Patent number: 11182905
Abstract: Introduced here are computer programs and associated computer-implemented techniques for finding the correspondence between sets of graphical elements that share a similar structure. In contrast to conventional approaches, this approach can leverage the similar structure to discover how two sets of graphical elements are related to one another without the relationship needing to be explicitly specified. To accomplish this, a graphics editing platform can employ one or more algorithms designed to encode the structure of graphical elements using a directed graph and then compute element-to-element correspondence between different sets of graphical elements that share a similar structure.
Type: Grant
Filed: March 20, 2020
Date of Patent: November 23, 2021
Assignee: ADOBE INC.
Inventors: Hijung Shin, Holger Winnemoeller, Wilmot Li
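The structural-correspondence idea can be illustrated with a toy sketch: encode each set of graphical elements as a tree (a simple directed graph mapping each node to its ordered children, a simplification of the patent's encoding) and pair elements that occupy the same structural position. The node names and tree shape below are purely hypothetical.

```python
def pair_by_structure(tree_a, tree_b, root_a, root_b):
    """Pair elements of two element trees that sit at the same path of
    child indices from the root. A toy stand-in for the patent's
    directed-graph structure encoding."""
    pairs = [(root_a, root_b)]
    stack = [(root_a, root_b)]
    while stack:
        a, b = stack.pop()
        # Children are ordered, so matching positions imply correspondence.
        for ca, cb in zip(tree_a.get(a, []), tree_b.get(b, [])):
            pairs.append((ca, cb))
            stack.append((ca, cb))
    return pairs

# Two icons with the same layout: a group containing a shape and a label.
icon1 = {"g1": ["circle1", "text1"]}
icon2 = {"g2": ["square2", "text2"]}
print(pair_by_structure(icon1, icon2, "g1", "g2"))
```

Because the pairing is derived from the shared structure alone, no explicit element-to-element mapping has to be supplied by the user.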
-
Patent number: 11176378
Abstract: An image receiving unit receives an input of an image set owned by a first user, an image analyzing unit analyzes each image included in the image set, a tag information setting unit sets tag information items to be assigned to each image based on an analyzing result of each image, and a tag information assigning unit assigns, as main tag information, the tag information, among the tag information items to be assigned to the image, for which a ratio of the number of times of appearances of the tag information to be assigned to the image to the total number of times of appearances of all the tag information items to be assigned to all the images included in the image set is equal to or greater than a first threshold value and is equal to or less than a second threshold value, to the image, for each image.
Type: Grant
Filed: September 9, 2019
Date of Patent: November 16, 2021
Assignee: FUJIFILM Corporation
Inventor: Masaya Usuki
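The frequency-ratio rule in this abstract reduces to: a tag is "main" for an image when its share of all tag appearances across the whole set falls between the two thresholds, so tags that are too common (uninformative) or too rare (likely noise) are excluded. A minimal sketch, with made-up threshold values:

```python
from collections import Counter

def main_tags(image_tags, lo, hi):
    """For each image's tag list, keep as 'main' the tags whose ratio of
    appearances to all tag appearances in the set lies in [lo, hi].
    The threshold values are illustrative assumptions."""
    counts = Counter(t for tags in image_tags for t in tags)
    total = sum(counts.values())
    return [[t for t in tags if lo <= counts[t] / total <= hi]
            for tags in image_tags]

# 'sky' appears everywhere (too common), 'dog' only once (too rare).
tags = [["sky", "beach", "dog"], ["sky", "beach"],
        ["sky", "mountain"], ["sky", "mountain"]]
print(main_tags(tags, lo=0.2, hi=0.4))
```

With these numbers, "beach" and "mountain" survive as main tags while "sky" (4/9 of all appearances) and "dog" (1/9) are filtered out by the two thresholds.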
-
Patent number: 11170521
Abstract: In an exemplary process for determining a position of an object in a computer-generated reality environment using an eye gaze, a user uses their eyes to interact with user interface objects displayed on an electronic device. A first direction of gaze is determined for a first eye of a user detected via the one or more cameras, and a second direction of gaze is determined for a second eye of the user detected via the one or more cameras. A convergence point of the first and second directions of gaze is determined, and a distance between a position of the user and a position of an object in the computer-generated reality environment is determined based on the convergence point. A task is performed based on the determined distance between the position of the user and the position of the object in the computer-generated reality environment.
Type: Grant
Filed: August 5, 2019
Date of Patent: November 9, 2021
Assignee: Apple Inc.
Inventors: Mohamed Selim Ben Himane, Anselm Grundhöfer
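The geometric core of this claim is vergence: two gaze rays, one per eye, intersect at the point the user is looking at, and the distance to that point follows directly. A minimal top-down (2-D) sketch, with hypothetical eye positions; the real system works with 3-D gaze directions estimated from camera images:

```python
def convergence_point(p1, d1, p2, d2):
    """Intersect two 2-D gaze rays p + t*d (top-down view). Returns the
    convergence point, or None if the gaze directions are parallel."""
    # Solve p1 + t*d1 = p2 + s*d2 via Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        return None
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Eyes 6 cm apart, both looking toward a point 40 cm straight ahead.
left, right = (-3.0, 0.0), (3.0, 0.0)
target = (0.0, 40.0)
dl = (target[0] - left[0], target[1] - left[1])
dr = (target[0] - right[0], target[1] - right[1])
cp = convergence_point(left, dl, right, dr)
dist = (cp[0] ** 2 + cp[1] ** 2) ** 0.5  # distance from origin between the eyes
print(cp, dist)
```

In 3-D the two rays rarely intersect exactly, so a production system would use the midpoint of the segment of closest approach instead of an exact intersection.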
-
Patent number: 11164323
Abstract: The present disclosure provides a method for obtaining image tracking points. The method can be applied to an image tracking point obtaining device, the method includes obtaining, when a current video frame comprises a first image of a target object, a second image of the target object and determining a position of a second feature point on the second image; obtaining, on the first image, a first feature point corresponding to the second feature point; obtaining a first area to which the first feature point belongs in the first image, and obtaining a second area to which the second feature point belongs in the second image; and determining the first feature point as a tracking point of the current video frame when an image similarity between the first area and the second area meets a screening condition.
Type: Grant
Filed: November 8, 2019
Date of Patent: November 2, 2021
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Xiaoming Xiang, Hao Xu, Ting Lu, Chengzhuo Zou
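The screening condition compares the two areas around the candidate feature points. The patent does not fix the similarity measure; a common choice for this kind of patch comparison is normalized cross-correlation, sketched here with a hypothetical acceptance threshold:

```python
def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches
    (flat lists of pixel intensities); 1.0 means identical up to gain/offset."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def keep_tracking_point(patch_first, patch_second, threshold=0.8):
    """Screening condition: accept the candidate feature point only if the
    two surrounding areas are similar enough (threshold is an assumption)."""
    return ncc(patch_first, patch_second) >= threshold

a = [10, 12, 14, 16]
b = [20, 24, 28, 32]   # same pattern at a different gain -> NCC of 1
c = [16, 10, 14, 12]   # shuffled intensities -> low correlation
print(keep_tracking_point(a, b), keep_tracking_point(a, c))
```

NCC is invariant to brightness and contrast changes between frames, which is why it suits this accept/reject screening better than a raw pixel difference.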
-
Patent number: 11164320
Abstract: A corresponding region movement amount calculation unit calculates the amount of movement of each of a plurality of corresponding characteristic regions between a reference image and a base image. A clustering processing unit groups one or more characteristic regions exhibiting a substantially identical tendency in the calculated amounts of movement as belonging to a plane group located on the same plane, and classifies the plurality of characteristic regions in one or more plane groups. A projection transform matrix calculation unit calculates one or more projection transform matrices, by using the amounts of movement of the characteristic regions and the result of the grouping performed by the clustering processing unit.
Type: Grant
Filed: June 12, 2020
Date of Patent: November 2, 2021
Assignee: OLYMPUS CORPORATION
Inventor: Atsuro Okazawa
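The grouping step, regions with "a substantially identical tendency" in their movement amounts land in the same plane group, can be sketched with a greedy clustering of 2-D movement vectors. The tolerance value and the greedy strategy are assumptions; the patent leaves the clustering method open:

```python
def group_by_motion(movements, tol=1.0):
    """Greedy grouping: characteristic regions whose movement vectors differ
    by less than `tol` pixels are assumed to lie on the same plane."""
    groups = []  # each entry: (representative vector, [member indices])
    for i, (dx, dy) in enumerate(movements):
        for rep, members in groups:
            if ((dx - rep[0]) ** 2 + (dy - rep[1]) ** 2) ** 0.5 < tol:
                members.append(i)
                break
        else:
            groups.append(((dx, dy), [i]))
    return [members for _, members in groups]

# Regions 0, 1, 3 shift ~(5, 0) (background plane); 2, 4 shift ~(-2, 3).
moves = [(5.0, 0.1), (5.2, -0.1), (-2.0, 3.0), (4.9, 0.0), (-2.1, 3.2)]
print(group_by_motion(moves))
```

Each resulting plane group would then feed its member regions' movements into one projection transform (homography) estimate, since points on a common plane share a single homography between the two images.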
-
Patent number: 11151703
Abstract: An embodiment of the invention may include a method, computer program product and computer system for image artifact removal. The method, computer program product and computer system may include a computing device which may receive a primary image and analyze the primary image for global artifacts and local artifacts. The computing device may, in response to identifying a global artifact in the primary image, generate a secondary image with the global artifact removed utilizing a first generative adversarial network. The computing device may, in response to identifying a local artifact in the primary image, generate a patch with the local artifact removed utilizing a second generative adversarial network. The computing device may generate a hybrid image containing a reduction of global artifacts and a reduction of local artifacts by combining the secondary image and the patch utilizing a hybrid generative adversarial network.
Type: Grant
Filed: September 12, 2019
Date of Patent: October 19, 2021
Assignee: International Business Machines Corporation
Inventors: Dustin Michael Sargent, Sun Young Park, Maria Victoria Sainz de Cea, David Richmond
-
Patent number: 11151712
Abstract: A method for detecting image defects is described, which includes obtaining an image to be detected, down-sampling the image to be detected to obtain a down-sampled image, de-cluttering the down-sampled image to obtain a de-cluttered image, restoring the de-cluttered image into a restored image having the same resolution as the image to be detected so as to be used as a background image, and comparing the image to be detected with the background image to determine defects in the image to be detected. An apparatus for detecting image defects, a computing device and a storage medium are also described.
Type: Grant
Filed: August 30, 2019
Date of Patent: October 19, 2021
Assignees: BEIJING BOE OPTOELECTRONICS TECHNOLOGY CO., LTD., BOE TECHNOLOGY GROUP CO., LTD.
Inventors: Xiaolei Liu, Lili Chen, Yunqi Wang, Minglei Chu
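The pipeline's intuition is that down-sampling and restoring smooths away small defects, producing a defect-free "background" estimate, so the residual against the original exposes the defects. A toy 1-D sketch, where block averaging plays the role of both the down-sampling and the de-cluttering, and the block size and residual threshold are illustrative assumptions:

```python
def find_defects(signal, factor=4, threshold=30):
    """Toy 1-D version of the pipeline: down-sample by block averaging
    (suppressing isolated defect samples), restore to full resolution as
    the background, and flag samples whose residual exceeds a threshold."""
    # Down-sample: average each block of `factor` samples.
    down = [sum(signal[i:i + factor]) / factor
            for i in range(0, len(signal), factor)]
    # Restore: nearest-neighbour up-sampling back to original resolution.
    background = [down[i // factor] for i in range(len(signal))]
    # Compare: a large residual marks a defect.
    return [i for i, (s, b) in enumerate(zip(signal, background))
            if abs(s - b) > threshold]

flat = [100] * 8
flat[5] = 180          # a single bright defect pixel
print(find_defects(flat))
```

A real display-inspection implementation would do this in 2-D and use a learned or filtered restoration rather than nearest-neighbour up-sampling, but the structure of the comparison is the same.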
-
Patent number: 11151698
Abstract: The present disclosure relates to an image processing apparatus and a method that allow suppression of a reduction in the subjective image quality. An inverse filter of a filter configured to transform an input image to be projected by a plurality of projection sections into a projection image projected by the plurality of projection sections is generated on the basis of an individual-blur amount and a superimposition-blur amount. The individual-blur amount indicates a magnitude of optical blur generated in an individual projection image projected by each of the plurality of projection sections. The superimposition-blur amount indicates a magnitude of optical blur generated from superimposition of a plurality of the projection images. The input image is transformed using the generated inverse filter to generate a phase image.Type: Grant
Filed: March 15, 2018
Date of Patent: October 19, 2021
Assignee: SONY CORPORATION
Inventor: Takaaki Suzuki
-
Patent number: 11151735
Abstract: A deformation processing support system acquires target shape data of a work having a reference line; acquires intermediate shape data from the work in an intermediate shape having a reference line marked thereon; and overlaps the two data on each other by aligning the reference lines relative to each other, to calculate a necessary deformation amount of the work based on a difference between the two data overlapped on each other. To align the reference lines with each other, first and second alignment axes with the same length calculated for the respective reference lines are superimposed on each other. Subsequently, the intermediate shape data is relatively rotated with respect to the target shape data around the first alignment axis.
Type: Grant
Filed: March 26, 2018
Date of Patent: October 19, 2021
Assignee: KAWASAKI JUKOGYO KABUSHIKI KAISHA
Inventors: Atsuki Nakagawa, Shinichi Nakano, Naohiro Nakamura
-
Patent number: 11113833
Abstract: An object detection system includes a depth image detector and a moving object extractor. The depth image detector detects a depth image from an external environment. The moving object extractor extracts a moving object desired to be extracted from the depth image. The moving object extractor registers in advance the depth image in a memory as a background while the moving object to be extracted does not exist, and extracts only a pixel whose current depth is present on a nearer side than a depth of the background as a candidate for a pixel corresponding to the moving object to be extracted.
Type: Grant
Filed: March 5, 2018
Date of Patent: September 7, 2021
Assignee: KONICA MINOLTA, INC.
Inventor: Shunsuke Takamura
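This is depth-based background subtraction: anything measured nearer than the pre-registered empty-scene depth is a candidate moving-object pixel. A minimal sketch; the noise margin and the treatment of zero-depth (invalid) readings are assumptions, not claim language:

```python
def extract_moving_pixels(depth, background, margin=5.0):
    """Keep only pixels whose current depth is nearer (smaller) than the
    registered background depth; `margin` absorbs sensor noise, and a
    depth of 0 is treated as an invalid measurement."""
    return [
        [1 if 0 < d < b - margin else 0 for d, b in zip(drow, brow)]
        for drow, brow in zip(depth, background)
    ]

background = [[300.0, 300.0, 300.0],
              [300.0, 300.0, 300.0]]   # registered while the scene was empty
current = [[300.0, 150.0, 300.0],      # someone at ~150 cm in front of the wall
           [300.0, 152.0, 0.0]]        # 0 = missing measurement
print(extract_moving_pixels(current, background))
```

Unlike intensity-based background subtraction, this test is immune to lighting changes, which only affect brightness, not measured depth.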
-
Patent number: 11113851
Abstract: A method for correcting image data from a differential phase contrast imaging system is provided. Data comprising distorted data due to spatial variation is obtained. The data is corrected by correcting the distorted data.
Type: Grant
Filed: July 17, 2019
Date of Patent: September 7, 2021
Assignee: The Board of Trustees of the Leland Stanford Junior University
Inventors: Ching-wei Chang, Lambertus Hesselink
-
Patent number: 11113801
Abstract: Devices, methods, and computer-readable media describing an adaptive approach to reference image selection are disclosed herein, e.g., to generate fused images with reduced motion distortion. More particularly, an incoming image stream may be obtained from an image capture device, which image stream may comprise a variety of different image captures, e.g., including “image frame pairs” (IFPs) that are captured consecutively, wherein the images in a given IFP are captured with differing exposure settings. When a capture request is received at the image capture device, the image capture device may select two or more images from the incoming image stream for fusion, e.g., including at least one IFP. In some embodiments, one of the images from the at least one IFP will be designated as the reference image for a fusion operation, e.g., based on a robust motion detection analysis process performed on the images of the at least one IFP.
Type: Grant
Filed: September 6, 2019
Date of Patent: September 7, 2021
Assignee: Apple Inc.
Inventor: Wu Cheng
-
Patent number: 11107226
Abstract: A system includes sensors and a tracking subsystem. The subsystem tracks first and second objects in a space. Following a collision event between the first and second object, a top-view image of the first object is received from a first sensor. Based on the top-view image, a first descriptor is determined for the first object. The first descriptor is associated with an observable characteristic of the first object. If criteria are not satisfied for distinguishing the first object from the second object based on the first descriptor, a third descriptor is determined for the first object. The third descriptor is generated by an artificial neural network configured to identify objects in top-view images. The tracking subsystem uses the third descriptor to assign an identifier to the first object.
Type: Grant
Filed: October 25, 2019
Date of Patent: August 31, 2021
Assignee: 7-ELEVEN, INC.
Inventors: Shahmeer Ali Mirza, Sailesh Bharathwaaj Krishnamurthy, Madan Mohan Chinnam, Crystal Maung
-
Patent number: 11100646
Abstract: A method for generating a predicted segmentation map for potential objects in a future scene depicted in a future image is described. The method includes receiving input images that depict a same scene; processing a current input image to generate a segmentation map for potential objects in the current input image and a respective depth map; generating a point cloud for the current input image; processing the input images to generate, for each pair of two input images in the sequence, a respective ego-motion output that characterizes motion of the camera between the two input images; processing the ego-motion outputs to generate a future ego-motion output; processing the point cloud of the current input image and the future ego-motion output to generate a future point cloud; and processing the future point cloud to generate the predicted segmentation map for potential objects in the future scene depicted in the future image.
Type: Grant
Filed: September 6, 2019
Date of Patent: August 24, 2021
Assignee: Google LLC
Inventors: Suhani Vora, Reza Mahjourian, Soeren Pirk, Anelia Angelova
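The "future point cloud" step is a rigid transform: advance the current point cloud by the predicted ego-motion. A minimal planar (2-D, top-down) sketch of that step; the real method uses full 3-D camera motion predicted by a network, and the yaw/translation values here are invented:

```python
import math

def apply_ego_motion(points, yaw, tx, ty):
    """Advance a top-down 2-D point cloud by a predicted ego-motion:
    rotate by `yaw`, then translate by (tx, ty)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

cloud = [(1.0, 0.0), (0.0, 2.0)]
# Predicted motion: a quarter-turn left plus one unit forward.
future = apply_ego_motion(cloud, yaw=math.pi / 2, tx=0.0, ty=1.0)
print(future)
```

Projecting the transformed (and segmentation-labelled) points back into the camera yields the predicted segmentation map for the future frame.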
-
Patent number: 11100671
Abstract: An image generation apparatus includes an image acquisition part that acquires a wide angle image group and a telephoto image group in which a subject is imaged while changing a position of an imaging apparatus, the wide angle image group being captured by the imaging apparatus including an imaging optical system including a wide angle optical system and a telephoto optical system having a common optical axis, and the telephoto image group being captured at the same time as the wide angle image group; a composition information acquisition part that analyzes the acquired wide angle image group and acquires composition information to be used for compositing the telephoto image group; and a composite image generation part that generates an image in which the telephoto image group is composited, based on the acquired composition information, information related to focal lengths of the wide angle optical system and the telephoto optical system, and the telephoto image group.
Type: Grant
Filed: October 16, 2019
Date of Patent: August 24, 2021
Assignee: FUJIFILM Corporation
Inventor: Shuji Ono
-
Patent number: 11087481
Abstract: A method for detecting dimension of box based on depth map includes: receiving a depth map generated by a camera, the depth map corresponds to pixels of an image including a box; performing a coordinate transformation to transform the depth map into camera coordinates of each of the pixels; dividing some of the pixels into plural blocks, each of the blocks includes a number of the pixels adjacent to each other; statistically analyzing an average normal vector of each of the blocks according to the camera coordinates of the pixels of each of the blocks; classifying the blocks into plural clusters according to the average normal vector of each of the blocks; performing a plane extraction to obtain edge vectors according to plane formulas of the clusters; obtaining vertexes of the box according to the edge vectors; and obtaining a dimension of the box according to the vertexes.
Type: Grant
Filed: January 8, 2020
Date of Patent: August 10, 2021
Assignee: HIMAX TECHNOLOGIES LIMITED
Inventor: Ying-Han Yeh
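The final step is simple geometry: once the edge vectors have located a corner of the box and its three adjacent vertices, the box's dimensions are the three edge lengths. A minimal sketch of just that last step, with invented vertex coordinates:

```python
def box_dimensions(corner, va, vb, vc):
    """Given one box corner and the three adjacent vertices recovered from
    the edge vectors, the box dimensions are the three edge lengths
    (returned sorted, smallest first)."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return tuple(sorted(dist(corner, v) for v in (va, vb, vc)))

# A 30 x 20 x 10 box with one corner at the origin (camera coordinates, cm).
c = (0.0, 0.0, 0.0)
print(box_dimensions(c, (30.0, 0.0, 0.0), (0.0, 20.0, 0.0), (0.0, 0.0, 10.0)))
```

The preceding normal-vector clustering is what makes this possible: each cluster of blocks corresponds to one visible face of the box, and intersecting the fitted face planes yields the edges and vertices used here.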
-
Patent number: 11069032
Abstract: A system and method of removing turbulence from an image of a time ordered sequence of image frames. The method comprises removing effects of turbulence from a first image of the sequence to create an initial corrected image frame; determining a number of iterations required to achieve a desired turbulence removal for a subsequent image in the sequence and satisfy a latency constraint and an available memory capacity; and determining, based on the number of required iterations, a minimum set of image frames required to remove turbulence from the subsequent image. The minimum set of image frames comprises: a number of image frames of the sequence, a number of image frames generated in an intermediate iteration of turbulence removal and the initial corrected image frame. The method further comprises using the minimum set of image frames to remove turbulence from the subsequent image of the sequence.
Type: Grant
Filed: August 26, 2019
Date of Patent: July 20, 2021
Assignee: Canon Kabushiki Kaisha
Inventors: Ruimin Pan, Philip John Cox
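The budgeting logic can be sketched as two steps: clamp the iteration count to whatever latency and memory allow, then size the minimum frame set from that count. All parameter names and the per-iteration costs below are assumptions; the patent only states that the iteration count must satisfy both constraints and that the frame set combines raw frames, intermediate results, and the initial corrected frame:

```python
def plan_turbulence_removal(desired_iters, latency_ms, ms_per_iter,
                            mem_frames, frames_per_iter=2):
    """Pick the iteration count that meets the latency constraint and the
    available memory, then size the minimum frame set for that count."""
    iters = min(desired_iters,
                latency_ms // ms_per_iter,    # latency budget
                mem_frames // frames_per_iter)  # memory budget
    # Minimum set: raw sequence frames consumed by the iterations,
    # one intermediate result per earlier iteration, plus the one
    # initial corrected frame.
    frame_set = iters * frames_per_iter + max(iters - 1, 0) + 1
    return iters, frame_set

print(plan_turbulence_removal(desired_iters=5, latency_ms=40,
                              ms_per_iter=10, mem_frames=8))
```

The point of the minimum-set calculation is to discard every frame outside it, keeping memory use bounded while the iterative correction runs.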
-
Patent number: 11069067
Abstract: Hand segmentation on wearable devices is a challenging computer vision problem with a complex background because of varying illumination conditions, computational capacity of device(s), different skin tone of users from varied race, and presence of skin color background. The present application provides systems and methods for performing, in real time, hand segmentation by pre-processing an input image to improve contrast and removing noise/artifacts. A Multi Orientation Matched Filter (MOMF) is implemented and applied on the pre-processed image by rotating the MOMF at various orientations to form an edge image which comprises strong edges and weak edges. Weak edges are further removed using a morphological operation. The edge image is then added to the input image (or pre-processed image) to separate different texture regions in the image. The largest skin-color blob is then extracted, which is considered to be the correctly segmented hand.
Type: Grant
Filed: August 7, 2019
Date of Patent: July 20, 2021
Assignee: Tata Consultancy Services Limited
Inventors: Jitender Kumar Maurya, Ramya Hebbalaguppe, Puneet Gupta
-
Patent number: 11060981
Abstract: Samples at a semiconductor wafer that have been reviewed by a review tool may be identified. Furthermore, a candidate sample at the semiconductor wafer that has not been reviewed by the review tool may be identified. A location of the candidate sample at the semiconductor wafer may be determined, along with a number of the samples that have been reviewed that are at locations proximate to the location of the candidate sample. The candidate sample may be selected for review by the review tool based on the number of the samples that are at locations proximate to the location of the candidate sample.
Type: Grant
Filed: March 20, 2018
Date of Patent: July 13, 2021
Assignee: APPLIED MATERIALS ISRAEL LTD.
Inventors: Ariel Hirszhorn, Yotam Sofer
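The selection rule reduces to a neighbour count: for each candidate, count the already-reviewed samples near its wafer location and decide from that count. The radius and the "prefer sparsely covered regions" policy below are assumptions; the patent only states that the selection is based on the number of proximate reviewed samples:

```python
def select_candidates(reviewed, candidates, radius=2.0, max_neighbors=1):
    """Pick candidates that have at most `max_neighbors` already-reviewed
    samples within `radius` of their wafer location, i.e. candidates in
    regions the review tool has not yet covered well."""
    selected = []
    for cx, cy in candidates:
        n = sum(1 for rx, ry in reviewed
                if ((cx - rx) ** 2 + (cy - ry) ** 2) ** 0.5 <= radius)
        if n <= max_neighbors:
            selected.append((cx, cy))
    return selected

reviewed = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # already seen, clustered
candidates = [(0.5, 0.5), (8.0, 8.0)]              # one nearby, one far away
print(select_candidates(reviewed, candidates))
```

The opposite policy (preferring candidates with many reviewed neighbours, e.g. to drill into a defect cluster) fits the same counting machinery with the comparison reversed.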
-
Patent number: 11048974
Abstract: A method of training a generator G of a Generative Adversarial Network (GAN) includes generating a real contextual data set {x1, . . . , xN} for a high resolution image Y; generating a generated contextual data set {g1, . . . , gN} for a generated high resolution image G(Z); calculating a perceptual loss Lpcept value using the real contextual data set {x1, . . . , xN} and the generated contextual data set {g1, . . . , gN}; and training the generator G using the perceptual loss Lpcept value. The generated high resolution image G(Z) is generated by the generator G of the GAN in response to receiving an input Z, where the input Z is a random sample that corresponds to the high resolution image Y.
Type: Grant
Filed: August 5, 2019
Date of Patent: June 29, 2021
Assignee: Agora Lab, Inc.
Inventors: Sheng Zhong, Shifu Zhou
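The perceptual loss compares the two contextual data sets element by element. The abstract does not specify the distance used, so the L1 (mean absolute difference) form below is purely an illustrative choice, and the feature values are invented:

```python
def perceptual_loss(real_ctx, gen_ctx):
    """Mean absolute difference between the real contextual data set
    {x_1..x_N} of image Y and the generated set {g_1..g_N} of G(Z)."""
    assert len(real_ctx) == len(gen_ctx), "the two sets must share N"
    return sum(abs(x - g) for x, g in zip(real_ctx, gen_ctx)) / len(real_ctx)

x = [0.2, 0.5, 0.9, 0.4]   # contextual features of the real image Y
g = [0.1, 0.5, 0.7, 0.6]   # features of the generated image G(Z)
print(perceptual_loss(x, g))
```

During training this scalar would be back-propagated through the generator G alongside the usual adversarial loss, pushing G(Z) toward the contextual statistics of Y rather than only fooling the discriminator.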