Patents Examined by Pardis Sohraby
-
Patent number: 11978217
Abstract: A long-term object tracker employs a continuous learning framework to overcome drift in the tracking position of a tracked object. The continuous learning framework consists of a continuous learning module that accumulates samples of the tracked object to improve the accuracy of object tracking over extended periods of time. The continuous learning module can include a sample pre-processor to refine a location of a candidate object found during object tracking, and a cropper to crop a portion of a frame containing a tracked object as a sample and to insert the sample into a continuous learning database to support future tracking.
Type: Grant
Filed: January 3, 2019
Date of Patent: May 7, 2024
Assignee: Intel Corporation
Inventors: Lidan Zhang, Ping Guo, Haibing Ren, Yimin Zhang
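The crop-and-accumulate step described above can be sketched roughly as follows. This is an illustrative sketch only: the names `crop_sample` and `ContinuousLearningDB`, the deque-backed storage, and the toy frame are assumptions, not the patent's actual design.

```python
from collections import deque

def crop_sample(frame, box):
    """Crop the region box = (x, y, w, h) from a 2D frame (list of rows)."""
    x, y, w, h = box
    return [row[x:x + w] for row in frame[y:y + h]]

class ContinuousLearningDB:
    """Accumulates cropped samples of the tracked object, discarding the
    oldest once capacity is reached, so later tracking can re-learn the
    object's appearance and resist drift."""
    def __init__(self, capacity=100):
        self.samples = deque(maxlen=capacity)

    def insert(self, frame, refined_box):
        self.samples.append(crop_sample(frame, refined_box))

frame = [[c + 10 * r for c in range(8)] for r in range(6)]  # toy 6x8 "image"
db = ContinuousLearningDB(capacity=2)
db.insert(frame, (1, 2, 3, 2))
print(db.samples[0])  # -> [[21, 22, 23], [31, 32, 33]]
```

The bounded capacity stands in for whatever retention policy a real continuous-learning database would use.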
-
Explainability and complementary information for camera-based quality assurance inspection processes
Patent number: 11954846
Abstract: A video processing pipeline receives data derived from a feed of images of a plurality of objects passing in front of an inspection camera module forming part of a quality assurance inspection system. Quality assurance metrics for each object are generated by one or more containerized image analysis inspection tools forming part of the video processing pipeline using the received data. Overlay images are later generated that characterize the quality assurance metrics. These overlay images are combined with the corresponding image of the object to generate an enhanced image of each of the objects. These enhanced images are provided to a consuming application or process for quality assurance analysis.
Type: Grant
Filed: June 7, 2021
Date of Patent: April 9, 2024
Assignee: Elementary Robotics, Inc.
Inventors: Dat Do, Arye Barnehama, Daniel Pipe-Mazo
-
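The overlay-combination step of patent 11954846 above might look like a per-pixel alpha blend. This is a hedged sketch: the 0-255 grayscale pixels, the fixed `alpha`, and the `blend_overlay` helper are illustrative assumptions, not the patent's implementation.

```python
def blend_overlay(image, overlay, alpha=0.5):
    """Combine image and overlay pixel-by-pixel: out = (1-a)*image + a*overlay."""
    return [
        [round((1 - alpha) * p + alpha * o) for p, o in zip(img_row, ov_row)]
        for img_row, ov_row in zip(image, overlay)
    ]

image   = [[100, 200], [50, 0]]    # toy 2x2 inspection image
overlay = [[255, 0], [255, 255]]   # toy quality-metric overlay
print(blend_overlay(image, overlay, alpha=0.5))  # -> [[178, 100], [152, 128]]
```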
Patent number: 11948293
Abstract: A position of an object is determined by optically capturing at least one capture structure arranged at the object, or at a reference object captured from the object, and thereby obtaining capture information, the at least one capture structure having a point-symmetrical profile of an optical property that varies along a surface of the capture structure; transforming a location-dependent mathematical function corresponding to the point-symmetrical profile of the optical property into a frequency domain; forming a second frequency-dependent mathematical function from a first frequency-dependent mathematical function, wherein the second mathematical function is formed from a relationship of, in each case, a real part and an imaginary part of complex function values of the first frequency-dependent mathematical function; and forming at least one function value of the second frequency-dependent mathematical function and determining the same as location information about a location of a point of symmetry of the locati…
Type: Grant
Filed: January 31, 2021
Date of Patent: April 2, 2024
Assignee: Carl Zeiss Industrielle Messtechnik GmbH
Inventor: Wolfgang Hoegele
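The real/imaginary-part relationship above can be illustrated with a standard Fourier identity, though this is not the patented method itself: for a 1-D profile that is point-symmetric about x0, the DFT factors into a linear phase e^(-2πik·x0/N) times a real spectrum, so the ratio of imaginary to real parts of each coefficient encodes x0. The Gaussian test profile and grid size below are assumptions.

```python
import numpy as np

N = 128
x0 = 20.25                                  # true symmetry point (sub-pixel)
n = np.arange(N)
s = np.exp(-0.5 * ((n - x0) / 4.0) ** 2)    # profile symmetric about x0

S = np.fft.fft(s)
# arctan2(Im, Re) recovers the linear phase at k=1; solve it for x0.
phase_k1 = np.arctan2(S[1].imag, S[1].real)
x0_est = (-phase_k1 * N / (2 * np.pi)) % N
print(round(x0_est, 3))                     # recovers x0 with sub-pixel accuracy
```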
-
Patent number: 11935211
Abstract: Systems and methods for image processing are provided in the present disclosure. The systems and methods may obtain an image; determine a current resolution level of the image; determine, based on the current resolution level of the image and from a group of resolution level ranges, a reference resolution level range corresponding to the image; determine a target processing model corresponding to the reference resolution level range; and/or determine a processed image with a target resolution level by processing the image using the target processing model, the target resolution level of the processed image being higher than the current resolution level of the image.
Type: Grant
Filed: August 23, 2021
Date of Patent: March 19, 2024
Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
Inventors: Guobin Li, Renkuan Zhai, Meiling Ji
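The range-to-model selection step above reduces to a simple dispatch. This is a minimal sketch; the specific ranges, the model names, and the `select_model` helper are assumptions, not anything disclosed by the patent.

```python
# Each entry maps a half-open resolution-level range to a processing model.
RANGES = [
    ((0, 64),     "model_low"),    # coarse images -> aggressive upscaler
    ((64, 256),   "model_mid"),
    ((256, 1024), "model_high"),
]

def select_model(level):
    """Return the target processing model whose range [lo, hi) contains level."""
    for (lo, hi), model in RANGES:
        if lo <= level < hi:
            return model
    raise ValueError(f"no range covers resolution level {level}")

print(select_model(50))    # -> model_low
print(select_model(300))   # -> model_high
```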
-
Patent number: 11915521
Abstract: A method for processing a gaze signal in an eye tracking system is provided. The method comprises receiving a first image of a user's eye captured at a first point in time and a second image of the user's eye captured at a second point in time subsequent to the first point in time, and determining, based on the first image and the second image, whether eye movement of the user's eye is in fixation or not. The method may further comprise, on condition that the eye movement of the user's eye is in fixation, applying a filter to the gaze signal, wherein the filter is adapted to decrease variance in the gaze signal.
Type: Grant
Filed: May 31, 2018
Date of Patent: February 27, 2024
Assignee: Tobii AB
Inventor: Johannes Kron
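One way the fixation test and variance-reducing filter above could be realized is a displacement threshold plus an exponential moving average. This is a hedged sketch: the threshold, the smoothing factor, and both helper names are illustrative assumptions.

```python
def in_fixation(gaze_t0, gaze_t1, threshold=0.5):
    """Eye is 'in fixation' if the gaze point barely moved between frames."""
    dx = gaze_t1[0] - gaze_t0[0]
    dy = gaze_t1[1] - gaze_t0[1]
    return (dx * dx + dy * dy) ** 0.5 < threshold

def filter_gaze(prev_filtered, raw, alpha=0.2):
    """Variance-reducing filter: lean heavily on the previous filtered value."""
    return tuple(alpha * r + (1 - alpha) * p for r, p in zip(raw, prev_filtered))

g0, g1 = (10.0, 5.0), (10.1, 5.2)   # two successive gaze samples
if in_fixation(g0, g1):
    g1 = filter_gaze(g0, g1)
print(g1)                            # smoothed toward g0
```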
-
Patent number: 11900570
Abstract: An image processing system includes a memory configured to store a plurality of reference images used for image quality tuning; an image signal processor configured to receive a plurality of captured images corresponding to the plurality of reference images and to generate a plurality of corrected images by performing a corresponding image processing operation among a plurality of image processing operations; and a tuning module configured to set parameters of the plurality of image processing operations based on the plurality of corrected images and the plurality of reference images.
Type: Grant
Filed: February 19, 2021
Date of Patent: February 13, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Kyounghwan Moon, Jungyeob Chae
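The tuning loop above amounts to searching for parameters whose corrected output best matches the references. A minimal sketch, assuming a single gain-only "processing operation", an MSE criterion, and a small candidate grid; a real tuner would cover many operations and parameters.

```python
def mse(a, b):
    """Mean squared error between two flat pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def apply_gain(captured, gain):
    """Toy image processing operation: uniform gain."""
    return [p * gain for p in captured]

def tune_gain(captured, reference, candidates):
    """Pick the gain whose corrected image is closest to the reference."""
    return min(candidates, key=lambda g: mse(apply_gain(captured, g), reference))

captured  = [10, 20, 30]
reference = [20, 40, 60]          # reference is exactly 2x the capture
best = tune_gain(captured, reference, [0.5, 1.0, 1.5, 2.0, 2.5])
print(best)  # -> 2.0
```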
-
Patent number: 11900614
Abstract: Embodiments of this application disclose a video data processing method and apparatus, and a storage medium. The method includes determining, in response to a trigger operation on a target video, a target pixel in a key video frame, and obtaining multimedia information associated with the target pixel, the key video frame being a video frame in which the trigger operation is located, and the target pixel corresponding to the trigger operation in the key video frame; identifying a trajectory obtaining request corresponding to the target pixel based on location information of the target pixel; obtaining target trajectory information associated with the location information of the target pixel, the target trajectory information comprising location information of the target pixel in a next video frame following the key video frame; and displaying the multimedia information based on the location information of the target pixel in the next video frame.
Type: Grant
Filed: May 28, 2021
Date of Patent: February 13, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Yuanli Zheng, Zelong Yin, Nianhua Xie
-
Patent number: 11886450
Abstract: A statistical data processing device includes: a first statistical image generation unit for generating statistical images, including a first statistical image representing a first statistical value as a corresponding pixel value and a second statistical image representing a second statistical value as a corresponding pixel value; a mask generation unit for generating a mask image, the mask image extracting, if one of a pixel of the first statistical image and a corresponding pixel of the second statistical image does not have a pixel value indicating a statistical value, the pixel not having a pixel value indicating the statistical value or the other pixel; and a second statistical image generation unit for generating a third statistical image in which a pixel value of a pixel not having a pixel value indicating the statistical value is complemented with a pixel value of the other statistical image.
Type: Grant
Filed: May 9, 2019
Date of Patent: January 30, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Xiaojun Wu, Masaki Kitahara, Atsushi Shimizu
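The mask-and-complement idea above can be sketched with NaN standing in for "no statistical value". This is a sketch under assumptions: the NaN convention and the two toy 2x2 statistical images are illustrative, and only the first image is complemented here.

```python
import numpy as np

first  = np.array([[1.0, np.nan], [3.0, 4.0]])   # first statistical image
second = np.array([[5.0, 6.0],    [np.nan, 8.0]])  # second statistical image

# Mask: pixels where one image lacks a value but the other has one.
mask_first  = np.isnan(first)  & ~np.isnan(second)
mask_second = np.isnan(second) & ~np.isnan(first)

# Third statistical image: missing pixels complemented from the other image.
third = first.copy()
third[mask_first] = second[mask_first]
print(third)
```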
-
Patent number: 11887324
Abstract: Among other things, techniques are described for cross-modality active learning for object detection. In an example, a first set of predicted bounding boxes and a second set of predicted bounding boxes are generated. The first set of predicted bounding boxes and the second set of predicted bounding boxes are projected into a same representation. The projections are filtered, wherein predicted bounding boxes satisfying a maximum confidence score are selected for inconsistency calculations. Inconsistencies are calculated across the projected bounding boxes based on filtering the projections. Informative scenes are extracted based on the calculated inconsistencies. A first object detection neural network or a second object detection neural network is trained using the informative scenes.
Type: Grant
Filed: June 30, 2021
Date of Patent: January 30, 2024
Assignee: Motional AD LLC
Inventors: Kok Seang Tan, Holger Caesar, Oscar Olof Beijbom
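Once boxes from two modalities sit in the same representation, one plausible per-pair inconsistency score is 1 - IoU: low overlap across modalities flags an informative scene. A hedged sketch; the (x1, y1, x2, y2) box format, the example boxes, and the scoring rule are assumptions, not the patent's calculation.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

camera_box = (0, 0, 4, 4)   # projected from the image detector
lidar_box  = (2, 0, 6, 4)   # projected from the point-cloud detector
inconsistency = 1 - iou(camera_box, lidar_box)
print(inconsistency)        # high value: the two modalities disagree
```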
-
Patent number: 11869265
Abstract: Provided are an object tracking system and an object tracking method.
Type: Grant
Filed: March 4, 2021
Date of Patent: January 9, 2024
Assignee: Electronics and Telecommunications Research Institute
Inventors: Sungwon Byon, Eun Jung Kwon, Hyunho Park, Won-Jae Shin, Dong Man Jang, Eui Suk Jung, Yong Tae Lee
-
Patent number: 11854210
Abstract: To enable high-speed autonomous flight of a flight object, a three-dimensional real-time observation result is generated on the basis of self-position estimation information and three-dimensional distance measurement information. A prior map corresponding to the three-dimensional real-time observation result is acquired. The three-dimensional real-time observation result and the prior map are aligned. After the alignment, the three-dimensional real-time observation result is expanded on the basis of the prior map. A flight route is set on the basis of the expanded three-dimensional real-time observation result. For a flight object such as a drone, a relatively long flight route can thus be calculated accurately at once in a global behavior plan, which enables high-speed autonomous flight.
Type: Grant
Filed: October 14, 2020
Date of Patent: December 26, 2023
Assignee: SONY GROUP CORPORATION
Inventors: Masahiko Toyoshi, Kohei Urushido, Shun Lee, Shinichiro Abe, Takuto Motoyama
-
Patent number: 11854255
Abstract: A human-object scene recognition method includes: acquiring an input RGB image and a depth image corresponding to the RGB image; detecting objects and humans in the RGB image using a segmentation classification algorithm based on a sample database; in response to detection of objects and/or humans, performing a segment detection on each of the detected objects and/or humans based on the RGB image and the depth image, and acquiring a result of the segment detection; calculating 3D bounding boxes for each of the detected objects and/or humans according to the result of the segment detection; and determining a position of each of the detected objects and/or humans according to the 3D bounding boxes.
Type: Grant
Filed: July 27, 2021
Date of Patent: December 26, 2023
Assignee: UBKANG (QINGDAO) TECHNOLOGY CO., LTD.
Inventors: Chuqiao Dong, Dan Shao, Zhen Xiu, Dejun Guo, Huan Tan
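Computing a 3D bounding box from a depth image typically means back-projecting the segmented pixels through a pinhole camera model and taking per-axis extrema. An illustrative sketch: the intrinsics (fx, fy, cx, cy), the two-pixel "segment", and the helper names are assumptions.

```python
def backproject(u, v, depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Pixel (u, v) with depth z -> camera-frame 3D point (x, y, z)."""
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

def bbox_3d(segment_pixels, depth_lookup):
    """segment_pixels: iterable of (u, v); depth_lookup: {(u, v): depth_m}.
    Returns the (min, max) corners of the axis-aligned 3D bounding box."""
    pts = [backproject(u, v, depth_lookup[(u, v)]) for u, v in segment_pixels]
    mins = tuple(min(p[i] for p in pts) for i in range(3))
    maxs = tuple(max(p[i] for p in pts) for i in range(3))
    return mins, maxs

segment = [(320, 240), (330, 250)]            # toy segment of a detection
depths = {(320, 240): 2.0, (330, 250): 2.0}   # depth in meters per pixel
mins, maxs = bbox_3d(segment, depths)
print(mins, maxs)
```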
-
Patent number: 11853352
Abstract: A method of establishing an image set for image recognition includes obtaining a single-label image set comprising an image annotated with a single label, and a multi-label image set comprising an image annotated with a plurality of labels; converting the content of each label into a corresponding word identifier according to a semantic network, to obtain a word identifier set, a converted single-label image set, and a converted multi-label image set; and constructing a hierarchical semantic structure according to the word identifier set and the semantic network. The method also includes performing label supplementation on the image in the converted single-label image set to obtain a supplemented single-label image set; performing label supplementation on the supplemented single-label image set to obtain a final supplemented image set; and establishing a target multi-label image set to train an image recognition model by using the target multi-label image set.
Type: Grant
Filed: October 16, 2020
Date of Patent: December 26, 2023
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Baoyuan Wu, Weidong Chen, Wei Liu, Yanbo Fan, Tong Zhang
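One simple reading of label supplementation over a hierarchical semantic structure is: given a child-to-parent map, supplement a single-label image with every ancestor of its label. A minimal sketch; the toy hierarchy and the `supplement_labels` helper are assumptions.

```python
# Child -> parent links of a toy hierarchical semantic structure.
PARENT = {"husky": "dog", "dog": "animal", "animal": None}

def supplement_labels(label):
    """Walk the hierarchy upward, collecting the label plus all its ancestors."""
    labels = []
    while label is not None:
        labels.append(label)
        label = PARENT.get(label)
    return labels

print(supplement_labels("husky"))  # -> ['husky', 'dog', 'animal']
```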
-
Patent number: 11830176
Abstract: The present disclosure provides a method of measuring a semiconductor device, including the following operations: obtaining a first image corresponding to a first layer in the semiconductor device; obtaining a second image corresponding to a second layer, below the first layer, in the semiconductor device, wherein the first layer includes at least one first structure and the second layer includes a plurality of second structures that are overlapped by the at least one first structure; generating a third image by combining the first image and the second image; and calculating an offset between the at least one first structure and the plurality of second structures based on the first image and the third image.
Type: Grant
Filed: September 12, 2021
Date of Patent: November 28, 2023
Assignee: NANYA TECHNOLOGY CORPORATION
Inventors: Kai Lee, Hao-Hsiang Huang, Chih-Chien Huang
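The offset calculation above can be illustrated by comparing structure centroids in the combined image. A hedged sketch: representing each structure as a set of pixel coordinates and using centroid difference as the offset are assumptions, not the patent's measurement procedure.

```python
def centroid(pixels):
    """Mean (x, y) of a set of pixel coordinates."""
    n = len(pixels)
    return (sum(x for x, _ in pixels) / n, sum(y for _, y in pixels) / n)

first_structure   = [(10, 10), (12, 10), (10, 12), (12, 12)]  # upper layer
second_structures = [(11, 11), (13, 11), (11, 13), (13, 13)]  # lower layer

cx1, cy1 = centroid(first_structure)
cx2, cy2 = centroid(second_structures)
offset = (cx2 - cx1, cy2 - cy1)
print(offset)  # -> (1.0, 1.0)
```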
-
Patent number: 11823363
Abstract: The present invention provides an infrared and visible light fusion method and belongs to the field of image processing and computer vision. The method uses a pair of binocular cameras, one infrared and one visible light, to acquire images; relates to the construction of a fusion image pyramid and a significant vision enhancement algorithm; and is an infrared and visible light fusion algorithm using multi-scale transform. The binocular cameras and an NVIDIA TX2 are used to construct a high-performance computing platform and a high-performance solving algorithm to obtain a high-quality infrared and visible light fusion image. The method constructs an image pyramid by designing a filtering template according to the different imaging principles of infrared and visible light cameras, obtains image information at different scales, performs image super-resolution and significant enhancement, and finally achieves real-time performance through GPU acceleration.
Type: Grant
Filed: March 5, 2020
Date of Patent: November 21, 2023
Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
Inventors: Risheng Liu, Xin Fan, Jinyuan Liu, Wei Zhong, Zhongxuan Luo
-
Patent number: 11810341
Abstract: A computer-implemented method of identifying filters for use in determining explainability of a trained neural network. The method comprises obtaining a dataset comprising an input image and an annotation of the input image, the annotation indicating at least one part of the input image which is relevant for inferring the classification of the input image; determining an explanation filter set by iteratively: selecting a filter of the plurality of filters; adding the filter to the explanation filter set; computing an explanation heatmap for the input image by resizing and combining an output of each filter in the explanation filter set, the explanation heatmap having the spatial resolution of the input image; and computing a similarity metric by comparing the explanation heatmap to the annotation of the input image; until the similarity metric is greater than or equal to a similarity threshold; and outputting the explanation filter set.
Type: Grant
Filed: January 8, 2021
Date of Patent: November 7, 2023
Assignee: ROBERT BOSCH GMBH
Inventor: Andres Mauricio Munoz Delgado
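The iterative loop above can be sketched with toy flattened 2x2 "filter outputs" standing in for resized CNN activations. This is illustrative only: the element-wise-sum combination, the cosine similarity metric, and the 0.95 threshold are assumptions, not the patent's choices.

```python
def combine(maps):
    """Element-wise sum of filter outputs -> explanation heatmap."""
    return [sum(m[i] for m in maps) for i in range(len(maps[0]))]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

annotation = [1, 1, 0, 0]                     # relevant region (flattened 2x2)
filters = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1]]  # toy filter outputs

explanation_set, threshold = [], 0.95
for f in filters:                             # iteratively add filters...
    explanation_set.append(f)
    heatmap = combine(explanation_set)
    if cosine(heatmap, annotation) >= threshold:
        break                                 # ...until similarity is reached
print(len(explanation_set), heatmap)
```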
-
Patent number: 11810273
Abstract: Sharpness at each focus position is calculated as an individual sharpness. Thereafter, one or M (where M is a natural number equal to or greater than 2 and smaller than N) corrective sharpness values are calculated, and image reference values are determined from the individual sharpness. A luminance value is calculated based on the image reference values and the corrective sharpness for each pixel. An all-in-focus image is generated by combining those luminance values.
Type: Grant
Filed: August 31, 2021
Date of Patent: November 7, 2023
Assignee: SCREEN HOLDINGS CO., LTD.
Inventor: Takuya Yasuda
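The simplest form of the all-in-focus combination above is to take, per pixel, the luminance from the sharpest focus position. A sketch under assumptions: a two-image focus stack with per-pixel sharpness given directly (a real system would compute it, e.g. from local contrast), and a plain argmax in place of the patent's corrective-sharpness weighting.

```python
import numpy as np

stack = np.array([[[10, 20], [30, 40]],       # luminance at focus position 0
                  [[50, 60], [70, 80]]])      # luminance at focus position 1
sharpness = np.array([[[0.9, 0.1], [0.2, 0.8]],
                      [[0.1, 0.9], [0.8, 0.2]]])

best = np.argmax(sharpness, axis=0)           # sharpest focus index per pixel
rows, cols = np.indices(best.shape)
all_in_focus = stack[best, rows, cols]        # gather per-pixel luminance
print(all_in_focus)
```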
-
Patent number: 11810275
Abstract: Temporal filtering operations may be reset for certain pixels within an image frame to reduce the contribution from previous input frames, reducing ghosting and other artifacts. The resetting reduces the contribution to, for example, zero, either immediately or within a predetermined period of time (e.g., a certain number of frames). A decision regarding whether to reset temporal filtering for a pixel of the image frame may be based on a probability assigned to that pixel. The probability can be based on rules with one or more criteria. One example factor for adjusting the probability is a confidence level regarding the temporal filtering decision for the pixel, in which the probability of a random reset of a pixel is based on that confidence level.
Type: Grant
Filed: June 21, 2021
Date of Patent: November 7, 2023
Assignee: QUALCOMM Incorporated
Inventors: Yuri Dolgin, Costia Parfenyev
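The confidence-to-probability rule above might look like the following. A hedged sketch: the linear probability rule, the "history weight" representation of previous-frame contribution, and the helper names are assumptions.

```python
import random

def reset_probability(confidence):
    """Low confidence in the temporal decision -> high chance of a reset."""
    return max(0.0, 1.0 - confidence)

def maybe_reset(history_weight, confidence, rng=random.random):
    """Return the pixel's history weight, zeroed if a random reset fires."""
    return 0.0 if rng() < reset_probability(confidence) else history_weight

# Deterministic demo: a fully confident pixel never resets,
# a zero-confidence pixel always does.
print(maybe_reset(0.8, confidence=1.0, rng=lambda: 0.5))  # -> 0.8
print(maybe_reset(0.8, confidence=0.0, rng=lambda: 0.5))  # -> 0.0
```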
-
Patent number: 11798150
Abstract: A super-resolution reconstruction preprocessing method for contrast-enhanced ultrasound images includes: acquiring an image set to be preprocessed; acquiring the grayscale fluctuation signal of a pixel point in the registered contrast-enhanced ultrasound images to be preprocessed; performing a denoising reconstruction operation on the image set to be preprocessed to obtain a reconstructed feature parameter image; and performing interpolation calculation on the reconstructed feature parameter image to obtain a sparse microbubble image. By analyzing the grayscale fluctuation signals of the collocated pixel point set in the plurality of frames of the registered contrast-enhanced ultrasound images to be preprocessed, the signal-to-noise ratio and signal-to-background ratio are improved.
Type: Grant
Filed: June 10, 2022
Date of Patent: October 24, 2023
Assignee: Nanjing Transcend Vivoscope Bio-Technology Co., LTD
Inventors: Jingyi Yin, Jue Zhang
-
Patent number: 11790505
Abstract: An information processing apparatus includes a controller configured to determine, upon detection of an impact applied to an object, whether the impact has caused damage to the object, and, when it is determined that damage has been caused, identify a cause of the damage based on a result of observation of the surrounding environment of the object at the time the impact was applied.
Type: Grant
Filed: June 15, 2021
Date of Patent: October 17, 2023
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Takayuki Oidemizu, Kazuyuki Inoue, Ryosuke Kobayashi, Yurika Tanaka, Tomokazu Maya, Satoshi Komamine