Patents Examined by Woo C Rhim
-
Patent number: 12380541
Abstract: Disclosed are a method and apparatus for training an image restoration model, an electronic device, and a computer-readable storage medium. The method for training an image restoration model includes: pre-processing training images to obtain a low-illumination image sample set (110); determining, based on low-illumination image samples in the low-illumination image sample set and the image restoration model, a weight coefficient of the image restoration model (120), wherein the image restoration model is a neural network model determined based on a U-Net network and a deep residual network; and adjusting the image restoration model according to the weight coefficient, and further training the adjusted image restoration model using the low-illumination image samples until the image restoration model restores the parameters of all the low-illumination image samples in the low-illumination image sample set to within a preset range (130).
Type: Grant
Filed: May 6, 2021
Date of Patent: August 5, 2025
Assignee: SANECHIPS TECHNOLOGY CO., LTD.
Inventors: Jing You, Hengqi Liu, Ke Xu, Dehui Kong, Jisong Ai, Xin Liu, Cong Ren
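To make the three abstract steps (110-130) concrete, here is a minimal Python/PyTorch sketch of such a training loop. The brightness-reduction pre-processing, L1 loss, optimizer, and stopping tolerance are illustrative assumptions, and the passed-in `model` stands in for the U-Net/residual network; this is not the patented implementation.

```python
import torch
import torch.nn as nn

def make_low_illumination(images, gain=0.2):
    # Step (110): pre-process training images into a low-illumination sample set
    # (a simple brightness reduction is assumed here).
    return torch.clamp(images * gain, 0.0, 1.0)

def train_restoration_model(model, images, epochs=100, preset_tolerance=0.02, lr=1e-4):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    low_illum = make_low_illumination(images)
    for _ in range(epochs):
        restored = model(low_illum)
        # Step (120): determine weight coefficients from the low-illumination samples.
        loss = loss_fn(restored, images)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()  # Step (130): adjust the model according to the weights.
        # Keep training until every restored sample falls within the preset range.
        if (restored - images).abs().amax(dim=(1, 2, 3)).max() < preset_tolerance:
            break
    return model
```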
-
Patent number: 12367545
Abstract: A foveated down sampling and correction (FDS-C) circuit for combined down sampling and correction of chromatic aberrations in images. The FDS-C circuit performs down sampling and interpolation of pixel values of a first subset of pixels of a color in a raw image using down sampling scale factors and first interpolation coefficients to generate first corrected pixel values for pixels of the color in a first corrected version of the raw image. The FDS-C circuit further performs interpolation of pixel values of a second subset of the pixels in the first corrected version using second interpolation coefficients to generate second corrected pixel values for pixels of the color in a second corrected version of the raw image. Pixels in the first subset are arranged in a first direction, pixels in the second subset are arranged in a second direction, and the down sampling scale factors vary along the first direction.
Type: Grant
Filed: August 4, 2023
Date of Patent: July 22, 2025
Assignee: APPLE INC.
Inventors: Chihsin Wu, David R. Pope, Sheng Lin, Amnon D Silverstein
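The two-pass, direction-separated idea can be sketched in Python with NumPy. In this illustrative approximation, the first pass resamples each row at non-uniform (foveated) column positions and the second pass interpolates each column at shifted row positions; the sample-position formula and shift value are assumptions, not the circuit's actual coefficients.

```python
import numpy as np

def first_pass(channel, sample_cols):
    # Pass 1: interpolate each row at non-uniform (foveated) column positions,
    # combining down sampling with correction along the first direction.
    cols = np.arange(channel.shape[1])
    return np.stack([np.interp(sample_cols, cols, row) for row in channel])

def second_pass(corrected, row_shift):
    # Pass 2: interpolate each column of the first-pass result at vertically
    # shifted positions, correcting along the second direction.
    rows = np.arange(corrected.shape[0])
    return np.stack(
        [np.interp(rows + row_shift, rows, corrected[:, c]) for c in range(corrected.shape[1])],
        axis=1,
    )

# Foveated sampling: column positions are denser near the image centre,
# i.e. the effective down sampling scale factor varies along the first direction.
channel = np.random.rand(480, 640)
x = np.linspace(-1.0, 1.0, 320)
sample_cols = ((x + x ** 3) / 2.0 + 1.0) / 2.0 * 639
corrected = second_pass(first_pass(channel, sample_cols), row_shift=0.3)
```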
-
Patent number: 12367547
Abstract: An apparatus for super resolution imaging includes a convolutional neural network (104) to receive a low resolution frame (102) and generate a high resolution illuminance component frame. The apparatus also includes a hardware scaler (106) to receive the low resolution frame (102) and generate a second high resolution chrominance component frame. The apparatus further includes a combiner (108) to combine the high resolution illuminance component frame and the high resolution chrominance component frame to generate a high resolution frame (110).
Type: Grant
Filed: February 17, 2020
Date of Patent: July 22, 2025
Assignee: Intel Corporation
Inventors: Xiaoxia Cai, Chen Wang, Huan Dou, Yi-Jen Chiu, Lidong Xu
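A hedged Python/OpenCV sketch of this split pipeline: a caller-supplied `luma_cnn` stands in for the network (104) and is assumed to return a uint8 luma plane `scale` times larger, bicubic resizing stands in for the hardware scaler (106), and a colour-space merge plays the role of the combiner (108).

```python
import cv2

def super_resolve_frame(frame_bgr, luma_cnn, scale=2):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)

    # CNN path (104): high resolution illuminance (luma) component frame.
    y_hr = luma_cnn(y)  # assumed to return a uint8 plane `scale` times larger

    # Scaler path (106): high resolution chrominance component frames.
    size = (frame_bgr.shape[1] * scale, frame_bgr.shape[0] * scale)
    cr_hr = cv2.resize(cr, size, interpolation=cv2.INTER_CUBIC)
    cb_hr = cv2.resize(cb, size, interpolation=cv2.INTER_CUBIC)

    # Combiner (108): merge the components into the high resolution frame (110).
    return cv2.cvtColor(cv2.merge([y_hr, cr_hr, cb_hr]), cv2.COLOR_YCrCb2BGR)
```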
-
Patent number: 12367611
Abstract: An audio acquisition device positioning method is provided. In the method, a first image that includes an audio acquisition device is obtained. The audio acquisition device in the first image is identified. First coordinate data of the identified audio acquisition device in the first image is obtained. First displacement data is determined according to the first coordinate data and historical coordinate data of the audio acquisition device. First coordinates of the audio acquisition device are determined according to the first displacement data.
Type: Grant
Filed: January 11, 2024
Date of Patent: July 22, 2025
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Zequn Jie, Zheng Ge, Wei Liu
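A minimal Python sketch of those positioning steps, assuming a caller-supplied `detector` that returns the device's pixel coordinates and a simple list of historical positions; the names and data layout are illustrative only.

```python
def update_device_position(frame, detector, history):
    # Identify the audio acquisition device in the image and obtain
    # its first coordinate data (pixel coordinates).
    x, y = detector(frame)
    # First displacement data: offset from the historical coordinate data.
    hx, hy = history[-1] if history else (x, y)
    dx, dy = x - hx, y - hy
    # First coordinates of the device, determined from the displacement.
    position = (hx + dx, hy + dy)
    history.append(position)
    return position, (dx, dy)
```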
-
Patent number: 12340392
Abstract: The presently disclosed subject matter aims to provide a system and method for detecting potential ad frauds by determining the color content resemblance between at least one pixel of an ad media displayed within at least one given placement on a plurality of user devices and a desired color of the at least one pixel.
Type: Grant
Filed: May 2, 2022
Date of Patent: June 24, 2025
Assignee: ANZU VIRTUAL REALITY LTD
Inventor: Michael Badichi
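The colour-resemblance check can be illustrated with a short Python sketch. The Euclidean RGB distance, the normalisation, and the 0.9 threshold are assumptions chosen for illustration, not the claimed method.

```python
import numpy as np

def color_resemblance(rendered_rgb, desired_rgb):
    # Resemblance in [0, 1]; 1.0 means the rendered pixel matches the desired colour.
    dist = np.linalg.norm(np.asarray(rendered_rgb, float) - np.asarray(desired_rgb, float))
    return 1.0 - dist / np.linalg.norm([255.0, 255.0, 255.0])

def is_potential_ad_fraud(screenshot_rgb, placement_xy, desired_rgb, threshold=0.9):
    # Flag the placement when the displayed pixel no longer resembles the ad's colour.
    x, y = placement_xy
    return color_resemblance(screenshot_rgb[y, x], desired_rgb) < threshold
```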
-
Patent number: 12333741
Abstract: No-reference (NR) visual quality assessment (VQA) of a test visual media input encoding media content is provided. The test visual media input is decomposed into multiple-channel representations. Domain knowledge is obtained by performing content analysis, distortion analysis, human visual system (HVS) modeling, and/or viewing device analysis. The multiple-channel representations are passed into deep neural networks (DNNs), producing DNN outputs. The DNN outputs are combined using domain knowledge to produce an overall quality score of the test visual media input.
Type: Grant
Filed: January 25, 2021
Date of Patent: June 17, 2025
Assignee: IMAX CORPORATION
Inventors: Zhou Wang, Jiheng Wang, Hojatollah Yeganeh, Kaiwen Ye, Ahmed Badr, Kai Zeng
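A hedged Python sketch of the scoring flow: decompose the input into channel representations, score each with its own DNN, and fuse the outputs with domain-knowledge weights. The decomposition, the per-channel models, and the weighted-average fusion are illustrative stand-ins.

```python
import numpy as np

def assess_quality(media, decompose, channel_dnns, domain_weights):
    # Decompose the test visual media input into multiple-channel representations.
    channels = decompose(media)  # e.g. luminance, chroma and temporal-difference channels
    # Pass each representation into its deep neural network.
    dnn_outputs = np.array([dnn(ch) for dnn, ch in zip(channel_dnns, channels)])
    # Combine the DNN outputs using domain-knowledge weights (content, distortion,
    # HVS and viewing-device analysis) into an overall quality score.
    weights = np.asarray(domain_weights, dtype=float)
    return float(np.dot(weights, dnn_outputs) / weights.sum())
```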
-
Patent number: 12333828
Abstract: Provided are methods for scalable and realistic camera blockage dataset generation, which can include generating synthetic images depicting a blockage on or near an imaging sensor. The synthetic images may be created by combining one or more chroma key-extracted partial blockage images with one or more background images, the combination of which can provide a scalable blockage dataset. Metadata for each synthetic image can be generated along with the synthetic image, by annotating the portion of the synthetic image represented by the chroma key-extracted partial blockage image as constituting blockage. The synthetic images can be used to increase the accuracy of machine learning models trained to identify blockage by increasing the volume of data available for such training.
Type: Grant
Filed: August 1, 2022
Date of Patent: June 17, 2025
Assignee: Motional AD LLC
Inventors: Pan Yu, You Hong Eng, James Guo Ming Fu, Jiong Yang
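A minimal Python sketch of the compositing-plus-annotation step: a chroma-keyed partial blockage image is blended onto a background, and the blended pixels are recorded as blockage metadata. The green key colour, distance threshold, and metadata field name are assumptions for illustration.

```python
import numpy as np

def composite_blockage(background_rgb, blockage_rgb, key_color=(0, 255, 0), tol=40.0):
    # Chroma key extraction: pixels far from the key colour belong to the blockage.
    diff = np.linalg.norm(blockage_rgb.astype(float) - np.asarray(key_color, float), axis=-1)
    mask = (diff > tol).astype(float)[..., None]
    # Combine the partial blockage with the background to form the synthetic image.
    synthetic = (blockage_rgb * mask + background_rgb * (1.0 - mask)).astype(background_rgb.dtype)
    # Metadata: annotate the composited region as constituting blockage.
    metadata = {"blockage_mask": mask[..., 0].astype(bool)}
    return synthetic, metadata
```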
-
Patent number: 12299870
Abstract: The image inspection apparatus includes an image score calculator and an imaging condition specifier. The image score calculator calculates scores of images, which are produced under different imaging conditions. The imaging condition specifier receives, when two or more thumbnails corresponding to two or more of the images that have a higher score are displayed on the display, selection of one from the two or more thumbnails to specify a set of imaging conditions corresponding to the thumbnail selected. The different imaging conditions include a single-shot set of conditions under which a single-shot image is produced, and a composition series of sets of conditions under which images are captured to produce a composite image from the images. The display shows a reference information display area that indicates reference information representing the single-shot set or composition series of sets in response to the displaying of the two or more thumbnails.
Type: Grant
Filed: March 4, 2022
Date of Patent: May 13, 2025
Assignee: KEYENCE CORPORATION
Inventors: Yuichiro Hama, Keisuke Fukuta, Nobuyuki Kurihara
-
Patent number: 12288365
Abstract: Systems and methods are provided for obtaining a media, the media including an image, audio, video, or combination thereof. An input may be received regarding one or more features or frames of the media to be maintained in or removed from the media. One or more criteria of a lossy compression technique may be inferred from the received input using a machine learning model. The inferred criteria of the lossy compression technique may be applied to the media.
Type: Grant
Filed: December 17, 2021
Date of Patent: April 29, 2025
Assignee: Palantir Technologies Inc.
Inventor: Peter Wilczynski
-
Patent number: 12288370
Abstract: A method of operating a wearable electronic device includes: recognizing a first gesture of a hand of a user for setting a region of interest (ROI) corresponding to a view of the user in an image frame corresponding to a view of a camera; generating a virtual display for projecting the ROI based on whether the first gesture is recognized; extracting the ROI from the image frame; recognizing a second gesture of the hand for adjusting a size of the ROI; and adjusting the size of the ROI and projecting the ROI of the adjusted size onto the virtual display, based on whether the second gesture is recognized.
Type: Grant
Filed: May 3, 2022
Date of Patent: April 29, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventor: Paul Oh
-
Patent number: 12277672
Abstract: The present disclosure proposes the use of a dual discriminator network that comprises a temporal discriminator network for discriminating based on temporal features of a series of images and a spatial discriminator network for discriminating based on spatial features of individual images. The training methods described herein provide improvements in computational efficiency. This is achieved by applying the spatial discriminator network to a set of one or more images that have reduced temporal resolution and applying the temporal discriminator network to a set of images that have reduced spatial resolution. This allows each of the discriminator networks to be applied more efficiently in order to produce a discriminator score for use in training the generator, whilst maintaining accuracy of the discriminator network. In addition, this allows a generator network to be trained to more accurately generate sequences of images, through the use of the improved discriminator.
Type: Grant
Filed: May 22, 2020
Date of Patent: April 15, 2025
Assignee: DeepMind Technologies Limited
Inventors: Aidan Clark, Jeffrey Donahue, Karen Simonyan
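The efficiency trick reads naturally as code. The PyTorch sketch below is an illustration under assumptions: the spatial discriminator sees only `k_frames` randomly sampled full-resolution frames (reduced temporal resolution), the temporal discriminator sees the whole clip downsampled by `down` (reduced spatial resolution), the frame height and width are assumed divisible by `down`, and the score combination is a simple sum rather than the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def dual_discriminator_score(video, spatial_disc, temporal_disc, k_frames=2, down=4):
    # video: (batch, time, channels, height, width)
    b, t, c, h, w = video.shape
    # Spatial discriminator: a few full-resolution frames (reduced temporal resolution).
    idx = torch.randint(0, t, (k_frames,))
    spatial_score = spatial_disc(video[:, idx].reshape(b * k_frames, c, h, w)).mean()
    # Temporal discriminator: the whole clip, spatially downsampled (reduced spatial resolution).
    small = F.interpolate(video.reshape(b * t, c, h, w), scale_factor=1.0 / down)
    temporal_score = temporal_disc(small.reshape(b, t, c, h // down, w // down)).mean()
    return spatial_score + temporal_score
```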
-
Patent number: 12266169
Abstract: Automated real-time aerial change assessment of targets is provided. An aerial image of a target area is recorded during a flyover and a target detected in the aerial image. A sequence of images of the target area is recorded during a subsequent flyover. The system determines a target detection probability according to confidence scores of the sequence of images and determines a change status of the target. Responsive to a target change, a percentage of change is determined according to image feature matching between the first aerial image and each of the images from the second flyover. Target detection probability and percentage of change are combined as statistically independent events to determine a probability of change. The probability of change and percentage of change for each image in the sequence are output in real time, and a final change assessment is output when the aircraft exits the target area.
Type: Grant
Filed: July 6, 2022
Date of Patent: April 1, 2025
Assignee: The Boeing Company
Inventors: Yan Yang, Paul Foster
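The combination of the two quantities as statistically independent events can be written out directly. In this hedged sketch, per-image confidence scores are treated as independent detection probabilities and aggregated as the probability of at least one detection; the exact aggregation used in the patent may differ.

```python
def detection_probability(confidence_scores):
    # Probability that the target is detected in at least one image of the sequence,
    # treating per-image confidence scores as independent detection probabilities.
    p_miss = 1.0
    for score in confidence_scores:
        p_miss *= (1.0 - score)
    return 1.0 - p_miss

def probability_of_change(confidence_scores, percent_change):
    # Independent events: P(change) = P(target detected) x (percentage of change).
    return detection_probability(confidence_scores) * (percent_change / 100.0)

# Example: three flyover images with confidences 0.6, 0.7 and 0.5, and 40 % of
# matched features changed, give P(change) = 0.94 * 0.4 = 0.376.
p = probability_of_change([0.6, 0.7, 0.5], percent_change=40.0)
```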
-
Patent number: 12243273
Abstract: In one embodiment, a method includes initializing latent codes respectively associated with times associated with frames in a training video of a scene captured by a camera. For each of the frames, a system (1) generates rendered pixel values for a set of pixels in the frame by querying NeRF using the latent code associated with the frame, a camera viewpoint associated with the frame, and ray directions associated with the set of pixels, and (2) updates the latent code associated with the frame and the NeRF based on comparisons between the rendered pixel values and original pixel values for the set of pixels. Once trained, the system renders output frames for an output video of the scene, wherein each output frame is rendered by querying the updated NeRF using one of the updated latent codes corresponding to a desired time associated with the output frame.
Type: Grant
Filed: January 7, 2022
Date of Patent: March 4, 2025
Assignee: META PLATFORMS TECHNOLOGIES, LLC
Inventors: Zhaoyang Lv, Miroslava Slavcheva, Tianye Li, Michael Zollhoefer, Simon Gareth Green, Tanner Schmidt, Michael Goesele, Steven John Lovegrove, Christoph Lassner, Changil Kim
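A hedged PyTorch sketch of the per-frame latent-code training loop. It assumes each frame is stored as a flattened tensor of pixel values, that `sample_rays` maps a camera and pixel indices to ray directions, and that `nerf(latent, camera, rays)` returns rendered pixel values; the latent size, optimizer, and loss are illustrative choices, not the patented system.

```python
import torch

def train_dynamic_nerf(nerf, frames, cameras, sample_rays, steps=1000, n_rays=1024):
    # One learnable latent code per frame time, initialised to zero.
    latents = torch.nn.Parameter(torch.zeros(len(frames), 64))
    optim = torch.optim.Adam(list(nerf.parameters()) + [latents], lr=5e-4)
    for _ in range(steps):
        i = torch.randint(len(frames), (1,)).item()
        pix = torch.randint(frames[i].shape[0], (n_rays,))   # random subset of pixels
        ray_dirs = sample_rays(cameras[i], pix)               # ray directions for those pixels
        # (1) Query NeRF with the frame's latent code, camera viewpoint and ray directions.
        rendered = nerf(latents[i], cameras[i], ray_dirs)
        # (2) Compare rendered pixel values with original pixel values and update
        # both the NeRF weights and the frame's latent code.
        loss = ((rendered - frames[i][pix]) ** 2).mean()
        optim.zero_grad()
        loss.backward()
        optim.step()
    return nerf, latents
```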
-
Patent number: 12236608
Abstract: One embodiment provides a method comprising generating, via an edge detection algorithm, a first edge map based on a first frame of an input content comprising a sequence of frames. The method further comprises generating, via the edge detection algorithm, a second edge map based on a second frame of the input content. The first frame precedes the second frame in the sequence of frames. The method further comprises determining a difference between the first edge map and the second edge map, determining a metric indicative of an estimated amount of judder present in the input content based on the difference, and dynamically adjusting a frame rate of the input content based on the metric. The input content is displayed on a display device at the adjusted frame rate.
Type: Grant
Filed: September 22, 2021
Date of Patent: February 25, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Berkay Kanberoglu, Hunsop Hong, Seongnam Oh
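A minimal Python/OpenCV sketch of the edge-map-difference idea, assuming 8-bit grayscale frames. Canny stands in for the unspecified edge detection algorithm, the mean disagreement between edge maps stands in for the judder metric, and the two-level frame-rate policy is purely illustrative.

```python
import cv2
import numpy as np

def judder_metric(prev_gray, next_gray):
    # Edge maps of two consecutive frames, produced by an edge detection algorithm.
    first_edges = cv2.Canny(prev_gray, 100, 200)
    second_edges = cv2.Canny(next_gray, 100, 200)
    # The difference between the edge maps drives the estimated amount of judder.
    return float(np.mean(first_edges != second_edges))

def adjust_frame_rate(metric, base_fps=24, max_fps=60, threshold=0.05):
    # Illustrative policy: raise the frame rate when the estimated judder is high.
    return max_fps if metric > threshold else base_fps
```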
-
Patent number: 12225176
Abstract: A Free Viewpoint Video (FVV) generation and interaction method based on a deep Convolutional Neural Network (CNN) includes the steps of: acquiring multi-viewpoint data of a target scene by a synchronous shooting system with a multi-camera array arranged accordingly, to obtain groups of synchronous video frame sequences from a plurality of viewpoints, and rectifying baselines of the sequences at pixel level in batches; extracting, by encoding and decoding network structures, features of each group of viewpoint images input into a designed and trained deep CNN model, to obtain deep feature information of the scene, and combining the information with the input images to generate a virtual viewpoint image between each group of adjacent physical viewpoints at every moment; and synthesizing all viewpoints into frames of the FVV based on time and spatial position of viewpoints by stitching matrices. The method does not require camera rectification or depth image calculation.
Type: Grant
Filed: October 28, 2020
Date of Patent: February 11, 2025
Assignee: NANJING UNIVERSITY
Inventors: Xun Cao, Zhihao Huang, Yanru Wang
-
Patent number: 12211191
Abstract: An inspection method includes receiving a plurality of training images and an image of a target object obtained from inspection of the target object. The method further includes generating, by one or more training codes, a plurality of inference codes. The one or more training codes are configured to receive the plurality of training images as input and output the plurality of inference codes. The one or more training codes and the plurality of inference codes include computer executable instructions. The method further includes selecting one or more inference codes from the plurality of inference codes based on a user input and/or one or more characteristics of at least a portion of the received plurality of training images. The method also includes inspecting the received image using the one or more inference codes of the plurality of inference codes.
Type: Grant
Filed: December 16, 2020
Date of Patent: January 28, 2025
Assignee: Baker Hughes Holdings LLC
Inventors: Xiaoqing Ge, Dustin Michael Sharber, Jeffrey Potts, Braden Starcher
-
Patent number: 12211248
Abstract: A computing system including an edge computing device. The edge computing device may include an edge device processor configured to receive edge device contextual data including computing resource availability data. Based at least in part on the edge device contextual data, the edge device processor may select a processing stage machine learning model of a plurality of processing stage machine learning models and construct a runtime processing pipeline of one or more runtime processing stages including the processing stage machine learning model. The edge device processor may receive a runtime input, and, at the runtime processing pipeline, generate a runtime output based at least in part on the runtime input. The edge device processor may generate runtime pipeline metadata that indicates the one or more runtime processing stages included in the runtime processing pipeline. The edge device processor may output the runtime output and the runtime pipeline metadata.
Type: Grant
Filed: January 14, 2022
Date of Patent: January 28, 2025
Assignee: Microsoft Technology Licensing, LLC
Inventors: Shadi Abdollahian Noghabi, Ranveer Chandra, Krishna Kant Chintalapudi
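A hedged Python sketch of the select-build-run-annotate flow: the contextual-data field names, the memory/accuracy selection rule, and the two-stage pipeline are assumptions for illustration, not the claimed system.

```python
def build_runtime_pipeline(models, contextual_data):
    # Select a processing stage model that fits the available computing resources.
    available = contextual_data["available_memory_mb"]
    fitting = [m for m in models if m["memory_mb"] <= available]
    chosen = max(fitting, key=lambda m: m["accuracy"])
    # Construct a runtime pipeline of one or more runtime processing stages.
    return [("preprocess", lambda x: x), ("model:" + chosen["name"], chosen["fn"])]

def run_pipeline(stages, runtime_input):
    output = runtime_input
    for name, stage in stages:
        output = stage(output)
    # Runtime pipeline metadata: the stages that produced this output.
    metadata = {"stages": [name for name, _ in stages]}
    return output, metadata
```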
-
Patent number: 12198311
Abstract: A method and an electronic device for managing artifacts of an image include: receiving an input image and extracting multiple features from the input image. The multiple features include a texture of the input image, a color composition of the input image, and edges in the input image. Further, the method includes determining a region of interest (RoI) in the input image including an artifact based on the features, and generating an intermediate output image by removing the artifact using multiple generative adversarial networks (GANs). Further, the method includes generating a binary mask using the intermediate output image, the input image, an image illustrating edges in the input image, and an image illustrating edges in the intermediate output image, and obtaining a final output image by applying the generated binary mask to the input image.
Type: Grant
Filed: December 7, 2021
Date of Patent: January 14, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Ashish Chopra, Bhanu Mahto, Sumit Kumar, Vinay Sangwan
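A minimal Python/OpenCV sketch of the masking step: edges of the input and of the GAN-cleaned intermediate image are compared inside the artifact RoI to form a binary mask, and the mask controls which pixels of the input are replaced. The caller-supplied `gan_cleanup` (assumed to return a same-size uint8 image), the Canny thresholds, and the edge-difference mask rule are illustrative assumptions.

```python
import cv2
import numpy as np

def remove_artifact(input_bgr, roi, gan_cleanup):
    x, y, w, h = roi                                # region of interest holding the artifact
    intermediate = gan_cleanup(input_bgr)           # GAN-based intermediate output image
    edges_in = cv2.Canny(cv2.cvtColor(input_bgr, cv2.COLOR_BGR2GRAY), 100, 200)
    edges_out = cv2.Canny(cv2.cvtColor(intermediate, cv2.COLOR_BGR2GRAY), 100, 200)
    # Binary mask: RoI pixels whose edge content changed after artifact removal.
    mask = np.zeros(edges_in.shape, dtype=bool)
    mask[y:y + h, x:x + w] = edges_in[y:y + h, x:x + w] != edges_out[y:y + h, x:x + w]
    # Final output: apply the binary mask so only artifact pixels are replaced.
    output = input_bgr.copy()
    output[mask] = intermediate[mask]
    return output
```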
-
Patent number: 12190623
Abstract: Examples of the present disclosure relate to systems and methods for providing more accurate skin assessments. These assessments can be used to produce custom recommendations that are tailored for the user's skin. Additionally, these example methods address privacy concerns of users, as the imaging device's small focal distance, limited depth of field, and narrow field of view mean it can only image skin, and not the full face or body.
Type: Grant
Filed: December 30, 2020
Date of Patent: January 7, 2025
Assignee: L'Oreal
Inventor: Kyle Yeates
-
Patent number: 12184861
Abstract: An encoding device includes: an association processing unit that associates first encoded data, which is encoded data of an original image, with second encoded data, which is encoded data of a decoded image obtained by decoding the encoded data of the original image; and an encoding unit that encodes, based on the result of associating the first encoded data with the second encoded data, a target image which is an image to be encoded.
Type: Grant
Filed: May 10, 2019
Date of Patent: December 31, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Shinobu Kudo, Shota Orihashi, Ryuichi Tanida, Atsushi Shimizu