Patents Examined by Andrae S Allison
-
Patent number: 12211214
Abstract: The present disclosure provides an image processing circuit including a neural network processor, a background processing circuit and a blending circuit. The neural network processor is configured to process input image data to determine whether the input image data has a predetermined object so as to generate a heat map. The background processing circuit blurs the input image data to generate blurred image data. The blending circuit blends the input image data and the blurred image data according to the heat map to generate output image data.
Type: Grant
Filed: April 22, 2022
Date of Patent: January 28, 2025
Assignee: SIGMASTAR TECHNOLOGY LTD.
Inventors: Jia-Tse Jhang, Yu-Hsiang Lin, Chia-Jen Mo, Lin-Chung Tsai
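
The abstract leaves the circuit internals unstated; below is a minimal NumPy sketch of the blending stage, assuming a single-channel heat map normalized to [0, 1] and a naive box blur as a stand-in for the background processing circuit (both are illustrative assumptions, not details from the patent).

```python
import numpy as np

def box_blur(image: np.ndarray, radius: int = 5) -> np.ndarray:
    """Naive box blur standing in for the background processing circuit."""
    k = 2 * radius + 1
    padded = np.pad(image, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    out = np.zeros_like(image, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

def blend_with_heat_map(image: np.ndarray, heat_map: np.ndarray) -> np.ndarray:
    """Blend the sharp input with its blurred version: high-heat (detected)
    regions stay sharp, the rest takes the blurred background."""
    blurred = box_blur(image.astype(np.float64))
    alpha = heat_map[..., None]              # broadcast H x W -> H x W x 1
    output = alpha * image + (1.0 - alpha) * blurred
    return np.clip(output, 0, 255).astype(np.uint8)

# Example: keep a bright square "object" sharp, blur everything else.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
heat = np.zeros((64, 64))
heat[16:48, 16:48] = 1.0                     # pretend the detector fired here
result = blend_with_heat_map(frame, heat)
```

Because the heat map weights the sharp input, the detected object stays crisp while the rest of the frame takes on the blurred background.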
-
Patent number: 12210587
Abstract: A method for training a super-resolution network may include obtaining a low resolution image; generating, using a first machine learning model, a first high resolution image based on the low resolution image; generating, using a second machine learning model, a second high resolution image based on the first high resolution image and an unpaired dataset of high resolution images; obtaining a training data set using the low resolution image and the second high resolution image; and training the super-resolution network using the training data set.
Type: Grant
Filed: October 27, 2021
Date of Patent: January 28, 2025
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Aleksai Levinshtein, Xinyu Sun, Haicheng Wang, Vineeth Subrahmanya Bhaskara, Stavros Tsogkas, Allan Jepson
-
Patent number: 12211280
Abstract: Example implementations include a method, apparatus and computer-readable medium of computer vision configured for person classification, comprising receiving, during a first period of time, a plurality of image frames of an environment, identifying images of persons from each frame of the plurality of image frames, and determining a respective vector representation of each of the images. The implementations include generating a probability distribution indicative of a likelihood of a particular vector representation appearing in the plurality of image frames and identifying an associate vector representation by sampling the probability distribution using a probability model. The implementations include determining an input vector representation of an input image identified in an image frame depicting a person and received during a second period of time.
Type: Grant
Filed: March 7, 2022
Date of Patent: January 28, 2025
Assignee: Sensormatic Electronics, LLC
Inventors: Michael C. Stewart, Karthik Jayaraman
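
One way to picture the distribution step is as an empirical histogram over a set of identity prototypes that is then sampled to pick an associate representation. The NumPy sketch below illustrates that reading; the prototype set, nearest-prototype assignment, and cosine comparison at query time are assumptions for illustration, not the patented method.

```python
import numpy as np

def build_distribution(embeddings: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Assign each person embedding to its nearest prototype and return the
    empirical probability of each prototype appearing across the frames."""
    d = np.linalg.norm(embeddings[:, None, :] - prototypes[None, :, :], axis=-1)
    assignments = d.argmin(axis=1)
    counts = np.bincount(assignments, minlength=len(prototypes)).astype(float)
    return counts / counts.sum()

def sample_associate(prototypes: np.ndarray, probs: np.ndarray,
                     rng: np.random.Generator) -> np.ndarray:
    """Sample an associate representation according to how often it was seen."""
    idx = rng.choice(len(prototypes), p=probs)
    return prototypes[idx]

rng = np.random.default_rng(1)
protos = rng.normal(size=(8, 128))       # hypothetical identity prototypes
frames = rng.normal(size=(500, 128))     # embeddings collected over the first period
p = build_distribution(frames, protos)
candidate = sample_associate(protos, p, rng)

# Second period: compare an input embedding against the sampled representation.
query = rng.normal(size=128)
similarity = query @ candidate / (np.linalg.norm(query) * np.linalg.norm(candidate))
```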
-
Patent number: 12190631
Abstract: The information processing apparatus includes: an acquisition unit for acquiring an image including a face image of a person; and a selection unit for selecting, from among the plurality of images, an image in which a part other than the face of a target person is captured, by using the position of the face image of a person other than the target person in the acquired image.
Type: Grant
Filed: January 21, 2020
Date of Patent: January 7, 2025
Assignee: NEC CORPORATION
Inventor: Daiki Takahashi
-
Patent number: 12190484
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly generating modified digital images utilizing a guided inpainting approach that implements a patch match model informed by a deep visual guide. In particular, the disclosed systems can utilize a visual guide algorithm to automatically generate guidance maps to help identify replacement pixels for inpainting regions of digital images utilizing a patch match model. For example, the disclosed systems can generate guidance maps in the form of structure maps, depth maps, or segmentation maps that respectively indicate the structure, depth, or segmentation of different portions of digital images. Additionally, the disclosed systems can implement a patch match model to identify replacement pixels for filling regions of digital images according to the structure, depth, and/or segmentation of the digital images.
Type: Grant
Filed: March 15, 2021
Date of Patent: January 7, 2025
Assignee: Adobe Inc.
Inventors: Sohrab Amirghodsi, Lingzhi Zhang, Zhe Lin, Connelly Barnes, Elya Shechtman
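
As a rough illustration of how a deep visual guide can inform patch selection, the sketch below scores candidate source patches by a weighted mix of colour difference and guidance-map difference and runs a brute-force search. A real patch match model uses randomized nearest-neighbour-field updates rather than exhaustive search, and the equal weighting and segmentation-style guide here are assumptions.

```python
import numpy as np

def guided_patch_distance(src_patch: np.ndarray, tgt_patch: np.ndarray,
                          src_guide: np.ndarray, tgt_guide: np.ndarray,
                          guide_weight: float = 0.5) -> float:
    """Mix colour similarity with agreement of a guidance map (structure, depth or
    segmentation) so replacement pixels respect the scene layout."""
    colour = float(np.mean((src_patch - tgt_patch) ** 2))
    guide = float(np.mean((src_guide - tgt_guide) ** 2))
    return (1.0 - guide_weight) * colour + guide_weight * guide

def best_source_patch(image: np.ndarray, guide: np.ndarray,
                      tgt_y: int, tgt_x: int, patch: int = 7, stride: int = 4):
    """Brute-force stand-in for the patch-match search: find the source patch whose
    colour and guidance values best match the region around (tgt_y, tgt_x)."""
    h, w = image.shape[:2]
    tgt_img = image[tgt_y:tgt_y + patch, tgt_x:tgt_x + patch]
    tgt_gd = guide[tgt_y:tgt_y + patch, tgt_x:tgt_x + patch]
    best, best_pos = np.inf, (0, 0)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            if abs(y - tgt_y) < patch and abs(x - tgt_x) < patch:
                continue                              # skip patches overlapping the hole
            d = guided_patch_distance(image[y:y + patch, x:x + patch], tgt_img,
                                      guide[y:y + patch, x:x + patch], tgt_gd)
            if d < best:
                best, best_pos = d, (y, x)
    return best_pos

rng = np.random.default_rng(0)
img = rng.random((128, 128, 3))
segmentation_guide = (np.arange(128)[:, None] < 64).astype(float) * np.ones((128, 128))
y, x = best_source_patch(img, segmentation_guide, tgt_y=30, tgt_x=30)
```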
-
Patent number: 12190480
Abstract: An image synthesis device according to a disclosed embodiment has one or more processors and a memory which stores one or more programs executed by the one or more processors. The image synthesis device includes a first artificial neural network model provided to learn each of a first task of using a damaged image as an input to output a restored image and a second task of using an original image as an input to output a reconstructed image, and a second artificial neural network model trained to use the reconstructed image output from the first artificial neural network model as an input and improve the image quality of the reconstructed image.
Type: Grant
Filed: June 8, 2021
Date of Patent: January 7, 2025
Assignee: DEEPBRAIN AI INC.
Inventors: Gyeong Su Chae, Guem Buel Hwang
-
Patent number: 12182963
Abstract: An image processing apparatus processes a color filter mosaic, CFM, image of a scene into a final image of the scene. The image processing apparatus includes processing circuitry configured to implement a neural network. The neural network is configured to process the CFM image into an enhanced CFM image. The processing circuitry is further configured to transform the enhanced CFM image into the final image.
Type: Grant
Filed: August 27, 2021
Date of Patent: December 31, 2024
Assignee: Huawei Technologies Co., Ltd.
Inventors: Nickolay Dmitrievich Egorov, Elena Alexandrovna Alshina, Marat Ravilevich Gilmutdinov, Dmitry Vadimovich Novikov, Anton Igorevich Veselov, Kirill Aleksandrovich Malakhov
-
Patent number: 12175639
Abstract: A video quality improvement method may comprise: inputting a structure feature map, converted from a current target frame by a first convolution layer, to a first multi-task unit and a second multi-task unit, which is connected to an output side of the first multi-task unit, among the plurality of multi-task units; inputting, to the first multi-task unit, a main input obtained by adding the structure feature map to a feature space that a second convolution layer converts from the concatenation, in the channel dimension, of a previous target frame and a correction frame of the previous frame; and inputting the current target frame to an Nth multi-task unit connected to an end of the output side of the second multi-task unit, wherein the Nth multi-task unit outputs a correction frame of the current target frame, and machine learning of the video quality improvement model is performed using an objective function calculated through the correction frame of the current target frame.
Type: Grant
Filed: October 8, 2021
Date of Patent: December 24, 2024
Assignee: POSTECH RESEARCH AND BUSINESS DEVELOPMENT FOUNDATION
Inventors: Seung Yong Lee, Jun Yong Lee, Hyeong Seok Son, Sung Hyun Cho
-
Patent number: 12175360
Abstract: As provided herein, a domain model, corresponding to a domain of an image, may be merged with a pre-trained fundamental model to generate a trained fundamental model. The trained fundamental model may comprise a feature description of the image converted into a binary code. Responsive to a user submitting a search query, a coarse image search may be performed, using a search query binary code derived from the search query, to identify a candidate group, comprising one or more images, having binary codes corresponding to the search query binary code. A fine image search may be performed on the candidate group utilizing a search query feature description derived from the search query. The fine image search may be used to rank images within the candidate group based upon a similarity between the search query feature description and feature descriptions of the one or more images within the candidate group.
Type: Grant
Filed: July 21, 2020
Date of Patent: December 24, 2024
Assignee: Verizon Patent and Licensing Inc.
Inventor: JenHao Hsiao
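
The coarse-then-fine flow can be sketched with simple stand-ins: sign hashing for the binary codes, Hamming distance for the coarse pass, and cosine similarity for the fine ranking. None of these specific choices come from the patent; they only illustrate the two-stage structure the abstract describes.

```python
import numpy as np

def to_binary_code(features: np.ndarray) -> np.ndarray:
    """Binarise a feature description by thresholding at zero (sign hashing)."""
    return (features > 0).astype(np.uint8)

def coarse_search(query_code: np.ndarray, codes: np.ndarray, k: int = 10) -> np.ndarray:
    """Return indices of the k images whose binary codes are closest in Hamming distance."""
    hamming = (codes != query_code).sum(axis=1)
    return np.argsort(hamming)[:k]

def fine_search(query_feat: np.ndarray, feats: np.ndarray,
                candidates: np.ndarray) -> np.ndarray:
    """Rank the candidate group by cosine similarity of the full feature descriptions."""
    cand = feats[candidates]
    sims = cand @ query_feat / (np.linalg.norm(cand, axis=1) * np.linalg.norm(query_feat))
    return candidates[np.argsort(-sims)]

rng = np.random.default_rng(0)
gallery_feats = rng.normal(size=(1000, 256))   # stand-in for model feature descriptions
gallery_codes = to_binary_code(gallery_feats)

query_feats = rng.normal(size=256)
query_code = to_binary_code(query_feats)

candidate_group = coarse_search(query_code, gallery_codes, k=20)
ranked = fine_search(query_feats, gallery_feats, candidate_group)
```

Scanning Hamming distances over compact codes keeps the first pass cheap; only the small candidate group pays the cost of full feature comparison.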
-
Patent number: 12169620
Abstract: A method, apparatus and system for video display and a camera are disclosed. The camera includes one wide-field lens assembly and a wide-field sensor corresponding to the wide-field lens assembly; at least one narrow-field lens assembly and narrow-field sensor corresponding to the narrow-field lens assembly, wherein an angle of view of the wide-field lens assembly is greater than an angle of view of the narrow-field lens assembly, and for a same target, a definition of the wide-field sensor is smaller than that of the narrow-field sensor; and a processor configured for performing human body analysis on the wide-field image and performing face analysis, head and shoulder analysis or human body analysis on at least one frame of narrow-field image. The methods, apparatuses and systems can reduce the workload of installing and adjusting the cameras during monitoring, the performance requirements for the server, and monitoring costs.
Type: Grant
Filed: January 16, 2020
Date of Patent: December 17, 2024
Assignee: HANGZHOU HIKVISION DIGITAL TECHNOLOGY CO., LTD.
Inventor: Wenwei Li
-
Patent number: 12165294
Abstract: A computer-implemented method for high resolution image inpainting comprising the following steps: providing a high resolution input image, providing at least one inpainting mask, selecting at least one rectangular sub-region of the input image and at least one aligned rectangular sub-region of the inpainting mask such that the rectangular sub-region of the input image encompasses at least one set of pixels to be removed and synthesized, the at least one sub-region of the input image and its corresponding aligned sub-region of the inpainting mask having identical minimum possible size and a position for which a calculated information gain does not decrease, processing the sub-region of the input image and its corresponding aligned sub-region of the inpainting mask by a machine learning model, generating an output high resolution image comprising the inpainted sub-region.
Type: Grant
Filed: March 24, 2020
Date of Patent: December 10, 2024
Assignee: TCL RESEARCH EUROPE SP. Z O. O.
Inventors: Michal Kudelski, Tomasz Latkowski, Filip Skurniak, Lukasz Sienkiewicz, Piotr Frankowski, Bartosz Biskupski
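
A minimal sketch of the sub-region selection step, assuming the "minimum possible size" is a fixed per-axis value and that the aligned mask crop simply reuses the same coordinates; the information-gain criterion mentioned in the abstract is not modelled here.

```python
import numpy as np

def expand_range(a0: int, a1: int, min_len: int, limit: int):
    """Grow the half-open interval [a0, a1) to at least min_len, clipped to [0, limit)."""
    length = min(max(min_len, a1 - a0), limit)
    start = max(0, min(a0 - (length - (a1 - a0)) // 2, limit - length))
    return start, start + length

def select_subregion(mask: np.ndarray, min_size: int = 256):
    """Return (y0, x0, y1, x1) of an aligned rectangle enclosing every pixel to be
    removed and synthesized, no smaller than min_size per axis where the image allows."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        raise ValueError("inpainting mask selects no pixels")
    h, w = mask.shape
    y0, y1 = expand_range(int(ys.min()), int(ys.max()) + 1, min_size, h)
    x0, x1 = expand_range(int(xs.min()), int(xs.max()) + 1, min_size, w)
    return y0, x0, y1, x1

# Example: a small hole near the top-left corner of a 4K-sized mask.
mask = np.zeros((2160, 3840), dtype=bool)
mask[100:180, 200:260] = True
y0, x0, y1, x1 = select_subregion(mask, min_size=512)
crop_mask = mask[y0:y1, x0:x1]   # fed to the model together with image[y0:y1, x0:x1]
```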
-
Patent number: 12158410
Abstract: The present disclosure relates to a real-time quantification method of cell viability through a supravital dye uptake using a lens-free imaging system. The method includes a step of incubating a sample cell in a cell culture medium, steps of detecting light penetrating the cell culture medium and identifying a boundary region of the sample cell at a preset time interval based on the detected light, a step of staining the incubated sample cell with the supravital dye, a step of detecting intensity of light penetrating the cell culture medium at a preset time interval, a step of calculating absorbance of the sample cell included in the cell culture medium at a preset time interval based on the boundary region and the detected intensity of light and a step of analyzing a viability of the sample cell based on the calculated absorbance.
Type: Grant
Filed: November 30, 2020
Date of Patent: December 3, 2024
Assignees: SOL INC., Government of the United States of America, as Represented by the Secretary of Commerce
Inventors: Jong Muk Lee, Darwin R. Reyes-Hernandez, Brian J. Nablo
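
Assuming the absorbance calculation follows the usual Beer-Lambert form A = -log10(I / I0) with the blank medium as reference (the abstract does not state the formula), the per-interval quantification could look like this:

```python
import numpy as np

def absorbance(intensity: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Per-pixel absorbance A = -log10(I / I0) of light passing through the medium."""
    return -np.log10(np.clip(intensity / reference, 1e-6, None))

def mean_cell_absorbance(intensity, reference, cell_mask) -> float:
    """Average absorbance restricted to the detected cell boundary region."""
    return float(absorbance(intensity, reference)[cell_mask].mean())

# Example time series: absorbance of a dye-stained cell region sampled at intervals.
reference = np.full((128, 128), 200.0)            # blank-medium intensity
cell_mask = np.zeros((128, 128), dtype=bool)
cell_mask[50:80, 50:80] = True                    # boundary region from the lens-free image

readings = []
for t in range(5):
    # Dye uptake lowers the transmitted intensity inside the cell region over time.
    intensity = reference - t * 10.0 * cell_mask
    readings.append(mean_cell_absorbance(intensity, reference, cell_mask))
```

A viability decision would then compare the absorbance trend for each cell against a calibration threshold, which the abstract leaves unspecified.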
-
Patent number: 12154302
Abstract: Disclosed herein are a method, an apparatus and a storage medium for image encoding/decoding using a binary mask. An encoding method includes generating a latent vector using an input image, generating a selected latent vector component set using a binary mask, and generating a main bitstream by performing entropy encoding on the selected latent vector component set. A decoding method includes generating a selected latent vector component set including one or more selected latent vector components by performing entropy decoding on a main bitstream and generating the latent vector in which the one or more selected latent vector components are relocated by relocating the selected latent vector component set in the latent vector.
Type: Grant
Filed: December 3, 2021
Date of Patent: November 26, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Joo-Young Lee, Se-Yoon Jeong, Hyoung-Jin Kwon, Dong-Hyun Kim, Youn-Hee Kim, Jong-Ho Kim, Ji-Hoon Do, Jin-Soo Choi, Tae-Jin Lee
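
A compact NumPy sketch of the mask-driven selection and relocation steps that bracket the entropy coder; the constant fill value for unselected components and the C x H x W latent layout are assumptions for illustration.

```python
import numpy as np

def select_components(latent: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Encoder side: keep only the latent components the binary mask marks as selected."""
    return latent[mask.astype(bool)]

def relocate_components(selected: np.ndarray, mask: np.ndarray,
                        fill_value: float = 0.0) -> np.ndarray:
    """Decoder side: put the decoded components back at their original positions,
    filling unselected positions with a constant."""
    latent = np.full(mask.shape, fill_value, dtype=selected.dtype)
    latent[mask.astype(bool)] = selected
    return latent

rng = np.random.default_rng(0)
latent = rng.normal(size=(16, 8, 8)).astype(np.float32)   # C x H x W latent from an encoder
mask = rng.random((16, 8, 8)) > 0.5                        # hypothetical binary mask

selected = select_components(latent, mask)   # this set would be entropy-coded into the bitstream
restored = relocate_components(selected, mask)
assert np.allclose(restored[mask], latent[mask])
```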
-
Patent number: 12154256
Abstract: A discriminator of a training model is trained to discriminate between original training images without artificial subsurface data and modified training images with artificial subsurface data. A generator of the training model is trained to: replace portions of original training images with the artificial subsurface data to form the modified training images, and prevent the discriminator from discriminating between the original training images and the modified training images.
Type: Grant
Filed: October 26, 2023
Date of Patent: November 26, 2024
Assignee: SCHLUMBERGER TECHNOLOGY CORPORATION
Inventors: Kishore Mulchandani, Abhishek Gupta
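
This is an adversarial setup, so a small PyTorch sketch can show the shape of the training loop: the generator infills masked portions of original images, and the discriminator learns to tell originals from the modified results. The tiny networks, mask scheme, and losses below are placeholders, not the patented model.

```python
import torch
import torch.nn as nn

# Tiny stand-ins for the training model's two parts (real networks would be far larger).
generator = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
discriminator = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def make_modified(original: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Generator replaces the masked portion of each image with artificial subsurface data."""
    fake_patch = generator(torch.cat([original, mask], dim=1))
    return original * (1 - mask) + fake_patch * mask

for step in range(100):
    original = torch.rand(8, 1, 64, 64)              # synthetic stand-in for image sections
    mask = (torch.rand(8, 1, 64, 64) > 0.8).float()  # portions to be replaced

    # Discriminator: tell original images apart from modified ones.
    modified = make_modified(original, mask).detach()
    d_loss = bce(discriminator(original), torch.ones(8, 1)) + \
             bce(discriminator(modified), torch.zeros(8, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: produce modifications the discriminator cannot distinguish.
    modified = make_modified(original, mask)
    g_loss = bce(discriminator(modified), torch.ones(8, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```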
-
Patent number: 12154382
Abstract: An eye state detecting method, applied to an electronic apparatus with an image sensor, which comprises: (a) acquiring a detecting image via the image sensor; (b) defining a face range on the detecting image; (c) defining a determining range on the face range; and (d) determining if the determining range comprises an open eye image or a closed eye image.
Type: Grant
Filed: October 29, 2020
Date of Patent: November 26, 2024
Assignee: PixArt Imaging Inc.
Inventor: Guo-Zhen Wang
-
Patent number: 12148200
Abstract: A method for processing an image includes acquiring an input image, performing down-sampling and feature extraction on the input image by an encoder network to obtain multiple feature maps, and performing up-sampling and feature extraction on the multiple feature maps by a decoder network to obtain a target segmentation image. Processing levels between the encoder network and the decoder network for outputting feature maps with the same resolution are connected with each other. The encoder network and the decoder network each include one or more dense computation blocks, and at least one convolution module in any dense computation block includes at least one group of asymmetric convolution kernels.
Type: Grant
Filed: December 29, 2020
Date of Patent: November 19, 2024
Assignee: BOE Technology Group Co., Ltd.
Inventors: Yunhua Lu, Hanwen Liu, Pablo Navarrete Michelini, Lijie Zhang, Dan Zhu
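
A short PyTorch sketch of the two named ingredients: a convolution module built from a group of asymmetric (3x1 and 1x3) kernels, wrapped in a dense computation block whose layers see the concatenation of all earlier outputs. The layer counts, 1x1 channel reductions, and activations are assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

class AsymmetricConvBlock(nn.Module):
    """A 3x3 convolution factored into a 3x1 followed by a 1x3 convolution,
    one way to realise a group of asymmetric convolution kernels."""
    def __init__(self, channels: int):
        super().__init__()
        self.vertical = nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0))
        self.horizontal = nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.horizontal(self.vertical(x)))

class DenseBlock(nn.Module):
    """Dense computation block: each layer sees the concatenation of all previous outputs."""
    def __init__(self, channels: int, layers: int = 3):
        super().__init__()
        self.reduce = nn.ModuleList(
            [nn.Conv2d(channels * (i + 1), channels, kernel_size=1) for i in range(layers)]
        )
        self.convs = nn.ModuleList([AsymmetricConvBlock(channels) for _ in range(layers)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for reduce, conv in zip(self.reduce, self.convs):
            out = conv(reduce(torch.cat(features, dim=1)))
            features.append(out)
        return features[-1]

x = torch.randn(1, 32, 64, 64)
y = DenseBlock(32)(x)          # shape preserved: (1, 32, 64, 64)
```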
-
Patent number: 12148186
Abstract: A method is provided. The method includes receiving an input image, extracting at least one feature from the input image, determining at least one local tone curve for a portion of the input image based on the extracted at least one feature, the portion of the input image being less than an overall area of the input image, and generating a toned image based on the at least one local tone curve.
Type: Grant
Filed: December 30, 2021
Date of Patent: November 19, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Abdelrahman Abdelhamed, Luxi Zhao, Michael Scott Brown
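
A toy NumPy version of local tone mapping: each tile is a portion of the input image, its mean brightness is the extracted feature, and a gamma curve chosen from that feature acts as the local tone curve. Production pipelines typically blend curves between neighbouring tiles to avoid seams; that refinement and the specific gamma schedule here are assumptions, not details from the patent.

```python
import numpy as np

def local_tone_map(image: np.ndarray, tile: int = 64) -> np.ndarray:
    """Apply a simple per-tile tone curve: darker tiles get a stronger lifting gamma."""
    img = image.astype(np.float64) / 255.0
    out = np.empty_like(img)
    h, w = img.shape[:2]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = img[y:y + tile, x:x + tile]
            mean = patch.mean()                      # extracted feature for this portion
            gamma = np.interp(mean, [0.0, 0.5, 1.0], [0.5, 1.0, 1.2])
            out[y:y + tile, x:x + tile] = patch ** gamma
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
toned = local_tone_map(frame)
```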
-
Patent number: 12147495
Abstract: A visual search system facilitates retrieval of provenance information using a machine learning model to generate content fingerprints that are invariant to benign transformations while being sensitive to manipulations. The machine learning model is trained on a training image dataset that includes original images, benign transformed variants of the original images, and manipulated variants of the original images. A loss function is used to train the machine learning model to minimize distances in an embedding space between benign transformed variants and their corresponding original images and increase distances between the manipulated variants and their corresponding original images.
Type: Grant
Filed: January 5, 2021
Date of Patent: November 19, 2024
Assignee: ADOBE INC.
Inventors: Viswanathan Swaminathan, John Philip Collomosse, Eric Nguyen
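
The stated objective maps naturally onto a triplet-style margin loss, shown below in NumPy; the margin value and Euclidean distance are illustrative assumptions rather than the loss actually used.

```python
import numpy as np

def provenance_loss(anchor: np.ndarray, benign: np.ndarray, manipulated: np.ndarray,
                    margin: float = 0.5) -> float:
    """Triplet-style objective: pull benign-transformed variants toward their original
    in the embedding space and push manipulated variants at least `margin` further away."""
    d_benign = np.linalg.norm(anchor - benign, axis=1)
    d_manip = np.linalg.norm(anchor - manipulated, axis=1)
    return float(np.maximum(0.0, d_benign - d_manip + margin).mean())

rng = np.random.default_rng(0)
original = rng.normal(size=(32, 128))                  # embeddings of original images
benign = original + 0.05 * rng.normal(size=(32, 128))  # e.g. resized or recompressed copies
manipulated = original + 0.8 * rng.normal(size=(32, 128))

loss = provenance_loss(original, benign, manipulated)
```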
-
Patent number: 12148060
Abstract: A vehicle has one or more cameras configured to record one or more images of a person approaching the vehicle. The camera(s) can be configured to send biometric data derived from the image(s). The vehicle can include a computing system configured to receive the biometric data and to determine a risk score of the person based on the received biometric data and an AI technique, such as an ANN or a decision tree. The received biometric data or a derivative thereof can be input for the AI technique. The computing system can also be configured to determine whether to notify a driver of the vehicle of the risk score based on the risk score exceeding a risk threshold. The vehicle can also include a user interface, configured to output the risk score to notify the driver when the computing system determines the risk score exceeds the risk threshold.
Type: Grant
Filed: October 20, 2022
Date of Patent: November 19, 2024
Assignee: Lodestar Licensing Group LLC
Inventor: Robert Richard Noel Bielby
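
Since the abstract names a decision tree as one possible AI technique, here is a scikit-learn sketch of the scoring and threshold check; the feature set, synthetic training labels, and threshold value are all hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical normalized features derived from the camera biometrics (all values synthetic).
rng = np.random.default_rng(0)
X_train = rng.random((200, 4))
y_train = (rng.random(200) > 0.8).astype(int)    # synthetic "incident" labels for illustration

model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

def risk_score(features: np.ndarray) -> float:
    """Probability-like risk score for one approaching person."""
    return float(model.predict_proba(features.reshape(1, -1))[0, 1])

RISK_THRESHOLD = 0.6                             # hypothetical risk threshold
observation = np.array([0.9, 0.1, 0.2, 0.3])
score = risk_score(observation)
if score > RISK_THRESHOLD:
    print(f"notify driver: risk score {score:.2f} exceeds threshold")
```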
-
Patent number: 12148223
Abstract: A method for generating a dense light detection and ranging (LiDAR) representation by a vision system includes receiving, at a sparse depth network, one or more sparse representations of an environment. The method also includes generating a depth estimate of the environment depicted in an image captured by an image capturing sensor. The method further includes generating, via the sparse depth network, one or more sparse depth estimates based on receiving the one or more sparse representations. The method also includes fusing the depth estimate and the one or more sparse depth estimates to generate a dense depth estimate. The method further includes generating the dense LiDAR representation based on the dense depth estimate and controlling an action of the vehicle based on identifying a three-dimensional object in the dense LiDAR representation.
Type: Grant
Filed: April 28, 2022
Date of Patent: November 19, 2024
Assignees: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Arjun Bhargava, Chao Fang, Charles Christopher Ochoa, Kun-Hsin Chen, Kuan-Hui Lee, Vitor Guizilini
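
A NumPy sketch of the fusion and densification ideas: sparse depth estimates re-weight the dense image-based estimate where they exist, and the fused depth map is back-projected into a dense point cloud. The weighting rule and pinhole intrinsics are assumptions; the patent's sparse depth network itself is not modelled.

```python
import numpy as np

def fuse_depth(dense_pred: np.ndarray, sparse_depth: np.ndarray,
               sparse_weight: float = 0.9) -> np.ndarray:
    """Fuse a dense image-based depth estimate with sparse depth estimates:
    where a sparse value exists, trust it heavily; elsewhere keep the dense prediction."""
    valid = sparse_depth > 0.0
    fused = dense_pred.copy()
    fused[valid] = sparse_weight * sparse_depth[valid] + (1.0 - sparse_weight) * dense_pred[valid]
    return fused

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project a dense depth map into an N x 3 point cloud (a dense LiDAR-like representation)."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (us.ravel() - cx) * z / fx
    y = (vs.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)

rng = np.random.default_rng(0)
dense_pred = 10.0 + rng.random((96, 320))            # image-based depth estimate (metres)
sparse = np.zeros_like(dense_pred)
rows, cols = rng.integers(0, 96, 500), rng.integers(0, 320, 500)
sparse[rows, cols] = 10.0 + rng.random(500)          # sparse depth estimates at scattered pixels

fused = fuse_depth(dense_pred, sparse)
cloud = depth_to_points(fused, fx=200.0, fy=200.0, cx=160.0, cy=48.0)   # dense point cloud
```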