Patents Examined by Andrae S Allison
-
Patent number: 12169620
Abstract: A method, apparatus and system for video display and a camera are disclosed. The camera includes one wide-field lens assembly and a wide-field sensor corresponding to the wide-field lens assembly; at least one narrow-field lens assembly and a narrow-field sensor corresponding to the narrow-field lens assembly, wherein an angle of view of the wide-field lens assembly is greater than an angle of view of the narrow-field lens assembly, and, for a same target, a definition of the wide-field sensor is lower than that of the narrow-field sensor; and a processor configured for performing human body analysis on the wide-field image and performing face analysis, head and shoulder analysis or human body analysis on at least one frame of narrow-field image. The methods, apparatuses and systems can reduce the workload of installing and adjusting the cameras during monitoring, the performance requirements for the server, and monitoring costs.
Type: Grant
Filed: January 16, 2020
Date of Patent: December 17, 2024
Assignee: HANGZHOU HIKVISION DIGITAL TECHNOLOGY CO., LTD.
Inventor: Wenwei Li
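For illustration, the sketch below (Python) shows one way a processor might route a person detected in the wide-field image to a narrow-field stream for face or head-and-shoulder analysis. The coverage rectangles, detection format, and routing rule are assumptions for the example, not details taken from the patent.

```python
# Illustrative sketch only: route a person detection from the wide-field image
# to the narrow-field stream whose (assumed, pre-calibrated) coverage contains it.
# The normalized coverage rectangles and detection format are hypothetical.

from dataclasses import dataclass

@dataclass
class NarrowCamera:
    name: str
    # Coverage of this narrow-field view inside the wide-field frame,
    # as normalized (x0, y0, x1, y1) in [0, 1].
    coverage: tuple

def route_detection(person_bbox, wide_size, narrow_cams):
    """Pick the narrow-field camera whose coverage contains the bbox center."""
    (x0, y0, x1, y1), (w, h) = person_bbox, wide_size
    cx, cy = (x0 + x1) / (2 * w), (y0 + y1) / (2 * h)
    for cam in narrow_cams:
        nx0, ny0, nx1, ny1 = cam.coverage
        if nx0 <= cx <= nx1 and ny0 <= cy <= ny1:
            return cam  # run face / head-and-shoulder analysis on this stream
    return None  # fall back to human-body analysis on the wide-field image

cams = [NarrowCamera("narrow-left", (0.0, 0.0, 0.5, 1.0)),
        NarrowCamera("narrow-right", (0.5, 0.0, 1.0, 1.0))]
print(route_detection((1200, 300, 1400, 900), (1920, 1080), cams).name)
```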
-
Patent number: 12165294
Abstract: A computer-implemented method for high resolution image inpainting comprising the following steps: providing a high resolution input image; providing at least one inpainting mask; selecting at least one rectangular sub-region of the input image and at least one aligned rectangular sub-region of the inpainting mask such that the rectangular sub-region of the input image encompasses at least one set of pixels to be removed and synthesized, the at least one sub-region of the input image and its corresponding aligned sub-region of the inpainting mask having the identical minimum possible size and a position for which a calculated information gain does not decrease; processing the sub-region of the input image and its corresponding aligned sub-region of the inpainting mask by a machine learning model; and generating an output high resolution image comprising the inpainted sub-region.
Type: Grant
Filed: March 24, 2020
Date of Patent: December 10, 2024
Assignee: TCL RESEARCH EUROPE SP. Z O. O.
Inventors: Michal Kudelski, Tomasz Latkowski, Filip Skurniak, Lukasz Sienkiewicz, Piotr Frankowski, Bartosz Biskupski
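As a rough illustration of the sub-region selection step, the Python sketch below crops the smallest axis-aligned rectangle that encloses all masked pixels and pads it to a size the model can accept. The multiple-of-32 padding rule and function names are assumptions; the patent's information-gain criterion is not reproduced.

```python
# Sketch (not the patented selection rule): crop the smallest axis-aligned
# rectangle that encloses all pixels to be inpainted, then round its size up
# to a multiple the model accepts. The multiple-of-32 constraint is an assumption.

import numpy as np

def select_subregion(image, mask, multiple=32):
    """image: HxWx3 uint8, mask: HxW bool (True = pixel to remove/synthesize)."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        raise ValueError("mask selects no pixels")
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    # Grow the crop so height/width are multiples of `multiple`, clipped to the image.
    h = int(np.ceil((y1 - y0) / multiple) * multiple)
    w = int(np.ceil((x1 - x0) / multiple) * multiple)
    y1 = min(image.shape[0], y0 + h)
    x1 = min(image.shape[1], x0 + w)
    y0 = max(0, y1 - h)
    x0 = max(0, x1 - w)
    return image[y0:y1, x0:x1], mask[y0:y1, x0:x1], (y0, x0)

img = np.zeros((1080, 1920, 3), dtype=np.uint8)
msk = np.zeros((1080, 1920), dtype=bool)
msk[400:500, 700:850] = True
crop, crop_mask, origin = select_subregion(img, msk)
print(crop.shape, crop_mask.shape, origin)  # (128, 160, 3) (128, 160) (400, 700)
```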
-
Patent number: 12158410
Abstract: The present disclosure relates to a real-time quantification method of cell viability through a supravital dye uptake using a lens-free imaging system. The method includes a step of incubating a sample cell in a cell culture medium; steps of detecting light penetrating the cell culture medium and identifying a boundary region of the sample cell at a preset time interval based on the detected light; a step of staining the incubated sample cell with the supravital dye; a step of detecting intensity of light penetrating the cell culture medium at a preset time interval; a step of calculating absorbance of the sample cell included in the cell culture medium at a preset time interval based on the boundary region and the detected intensity of light; and a step of analyzing a viability of the sample cell based on the calculated absorbance.
Type: Grant
Filed: November 30, 2020
Date of Patent: December 3, 2024
Assignees: SOL INC., Government of the United States of America, as Represented by the Secretary of Commerce
Inventors: Jong Muk Lee, Darwin R. Reyes-Hernandez, Brian J. Nablo
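A minimal sketch of the absorbance step, assuming a Beer-Lambert style relation A = -log10(I/I0) evaluated only inside the detected cell boundary region. The exact quantification and viability rule in the patent may differ; all names and the toy data are illustrative.

```python
# Sketch, assuming a Beer-Lambert style absorbance A = -log10(I / I0), computed
# only over the previously identified cell boundary region. The patent's exact
# quantification may differ; names and the toy time series below are illustrative.

import numpy as np

def absorbance_in_region(intensity, reference_intensity, cell_region):
    """intensity, reference_intensity: HxW float arrays of transmitted light.
    cell_region: HxW bool mask of the detected cell boundary region."""
    eps = 1e-9
    ratio = (intensity + eps) / (reference_intensity + eps)
    a = -np.log10(np.clip(ratio, eps, None))
    return float(a[cell_region].mean())

# Toy time series: absorbance rising as the supravital dye accumulates.
rng = np.random.default_rng(0)
i0 = np.full((64, 64), 200.0)                  # intensity before staining
region = np.zeros((64, 64), dtype=bool)
region[20:40, 20:40] = True
for t, transmitted in enumerate([195.0, 170.0, 120.0]):
    frame = np.full((64, 64), transmitted) + rng.normal(0, 1, (64, 64))
    print(t, round(absorbance_in_region(frame, i0, region), 3))
```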
-
Patent number: 12154382
Abstract: An eye state detecting method, applied to an electronic apparatus with an image sensor, comprises: (a) acquiring a detecting image via the image sensor; (b) defining a face range on the detecting image; (c) defining a determining range on the face range; and (d) determining if the determining range comprises an open eye image or a closed eye image.
Type: Grant
Filed: October 29, 2020
Date of Patent: November 26, 2024
Assignee: PixArt Imaging Inc.
Inventor: Guo-Zhen Wang
-
Patent number: 12154256
Abstract: A discriminator of a training model is trained to discriminate between original training images without artificial subsurface data and modified training images with artificial subsurface data. A generator of the training model is trained to: replace portions of original training images with the artificial subsurface data to form the modified training images, and prevent the discriminator from discriminating between the original training images and the modified training images.
Type: Grant
Filed: October 26, 2023
Date of Patent: November 26, 2024
Assignee: SCHLUMBERGER TECHNOLOGY CORPORATION
Inventors: Kishore Mulchandani, Abhishek Gupta
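The abstract describes a standard adversarial setup, so a generic PyTorch training step may help as a sketch: the generator fills a masked image region with synthetic content and the discriminator separates original from modified images. The architectures, data, and masking policy below are placeholders, not the patented ones.

```python
# Generic adversarial training step (PyTorch), sketching the described setup:
# the generator replaces a masked portion of a training image with synthetic
# "subsurface" content; the discriminator separates original from modified images.

import torch
import torch.nn as nn

gen = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
disc = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                     nn.Flatten(), nn.Linear(16 * 16 * 16, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

originals = torch.rand(8, 1, 32, 32)          # stand-in training images
mask = torch.zeros(8, 1, 32, 32)
mask[:, :, 8:24, 8:24] = 1.0                  # portion to replace

# Generator proposes artificial content only inside the mask.
fake_region = gen(torch.cat([originals * (1 - mask), mask], dim=1))
modified = originals * (1 - mask) + fake_region * mask

# Discriminator step: original -> 1, modified -> 0.
d_loss = bce(disc(originals), torch.ones(8, 1)) + \
         bce(disc(modified.detach()), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: prevent the discriminator from telling modified images apart.
g_loss = bce(disc(modified), torch.ones(8, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```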
-
Patent number: 12154302
Abstract: Disclosed herein are a method, an apparatus and a storage medium for image encoding/decoding using a binary mask. An encoding method includes generating a latent vector using an input image, generating a selected latent vector component set using a binary mask, and generating a main bitstream by performing entropy encoding on the selected latent vector component set. A decoding method includes generating a selected latent vector component set including one or more selected latent vector components by performing entropy decoding on a main bitstream, and generating the latent vector in which the one or more selected latent vector components are relocated by relocating the selected latent vector component set in the latent vector.
Type: Grant
Filed: December 3, 2021
Date of Patent: November 26, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Joo-Young Lee, Se-Yoon Jeong, Hyoung-Jin Kwon, Dong-Hyun Kim, Youn-Hee Kim, Jong-Ho Kim, Ji-Hoon Do, Jin-Soo Choi, Tae-Jin Lee
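A small NumPy sketch of the selection/relocation idea: keep only the latent components where the binary mask is 1, and scatter them back into a zero-initialized latent on the decoder side. Entropy coding is omitted, and the shapes and zero fill are assumptions.

```python
# Sketch of mask-based selection and relocation with NumPy. The entropy
# encoder/decoder and the learned analysis/synthesis transforms are omitted;
# the latent shape and the zero fill for unselected components are assumptions.

import numpy as np

def select_components(latent, binary_mask):
    """latent, binary_mask: arrays of identical shape; mask entries are 0/1."""
    return latent[binary_mask.astype(bool)]           # 1-D selected component set

def relocate_components(selected, binary_mask):
    latent = np.zeros(binary_mask.shape, dtype=selected.dtype)
    latent[binary_mask.astype(bool)] = selected        # put components back in place
    return latent

rng = np.random.default_rng(1)
y = rng.normal(size=(4, 8, 8)).astype(np.float32)      # toy latent vector
m = (rng.random((4, 8, 8)) > 0.7).astype(np.uint8)     # toy binary mask

sent = select_components(y, m)                         # what the main bitstream carries
rebuilt = relocate_components(sent, m)
assert np.allclose(rebuilt[m.astype(bool)], y[m.astype(bool)])
print(y.size, "->", sent.size, "components kept")
```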
-
Patent number: 12147495
Abstract: A visual search system facilitates retrieval of provenance information using a machine learning model to generate content fingerprints that are invariant to benign transformations while being sensitive to manipulations. The machine learning model is trained on a training image dataset that includes original images, benign transformed variants of the original images, and manipulated variants of the original images. A loss function is used to train the machine learning model to minimize distances in an embedding space between benign transformed variants and their corresponding original images and increase distances between the manipulated variants and their corresponding original images.
Type: Grant
Filed: January 5, 2021
Date of Patent: November 19, 2024
Assignee: ADOBE INC.
Inventors: Viswanathan Swaminathan, John Philip Collomosse, Eric Nguyen
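The described objective resembles a triplet-style contrastive loss; a generic NumPy version is sketched below. The actual loss, margin, and distance metric used by the system are not specified here.

```python
# A generic triplet-margin style objective capturing the stated goal: pull benign
# variants toward their original in the embedding space and push manipulated
# variants away. Illustrative only; the margin and metric are assumptions.

import numpy as np

def fingerprint_loss(orig, benign, manipulated, margin=0.5):
    """Each argument: (N, D) array of L2-normalized embeddings for matched rows."""
    d_pos = np.linalg.norm(orig - benign, axis=1)        # should be small
    d_neg = np.linalg.norm(orig - manipulated, axis=1)   # should be large
    return float(np.mean(np.maximum(0.0, d_pos - d_neg + margin)))

rng = np.random.default_rng(0)
o = rng.normal(size=(16, 128)); o /= np.linalg.norm(o, axis=1, keepdims=True)
b = o + 0.01 * rng.normal(size=o.shape)                  # benign transform: nearby
m = rng.normal(size=o.shape); m /= np.linalg.norm(m, axis=1, keepdims=True)
print(fingerprint_loss(o, b, m))                         # small for a good embedding
```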
-
Patent number: 12148186
Abstract: A method is provided. The method includes receiving an input image, extracting at least one feature from the input image, determining at least one local tone curve for a portion of the input image based on the extracted at least one feature, the portion of the input image being less than an overall area of the input image, and generating a toned image based on the at least one local tone curve.
Type: Grant
Filed: December 30, 2021
Date of Patent: November 19, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Abdelrahman Abdelhamed, Luxi Zhao, Michael Scott Brown
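As a toy illustration of a local tone curve, the sketch below derives a gamma curve for one portion of the image from a single feature (mean brightness) and applies it as a lookup table. The feature, curve family, and selection rule are assumptions, not the patented method.

```python
# Sketch only: derive a local tone curve for one portion of the image from a
# simple feature (mean brightness) and apply it as a per-pixel LUT. The real
# method's features, curve family, and blending are not specified here.

import numpy as np

def local_tone_curve(tile):
    """Return a 256-entry LUT (a gamma curve) chosen from the tile's mean level."""
    mean = tile.mean() / 255.0
    gamma = 0.6 if mean < 0.4 else 1.0 if mean < 0.7 else 1.4   # assumed rule
    x = np.arange(256) / 255.0
    return np.clip(255.0 * x ** gamma, 0, 255).astype(np.uint8)

def tone_map_portion(image, y0, y1, x0, x1):
    out = image.copy()
    tile = image[y0:y1, x0:x1]
    out[y0:y1, x0:x1] = local_tone_curve(tile)[tile]             # LUT indexing
    return out

img = np.tile(np.linspace(0, 255, 256, dtype=np.uint8), (256, 1))
toned = tone_map_portion(img, 0, 128, 0, 128)                    # brighten a dark tile
print(img[64, 64], "->", toned[64, 64])
```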
-
Patent number: 12148060
Abstract: A vehicle has one or more cameras configured to record one or more images of a person approaching the vehicle. The camera(s) can be configured to send biometric data derived from the image(s). The vehicle can include a computing system configured to receive the biometric data and to determine a risk score of the person based on the received biometric data and an AI technique, such as an ANN or a decision tree. The received biometric data or a derivative thereof can be input for the AI technique. The computing system can also be configured to determine whether to notify a driver of the vehicle of the risk score based on the risk score exceeding a risk threshold. The vehicle can also include a user interface configured to output the risk score to notify the driver when the computing system determines the risk score exceeds the risk threshold.
Type: Grant
Filed: October 20, 2022
Date of Patent: November 19, 2024
Assignee: Lodestar Licensing Group LLC
Inventor: Robert Richard Noel Bielby
-
Patent number: 12148200
Abstract: A method for processing an image includes acquiring an input image, performing down-sampling and feature extraction on the input image by an encoder network to obtain multiple feature maps, and performing up-sampling and feature extraction on the multiple feature maps by a decoder network to obtain a target segmentation image. Processing levels between the encoder network and the decoder network that output feature maps with the same resolution are connected with each other. The encoder network and the decoder network each include one or more dense computation blocks, and at least one convolution module in any dense computation block includes at least one group of asymmetric convolution kernels.
Type: Grant
Filed: December 29, 2020
Date of Patent: November 19, 2024
Assignee: BOE Technology Group Co., Ltd.
Inventors: Yunhua Lu, Hanwen Liu, Pablo Navarrete Michelini, Lijie Zhang, Dan Zhu
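The asymmetric-kernel idea can be sketched in PyTorch as a 3x3 convolution factored into a 1x3 followed by a 3x1, which keeps the receptive field while using roughly 6C^2 rather than 9C^2 weights. The surrounding dense block, channel counts, and skip connections are placeholders, not the patented architecture.

```python
# Minimal PyTorch sketch of "one group of asymmetric convolution kernels":
# a 3x3 convolution factored into a 1x3 followed by a 3x1.

import torch
import torch.nn as nn

class AsymmetricConv(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=(1, 3), padding=(0, 1)),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=(3, 1), padding=(1, 0)),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(x)

x = torch.randn(1, 32, 64, 64)
print(AsymmetricConv(32)(x).shape)   # torch.Size([1, 32, 64, 64]) -- resolution kept
```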
-
Patent number: 12148223
Abstract: A method for generating a dense light detection and ranging (LiDAR) representation by a vision system includes receiving, at a sparse depth network, one or more sparse representations of an environment. The method also includes generating a depth estimate of the environment depicted in an image captured by an image capturing sensor. The method further includes generating, via the sparse depth network, one or more sparse depth estimates based on receiving the one or more sparse representations. The method also includes fusing the depth estimate and the one or more sparse depth estimates to generate a dense depth estimate. The method further includes generating the dense LiDAR representation based on the dense depth estimate and controlling an action of the vehicle based on identifying a three-dimensional object in the dense LiDAR representation.
Type: Grant
Filed: April 28, 2022
Date of Patent: November 19, 2024
Assignees: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Arjun Bhargava, Chao Fang, Charles Christopher Ochoa, Kun-Hsin Chen, Kuan-Hui Lee, Vitor Guizilini
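This is not the learned fusion in the patent, but a simple NumPy illustration of the underlying idea: scale a dense (relative) depth estimate using pixels where sparse metric depth is available, then keep the sparse measurements exactly at those pixels.

```python
# Simple depth-fusion illustration (not the patented network): fit a global scale
# from pixels where sparse depth exists, then trust the sparse values directly there.

import numpy as np

def fuse_depth(dense_pred, sparse_depth):
    """dense_pred: HxW relative depth; sparse_depth: HxW metric depth, 0 = missing."""
    valid = sparse_depth > 0
    scale = np.median(sparse_depth[valid] / np.maximum(dense_pred[valid], 1e-6))
    fused = dense_pred * scale
    fused[valid] = sparse_depth[valid]            # keep exact LiDAR-style measurements
    return fused

rng = np.random.default_rng(0)
gt = 5.0 + 20.0 * rng.random((48, 64))            # toy "true" metric depth
dense = gt / 7.0 + 0.1 * rng.normal(size=gt.shape)    # relative, wrongly scaled estimate
sparse = np.where(rng.random(gt.shape) < 0.02, gt, 0.0)
fused = fuse_depth(dense, sparse)
print("median abs error:", round(float(np.median(np.abs(fused - gt))), 3))
```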
-
Patent number: 12136277
Abstract: Methods, apparatus, and systems to acquire aircraft flight data through visual analysis of flight instruments with a data capture neural network; to compare aircraft flight data to a standard of a flight maneuver with a data analysis neural network with minimal or no human input; to output a visualization of aircraft flight data and/or analysis of aircraft flight data; and to acquire aircraft flight data, programmatically analyze aircraft flight data, and provide aircraft training to a prospective aircraft pilot.
Type: Grant
Filed: December 7, 2021
Date of Patent: November 5, 2024
Inventors: Mahdi Al-Husseini, Joshua Barnett, Anthony Chen, Joseph Divyan Thomas
-
Patent number: 12137203
Abstract: Disclosed are a method and system for optical calibration of a 3D printer. The method includes: projecting, by an optical apparatus, a projection image onto a projection platform, placing a calibration plate on the projection platform, and capturing an image of the projection platform; identifying the coordinates of calibration points and the coordinates of actual projection points from the captured image to obtain a matrix of calibration points and a matrix of actual projection points; rotating and translating the matrix of the calibration points and/or the matrix of the actual projection points, and calculating a distance value T0 between the calibration points and the actual projection points in an image coordinate system; and converting the T0 in the image coordinate system into an offset C1 in a pixel coordinate system, and inversely distorting an initial ideal projection image according to the offset C1 to offset the optical distortion.
Type: Grant
Filed: December 8, 2020
Date of Patent: November 5, 2024
Assignee: GUANGZHOU HEYGEARS IMC. INC
Inventors: Xin Wan, Weitao Li, Peihui Wu, Songlin She, Peiyan Gui, Heyuan Huang
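A hedged sketch of the T0/C1 computation: rigidly align the detected projection points to the calibration points (2-D Kabsch), take the mean residual distance as T0 in the image coordinate system, and convert it to a pixel offset C1 with an assumed scale factor. Correspondence finding and the inverse-distortion step are omitted.

```python
# Illustrative only: rotate/translate the detected projection points onto the
# calibration points, take the residual distance T0, and convert it to a pixel
# offset C1 with an assumed scale. The correspondence and distortion model are omitted.

import numpy as np

def rigid_align(src, dst):
    """2-D Kabsch: return src rotated/translated onto dst (both Nx2)."""
    sc, dc = src.mean(0), dst.mean(0)
    u, _, vt = np.linalg.svd((src - sc).T @ (dst - dc))
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:                    # avoid a reflection
        vt[-1] *= -1
        r = vt.T @ u.T
    return (src - sc) @ r.T + dc

calibration = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
theta = np.deg2rad(2.0)
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
distortion = np.array([[0.3, -0.2], [-0.1, 0.4], [0.2, 0.1], [-0.4, -0.3]])
detected = calibration @ rot.T + np.array([1.5, -0.8]) + distortion   # toy detections

aligned = rigid_align(detected, calibration)
t0 = float(np.mean(np.linalg.norm(aligned - calibration, axis=1)))    # image units
mm_per_pixel = 0.05                                                   # assumed scale
c1 = t0 / mm_per_pixel
print(round(t0, 4), "->", round(c1, 2), "pixels")
```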
-
Patent number: 12135312
Abstract: A method of displaying stress distribution on a sample surface includes: step S4 of capturing images of the sample surface before loading, during the loading, and after unloading; step S5 of measuring a first strain amount for each pixel position based on correlation between the image before the loading and the image after the unloading; step S6 of measuring a second strain amount for each pixel position based on correlation between the image before the loading and the image during the loading; step S7 of calculating stress for each pixel position based on the difference between the first strain amount and the second strain amount; and step S8 of displaying the distribution of the calculated stress at each pixel position.
Type: Grant
Filed: June 15, 2020
Date of Patent: November 5, 2024
Assignee: JAPAN SCIENCE AND TECHNOLOGY AGENCY
Inventor: Myeong-heom Park
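A minimal sketch of step S7, assuming simple uniaxial linear elasticity: the residual strain after unloading (first strain amount) is treated as non-elastic, so the stress at each pixel is the elastic modulus times the difference between the strain during loading (second strain amount) and that residual. The modulus and data below are placeholders.

```python
# Sketch under an assumed uniaxial linear-elastic model: stress = E * (elastic strain),
# where elastic strain = (strain during loading) - (residual strain after unloading).
# E and the per-pixel strain maps are placeholders, not measured data.

import numpy as np

E = 200e9                                      # assumed Young's modulus (steel), Pa

residual_strain = np.full((4, 4), 0.0002)      # first strain amount (after unloading)
loaded_strain = np.full((4, 4), 0.0015)        # second strain amount (during loading)
loaded_strain[1:3, 1:3] = 0.0025               # a locally strained patch

stress = E * (loaded_strain - residual_strain) # Pa, per pixel position
print(np.round(stress / 1e6, 1))               # MPa map to display as a distribution
```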
-
Patent number: 12135284
Abstract: An image processing method includes acquiring an image obtained by imaging of an object, ambient light data during the imaging, and reflection characteristic data and transmission characteristic data which depend on a concentration of a material contained in the object, and separating a reflected light component and a transmitted light component in the image using the ambient light data, the reflection characteristic data, and the transmission characteristic data.
Type: Grant
Filed: November 22, 2021
Date of Patent: November 5, 2024
Assignee: CANON KABUSHIKI KAISHA
Inventor: Hironobu Koga
-
Patent number: 12131512
Abstract: A product positioning method includes: collecting a product image of a product; dividing the product image into a plurality of rectangular regions; performing integral image calculation on each rectangular region to obtain a plurality of integral images; forming n integral image regions each having four said integral images, with two adjacent sides of each of the four said integral images connecting with one side of other two images of the four said integral images; numbering the four said integral images clockwise or counterclockwise; performing differential calculation on each integral image region to obtain a differential value according to the four said integral images; obtaining coordinates of a vertex of the product according to the differential values; and performing position correction on the product image according to obtained coordinates of the vertex and coordinates of a target vertex.
Type: Grant
Filed: November 18, 2021
Date of Patent: October 29, 2024
Inventors: Yixian Du, Gang Wang, De Chen, Jinjin Shi
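The core building block here is the integral image; the NumPy sketch below shows the standard summed-area table and how any rectangular sum reduces to four corner lookups. The patent's specific four-region differential and vertex recovery are not reproduced.

```python
# Standard integral-image (summed-area table) building block in NumPy: any
# rectangular sum is evaluated from four corner values of the table.

import numpy as np

def integral_image(img):
    """Zero-padded summed-area table: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] from four corners of the integral image."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

img = np.arange(36, dtype=np.int64).reshape(6, 6)
ii = integral_image(img)
assert box_sum(ii, 1, 2, 4, 5) == img[1:4, 2:5].sum()
print(box_sum(ii, 1, 2, 4, 5))   # 135
```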
-
Patent number: 12131518
Abstract: A method and a device associate an object detection in a first frame with an object detection in a second frame using a convolutional neural network (CNN) trained to determine feature vectors such that object detections relating to separate objects are arranged in separate clusters. The CNN determines a reference set of feature vectors associated with the object detection in the first frame, and candidate sets of feature vectors each associated with a respective one of identified areas corresponding to object detections in the second frame. A set of closest feature vectors is determined, and then a measure of closeness to the reference set of feature vectors is determined for each candidate. A respective weight is determined for each object detection in the second frame. The object detection in the first frame is associated with one of the object detections in the second frame based on the assigned weights.
Type: Grant
Filed: December 1, 2021
Date of Patent: October 29, 2024
Assignee: AXIS AB
Inventors: Niclas Danielsson, Haochen Liu
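An illustrative NumPy sketch of the association step: score each frame-2 detection's candidate feature-vector set against the frame-1 reference set with a simple set-to-set distance, convert the scores to weights, and associate with the highest weight. The patent's exact closeness measure and weighting are not reproduced.

```python
# Illustration only: a simple set-to-set distance (mean of closest pairwise
# distances), softmax weights, and argmax association across frame-2 detections.

import numpy as np

def set_distance(reference, candidate):
    """reference: (R, D), candidate: (C, D). Mean distance from each reference
    vector to its closest candidate vector."""
    d = np.linalg.norm(reference[:, None, :] - candidate[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

def associate(reference, candidates):
    dists = np.array([set_distance(reference, c) for c in candidates])
    weights = np.exp(-dists) / np.exp(-dists).sum()     # higher weight = closer
    return int(weights.argmax()), weights

rng = np.random.default_rng(2)
ref = rng.normal(size=(5, 16))                          # frame-1 object detection
cands = [ref + 0.05 * rng.normal(size=ref.shape),       # same object in frame 2
         rng.normal(size=(5, 16))]                      # a different object
idx, w = associate(ref, cands)
print(idx, np.round(w, 3))                              # expect index 0
```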
-
Patent number: 12131461
Abstract: In accordance with some embodiments of the disclosed subject matter, systems, methods, and media for automatically transforming a digital image into a simulated pathology image are provided. In some embodiments, the method comprises: receiving a content image from an endomicroscopy device; receiving, from a hidden layer of a convolutional neural network (CNN) trained to recognize a multitude of classes of common objects, features indicative of content of the content image; providing a style reference image to the CNN; receiving, from another hidden layer of the CNN, features indicative of a style of the style reference image; receiving, from the hidden layers of the CNN, features indicative of content and style of a target image; generating a loss value based on the features of the content image, the style reference image, and the target image; minimizing the loss value; and displaying the target image with the minimized loss.
Type: Grant
Filed: January 28, 2020
Date of Patent: October 29, 2024
Assignees: DIGNITY HEALTH, ARIZONA BOARD OF REGENTS ON BEHALF OF ARIZONA STATE UNIVERSITY
Inventors: Mohammadhassan Izadyyazdanabadi, Mark C. Preul, Evgenii Belykh, Yezhou Yang
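The loss described is the familiar neural style transfer objective (a content term plus a Gram-matrix style term over hidden-layer features); the PyTorch sketch below shows that objective with random tensors standing in for real CNN feature maps. Layer choices and the style weight are assumptions, not the paper's.

```python
# Sketch of a content + Gram-matrix style loss over hidden-layer features (PyTorch).
# Random tensors stand in for real feature maps to keep the example self-contained.

import torch

def gram(features):
    """features: (C, H, W) -> (C, C) Gram matrix of channel correlations."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.t() / (c * h * w)

def style_transfer_loss(target_content, content_feat,
                        target_style, style_feat, style_weight=1e3):
    content_loss = torch.mean((target_content - content_feat) ** 2)
    style_loss = torch.mean((gram(target_style) - gram(style_feat)) ** 2)
    return content_loss + style_weight * style_loss

# Stand-ins for hidden-layer activations of the endomicroscopy image (content),
# the pathology reference (style), and the target image being optimized.
content_feat = torch.randn(64, 32, 32)
style_feat = torch.randn(64, 32, 32)
target = torch.randn(64, 32, 32, requires_grad=True)

loss = style_transfer_loss(target, content_feat, target, style_feat)
loss.backward()                      # gradients drive the target-image update
print(float(loss), target.grad.shape)
```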
-
Patent number: 12125294
Abstract: A vehicle position determination device mountable to a vehicle, the vehicle position determination device including an acquisition unit that acquires a surrounding image that is road information for identifying a position of the vehicle and is represented by a small displacement with respect to the vehicle, and a control unit that compares the road information with road characteristic information indicating an absolute position of a predetermined point and determines a vehicle position according to a result of the comparison; the road information includes at least one of road shape information indicating a shape of a road surface in a direction of travel of the vehicle and road pattern information indicating a pattern on a road surface.
Type: Grant
Filed: September 30, 2021
Date of Patent: October 22, 2024
Assignee: DENSO CORPORATION
Inventors: Takahisa Yokoyama, Noriyuki Ido
-
Patent number: 12118741
Abstract: The present invention provides a processing apparatus (20) including a first generation unit (22) that generates, from a plurality of time-series images, three-dimensional feature information indicating a time change of a feature in each position in each of the plurality of images; a second generation unit (23) that generates person position information indicating a position in which a person is present in each of the plurality of images; and an estimation unit (24) that estimates person behavior indicated by the plurality of images, based on the time change of the feature indicated by the three-dimensional feature information in the position in which the person is present, as indicated by the person position information.
Type: Grant
Filed: June 13, 2019
Date of Patent: October 15, 2024
Assignee: NEC CORPORATION
Inventors: Jianquan Liu, Junnan Li