Patents Examined by Edward F. Urban
-
Patent number: 11972638
Abstract: This application provides a face living body detection method performed by a computing device, the method including: obtaining a first face image of a target detection object in a first illumination condition and a second face image of the target detection object in a second illumination condition, determining a difference image according to the two images, decoupling an object reflectivity and an object normal vector corresponding to the target detection object from a feature map extracted from the difference image, and determining whether the target detection object is a living body according to the object reflectivity and the object normal vector. This method decouples texture information and depth information of a face, and performs living body detection by using decoupled information, which increases the defense capability against 3D attacks, thereby effectively defending against planar attacks and 3D attacks.
Type: Grant
Filed: October 28, 2021
Date of Patent: April 30, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Jian Zhang, Jia Meng, Taiping Yao, Ying Tai, Shouhong Ding, Jilin Li
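The core of the method above is the difference image computed from the two illumination conditions. A minimal sketch of that step, assuming simple grayscale images as nested lists (the function name and toy values are illustrative, not from the patent):

```python
# Difference-image step: subtract a face image taken under the first
# illumination condition from one taken under the second, pixel by pixel.
# The residual mainly reflects surface reflectance and geometry, which the
# patent's network then decouples into reflectivity and normal features.

def difference_image(second_cond, first_cond):
    """Per-pixel absolute difference of two equally sized grayscale images."""
    return [[abs(s - f) for s, f in zip(srow, frow)]
            for srow, frow in zip(second_cond, first_cond)]

# A flat printed photo responds to the illumination change almost uniformly,
# while a real 3-D face produces a structured residual that a classifier
# can consume.
second = [[120, 130], [140, 150]]
first = [[100, 100], [100, 100]]
diff = difference_image(second, first)
```

A liveness classifier would operate on features extracted from `diff` rather than on either raw image.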
-
Patent number: 11971363
Abstract: A foreign matter inspection system that is less likely to misidentify a crack as a foreign matter during an image inspection in which a vibration is applied to a powder contained in a bag-like container. The foreign matter inspection system includes a vibration device configured to apply a vibration to a container, a photography device configured to optically photograph the inside of the container through a transparent area, and a determination device configured to determine whether a foreign matter is present inside the container based on an image of the container photographed by the photography device. The vibration device alternately applies weak vibrations W and strong vibrations S to the container. The determination device determines whether the foreign matter is present inside the container based on the image of the container photographed by the photography device when the vibration device applies the weak vibrations W.
Type: Grant
Filed: October 25, 2019
Date of Patent: April 30, 2024
Assignee: NIPRO CORPORATION
Inventors: Takuma Satou, Tadashi Kabutomori, Masamichi Takasugi
-
Patent number: 11967121
Abstract: A difference detection device includes a difference detection unit configured to detect a difference between a third image and a fourth image, captured at different times and illustrating a substantially identical space, based on an association among a first image and a second image, likewise captured at different times and illustrating a substantially identical space, and on encoding information of each of the first image and the second image. The encoding information is acquired from data including the encoded first image and data including the encoded second image, before inverse transform processing is executed in the decoding processing applied to each of the first image and the second image.
Type: Grant
Filed: October 3, 2019
Date of Patent: April 23, 2024
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Motohiro Takagi, Kazuya Hayase, Atsushi Shimizu
-
Patent number: 11961165
Abstract: An image acquisition unit acquires a plurality of projection images corresponding to a plurality of radiation source positions in a case of tomosynthesis imaging. A reconstruction unit reconstructs all or a part of the plurality of projection images to generate a tomographic image on each of a plurality of tomographic planes of a subject. A feature point detecting unit detects at least one feature point from a plurality of the tomographic images. A positional shift amount derivation unit derives a positional shift amount between the plurality of projection images with the feature point as a reference on a corresponding tomographic plane corresponding to the tomographic image in which the feature point is detected. The reconstruction unit reconstructs the plurality of projection images by correcting the positional shift amount to generate a corrected tomographic image.
Type: Grant
Filed: February 8, 2021
Date of Patent: April 16, 2024
Assignee: FUJIFILM CORPORATION
Inventor: Junya Morita
-
Patent number: 11961328
Abstract: An eye detecting device is configured to: acquire a color image including a face of a person taken by an image taking device; generate a grayscale image by multiplying each of a red component value, a green component value, and a blue component value of each pixel of the color image by a predetermined ratio according to characteristics of a lens of glasses that the person wears; detect an eye of the person from the grayscale image; and output eye information on the eye of the person.
Type: Grant
Filed: May 31, 2022
Date of Patent: April 16, 2024
Assignees: SWALLOW INCUBATE CO., LTD., PANASONIC HOLDINGS CORPORATION
Inventor: Toshikazu Ohno
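The grayscale-conversion step above is a per-channel weighted sum. A minimal sketch, assuming the weights are tuned to the lens characteristics (the 0.2/0.7/0.1 weights here are purely illustrative, not values from the patent):

```python
# Lens-aware grayscale conversion: each RGB channel is scaled by a ratio
# chosen for the characteristics of the glasses' lens, then summed. With a
# tinted lens, down-weighting the channel the tint absorbs can preserve
# eye contrast better than a fixed luminance formula.

def to_grayscale(pixel, weights=(0.2, 0.7, 0.1)):
    r, g, b = pixel
    wr, wg, wb = weights
    return round(r * wr + g * wg + b * wb)

# Convert a row of pure red, green, and blue pixels.
gray = [to_grayscale(p) for p in [(255, 0, 0), (0, 255, 0), (0, 0, 255)]]
```

Eye detection would then run on the resulting single-channel image.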
-
Patent number: 11961224
Abstract: The system for the qualitative evaluation of human organs includes: a camera (101) for capturing an image of the organ, the organ being in the donor's body or already collected or placed in a hypothermic, normothermic and/or subnormothermic graft infusion machine at the time of image capture; an image processor (103, 104) configured to extract at least a portion of the organ image from the captured image and an estimator (103, 104) for estimating, from the extracted image, the health of the organ. In some embodiments, the device also includes a means of introducing into the donor's body at least one optical window of the image capture means as well as a light source to illuminate the donor's organ, while maintaining the sterility of the surgical field. In some embodiments, the image processor involves applying a clipping mask to the captured image.
Type: Grant
Filed: May 22, 2019
Date of Patent: April 16, 2024
Inventor: Clément Labiche
-
Patent number: 11960787
Abstract: A vehicle and control method of the vehicle are provided. The vehicle includes a camera provided on the vehicle and configured to capture an image of an object outside the vehicle, a controller configured to determine a photographing position required for facial recognition from the captured image, a guide configured to guide the photographing position, and a display configured to display a result of the facial recognition.
Type: Grant
Filed: June 22, 2021
Date of Patent: April 16, 2024
Assignees: HYUNDAI MOTOR COMPANY, KIA CORPORATION
Inventors: Yun Sup Ann, Hyunsang Kim
-
Patent number: 11961621
Abstract: A method includes receiving patient health data; determining a score using a trained machine learning model; determining a threshold value using an adaptive threshold tuning learning model; comparing the score to the threshold value; and generating an alarm. A computing system includes a processor; and a memory having stored thereon instructions that, when executed by the processor, cause the computing system to: receive patient health data; determine a score using a trained machine learning model; determine a threshold value using an adaptive threshold tuning learning model; compare the score to the threshold value; and generate an alarm. A non-transitory computer readable medium includes program instructions that when executed, cause a computer to: receive patient health data; determine a score using a trained machine learning model; determine a threshold value using an adaptive threshold tuning learning model; compare the score to the threshold value; and generate an alarm.
Type: Grant
Filed: February 10, 2023
Date of Patent: April 16, 2024
Assignee: REGENTS OF THE UNIVERSITY OF MICHIGAN
Inventors: Christopher Elliot Gillies, Daniel Francis Taylor, Kevin R. Ward, Fadi Islim, Richard Medlin
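The score-vs-adaptive-threshold flow above can be sketched in a few lines. Both models are stand-ins here: a fixed weighted sum plays the trained model and a running-mean-plus-margin rule plays the adaptive threshold tuner; neither is the patent's actual model:

```python
# Hypothetical stand-in for the trained ML model's deterioration score.
def risk_score(heart_rate, resp_rate):
    return 0.01 * heart_rate + 0.05 * resp_rate

# Hypothetical stand-in for the adaptive threshold tuning model: the
# threshold tracks the mean of recent scores plus a safety margin, so a
# chronically elevated but stable patient does not alarm continuously.
def adaptive_threshold(history, margin=0.2):
    return sum(history) / len(history) + margin

history = [1.0, 1.1, 0.9]                       # recent scores for this patient
score = risk_score(heart_rate=130, resp_rate=30)
alarm = score > adaptive_threshold(history)      # compare, then alarm if exceeded
```

The point of adapting the threshold per patient, rather than fixing it globally, is to trade fewer false alarms for sensitivity to genuine deterioration.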
-
Patent number: 11954824
Abstract: An image pre-processing method and an image processing apparatus for a fundoscopic image are provided. A region of interest (ROI) is obtained from a fundoscopic image to generate a first image. The ROI is focused on an eyeball in the fundoscopic image. A smoothing process is performed on the first image to generate a second image. A value difference between neighboring pixels in the second image is increased to generate a third image.Type: Grant
Filed: April 21, 2021
Date of Patent: April 9, 2024
Assignee: Acer Medical Inc.
Inventors: Yi-Jin Huang, Chin-Han Tsai, Ming-Ke Chen
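The two post-ROI steps (smoothing, then widening pixel differences) can be sketched on a single 1-D row of pixel values. The moving average and mean-deviation stretch below are generic stand-ins for whatever specific filters the patent uses:

```python
# Smoothing step: 3-tap moving average with edge clamping, suppressing
# sensor noise before contrast is enhanced.
def smooth(row):
    out = []
    for i in range(len(row)):
        window = row[max(0, i - 1): i + 2]
        out.append(sum(window) / len(window))
    return out

# Contrast step: increase the value difference between neighboring pixels
# by amplifying each pixel's deviation from the row mean (illustrative
# stand-in for the patent's enhancement).
def stretch(row, gain=2.0):
    mean = sum(row) / len(row)
    return [mean + gain * (v - mean) for v in row]

first = [10, 10, 50, 10, 10]     # ROI row with one bright vessel-like pixel
second = smooth(first)            # "second image"
third = stretch(second)           # "third image"
```

Enhancing contrast only after smoothing keeps the stretch from amplifying noise along with vessel boundaries.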
-
Patent number: 11947668
Abstract: In some embodiments, an apparatus includes a memory and a processor. The processor is configured to extract a set of features from a potentially malicious file and provide the set of features as an input to a normalization layer of a neural network. The processor is configured to implement the normalization layer by calculating a set of parameters associated with the set of features and normalizing the set of features based on the set of parameters to define a set of normalized features. The processor is further configured to provide the set of normalized features and the set of parameters as inputs to an activation layer of the neural network such that the activation layer produces an output based on the set of normalized features and the set of parameters. The output can be used to produce a maliciousness classification of the potentially malicious file.
Type: Grant
Filed: October 12, 2018
Date of Patent: April 2, 2024
Assignee: Sophos Limited
Inventor: Richard Harang
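The distinctive detail above is that the normalization layer passes both the normalized features and the computed parameters forward. A generic batch-norm-style sketch of that layer, assuming mean and standard deviation as the parameters (not necessarily the patent's choice):

```python
# Normalization layer: compute parameters (mean, std) over the feature
# vector, normalize with them, and forward BOTH the normalized features and
# the parameters, so the activation layer retains the raw scale information
# that normalization would otherwise discard.

def normalize(features, eps=1e-6):
    mean = sum(features) / len(features)
    var = sum((f - mean) ** 2 for f in features) / len(features)
    std = (var + eps) ** 0.5
    normalized = [(f - mean) / std for f in features]
    return normalized, (mean, std)

feats = [2.0, 4.0, 6.0]           # features extracted from the file
normed, params = normalize(feats)
# The activation layer would receive both `normed` and `params` as inputs.
```

Forwarding `params` matters for malware features, where absolute magnitudes (e.g. an unusually large section size) can themselves be discriminative.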
-
Patent number: 11948356
Abstract: A context-based object classifying model is applied to a set of object location representations (12, 14), derived from an object detection applied to a frame (10) of a video stream, to obtain a context-adapted classification probability for each object location representation (12, 14). Each object location representation (12, 14) defines a region of the frame (10) and each context-adapted classification probability represents a likelihood that the region comprises an object (11, 13). The model is generated based on object location representations from previous frames of the video stream. It is determined whether the region defined by the object location representation (12, 14) comprises an object (11, 13) based on the context-adapted classification probability and a detection probability. The detection probability is derived from the object detection and represents a likelihood that the region defined by the object location representation (12, 14) comprises an object (11, 13).
Type: Grant
Filed: November 21, 2018
Date of Patent: April 2, 2024
Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
Inventors: Volodya Grancharov, Arvind Thimmakkondu Hariraman
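The final decision above fuses two probabilities: the context-adapted one (from previous frames) and the detector's own. A minimal sketch; the geometric-mean fusion and 0.5 cutoff are illustrative choices, not taken from the patent:

```python
# Fuse the context-adapted classification probability with the per-frame
# detection probability, and keep the region only if the fused likelihood
# clears a cutoff.

def is_object(context_prob, detection_prob, cutoff=0.5):
    fused = (context_prob * detection_prob) ** 0.5   # geometric mean
    return fused > cutoff

# A borderline detection (0.55) is kept when context from previous frames
# strongly supports an object in that region (0.9) ...
keep = is_object(0.9, 0.55)
# ... and rejected when the context model contradicts it (0.1).
drop = is_object(0.1, 0.55)
```

The benefit is temporal consistency: regions where objects repeatedly appeared lend credibility to weak detections, and one-off detector noise gets suppressed.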
-
Patent number: 11941898
Abstract: A three-dimensional position and posture recognition device speeds estimation of a position posture and a gripping coordinate posture of a gripping target product. The device includes: a sensor unit configured to measure a distance between an image of an object and the object; and a processing unit configured to calculate an object type included in the image, read model data of each object from the external memory, and create structured model data having a resolution set for each object from the model data, generate measurement point cloud data of a plurality of resolutions from information on a distance between an image of the object and the object, perform a K neighborhood point search using the structured model data and the measurement point cloud data, and perform three-dimensional position recognition processing of the object by rotation and translation estimation regarding a point obtained from the K neighborhood point search.
Type: Grant
Filed: December 19, 2019
Date of Patent: March 26, 2024
Assignee: HITACHI, LTD.
Inventors: Atsutake Kosuge, Takashi Oshima, Yukinori Akamine, Keisuke Yamamoto
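At the core of the matching step above is the K-neighborhood point search between measured points and model points. A brute-force sketch (the patent's structured, multi-resolution model data exists precisely to avoid this exhaustive scan, which is shown here only for clarity):

```python
# K-neighborhood point search: for one measured 3-D point, return the K
# closest points from the model point cloud by squared Euclidean distance.

def k_nearest(point, model_points, k):
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return sorted(model_points, key=lambda m: dist2(point, m))[:k]

model = [(0, 0, 0), (1, 0, 0), (5, 5, 5), (0.5, 0, 0)]  # toy model cloud
neighbors = k_nearest((0.4, 0, 0), model, k=2)
```

Rotation and translation between the measured cloud and the model would then be estimated from such correspondences, ICP-style.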
-
Patent number: 11941911
Abstract: The present teaching relates to method, system, medium, and implementations for detecting liveness. When an image is received with visual information claimed to represent a palm of a person, a region of interest (ROI) in the image that corresponds to the palm is identified. Each of a plurality of fake palm detectors individually generates an individual decision on whether the ROI corresponds to a specific type of fake palm that the fake palm detector is to detect. Such individual decisions from the plurality of fake palm detectors are combined to derive a liveness detection decision with respect to the ROI.
Type: Grant
Filed: January 28, 2022
Date of Patent: March 26, 2024
Assignee: ARMATURA LLC
Inventors: Zhinong Li, Xiaowu Zhang
-
Patent number: 11941805
Abstract: The present disclosure relates to systems and methods for image processing. The methods may include obtaining imaging data of a subject, generating a first image based on the imaging data, and generating at least two intermediate images based on the first image. At least one of the at least two intermediate images may be generated based on a machine learning model. And the at least two intermediate images may include a first intermediate image and a second intermediate image. The first intermediate image may include feature information of the first image, and the second intermediate image may have lower noise than the first image. The methods may further include generating, based on the first intermediate image and at least one of the first image or the second intermediate image, a target image of the subject.
Type: Grant
Filed: July 17, 2021
Date of Patent: March 26, 2024
Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
Inventor: Yang Lyu
-
Patent number: 11941080
Abstract: A system and method for learning human activities from video demonstrations using video augmentation is disclosed. The method includes receiving original videos from one or more data sources. The method includes processing the received original videos using one or more video augmentation techniques to generate a set of augmented videos. Further, the method includes generating a set of training videos by combining the received original videos with the generated set of augmented videos. Also, the method includes generating a deep learning model for the received original videos based on the generated set of training videos. Further, the method includes learning the one or more human activities performed in the received original videos by deploying the generated deep learning model. The method includes outputting the learnt one or more human activities performed in the original videos.
Type: Grant
Filed: May 20, 2021
Date of Patent: March 26, 2024
Assignee: Retrocausal, Inc.
Inventors: Quoc-Huy Tran, Muhammad Zeeshan Zia, Andrey Konin, Sanjay Haresh, Sateesh Kumar
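The augmentation-then-combine step above can be sketched by treating a video as a list of frames (each a list of pixel rows). Horizontal flip and brightness shift are common augmentation choices; the patent does not commit to specific transforms:

```python
# Video augmentation sketch: generate transformed copies of each original
# video, then form the training set as originals plus augmentations.

def hflip(video):
    """Mirror every frame left-to-right; the activity is unchanged."""
    return [[row[::-1] for row in frame] for frame in video]

def brighten(video, delta=10):
    """Shift every pixel value to simulate different lighting."""
    return [[[p + delta for p in row] for row in frame] for frame in video]

originals = [[[[1, 2], [3, 4]]]]          # one single-frame 2x2 "video"
augmented = [hflip(v) for v in originals] + [brighten(v) for v in originals]
training_set = originals + augmented       # combined set for model training
```

The point is to make the activity model robust to viewpoint and lighting variation without collecting additional demonstrations.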
-
Patent number: 11941783
Abstract: Methods and systems include receiving, at de-ringing circuitry of a display pipeline of an electronic device, scaled image content based on image data. The de-ringing circuitry also receives a fallback scaler output. The de-ringing circuitry determines whether the image data has a change frequency greater than a threshold. In response to the change frequency being above the threshold, the de-ringing circuitry determines a weight and, based at least in part on the weight, blends the scaled image content and the fallback scaler output.
Type: Grant
Filed: January 14, 2021
Date of Patent: March 26, 2024
Assignee: Apple Inc.
Inventors: Mahesh B. Chappalli, Vincent Z. Young
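The blend itself is a per-pixel weighted combination of the two scaler outputs. A minimal sketch, where the weight choice for a high-frequency region is an illustrative stand-in for the circuitry's actual rule:

```python
# Blend the sharp (but ring-prone) primary scaler output with the softer
# fallback scaler output. In regions whose change frequency exceeds the
# threshold, a lower weight leans the result toward the fallback to
# suppress ringing artifacts.

def blend(scaled, fallback, weight):
    """Per-pixel convex combination; weight=1.0 keeps the scaled content."""
    return [weight * s + (1 - weight) * f for s, f in zip(scaled, fallback)]

scaled = [200.0, 40.0]      # primary scaler output near a sharp edge
fallback = [180.0, 60.0]    # fallback scaler output for the same pixels
out = blend(scaled, fallback, weight=0.25)   # high-frequency region: favor fallback
```

In low-frequency regions the weight would move toward 1.0, preserving the primary scaler's sharpness where ringing is not a risk.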
-
Patent number: 11935242
Abstract: The present disclosure provides for crop yield estimation by identifying, via image processing, a field in which a crop is grown; identifying a plurality of regions within the field; identifying, by processing growth metrics via a model, a plurality of data collection points in the plurality of regions, wherein a given data collection point of the plurality of data collection points within a given region of the plurality of regions is identified by multivariate analysis as representative of growing conditions in the given region; receiving in-field data linked to the data collection points of the plurality; and predicting a yield for the crop in the field based on the in-field data.
Type: Grant
Filed: March 9, 2021
Date of Patent: March 19, 2024
Assignee: International Business Machines Corporation
Inventors: Bruno Silva, Renato Luiz De Freitas Cunha, Ana Paula Appel, Eduardo Rocha Rodrigues
-
Patent number: 11935254
Abstract: System, methods, and other embodiments described herein relate to improving depth prediction for objects within a low-light image using a style model. In one embodiment, a method includes encoding, by a style model, an input image to identify content information. The method also includes decoding, by the style model, the content information into an albedo component and a shading component. The method also includes generating, by the style model, a synthetic image using the albedo component and the shading component. The method also includes providing the synthetic image to a depth model.
Type: Grant
Filed: June 9, 2021
Date of Patent: March 19, 2024
Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
Inventors: Rui Guo, Xuewei Qi, Kentaro Oguchi, Kareem Metwaly
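The recomposition step above follows the standard intrinsic-image relation: an image is the per-pixel product of its albedo (reflectance) and shading components. A minimal sketch of generating the synthetic image, assuming that relation (the patent's style model may differ in detail):

```python
# Recompose a synthetic image as the per-pixel product of the decoded
# albedo component and a shading component. Replacing the low-light
# shading with a brighter one relights the scene while keeping the
# surface reflectances (and therefore the depth-relevant content) intact.

def recompose(albedo, shading):
    return [[a * s for a, s in zip(arow, srow)]
            for arow, srow in zip(albedo, shading)]

albedo = [[0.8, 0.4]]      # reflectance decoded from the low-light input
shading = [[0.5, 1.0]]     # shading; raising these values simulates daylight
synthetic = recompose(albedo, shading)  # fed to the depth model
```

The motivation is that a depth model trained on well-lit imagery performs better on the relit synthetic image than on the raw low-light input.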
-
Patent number: 11936843
Abstract: Techniques are described for converting a 2D map into a 3D mesh. The 2D map of the environment is generated using data captured by a 2D scanner. Further, a set of features is identified from a subset of panoramic images of the environment that are captured by a camera. Further, the panoramic images from the subset are aligned with the 2D map using the features that are extracted. Further, 3D coordinates of the features are determined using 2D coordinates from the 2D map and a third coordinate based on a pose of the camera. The 3D mesh is generated using the 3D coordinates of the features.
Type: Grant
Filed: May 20, 2021
Date of Patent: March 19, 2024
Assignee: FARO Technologies, Inc.
Inventors: Mark Brenner, Aleksej Frank, Ahmad Ramadneh, Mufassar Waheed, Oliver Zweigle
-
Patent number: 11928799
Abstract: An electronic device includes a plurality of cameras, and at least one processor connected to the plurality of cameras. The at least one processor is configured to, based on a first user command to obtain a live view image, segment an image frame obtained via a camera among the plurality of cameras into a plurality of regions based on a brightness of pixels and an object included in the image frame; obtain a plurality of camera parameter setting value sets, each including a plurality of parameter values with respect to the plurality of regions; based on a second user command to capture the live view image, obtain a plurality of image frames using the plurality of camera parameter setting value sets and at least one camera among the plurality of cameras; and obtain an image frame by merging the plurality of obtained image frames.
Type: Grant
Filed: June 4, 2021
Date of Patent: March 12, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Ashish Chopra, Bapi Reddy Karri