Patents Examined by Vaisali Rao Koppolu
-
Patent number: 12175650
Abstract: A method includes obtaining an image data set that depicts semiconductor components, and applying a hierarchical bricking to the image data set. In this case, the bricking includes a plurality of bricks on a plurality of hierarchical levels. The bricks on different hierarchical levels have different image element sizes of corresponding image elements.
Type: Grant
Filed: July 29, 2021
Date of Patent: December 24, 2024
Assignee: Carl Zeiss SMT GmbH
Inventors: Jens Timo Neumann, Abhilash Srikantha, Christian Wojek, Thomas Korb
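The hierarchical bricking described above can be sketched as a simple multi-level tiling, where the brick edge length (and hence the number of image elements per brick) changes per level. The function below is illustrative only; the brick sizes, level count, and names are assumptions, not taken from the patent.

```python
import numpy as np

def hierarchical_bricking(image, num_levels=3, base_brick=4):
    """Split a 2-D image into bricks on several hierarchical levels.

    On level k the brick edge length doubles, so bricks on different
    levels cover differently sized image elements. Illustrative sketch.
    """
    h, w = image.shape
    levels = {}
    for k in range(num_levels):
        size = base_brick * (2 ** k)
        bricks = []
        for y in range(0, h, size):
            for x in range(0, w, size):
                bricks.append(image[y:y + size, x:x + size])
        levels[k] = bricks
    return levels

img = np.arange(64).reshape(8, 8)
levels = hierarchical_bricking(img, num_levels=2, base_brick=4)
# level 0: four 4x4 bricks; level 1: one 8x8 brick
```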
-
Patent number: 12175771
Abstract: A control system for a vehicle includes: a camera mounted on the vehicle and configured to take an image of an occupant of the vehicle; an anti-droplet protective equipment providing device mounted on the vehicle and configured to provide anti-droplet protective equipment to the occupant; a determination unit configured to determine, based on the image taken by the camera, whether the occupant is wearing the anti-droplet protective equipment; and a provision control unit configured to provide the anti-droplet protective equipment to the occupant via the providing device when the determination unit determines that the occupant is not wearing it.
Type: Grant
Filed: March 24, 2022
Date of Patent: December 24, 2024
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Ryota Tomizawa, Shozo Takaba, Ayako Shimizu, Hojung Jung, Daisuke Sato, Yasuhiro Kobatake
-
Patent number: 12169916
Abstract: An inpainting method includes obtaining an image including an object having a delicate shape and identifying a target region within the image, where the target region is adjacent to the object. The method also includes using a first mask to separate the image into a number of semantic categories and aggregating neighboring contexts for the target region based on the semantic categories. The method further includes restoring, based on the aggregated contexts, textures in the target region without affecting the delicate shape of the object. In addition, the method includes displaying a refined image including the restored textures in the target region and the object.
Type: Grant
Filed: October 15, 2021
Date of Patent: December 17, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Wenbo Li, Hongxia Jin
-
Patent number: 12169975
Abstract: This specification relates to reconstructing three-dimensional (3D) scenes from two-dimensional (2D) images using a neural network. According to a first aspect of this specification, there is described a method for creating a three-dimensional reconstruction of a scene with multiple objects from a single two-dimensional image, the method comprising: receiving a single two-dimensional image; identifying all objects in the image to be reconstructed and identifying the type of said objects; estimating a three-dimensional representation of each identified object; estimating a three-dimensional plane physically supporting all three-dimensional objects; and positioning all three-dimensional objects in space relative to the supporting plane.
Type: Grant
Filed: June 17, 2020
Date of Patent: December 17, 2024
Assignee: SNAP INC.
Inventors: Riza Alp Guler, Georgios Papandreou, Iason Kokkinos
-
Joint minimum entropy method for simultaneous processing and fusion of multi-physics data and images
Patent number: 12165291
Abstract: A method for the simultaneous imaging of different physical properties of an examined medium from multi-physics datasets and for digital enhancement and restoration of multiple multidimensional digital images is described. The method introduces nonnegative joint entropy determined as a joint weighted average of the logarithm of the corresponding density of the model parameters and/or images and/or their attributes. The joint entropy measures are introduced as additional constraints, and their minimization results in enforcing of the order and consistency between the different model parameters and/or multiple images and/or their transforms. The method does not require a priori knowledge about specific physical, analytical, empirical or statistical relationships between the different model parameters and/or multiple images and their attributes, nor does the method require a priori knowledge about specific geometric or structural relationships between different model parameters, images, and/or their attributes.
Type: Grant
Filed: June 9, 2021
Date of Patent: December 10, 2024
Assignee: TECHNOIMAGING, LLC
Inventor: Michael S. Zhdanov
-
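As a rough illustration only (the abstract does not give the patent's exact functional), a joint entropy measure over several sets of model parameters $m^{(1)},\dots,m^{(K)}$ with joint density $p$ can be written as a weighted average of the logarithm of that density, constrained to be nonnegative:

```latex
S_{\mathrm{joint}}\bigl(m^{(1)},\dots,m^{(K)}\bigr)
  = -\int p\bigl(m^{(1)},\dots,m^{(K)}\bigr)\,
      \ln p\bigl(m^{(1)},\dots,m^{(K)}\bigr)\,dm
  \;\ge\; 0,
```

with the minimization of $S_{\mathrm{joint}}$ added as an extra constraint so that the different model parameters (or images) are driven toward mutual consistency. This generic form is an assumption for illustration, not the patented formulation.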
Patent number: 12159378
Abstract: Disclosed is a high-contrast minimum variance imaging method based on deep learning. To address the poor contrast of traditional minimum variance ultrasonic imaging, a deep neural network is applied to suppress the off-axis scattering signal in the channel data received by an ultrasonic transducer; combining the network with minimum variance beamforming yields an ultrasonic image with higher contrast while preserving the resolution of the minimum variance method. In the present method, compared with the traditional minimum variance imaging method, after the apodization weight is calculated, the channel data is first processed by the deep neural network and then weighted and stacked to obtain the pixel value of each target imaging point, thereby forming a complete ultrasonic image.
Type: Grant
Filed: October 25, 2019
Date of Patent: December 3, 2024
Assignee: SOUTH CHINA UNIVERSITY OF TECHNOLOGY
Inventors: Junying Chen, Renxin Zhuang
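The minimum variance (Capon) apodization weight that this method builds on is computed from the channel covariance. A minimal sketch, assuming delay-aligned channel data (so the steering vector is all ones) and omitting the deep-network denoising stage; the diagonal-loading factor is an illustrative choice:

```python
import numpy as np

def mv_apodization(channel_data, diag_load=1e-3):
    """Minimum-variance (Capon) apodization weights.

    channel_data: (num_elements, num_snapshots) delay-aligned samples.
    Diagonal loading stabilises the covariance inverse. Illustrative
    sketch, not the patented pipeline.
    """
    n = channel_data.shape[0]
    R = channel_data @ channel_data.conj().T / channel_data.shape[1]
    R = R + diag_load * np.trace(R).real / n * np.eye(n)
    a = np.ones(n)                    # steering vector after delay alignment
    Ri_a = np.linalg.solve(R, a)      # R^{-1} a without explicit inversion
    return Ri_a / (a.conj() @ Ri_a)   # w = R^{-1}a / (a^H R^{-1} a)

rng = np.random.default_rng(0)
w = mv_apodization(rng.standard_normal((8, 16)))
# distortionless constraint: a^H w = sum(w) = 1 for the unit steering vector
```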
-
Patent number: 12158925
Abstract: A method for an online mapping system includes localizing a location of an ego vehicle relative to an offline feature map. The method also includes querying surrounding features of the ego vehicle based on the offline feature map. The method further includes generating a probabilistic map regarding the surrounding features of the ego vehicle queried from the offline feature map.
Type: Grant
Filed: December 10, 2020
Date of Patent: December 3, 2024
Inventors: Shunsho Kaku, Jeffrey Michael Walls, Ryan Wolcott
-
Patent number: 12136271
Abstract: Aspects of the disclosure relate to controlling a vehicle. For instance, using a camera, a first camera image including a first object may be captured, and a first bounding box for the first object and a distance to the first object may be identified. A second camera image including a second object may be captured, and a second bounding box for the second object and a distance to the second object may be identified. Whether the first object is the second object may be determined using a plurality of models: a first model to compare the visual similarity of the two bounding boxes, a second model to compare a three-dimensional location based on the distance to the first object with a three-dimensional location based on the distance to the second object, and a third model to compare the results from the first and second models. The vehicle may be controlled in an autonomous driving mode based on a result of the third model.
Type: Grant
Filed: May 13, 2021
Date of Patent: November 5, 2024
Assignee: Waymo LLC
Inventors: Ruichi Yu, Kang Li, Tao Han, Robert Cosgriff, Henrik Kretzschmar
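A toy version of fusing an appearance cue with a 3-D distance cue might look like the following; cosine similarity and the 50/50 weighting stand in for the patent's learned models and are purely illustrative:

```python
import numpy as np

def same_object_score(emb1, emb2, loc1, loc2, dist_scale=1.0):
    """Combine a visual-similarity cue and a 3-D distance cue into one
    score in [-1, 1]; higher means more likely the same object.

    emb1/emb2: appearance embeddings of the two bounding-box crops.
    loc1/loc2: 3-D locations derived from the measured distances.
    Illustrative stand-in for the patent's first, second and third models.
    """
    visual = float(np.dot(emb1, emb2) /
                   (np.linalg.norm(emb1) * np.linalg.norm(emb2)))
    spatial = float(np.exp(-np.linalg.norm(np.asarray(loc1, float) -
                                           np.asarray(loc2, float)) / dist_scale))
    return 0.5 * visual + 0.5 * spatial

score = same_object_score([1.0, 0.0], [1.0, 0.0], (0, 0, 0), (0, 0, 0))
# identical appearance and identical location -> score 1.0
```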
-
Patent number: 12136200
Abstract: To replace text in a digital video image sequence, a system will process frames of the sequence to: define a region of interest (ROI) with original text in each of the frames; use the ROIs to select a reference frame from the sequence; select a target frame from the sequence; determine a transform function between the ROI of the reference frame and the ROI of the target frame; replace the original text in the ROI of the reference frame with replacement text to yield a modified reference frame ROI; and use the transform function to transform the modified reference frame ROI to a modified target frame ROI in which the original text is replaced with the replacement text. The system will then insert the modified target frame ROI into the target frame to produce a modified target frame. This process may repeat for other target frames of the sequence.
Type: Grant
Filed: June 30, 2021
Date of Patent: November 5, 2024
Assignee: CareAR Holdings LLC
Inventors: Vijay Kumar Baikampady Gopalkrishna, Raja Bala
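The transform function between the reference-frame ROI and the target-frame ROI could be estimated from matched point pairs. A least-squares affine fit is one simple stand-in (the abstract does not specify the transform family; a full system might use a homography instead):

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine transform mapping src points to dst points.
    Illustrative sketch of a 'transform function' between two ROIs."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # rows of [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solve A @ M ~= dst
    return M.T                                      # 2x3: [linear | translation]

def apply_affine(M, pts):
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]

src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(2, 3), (4, 3), (2, 5), (4, 5)]   # scale by 2, translate by (2, 3)
M = estimate_affine(src, dst)
warped = apply_affine(M, [(0.5, 0.5)])
# midpoint of the unit square maps to (3.0, 4.0)
```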
-
Patent number: 12130237
Abstract: A method for calibrating a camera of a mobile device for detecting an analyte in a sample. An image of an object is captured using the camera with an illumination source turned on. A first area is determined in the image which is affected by direct reflection of light originating from the illumination source and reflected by the object. A second area which does not substantially overlap with the first area is determined as a target area of a test strip. Also disclosed is a detection in which a sample is applied to a test strip and a visual indication is provided to position the test strip relative to the camera to thereby locate the test field of the strip in the target area. An image of the test field is captured using the camera while the illumination source is turned on, and analyte concentration is determined from the image.
Type: Grant
Filed: December 10, 2020
Date of Patent: October 29, 2024
Assignee: Roche Diabetes Care, Inc.
Inventors: Timo Klein, Max Berg
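A minimal sketch of the two-area logic, assuming a grayscale image where direct reflection saturates pixels; the saturation threshold and window size are illustrative choices, not values from the patent:

```python
import numpy as np

def find_target_area(gray, win=4, glare_thresh=240):
    """Mark the direct-reflection (glare) area as saturated pixels, then
    return (y, x, h, w) of the first win x win window containing no
    glare, to serve as the target area for the test strip field."""
    glare = gray >= glare_thresh
    h, w = gray.shape
    for y in range(0, h - win + 1):
        for x in range(0, w - win + 1):
            if not glare[y:y + win, x:x + win].any():
                return (y, x, win, win)
    return None  # no glare-free window of this size

img = np.full((8, 8), 100, dtype=np.uint8)
img[0:3, 0:3] = 255          # simulated reflection of the flash
area = find_target_area(img)
# first 4x4 window clear of the glare spot: (0, 3, 4, 4)
```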
-
Patent number: 12106606
Abstract: The embodiments of the present disclosure disclose a method for determining the direction of gaze. A specific implementation of the method includes: obtaining a face or eye image of a target subject and establishing a feature extraction network; using an adversarial training method to optimize the feature extraction network, implicitly removing the gaze-irrelevant features it extracts so that the network extracts gaze-related features from the face or eye image; and determining the target gaze direction based on the gaze-related features. This implementation separates the gaze-irrelevant features contained in the image from the gaze-related features, so that the extracted features retain only gaze-related information, further improving the accuracy and stability of the determined direction of gaze.
Type: Grant
Filed: December 24, 2021
Date of Patent: October 1, 2024
Assignee: Beihang University
Inventors: Feng Lu, Yihua Cheng
-
Patent number: 12094137
Abstract: An apparatus comprises an interface and a processor. The interface may be configured to receive pixel data.
Type: Grant
Filed: January 5, 2022
Date of Patent: September 17, 2024
Assignee: Ambarella International LP
Inventors: Sicong Lyu, Liangliang Wang, Bo Yu
-
Patent number: 12086965
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for accurately restoring missing pixels within a hole region of a target image utilizing multi-image inpainting techniques based on incorporating geometric depth information. For example, in various implementations, the disclosed systems utilize a depth prediction of a source image as well as camera relative pose parameters. Additionally, in some implementations, the disclosed systems jointly optimize the depth rescaling and camera pose parameters before generating the reprojected image to further increase the accuracy of the reprojected image. Further, in various implementations, the disclosed systems utilize the reprojected image in connection with a content-aware fill model to generate a refined composite image that includes the target image having a hole, where the hole is filled in based on the reprojected image of the source image.
Type: Grant
Filed: November 5, 2021
Date of Patent: September 10, 2024
Assignee: Adobe Inc.
Inventors: Yunhan Zhao, Connelly Barnes, Yuqian Zhou, Sohrab Amirghodsi, Elya Shechtman
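Depth-based reprojection from the source view into the target view can be sketched with a pinhole camera model. The intrinsics and pose below are illustrative, and the patent's joint optimisation of depth rescaling and pose is omitted:

```python
import numpy as np

def reproject(pix, depth, K, R, t):
    """Reproject one source pixel into the target view.

    pix: (u, v) source pixel; depth: predicted depth at that pixel;
    K: 3x3 intrinsics; R, t: relative pose from source to target camera.
    Illustrative pinhole-model sketch, not the patented system.
    """
    p = np.array([pix[0], pix[1], 1.0])
    X = depth * (np.linalg.inv(K) @ p)   # back-project to a 3-D point
    Xt = R @ X + t                        # move into the target camera frame
    q = K @ Xt                            # project into the target image
    return q[:2] / q[2]                   # perspective divide

K = np.array([[100.0, 0.0, 64.0],
              [0.0, 100.0, 64.0],
              [0.0, 0.0, 1.0]])
# identity rotation, 1-unit sideways translation (illustrative pose)
q = reproject((64, 64), depth=2.0, K=K, R=np.eye(3), t=np.array([1.0, 0.0, 0.0]))
# principal-point pixel at depth 2 shifts to (114.0, 64.0)
```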
-
Patent number: 12079967
Abstract: An image processing method and system, and a computer readable storage medium. The method comprises: obtaining an image to be processed (S101); performing grayscale processing on the image to be processed to obtain a grayscale image, and performing blurring processing on the image to be processed to obtain a first blurred image (S102); performing binarization processing on the image to be processed according to the grayscale image and the first blurred image to obtain a binarized image (S103); performing expansion processing on grayscale values of high-value pixel points in the binarized image to obtain an expanded image (S104); performing sharpening processing on the expanded image to obtain a sharp image (S105); adjusting the contrast of the sharp image to obtain a contrast image (S106); and using the grayscale image as a guided image to perform guided filter processing on the contrast image to obtain a target image (S107).
Type: Grant
Filed: April 25, 2021
Date of Patent: September 3, 2024
Assignee: Hangzhou Glority Software Limited
Inventors: Qingsong Xu, Qing Li
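The pipeline above can be sketched in a few lines of array code. Every kernel size and threshold below is an illustrative assumption, and the guided-filter step (S107) is omitted for brevity:

```python
import numpy as np

def box_blur(img, k=3):
    """k x k box blur with edge padding (stand-in for the unspecified
    blurring in S102)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def enhance(gray):
    """Sketch of S102-S106 on an already-grayscale image: blur, binarize
    against the blurred image, dilate, sharpen, stretch contrast."""
    blurred = box_blur(gray)                               # S102
    binary = (gray > blurred).astype(float) * 255.0        # S103 adaptive threshold
    dilated = np.maximum.reduce(                           # S104 expansion
        [np.roll(binary, s, axis=a) for s in (-1, 0, 1) for a in (0, 1)])
    # (np.roll wraps at the borders, which is acceptable for a sketch)
    sharp = np.clip(2 * dilated - box_blur(dilated), 0, 255)  # S105 unsharp mask
    lo, hi = sharp.min(), sharp.max()                      # S106 contrast stretch
    return (sharp - lo) / max(hi - lo, 1.0) * 255.0

rng = np.random.default_rng(1)
out = enhance(rng.integers(0, 256, (16, 16)).astype(float))
```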
-
Patent number: 12062164
Abstract: A method of manufacturing a semiconductor device that includes providing a substrate having a pattern formed thereon. A scanning electron microscope (SEM) image is generated that includes a boundary image showing an edge of the pattern. A blended image is generated by performing at least one blending operation on the SEM image and a background image aligned with the boundary image. Contour data is generated by binarizing the blended image on a basis of a threshold value. The threshold value is determined by a critical dimension of the pattern.
Type: Grant
Filed: December 10, 2021
Date of Patent: August 13, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Min-Cheol Kang, Dong Hoon Kuk
-
Patent number: 12062189
Abstract: The embodiments of the disclosure provide an object tracking method and an object tracking device. The method includes: in a tracking state, controlling an image-capturing device to capture a first image of a specific object, wherein a plurality of light emitting elements are disposed on the specific object, and at least one first light emitting element of the light emitting elements is on and captured in the first image; determining a first object pose of the specific object based on the at least one first light emitting element of the light emitting elements in the first image; obtaining at least one second light emitting element of the light emitting elements based on the first object pose, wherein the at least one second light emitting element is estimated to be uncapturable by the image-capturing device; and turning off the at least one second light emitting element.
Type: Grant
Filed: August 16, 2021
Date of Patent: August 13, 2024
Assignee: HTC Corporation
Inventors: Yuan-Tung Chen, Jyun-Jhong Lin
-
Patent number: 11971956
Abstract: Techniques for training a visual neural network are described. In particular, the training of the visual neural network may be performed using one or more contrastive learning loss functions, including one or more of: a visual-to-visual contrastive learning function using the visual neural network on positive and negative video clips according to a first loss function; a secondary-to-secondary contrastive learning function using a secondary neural network on secondary positive and negative information generated from the positive and negative video clips according to a second loss function; and a secondary-to-visual contrastive learning function according to a third loss function, using the visual neural network on the positive and negative video clips and using the secondary neural network on secondary positive and negative information generated from those clips.
Type: Grant
Filed: August 3, 2021
Date of Patent: April 30, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Fanyi Xiao, Joseph P. Tighe, Davide Modolo
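A generic InfoNCE-style contrastive loss illustrates the kind of function such losses commonly take; the abstract does not specify the exact loss functions, so everything below is an assumed, conventional formulation:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss for one anchor embedding against
    one positive and several negative embeddings (generic formulation,
    not the patent's exact losses)."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                     # numerical stability
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))

# anchor matches the positive exactly; negatives point elsewhere
loss = info_nce(np.array([1.0, 0.0]), np.array([1.0, 0.0]),
                [np.array([0.0, 1.0]), np.array([-1.0, 0.0])])
# loss is near zero because the positive dominates the softmax
```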
-
Patent number: 11960027
Abstract: A LIDAR data based object recognition apparatus merges segments over-segmented in a LIDAR data based segmentation process. The apparatus includes a segment generator generating a plurality of segments by grouping points acquired from a LIDAR sensor. A target segment selector selects a target segment that is a base for merging from the plurality of segments, and a segment merging determination unit checks whether segments other than the target segment are mergeable segments and determines whether to merge the target segment and the mergeable segments based on attribute information of the target segment and the mergeable segments. A segment merger merges the target segment and the mergeable segments and outputs a merged segment.
Type: Grant
Filed: October 19, 2020
Date of Patent: April 16, 2024
Assignees: Hyundai Motor Company, Kia Motors Corporation
Inventor: Mu Gwan Jeong
-
Patent number: 11941862
Abstract: An apparatus acquires a low-noise image, which serves as training data for an image processing model, and a plurality of high-noise images that correspond to the same scene as the low-noise image and each have a different noise pattern. The apparatus calculates the error between the low-noise image and each of a plurality of estimated output values, each acquired by inputting a different one of the high-noise images to the image processing model. The apparatus then calculates an anti-noise stability based on the errors between the plurality of estimated output values, and trains the image processing model using a loss function that includes both the errors against the low-noise image and the anti-noise stability.
Type: Grant
Filed: March 23, 2021
Date of Patent: March 26, 2024
Assignee: CANON KABUSHIKI KAISHA
Inventor: Yasuhiro Okuno
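The combined loss can be sketched as a fidelity term against the low-noise image plus a stability term penalising disagreement between the estimates from differently-noised inputs; the 0.5 weighting is an illustrative assumption:

```python
import numpy as np

def noise_robust_loss(estimates, clean, stability_weight=0.5):
    """Fidelity-plus-stability training loss (illustrative sketch).

    estimates: model outputs for differently-noised versions of one scene.
    clean: the low-noise target image.
    """
    estimates = [np.asarray(e, float) for e in estimates]
    clean = np.asarray(clean, float)
    # errors between each estimate and the low-noise image
    fidelity = sum(np.mean((e - clean) ** 2) for e in estimates)
    # anti-noise stability: pairwise disagreement between the estimates
    stability = sum(np.mean((a - b) ** 2)
                    for i, a in enumerate(estimates)
                    for b in estimates[i + 1:])
    return float(fidelity + stability_weight * stability)

loss = noise_robust_loss([[1.0, 1.0], [1.0, 1.0]], [1.0, 1.0])
# identical, perfect estimates -> zero loss
```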
-
Patent number: 11915336
Abstract: The present disclosure provides a method of embedding a watermark on a Joint Photographic Experts Group (JPEG) image. The method includes: performing an entropy decoding on the JPEG image to generate quantized discrete cosine transform (DCT) coefficients; determining target bits in a bit plane of the quantized DCT coefficients on the basis of a watermark-embedding table (WET); and embedding a watermark based on metadata of the JPEG image in the target bits. Also, the present disclosure provides a method of verifying integrity of the image by using the embedded watermark.
Type: Grant
Filed: May 17, 2019
Date of Patent: February 27, 2024
Assignee: INDUSTRY-ACADEMIA COOPERATION GROUP OF SEJONG UNIVERSITY
Inventor: Oh Jin Kwon
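Bit-plane embedding in quantized DCT coefficients can be sketched with least-significant-bit substitution; the position list below stands in for the patent's watermark-embedding table (WET), and the LSB is only one possible choice of bit plane:

```python
import numpy as np

def embed_lsb(coeffs, bits, positions):
    """Write watermark bits into the least-significant bit plane of
    selected quantized DCT coefficients (illustrative sketch)."""
    out = coeffs.copy()
    for (i, j), b in zip(positions, bits):
        out[i, j] = (out[i, j] & ~1) | b   # clear LSB, then set it to b
    return out

def extract_lsb(coeffs, positions):
    """Read the watermark bits back for integrity verification."""
    return [int(coeffs[i, j] & 1) for (i, j) in positions]

# toy block of quantized DCT coefficients
coeffs = np.arange(16, dtype=np.int64).reshape(4, 4)
pos = [(0, 1), (1, 2), (3, 3)]           # hypothetical WET positions
marked = embed_lsb(coeffs, [1, 0, 1], pos)
# extracting from the marked block recovers the watermark bits
```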