Patents Examined by Geoffrey E Summers
-
Patent number: 11538166
Abstract: A semantic segmentation architecture comprising an asymmetric encoder-decoder structure, wherein the architecture further comprises an adapter for linking different stages of the encoder and the decoder. The adapter amalgamates information from both the encoder and the decoder for preserving and refining information between multiple levels of the encoder and decoder. In this way the adapter aggregates features from different levels and acts as an intermediary between the encoder and the decoder.
Type: Grant
Filed: November 30, 2020
Date of Patent: December 27, 2022
Assignee: NAVINFO EUROPE B.V.
Inventors: Elahe Arani, Shabbir Marzban, Andrei Pata, Bahram Zonooz
-
Patent number: 11528461
Abstract: Provided are a method and an apparatus for generating a virtual viewpoint image by: obtaining at least one input viewpoint image and warping pixels of the at least one input viewpoint image to a virtual viewpoint image coordinate system; mapping a patch to a first pixel of a plurality of pixels warped to the virtual viewpoint image coordinate system when a difference between a first depth value of the first pixel and a second depth value of a second pixel adjacent to the first pixel is less than or equal to a predetermined threshold, and mapping no patch to the first pixel when the difference is greater than the predetermined threshold; and generating the virtual viewpoint image by blending the plurality of pixels and/or the patch.
Type: Grant
Filed: October 31, 2019
Date of Patent: December 13, 2022
Assignee: Electronics and Telecommunications Research Institute
Inventors: Sangwoon Kwak, Joungil Yun
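The depth-difference test in the abstract above can be sketched as follows. This is a minimal 1-D illustration of the idea, not the patented method; the function and parameter names are hypothetical.

```python
def map_patches(depths, threshold):
    """For each warped pixel, decide whether a patch is mapped to it:
    a patch is mapped only when the depth difference to an adjacent
    pixel is less than or equal to the threshold (1-D sketch)."""
    decisions = []
    for i, d in enumerate(depths):
        # Compare against the next pixel; the last pixel falls back
        # to its previous neighbour.
        neighbour = depths[i + 1] if i + 1 < len(depths) else depths[i - 1]
        decisions.append(abs(d - neighbour) <= threshold)
    return decisions

# A depth discontinuity (1.1 -> 5.0) suppresses patch mapping at that pixel.
print(map_patches([1.0, 1.1, 5.0, 5.05], threshold=0.5))
# [True, False, True, True]
```

Pixels straddling a depth discontinuity get no patch, which avoids stretching foreground content across depth boundaries during blending.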
-
Patent number: 11527005
Abstract: A method of depth detection based on a plurality of video frames includes receiving a plurality of input frames including a first input frame, a second input frame, and a third input frame respectively corresponding to different capture times, convolving the first to third input frames to generate a first feature map, a second feature map, and a third feature map corresponding to the different capture times, calculating a temporal attention map based on the first to third feature maps, the temporal attention map including a plurality of weights corresponding to different pairs of feature maps from among the first to third feature maps, each weight of the plurality of weights indicating a similarity level of a corresponding pair of feature maps, and applying the temporal attention map to the first to third feature maps to generate a feature map with temporal attention.
Type: Grant
Filed: April 6, 2020
Date of Patent: December 13, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Haoyu Ren, Mostafa El-Khamy, Jungwon Lee
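The pairwise-similarity weighting described above can be sketched in a few lines. This is an illustrative simplification (pairing each map against the middle frame and combining with softmax weights), not the patented attention mechanism; all names are assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity level of a pair of feature maps, as a single scalar."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def temporal_attention(f1, f2, f3):
    """Weight three per-frame feature maps by their similarity to the
    middle frame and return a similarity-weighted combination."""
    maps = [f1, f2, f3]
    weights = np.array([cosine_similarity(f2, f) for f in maps])
    weights = np.exp(weights) / np.exp(weights).sum()  # softmax normalisation
    return sum(w * f for w, f in zip(weights, maps))

rng = np.random.default_rng(0)
f1, f2, f3 = (rng.standard_normal((8, 8)) for _ in range(3))
out = temporal_attention(f1, f2, f3)
print(out.shape)  # (8, 8)
```

Frames whose features agree with the reference frame contribute more, which damps the influence of frames affected by motion or occlusion.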
-
Patent number: 11527330
Abstract: An acquisition unit acquires a classification result obtained by classifying training assistants by performing a cluster analysis based on first rehabilitation data about rehabilitation performed by a trainee using the rehabilitation support system, the first rehabilitation data including at least assistant data indicating a training assistant and index data indicating a degree of recovery of the trainee. A learning unit generates a learning model, the learning model being configured to input second rehabilitation data including at least action data indicating an assisting action performed by the training assistant to assist the trainee, and to output action data for suggesting a next action to be performed by the training assistant. The learning unit generates the learning model by using, as teacher data, the second rehabilitation data for which pre-processing has been performed based on the classification result.
Type: Grant
Filed: June 12, 2020
Date of Patent: December 13, 2022
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Nobuhisa Otsuki, Issei Nakashima, Yoshie Nakanishi, Manabu Yamamoto, Natsuki Yamakami
-
Patent number: 11527014
Abstract: An illustrative scene capture system determines a set of two-dimensional (2D) feature pairs each representing a respective correspondence between particular features depicted in both a first intensity image from a first vantage point and a second intensity image from a second vantage point. Based on the set of 2D feature pairs, the system determines a set of candidate three-dimensional (3D) feature pairs for a first depth image from the first vantage point and a second depth image from the second vantage point. The system selects a subset of selected 3D feature pairs from the set of candidate 3D feature pairs in a manner configured to minimize an error associated with a transformation between the first depth image and the second depth image. Based on the subset of selected 3D feature pairs, the system manages calibration parameters for surface data capture devices that captured the intensity and depth images.
Type: Grant
Filed: November 24, 2020
Date of Patent: December 13, 2022
Assignee: Verizon Patent and Licensing Inc.
Inventors: Elena Dotsenko, Liang Luo, Tom Hsi Hao Shang, Vidhya Seran
-
Patent number: 11521411
Abstract: A system and method for providing multi-camera 3D body part labeling and performance metrics includes receiving 2D image data and 3D depth data from a plurality of image capture units (ICUs), each indicative of a scene viewed by the ICUs, the scene having at least one person, each ICU viewing the person from a different viewing position; determining 3D location data and a visibility confidence level for the body parts from each ICU, using the 2D image data and the 3D depth data from each ICU; transforming the 3D location data for the body parts from each ICU to a common reference frame for body parts having at least a predetermined visibility confidence level; averaging the transformed, visible 3D body part locations from each ICU; and determining a performance metric of at least one of the body parts using the averaged 3D body part locations. The person may be a player in a sports scene.
Type: Grant
Filed: October 22, 2020
Date of Patent: December 6, 2022
Assignee: DISNEY ENTERPRISES, INC.
Inventors: Jayadas Devassy, Peter Walsh
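The confidence-gated averaging step above can be sketched as follows, assuming the per-camera estimates have already been transformed to the common reference frame. This is a minimal illustration, not the patented system; the names and threshold are assumptions.

```python
import numpy as np

def fuse_body_part(estimates, threshold=0.5):
    """Average per-camera 3D estimates of one body part, keeping only
    those whose visibility confidence meets the threshold.

    estimates: list of (xyz position in common frame, visibility confidence).
    Returns the averaged position, or None if no camera sees the part.
    """
    visible = [pos for pos, conf in estimates if conf >= threshold]
    if not visible:
        return None
    return np.mean(visible, axis=0)

cams = [
    (np.array([1.0, 2.0, 3.0]), 0.9),   # clear view
    (np.array([1.2, 2.0, 2.8]), 0.8),   # clear view, slight disagreement
    (np.array([9.0, 9.0, 9.0]), 0.1),   # occluded view, discarded
]
fused = fuse_body_part(cams)  # mean of the two confident views
```

Discarding low-confidence views before averaging keeps an occluded camera's wild estimate from corrupting the fused location.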
-
Patent number: 11514608
Abstract: Provided are a fisheye camera calibration system, method and an electronic device. The system includes a hemispherical target, a fisheye camera and an electronic device. The hemispherical target includes a hemispherical inner surface and multiple markers provided on the hemispherical inner surface. The fisheye camera is used for photographing the hemispherical target and acquiring a target image, where the hemispherical target and the multiple markers provided on the hemispherical inner surface are captured in the target image. The electronic device is used for acquiring initial values of k1, k2, k3, k4, k5, u0, v0, mu and mv, and using a Levenberg-Marquardt algorithm to optimize the initial values of k1, k2, k3, k4, k5, u0, v0, mu and mv, so as to determine imaging model parameters of the fisheye camera.
Type: Grant
Filed: September 8, 2021
Date of Patent: November 29, 2022
Assignee: SICHUAN VISENSING TECHNOLOGY CO., LTD.
Inventors: Xianyu Su, Jia Ai, Shuangyun Shao
-
Patent number: 11508034
Abstract: Systems and methods for processing images receive an input image. The systems and methods provide the input image to a first module to increase a resolution of the input image to produce an upscaled image. The systems and methods detect white pixels in the input image. The systems and methods generate a mask associated with the input image. The mask includes mask bits that are set to mark the white pixels in the input image. The systems and methods upscale the mask to produce an upscaled mask matching a resolution of the upscaled image. The systems and methods identify target pixels of the upscaled image that correspond to the set mask bits in the upscaled mask. The systems and methods modify the upscaled image to produce an output image by replacing target pixels of the upscaled image with a replacement pixel having greater whiteness. The systems and methods output the output image.
Type: Grant
Filed: January 25, 2021
Date of Patent: November 22, 2022
Assignee: KYOCERA Document Solutions Inc.
Inventors: Sheng Li, Dongpei Su
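The mask-based whitening pipeline above can be sketched as follows. Nearest-neighbour upscaling and the white-level threshold are assumptions for illustration (the abstract does not specify either); all names are hypothetical.

```python
import numpy as np

def upscale_nn(img, factor):
    """Nearest-neighbour upscaling by an integer factor."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def whiten_upscaled(image, upscaled, white_level=250, replacement=255):
    """Mark white pixels in the low-res input, upscale the mask to match
    the upscaled image, and replace the corresponding target pixels with
    a whiter replacement pixel."""
    mask = image >= white_level                    # set mask bits for white pixels
    factor = upscaled.shape[0] // image.shape[0]
    up_mask = upscale_nn(mask, factor)             # mask at upscaled resolution
    out = upscaled.copy()
    out[up_mask] = replacement                     # whiten the target pixels
    return out

image = np.array([[10, 255],
                  [10, 10]], dtype=np.uint8)       # one white pixel
upscaled = upscale_nn(image, 2) - 5                # stand-in for a lossy upscaler
result = whiten_upscaled(image, upscaled)
```

This restores full whiteness in regions the upscaler dulled, which matters for document backgrounds in print pipelines.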
-
Patent number: 11466666
Abstract: The present disclosure provides a method and device for stitching wind turbine blade images, and a storage medium. The method includes performing edge detection on a plurality of images of the blade of the wind turbine to determine a blade region for each of the plurality of images; and for each pair of images among the plurality of images of the blade of the wind turbine, which are captured successively, stitching a front end of a former one of the pair of images captured successively and a rear end of a latter one of the pair of images captured successively, wherein the front end is far away from a root of the blade of the wind turbine, and the rear end is close to the root of the blade of the wind turbine.
Type: Grant
Filed: November 21, 2019
Date of Patent: October 11, 2022
Assignee: SHANGHAI CLOBOTICS TECHNOLOGY CO., LTD.
Inventors: Wenfeng Lin, Xun Liu, Yan Ke, George Christopher Yan
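The end-to-end stitching of successively captured strips can be sketched in one dimension. This is a toy illustration of overlap stitching with simple averaging, not the patented method; the overlap length and names are assumptions.

```python
import numpy as np

def stitch(front, rear, overlap):
    """Join two successively captured 1-D strips whose ends share
    `overlap` samples, blending the shared region by averaging."""
    blend = (front[-overlap:] + rear[:overlap]) / 2
    return np.concatenate([front[:-overlap], blend, rear[overlap:]])

a = np.array([1.0, 2.0, 3.0, 4.0])   # strip nearer the blade root
b = np.array([3.0, 4.0, 5.0, 6.0])   # next strip, overlapping by two samples
print(stitch(a, b, overlap=2))       # [1. 2. 3. 4. 5. 6.]
```

Real blade imagery would also need the per-image blade-region masks from the edge-detection step to align the overlap before blending.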
-
Patent number: 11449751
Abstract: The present disclosure provides a training method for a generative adversarial network, which includes: extracting a first-resolution sample image from a second-resolution sample image; separately providing a first input image and a second input image for a generative network to generate a first output image and a second output image respectively, the first input image including a first-resolution sample image and a first noise image, the second input image including the first-resolution sample image and a second noise image; separately providing the first output image and a second-resolution sample image for a discriminative network to output a first discrimination result and a second discrimination result; and adjusting parameters of the generative network to reduce a loss function. The present disclosure further provides an image processing method using the generative adversarial network, a computer device, and a computer-readable storage medium.
Type: Grant
Filed: September 25, 2019
Date of Patent: September 20, 2022
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
Inventors: Hanwen Liu, Dan Zhu, Pablo Navarrete Michelini
-
Patent number: 11443439
Abstract: An air-to-air background-oriented schlieren system in which reference frames are acquired concurrently with the image frames, recording a target aircraft from a sensor aircraft flying in formation while concurrently recording reference frames of the underlying terrain to provide a visually textured background as a reference. This auto-referencing method improves on the original AirBOS method by allowing a much more flexible and useful measurement, reducing the flight planning and piloting burden, and broadening the possible camera choices for imaging the visible density changes in air, caused by an airborne vehicle, that produce a refractive index change.
Type: Grant
Filed: March 16, 2020
Date of Patent: September 13, 2022
Assignee: U.S.A. as Represented by the Administrator of the National Aeronautics and Space Administration
Inventors: Daniel W Banks, James T Heineck
-
Patent number: 11430152
Abstract: Various embodiments are directed to a Pose Correction Engine ("Engine"). The Engine generates a reference image of the object of interest. The reference image portrays the object of interest oriented according to a first pose. The Engine receives a source image of an instance of the object. The source image portrays the instance of the object oriented according to a variation of the first pose. The Engine determines a difference between the first pose of the reference image and the variation of the first pose of the source image. The Engine identifies, based on the determined difference, one or more portions of a three-dimensional (3D) map of a shape of the object obscured by the variation of the first pose portrayed in the source image. The Engine generates a pose corrected image of the instance of the object that portrays at least a portion of the source image and at least the identified portion of the 3D map of the shape of the object.
Type: Grant
Filed: October 18, 2021
Date of Patent: August 30, 2022
Assignee: Entrupy Inc.
Inventors: Hemanth Kumar Sangappa, Aman Jaiswal, Rohan Sheelvant, Ashlesh Sharma
-
Patent number: 11430138
Abstract: Systems and methods for multi-frame video frame interpolation. Higher-order motion modeling, such as cubic motion modeling, achieves predictions of intermediate optical flow between multiple interpolated frames, assisted by relaxation of the constraints imposed by the loss function used in initial optical flow estimation. A temporal pyramidal optical flow refinement module performs coarse-to-fine refinement of the optical flow maps used to generate the intermediate frames, focusing a proportionally greater amount of refinement attention on the optical flow maps for the high-error middle frames. A temporal pyramidal pixel refinement module performs coarse-to-fine refinement of the generated intermediate frames, focusing a proportionally greater amount of refinement attention on the high-error middle frames.
Type: Grant
Filed: November 23, 2020
Date of Patent: August 30, 2022
Assignee: Huawei Technologies Co., Ltd.
Inventors: Zhixiang Chi, Rasoul Mohammadi Nasiri, Zheng Liu, Jin Tang, Juwei Lu
-
Patent number: 11430150
Abstract: A method and apparatus for processing sparse points. The method includes determining spatial hierarchical point data based on a key point set and a local point set of a sparse point set, determining relationship feature data by encoding a spatial hierarchical relationship between points of the spatial hierarchical point data, generating a global feature and a local feature of the sparse point set through a conversion operation associated with the relationship feature data, and generating a processing result for the sparse point set based on the global feature and the local feature.
Type: Grant
Filed: December 16, 2020
Date of Patent: August 30, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Zhuo Li, Huiguang Yang, Yuguang Li, Liu Yang
-
Patent number: 11416746
Abstract: The present disclosure provides a training method for a generative adversarial network, which includes: extracting a first-resolution sample image from a second-resolution sample image; separately providing a first input image and a second input image for a generative network to generate a first output image and a second output image respectively, the first input image including a first-resolution sample image and a first noise image, the second input image including the first-resolution sample image and a second noise image; separately providing the first output image and a second-resolution sample image for a discriminative network to output a first discrimination result and a second discrimination result; and adjusting parameters of the generative network to reduce a loss function. The present disclosure further provides an image processing method using the generative adversarial network, a computer device, and a computer-readable storage medium.
Type: Grant
Filed: September 25, 2019
Date of Patent: August 16, 2022
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
Inventors: Hanwen Liu, Dan Zhu, Pablo Navarrete Michelini
-
Patent number: 11416970
Abstract: Techniques are disclosed for panoramic image construction based on images captured by rotating imagers. In one example, a method includes receiving a first sequence of images associated with a scene and captured during continuous rotation of an image sensor. Each image of the first sequence has a portion that overlaps with another image of the first sequence. The method further includes generating a first panoramic image. The generating includes processing a second sequence of images based on a point-spread function to mitigate blur associated with the continuous rotation to obtain a deblurred sequence of images, and processing the deblurred sequence based on a noise power spectral density to obtain a denoised sequence of images. The point-spread function is associated with the image sensor's rotation speed. The second sequence is based on the first sequence. The first panoramic image is based on the denoised sequence.
Type: Grant
Filed: November 2, 2020
Date of Patent: August 16, 2022
Assignee: Teledyne FLIR Commercial Systems, Inc.
Inventors: Enrique Sanchez-Monge, Alessandro Foi
-
Patent number: 11410268
Abstract: An image processing method includes: performing inspection on an image to be processed, and determining a contour line of a target object in the image to be processed and regions of the target object; determining, for a selected first region, first adjustment parameters of target pixel points in the first region according to set parameters; determining, for a second region adjacent to the first region, second adjustment parameters of reference pixel points in the second region; and adjusting the image to be processed according to the first and second adjustment parameters to determine an adjusted image.
Type: Grant
Filed: June 27, 2019
Date of Patent: August 9, 2022
Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD
Inventors: Wentao Liu, Chen Qian
-
Patent number: 11410273
Abstract: Described herein are systems and embodiments for multispectral image demosaicking using deep panchromatic image guided residual interpolation. Embodiments of a ResNet-based deep learning model are disclosed to reconstruct the full-resolution panchromatic image from a multispectral filter array (MSFA) mosaic image. In one or more embodiments, the reconstructed deep panchromatic image (DPI) is deployed as the guide to recover the full-resolution multispectral image using a two-pass guided residual interpolation methodology. Experiment results demonstrate that the disclosed method embodiments outperform some state-of-the-art conventional and deep learning demosaicking methods both qualitatively and quantitatively.
Type: Grant
Filed: July 5, 2019
Date of Patent: August 9, 2022
Assignees: Baidu USA LLC, Baidu.com Times Technology (Beijing) Co., Ltd.
Inventors: Zhihong Pan, Baopu Li, Yingze Bao, Hsuchun Cheng
-
Patent number: 11398016
Abstract: In an embodiment, a method includes receiving a low-light digital image; generating, by at least one processor, a resulting digital image by processing the low-light digital image with an encoder-decoder neural network comprising a plurality of convolutional layers classified into a downsampling stage and an upscaling stage, and a multi-scale context aggregating block configured to aggregate multi-scale context information of the low-light digital image and employed between the downsampling stage and the upscaling stage; and outputting, by the at least one processor, the resulting digital image to an output device.
Type: Grant
Filed: March 1, 2021
Date of Patent: July 26, 2022
Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Inventors: Zibo Meng, Chiuman Ho
-
Patent number: 11393107
Abstract: Some implementations of the disclosure are directed to capturing facial training data for one or more subjects, the captured facial training data including each of the one or more subjects' facial skin geometry tracked over a plurality of times and the subject's corresponding jaw poses for each of those plurality of times; and using the captured facial training data to create a model that provides a mapping from skin motion to jaw motion. Additional implementations of the disclosure are directed to determining a facial skin geometry of a subject; using a model that provides a mapping from skin motion to jaw motion to predict a motion of the subject's jaw from a rest pose given the facial skin geometry; and determining a jaw pose of the subject using the predicted motion of the subject's jaw.
Type: Grant
Filed: July 12, 2019
Date of Patent: July 19, 2022
Assignees: Disney Enterprises, Inc., ETH Zürich (Eidgenössische Technische Hochschule Zürich)
Inventors: Dominik Thabo Beeler, Derek Edward Bradley, Gaspard Zoss
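The learned skin-to-jaw mapping described above can be illustrated with a minimal stand-in model. The patent does not specify the model family; a linear least-squares regressor is used here purely for illustration, and all names and dimensions are assumptions.

```python
import numpy as np

# Synthetic "captured facial training data": skin-motion features paired
# with corresponding jaw-motion targets, generated from a known linear map
# so the fit can be checked.
rng = np.random.default_rng(1)
skin_motion = rng.standard_normal((100, 6))   # tracked skin-geometry features
true_map = rng.standard_normal((6, 3))
jaw_motion = skin_motion @ true_map           # corresponding jaw poses

# Create the model: a mapping from skin motion to jaw motion,
# fit by least squares on the training data.
learned_map, *_ = np.linalg.lstsq(skin_motion, jaw_motion, rcond=None)

# Predict jaw motion from a new subject's skin geometry.
new_skin = rng.standard_normal((1, 6))
predicted_jaw = new_skin @ learned_map
print(np.allclose(learned_map, true_map))  # True
```

With noise-free linear data the regressor recovers the generating map exactly; real capture data would need a richer model and regularization.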