Multiple Cameras Patents (Class 348/47)
  • Patent number: 11490064
    Abstract: A method of depth estimation utilizing heterogeneous cameras, comprising: homogenizing a first camera image and a second camera image based on a first camera calibration dataset and a second camera calibration dataset, respectively, wherein the first camera image and the second camera image are distortion corrected and zoom compensated; determining an initial image pair rectification transformation matrix of the homogenized first camera image and the homogenized second camera image; determining a delta image pair rectification transformation matrix based on the initial image pair rectification transformation matrix; determining a final image pair rectification transformation matrix based on the initial image pair rectification transformation matrix and the delta image pair rectification transformation matrix, resulting in a final rectified image pair; and disparity mapping the final rectified image pair based on a depth net regression.
    Type: Grant
    Filed: June 8, 2021
    Date of Patent: November 1, 2022
    Assignee: Black Sesame Technologies Inc.
    Inventors: Zuoguan Wang, Jizhang Shan
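    Illustrative sketch (not from the patent): the staged rectification described in the abstract above, assuming pinhole cameras and hypothetical homography matrices, with OpenCV block matching standing in for the patent's depth-net disparity regression. All function and variable names are made up for illustration.
      import numpy as np
      import cv2  # OpenCV, used only for warping and block matching

      def compose_rectification(H_initial, H_delta):
          """Final rectification = delta correction applied on top of the initial estimate."""
          return H_delta @ H_initial

      def rectify_pair(img1, img2, H1_init, H2_init, H1_delta, H2_delta):
          H1 = compose_rectification(H1_init, H1_delta)
          H2 = compose_rectification(H2_init, H2_delta)
          size = (img1.shape[1], img1.shape[0])
          return cv2.warpPerspective(img1, H1, size), cv2.warpPerspective(img2, H2, size)

      def disparity_to_depth(disparity, focal_px, baseline_m):
          """Classic stereo relation z = f*b/d; the patent uses a learned regressor instead."""
          d = np.where(disparity > 0, disparity, np.nan)
          return focal_px * baseline_m / d

      # usage with blank "homogenized" inputs that merely exercise the pipeline
      img1 = np.zeros((480, 640), np.uint8)
      img2 = np.zeros((480, 640), np.uint8)
      I = np.eye(3)
      r1, r2 = rectify_pair(img1, img2, I, I, I, I)
      matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
      disp = matcher.compute(r1, r2).astype(np.float32) / 16.0
      depth = disparity_to_depth(disp, focal_px=800.0, baseline_m=0.12)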
  • Patent number: 11483642
    Abstract: An earphone device having gesture recognition functions includes a gesture recognition element, a signal transmission unit, and a voice output element. The gesture recognition element includes a transmission unit, a reception chain, and a processing unit. The transmission unit transmits a transmission signal to detect the gesture. The reception chain receives a gesture signal to generate feature map data. The processing unit is coupled to the reception chain for receiving the feature map data and utilizes an identification algorithm to recognize the gesture according to the feature map data and generate a gesture controlling signal. The signal transmission unit receives and transmits the gesture controlling signal to an electronic device. The processing unit receives, via the signal transmission unit, a controlling action generated by the electronic device according to the gesture controlling signal.
    Type: Grant
    Filed: July 27, 2021
    Date of Patent: October 25, 2022
    Assignee: KaiKuTek Inc.
    Inventors: Mike Chun-Hung Wang, Yu Feng Wu, Chieh Wu, Fang Li, Ling Ya Huang, Guan-Sian Wu, Wen-Jyi Hwang
  • Patent number: 11483505
    Abstract: In accordance with an embodiment of the present disclosure, an image synchronization device includes a light emitting source configured to emit light at intervals of a predetermined time, a sampling phase calibration circuit configured to calibrate a sampling phase of each of the first image sensor and the second image sensor on the basis of a light emitting timing of the light emitting source and a delay calibration circuit configured to generate delay information on the basis of a result of comparison between first image information transmitted from the first image sensor and second image information transmitted from the second image sensor.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: October 25, 2022
    Assignee: SK hynix Inc.
    Inventors: Chang Hyun Kim, Wan Jun Roh, Doo Bock Lee, Seung Hun Lee, Jae Jin Lee, Chun Seok Jeong
  • Patent number: 11482159
    Abstract: A display control device includes a display control unit configured to control a display unit to output a first image based on a first imaging signal and a second image based on a second imaging signal, the first imaging signal and the second imaging signal being output from an imaging element that outputs the first imaging signal by pixel-thinning for an entire angle of view and outputs the second imaging signal in all of pixels for a partial region in the entire angle of view.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: October 25, 2022
    Assignee: Sony Group Corporation
    Inventors: Masafumi Nagao, Tomoki Numata
  • Patent number: 11476000
    Abstract: The exemplified method and system facilitate monitoring and/or evaluation of a disease or physiological state using mathematical analysis and machine learning analysis of a biopotential signal collected from a single electrode. The exemplified method and system create, from data of a singularly measured biopotential signal, via a mathematical operation (i.e., via numeric fractional derivative calculation of the signal in the frequency domain), one or more mathematically derived biopotential signals (e.g., virtual biopotential signals) that are used in combination with the measured biopotential signal to generate a multi-dimensional phase-space representation of the body (e.g., the heart). By mathematically modulating (e.g.
    Type: Grant
    Filed: June 1, 2020
    Date of Patent: October 18, 2022
    Assignee: Analytics For Life Inc.
    Inventors: Timothy Burton, Shyamlal Ramchandani, Sunny Gupta
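    Illustrative sketch (not from the patent): the "numeric fractional derivative calculation of the signal in the frequency domain" can be demonstrated with the standard FFT identity that an order-alpha derivative corresponds to multiplying the spectrum by (i*2*pi*f)**alpha. The signal, sampling rate, and orders below are invented for illustration, not values from the patent.
      import numpy as np

      def fractional_derivative(signal, fs, alpha):
          """Return a 'virtual' channel: the order-alpha derivative of `signal`,
          computed by multiplying its spectrum by (i*2*pi*f)**alpha."""
          n = signal.size
          freqs = np.fft.fftfreq(n, d=1.0 / fs)
          operator = (2j * np.pi * freqs) ** alpha
          operator[0] = 0.0  # leave the DC term untouched at zero
          return np.real(np.fft.ifft(np.fft.fft(signal) * operator))

      # toy usage: one measured channel plus two mathematically derived channels
      fs = 500.0
      t = np.arange(0, 10, 1 / fs)
      measured = np.sin(2 * np.pi * 1.2 * t)          # stand-in for a single-electrode recording
      virtual_a = fractional_derivative(measured, fs, alpha=0.5)
      virtual_b = fractional_derivative(measured, fs, alpha=1.5)
      phase_space = np.stack([measured, virtual_a, virtual_b], axis=1)  # 3-D phase-space points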
  • Patent number: 11475586
    Abstract: Techniques for aligning images generated by an integrated camera physically mounted to an HMD with images generated by a detached camera physically unmounted from the HMD are disclosed. A 3D feature map is generated and shared with the detached camera. Both the integrated camera and the detached camera use the 3D feature map to relocalize themselves and to determine their respective 6 DOF poses. The HMD receives the detached camera's image of the environment and the 6 DOF pose of the detached camera. A depth map of the environment is accessed. An overlaid image is generated by reprojecting a perspective of the detached camera's image to align with a perspective of the integrated camera and by overlaying the reprojected detached camera's image onto the integrated camera's image.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: October 18, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Raymond Kirk Price, Michael Bleyer, Christopher Douglas Edmonds
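    Illustrative sketch (not from the patent): the reprojection step described above, assuming pinhole intrinsics, a per-pixel depth map in the integrated (HMD) camera's frame, and 6 DOF poses given as 4x4 camera-to-world matrices. Occlusion handling and bilinear sampling are omitted, and every name is illustrative.
      import numpy as np

      def reproject_detached_to_integrated(detached_img, depth_map, K_int, K_det,
                                           T_world_from_int, T_world_from_det):
          """Warp the detached camera's image into the integrated camera's perspective by
          unprojecting each HMD pixel with its depth and re-projecting into the detached view."""
          h, w = depth_map.shape
          v, u = np.mgrid[0:h, 0:w]
          pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T      # 3 x N
          rays = np.linalg.inv(K_int) @ pix
          pts_int = rays * depth_map.reshape(1, -1)                               # 3D in HMD camera frame
          pts_h = np.vstack([pts_int, np.ones((1, pts_int.shape[1]))])
          T_det_from_int = np.linalg.inv(T_world_from_det) @ T_world_from_int
          pts_det = (T_det_from_int @ pts_h)[:3]                                  # 3D in detached camera frame
          proj = K_det @ pts_det
          uv = (proj[:2] / np.clip(proj[2:3], 1e-6, None)).round().astype(int)
          valid = (proj[2] > 0) & (uv[0] >= 0) & (uv[0] < detached_img.shape[1]) \
                  & (uv[1] >= 0) & (uv[1] < detached_img.shape[0])
          overlay = np.zeros((h, w) + detached_img.shape[2:], detached_img.dtype)
          overlay.reshape(-1, *detached_img.shape[2:])[valid] = detached_img[uv[1, valid], uv[0, valid]]
          return overlay   # pixels left at zero have no detached-camera observation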
  • Patent number: 11467322
    Abstract: The present embodiment relates to a dual camera module comprising: a first camera module including a first liquid lens and capturing a first image; and a second camera module including a second liquid lens and capturing a second image, wherein a viewing angle of the first camera module is smaller than a viewing angle of the second camera module, at least a part of the viewing angle of the first camera module is included in the viewing angle of the second camera module such that there is an overlapping area between the first image and the second image so as to enable a composite image formed by combining the first image and the second image to be generated, and when the first camera module is focused, a focal length of the first liquid lens is varied according to the distance between the first liquid lens and a subject, and when the second camera module is focused, a focal length of the second liquid lens is varied according to the distance between the second liquid lens and the subject.
    Type: Grant
    Filed: August 5, 2020
    Date of Patent: October 11, 2022
    Assignee: LG INNOTEK CO., LTD.
    Inventors: Young Seop Moon, Ui Jun Kim, Han Young Kim, Hyung Kim, Sang Hun Lee
  • Patent number: 11468672
    Abstract: The techniques disclosed herein improve the efficiency of a system by providing intelligent agents for managing data associated with objects that are displayed within mixed-reality and virtual-reality collaboration environments. Individual agents are configured to collect, analyze, and store data associated with individual objects in a shared view. The agents can identify real-world objects and virtual objects discussed in a meeting, collect information about each object and store the collected information in an associated database for access across multiple collaboration environments or communication sessions. The data can be shared between different communication sessions without requiring users to manually store and present a collection of content for each object. The intelligent agents and their associated databases can also persist through different communication sessions to enhance user engagement and improve productivity.
    Type: Grant
    Filed: March 15, 2021
    Date of Patent: October 11, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventor: Jason Thomas Faulkner
  • Patent number: 11455773
    Abstract: Embodiments of the present disclosure provide a method and apparatus for processing an image. An embodiment of the method includes: acquiring a two-dimensional garment image, where the two-dimensional garment image includes a style identifier of a garment; selecting a three-dimensional garment model matching the style identifier from a pre-established set of three-dimensional garment models, wherein the three-dimensional garment model includes scatter points labeled thereon; labeling the two-dimensional garment image with scatter points based on a pre-established coordinate mapping relationship between the two-dimensional garment image and the three-dimensional garment model and the scatter points of the selected three-dimensional garment model; generating a three-dimensional garment image of the acquired two-dimensional garment image based on the selected three-dimensional garment model and a result of the labeling.
    Type: Grant
    Filed: May 6, 2019
    Date of Patent: September 27, 2022
    Assignees: Beijing Jingdong Shangke Information Technology Co., Ltd., Beijing Jingdong Century Trading Co., Ltd.
    Inventor: Jinping He
  • Patent number: 11451760
    Abstract: Systems having rolling shutter sensors with a plurality of sensor rows are configured for compensating for rolling shutter artifacts that result from different sensor rows in the plurality of sensor rows outputting sensor data at different times. The systems compensate for the rolling shutter artifacts by identifying readout timepoints for the plurality of sensor rows of the rolling shutter sensor while the rolling shutter sensor captures an image of an environment and identifying readout poses for each readout timepoint, as well as obtaining a depth map based on the image. The depth map includes a plurality of different rows of depth data that correspond to the different sensor rows. The systems further compensate for the rolling shutter artifacts by generating a 3D representation of the environment while unprojecting the rows of depth data into 3D space using the readout poses.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: September 20, 2022
    Inventors: Michael Bleyer, Christopher Douglas Edmonds, Raymond Kirk Price
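    Illustrative sketch (not from the patent): unprojecting each depth row with its own readout pose, assuming the per-row poses have already been interpolated to the readout timepoints and are supplied as 4x4 camera-to-world matrices. Names and numbers are illustrative.
      import numpy as np

      def unproject_rows(depth_map, K, row_poses):
          """Lift each row of a rolling-shutter depth map into world space using that row's
          readout pose, so the resulting point cloud is free of rolling-shutter skew."""
          h, w = depth_map.shape
          K_inv = np.linalg.inv(K)
          u = np.arange(w)
          points = []
          for row in range(h):
              pix = np.stack([u, np.full(w, row), np.ones(w)])   # 3 x w pixel coordinates
              cam_pts = (K_inv @ pix) * depth_map[row]           # 3D in the camera frame at this row's time
              cam_h = np.vstack([cam_pts, np.ones((1, w))])
              world = (row_poses[row] @ cam_h)[:3].T             # apply the per-row camera-to-world pose
              points.append(world)
          return np.concatenate(points, axis=0)                  # (h*w, 3) 3D representation

      # usage with a flat synthetic depth map and identity poses for every row
      depth = np.full((480, 640), 2.0)
      K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
      poses = [np.eye(4) for _ in range(480)]
      cloud = unproject_rows(depth, K, poses)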
  • Patent number: 11450140
    Abstract: In some examples, a method includes determining, by a computing device, a plurality of regions of interest of an image for independently processing relative to a plane of interest; determining, by the computing device and for respective regions of interest of the plurality of regions of interest, a distance from objects within the respective regions of interest to the plane of interest; and processing, by the computing device and independently for the respective regions of interest, the respective regions of interest of the image based on the determined distance from the objects to the plane of interest.
    Type: Grant
    Filed: April 15, 2021
    Date of Patent: September 20, 2022
    Assignee: 3M INNOVATIVE PROPERTIES COMPANY
    Inventors: Robert W. Shannon, Douglas S. Dunn, Glenn E. Casner, Jonathan D. Gandrud, Shannon D. Scott, Gautam Singh
  • Patent number: 11445162
    Abstract: The present disclosure provides a method for calibrating a binocular camera, including: S2 of extracting feature points from image set 1 and image set 2 taken at two points separated from each other by a predetermined distance; S3 of fitting Gaussian distribution parameters of each feature point, and extracting an expected value as a theoretical disparity; S4 of selecting a common feature point, and calculating the predetermined quantity of frames at a first theoretical distance and the predetermined quantity of frames at a second theoretical distance; S5 of performing Gaussian fitting on a difference between the theoretical distances of the common feature point, and extracting a variance as an evaluation index; S6 of determining whether the evaluation index is smaller than a threshold, and if so, terminating the calibration, otherwise proceeding to S7; and S7 of adjusting posture parameters of the binocular camera, and returning to S3.
    Type: Grant
    Filed: December 16, 2020
    Date of Patent: September 13, 2022
    Assignee: Beijing Smarter Eye Technology Co. Ltd.
    Inventors: Zhao Sun, Haitao Zhu, Yongcai Liu, Shanshan Pei, Xinliang Wang
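    Illustrative sketch (not from the patent): a loose paraphrase of the S3-S7 loop, where the Gaussian "fit" of a sample reduces to its mean and variance and where measure_distances and adjust_pose are placeholders for steps the abstract does not spell out.
      import numpy as np

      def gaussian_fit(samples):
          """'Fit' a 1-D Gaussian to a sample: return its mean (expected value) and variance."""
          samples = np.asarray(samples, dtype=float)
          return samples.mean(), samples.var()

      def calibrate(measure_distances, adjust_pose, pose, threshold, max_iters=50):
          """Per iteration: estimate the common feature point's distance at the two capture
          positions for each frame, Gaussian-fit the per-frame difference, use the variance as
          the evaluation index, and adjust the posture parameters until the index is small."""
          evaluation_index = float("inf")
          for _ in range(max_iters):
              dist_pos1, dist_pos2 = measure_distances(pose)           # arrays, one entry per frame
              _, evaluation_index = gaussian_fit(np.asarray(dist_pos1) - np.asarray(dist_pos2))
              if evaluation_index < threshold:                         # S6: calibration accepted
                  break
              pose = adjust_pose(pose, dist_pos1, dist_pos2)           # S7: tweak extrinsics, redo S3
          return pose, evaluation_index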
  • Patent number: 11442263
    Abstract: Various approaches in which an image-recording parameter is varied between a plurality of images of an object and a stereo image pair is displayed on the basis of the images recorded thus are described. Here, in particular, the image-recording parameter can be a focal plane or an illumination direction.
    Type: Grant
    Filed: January 24, 2017
    Date of Patent: September 13, 2022
    Assignee: Carl Zeiss Microscopy GmbH
    Inventors: Christoph Husemann, Lars Stoppe, Tanja Teuber, Lars Omlor, Kai Wicker, Enrico Geissler, Senthil Kumar Lakshmanan
  • Patent number: 11432619
    Abstract: One aspect of this disclosure is a method comprising: receiving, from a camera, a video feed depicting a pair of feet and a scaling object; capturing, with the camera, images of the feet and the scaling object based on the video feed; identifying foot features in each captured image; determining camera positions for each captured image by triangulating the foot features; generating a point cloud in a three-dimensional space by positioning each foot feature in the three-dimensional space based on the camera positions; scaling the point cloud based on the scaling object; segmenting the point cloud into at least a right-foot cluster and a left-foot cluster; fitting a first three-dimensional morphable model to the right-foot cluster according to first foot parameters; and fitting a second three-dimensional morphable model to the left-foot cluster according to second foot parameters. Related systems, apparatus, and methods also are described.
    Type: Grant
    Filed: February 16, 2018
    Date of Patent: September 6, 2022
    Assignee: DIGITAL ANIMAL INTERACTIVE INC.
    Inventors: Jamie Roy Sherrah, Michael Henson, William Ryan Smith
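    Illustrative sketch (not from the patent): the scaling and left/right segmentation steps, using scikit-learn's KMeans in place of whatever clustering the patent uses; the point cloud here is two random blobs standing in for triangulated foot features, and the reference size is an assumed A4-sheet scaling object.
      import numpy as np
      from sklearn.cluster import KMeans

      def scale_point_cloud(points, measured_ref_size, known_ref_size_m):
          """Scale the metric-free reconstruction so the scaling object has its known physical size."""
          return points * (known_ref_size_m / measured_ref_size)

      def split_left_right(points):
          """Cluster foot points into two clusters and label them left/right by mean x coordinate."""
          labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
          a, b = points[labels == 0], points[labels == 1]
          return (a, b) if a[:, 0].mean() < b[:, 0].mean() else (b, a)

      # toy usage
      rng = np.random.default_rng(0)
      cloud = np.vstack([rng.normal([-0.1, 0, 0], 0.03, (500, 3)),
                         rng.normal([+0.1, 0, 0], 0.03, (500, 3))])
      cloud = scale_point_cloud(cloud, measured_ref_size=1.0, known_ref_size_m=0.297)
      left_foot, right_foot = split_left_right(cloud)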
  • Patent number: 11435750
    Abstract: The present teaching relates to a method, system, medium, and implementation of processing image data in an autonomous driving vehicle. Sensor data acquired by one or more types of sensors deployed on the vehicle are continuously received to provide different types of information about the surroundings of the vehicle. Based on a first data set acquired by a first sensor of a first type of sensors at a specific time, an object is detected, where the first data set provides a first type of information with a first perspective. Depth information of the object is estimated via object centric stereo based on the object and a second data set, acquired at the specific time by a second sensor of the first type of sensors with a second perspective. The estimated depth information is further enhanced based on a third data set acquired by a third sensor of a second type of sensors at the specific time, providing a second type of information about the surroundings of the vehicle.
    Type: Grant
    Filed: December 28, 2017
    Date of Patent: September 6, 2022
    Assignee: PlusAI, Inc.
    Inventors: Hao Zheng, David Wanqian Liu, Timothy Patrick Daly, Jr.
  • Patent number: 11436746
    Abstract: This distance measuring camera contains a first optical system for collecting light from a subject to form a first subject image, a second optical system for collecting the light from the subject to form a second subject image, an imaging unit for imaging the first subject image formed by the first optical system and the second subject image formed by the second optical system, and a distance calculating part 4 for calculating a distance to the subject based on the first subject image and second subject image imaged by the imaging unit. The distance calculating part 4 calculates the distance to the subject based on an image magnification ratio between a magnification of the first subject image and a magnification of the second subject image.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: September 6, 2022
    Assignee: MITSUMI ELECTRIC CO., LTD.
    Inventor: Satoshi Ajiki
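    Illustrative sketch (not from the patent): the image-magnification-ratio idea under a toy thin-lens model. Two lenses with different focal lengths, axially offset by d, see the same subject at slightly different distances, so the ratio of their magnifications varies with distance and can be inverted numerically. The thin-lens model and every number below are assumptions, not the patent's formula.
      import numpy as np
      from scipy.optimize import brentq

      F1, F2 = 0.030, 0.012     # focal lengths of the two optical systems [m] (illustrative)
      OFFSET = 0.020            # axial offset between their front principal points [m] (illustrative)

      def magnification(f, a):
          """Thin-lens magnification for a subject at distance a from the lens."""
          return f / (a - f)

      def magnification_ratio(a):
          """Forward model: ratio of the two subject-image magnifications at subject distance a."""
          return magnification(F1, a) / magnification(F2, a + OFFSET)

      def distance_from_ratio(measured_ratio, lo=0.1, hi=100.0):
          """Invert the forward model numerically to recover the subject distance."""
          return brentq(lambda a: magnification_ratio(a) - measured_ratio, lo, hi)

      # round trip: a subject at 1.5 m should be recovered from its magnification ratio
      ratio = magnification_ratio(1.5)
      print(distance_from_ratio(ratio))   # ~1.5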
  • Patent number: 11438496
    Abstract: A multi-camera module includes a plurality of camera units, and a single case coupled to the plurality of camera units, wherein the case includes an upper surface portion surrounding an upper portion of the plurality of camera units, and wherein the upper surface portion includes a groove, the groove having different depths in accordance with a position corresponding to the plurality of camera units.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: September 6, 2022
    Assignee: Samsung Electro-Mechanics Co., Ltd.
    Inventor: Han Jun Jung
  • Patent number: 11425359
    Abstract: The present invention provides a depth image generation apparatus that addresses the SNR problem caused by resolution degradation and an insufficient amount of received light when photographing a remote object, without increasing the amount of emitted light. The apparatus comprises: a light source for generating light to be emitted toward an object; a first optical system for emitting, as a dot pattern and at the object, the light generated by the light source; an image sensor receiving the light reflected by the object, so as to convert the received light into an electrical signal; an image processor for acquiring depth data through the electrical signal; and a control unit connected to the light source, the first optical system, the image sensor and the image processor, wherein the control unit controls the first optical system so as to scan the object by moving the dot pattern in a preset pattern.
    Type: Grant
    Filed: March 14, 2019
    Date of Patent: August 23, 2022
    Assignee: LG ELECTRONICS INC.
    Inventors: Chansung Jeong, Hyojin Nam, Sangkeun Lee, Yongho Cho
  • Patent number: 11421866
    Abstract: The social distance lighting system generates an illuminated image. The social distance lighting system is configured for use with the floor of a chamber. The floor is the inferior horizontally oriented supporting surface of the chamber. The illuminated image projected by the social distance lighting system presents indicia on the floor of the chamber. The indicia presented by the social distance lighting system generates a sentiment that marks the boundaries and center points that allow people to maintain a proper social distance. The social distance lighting system comprises a lamp, a plurality of LEDs, and a remote control. The plurality of LEDs mount in the lamp. The plurality of LEDs generate the illuminated image. The remote control controls the operation of the plurality of LEDs.
    Type: Grant
    Filed: June 1, 2021
    Date of Patent: August 23, 2022
    Inventor: Kenneth Balliet
  • Patent number: 11425361
    Abstract: An image display device includes a display optical system, a base including a display optical system guide unit configured to guide the display optical system in a substantially linear direction, a main body exterior configured to house the base, a display exterior configured to cover the display optical system, and a display exterior guide unit configured to guide the display exterior in a substantially linear direction relative to the main body exterior, wherein the display optical system is engaged with the display exterior to move in conjunction with the display exterior, and wherein the base and the main body exterior are engaged with each other by an engagement mechanism configured to suppress transmission of an external force applied to the main body exterior to the base.
    Type: Grant
    Filed: January 25, 2021
    Date of Patent: August 23, 2022
    Assignee: Canon Kabushiki Kaisha
    Inventors: Marina Kitakami, Toshiharu Kawai
  • Patent number: 11422546
    Abstract: A method includes fusing multi-modal sensor data from a plurality of sensors having different modalities. At least one region of interest is detected in the multi-modal sensor data. One or more patches of interest are detected in the multi-modal sensor data based on detecting the at least one region of interest. A model that uses a deep convolutional neural network is applied to the one or more patches of interest. Post-processing of a result of applying the model is performed to produce a post-processing result for the one or more patches of interest. A perception indication of the post-processing result is output.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: August 23, 2022
    Assignee: RAYTHEON TECHNOLOGIES CORPORATION
    Inventors: Michael J. Giering, Kishore K. Reddy, Vivek Venugopalan, Soumik Sarkar
  • Patent number: 11408740
    Abstract: A part of measurement data (182) is compared with a part of map data (181) based on a feature. The part of the map data is updated based on a comparison result. The map data is updated by reflecting the updated part of the map data in the map data.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: August 9, 2022
    Assignee: Mitsubishi Electric Corporation
    Inventor: Mitsunobu Yoshida
  • Patent number: 11402474
    Abstract: A compensation device for a biaxial LIDAR system includes two holographic optical elements, which are locatable between a receiving optical system and a detector element, and which are designed to compensate for a parallax effect of the biaxial LIDAR system, incident light being guidable onto the detector element with the aid of the two holographic optical elements.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: August 2, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Annette Frederiksen, Stefanie Hartmann
  • Patent number: 11398053
    Abstract: The present invention discloses a multispectral camera external parameter self-calibration algorithm based on edge features, and belongs to the field of image processing and computer vision. Because a visible light camera and an infrared camera belong to different modalities, fewer satisfactory point pairs are obtained by directly extracting and matching feature points. To solve this problem, the method starts from the edge features and finds an optimal corresponding position of an infrared image on a visible light image through edge extraction and matching. In this way, the search range is reduced and the number of satisfactory matched point pairs is increased, thereby more effectively conducting joint self-calibration of the infrared camera and the visible light camera. The operation is simple and the results are accurate.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: July 26, 2022
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Wei Zhong, Boqian Liu, Haojie Li, Zhihui Wang, Risheng Liu, Xin Fan, Zhongxuan Luo
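    Illustrative sketch (not from the patent): the edge-extraction-and-matching idea, using Canny edges and normalized cross-correlation to find where a (smaller) infrared edge map best fits on the visible-light edge map. A real extrinsic self-calibration would also refine rotation and scale, which is omitted here; file names and thresholds are illustrative.
      import cv2
      import numpy as np

      def best_edge_alignment(visible_bgr, infrared_gray):
          """Find the offset of the infrared image on the visible image by matching edge maps."""
          vis_gray = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2GRAY)
          vis_edges = cv2.Canny(vis_gray, 50, 150).astype(np.float32)
          ir_edges = cv2.Canny(infrared_gray, 50, 150).astype(np.float32)
          # slide the IR edge map over the visible edge map; the peak is the best corresponding position
          response = cv2.matchTemplate(vis_edges, ir_edges, cv2.TM_CCORR_NORMED)
          _, score, _, top_left = cv2.minMaxLoc(response)
          return top_left, score      # (x, y) offset of the IR image on the visible image

      # usage (illustrative): the returned offset seeds point-pair matching near corresponding edges
      # offset, score = best_edge_alignment(cv2.imread("vis.png"), cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE))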
  • Patent number: 11395714
    Abstract: A device provides variable polarization for an illumination device for illuminating organic tissue. The device includes a first set of one or more light emitting diodes (LEDs), a first polarizer arranged to polarize light emitted from the first set of LEDs in a first polarization direction, a second set of one or more LEDs, and a second polarizer arranged to polarize light emitted from the second set of LEDs in a second polarization direction. A lens is arranged to collect light from organic tissue illuminated by the first and/or second sets of LEDs, including a viewing polarizer arranged to polarize the light collected from the organic tissue in the second polarization direction. A signal generator is operable to signal one or more drivers for driving one of the first and second sets according to a signal value and driving the other of the first and second sets according to an inverse value thereof.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: July 26, 2022
    Assignee: DermLite LLC
    Inventors: Nizar Mullani, Thorsten Trotzenberg, Gregory Paul Lozano-Buhl
  • Patent number: 11397449
    Abstract: An electronic device may include a glass housing member that includes an upper portion defining a display area, a lower portion defining an input area, and a transition portion joining the upper portion and the lower portion and defining a continuous, curved surface between the upper portion and the lower portion. The electronic device may include a display coupled to the glass housing member and configured to provide a visual output at the display area. The electronic device may include an input device coupled to the glass housing member and configured to detect inputs at the input area. The electronic device may include a support structure coupled to the glass housing member and configured to support the computing device.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: July 26, 2022
    Assignee: APPLE INC.
    Inventors: Keith J. Hendren, Paul X. Wang, Adam T. Garelli, Brett W. Degner, Christiaan A. Ligtenberg, Dinesh C. Mathew
  • Patent number: 11394867
    Abstract: The lens apparatus according to the present invention includes an optical system including a plurality of lens units configured to move in adjusting an in-focus state of a subject image, a detection unit configured to detect respective positions of the plurality of lens units, an acquisition unit configured to acquire position information regarding a single position representing the respective positions of the plurality of lens units corresponding to a current subject distance, based on the respective positions of the plurality of lens units and relational information indicating a relation between a subject distance and the respective positions of the plurality of lens units, and a control unit configured to control the respective positions of the plurality of lens units based on the position information acquired by the acquisition unit.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: July 19, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Minoru Uchiyama
  • Patent number: 11389122
    Abstract: The disclosure relates to methods, systems, and computer program products for registering a set of X-ray images with a navigation system. In the method, at least one image of a reference object is recorded by a camera and, on the basis thereof, a current posture of the reference object is determined. It is then checked whether this posture fulfils a specified criterion which, even with an arrangement of the reference object at least partially outside a planned reconstruction volume of the X-ray device, predicts an expected successful registration. On non-fulfillment of the criterion, a signal for adaptation of a relative alignment between the X-ray device and the reference object is automatically output. On fulfillment of the criterion, the X-ray images of the reference object are recorded, the posture of the reference object is determined, and the registration is carried out using the determined postures as reference.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: July 19, 2022
    Assignee: Siemens Healthcare GmbH
    Inventors: Alois Regensburger, Oliver Hornung, Martin Ostermeier
  • Patent number: 11392133
    Abstract: The present teaching relates to a method, system, medium, and implementation of processing image data in an autonomous driving vehicle. Sensor data acquired by one or more types of sensors deployed on the vehicle are continuously received. The sensor data provide different information about the surroundings of the vehicle. Based on a first data set acquired by a first sensor of a first type of the one or more types of sensors at a specific time, an object is detected, where the first data set provides a first type of information about the surroundings of the vehicle. Depth information of the object is then estimated via object centric stereo at the object level, based on the detected object as well as a second data set acquired by a second sensor of the first type of the one or more types of sensors at the specific time. The second data set provides the first type of information about the surroundings of the vehicle from a different perspective than the first data set.
    Type: Grant
    Filed: June 6, 2017
    Date of Patent: July 19, 2022
    Assignee: PlusAI, Inc.
    Inventors: Hao Zheng, David Wanqian Liu, Timothy Patrick Daly, Jr.
  • Patent number: 11386355
    Abstract: A device implementing a system for providing predicted RGB images includes at least one processor configured to obtain an infrared image of a subject, and to obtain a reference RGB image of the subject. The at least one processor is further configured to provide the infrared image and the reference RGB image to a machine learning model, the machine learning model having been trained to output predicted RGB images of subjects based on infrared images and reference RGB images of the subjects. The at least one processor is further configured to provide a predicted RGB image of the subject based on output by the machine learning model.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: July 12, 2022
    Assignee: Apple Inc.
    Inventors: Carlos E. Guestrin, Leon A. Gatys, Shreyas V. Joshi, Gustav M. Larsson, Kory R. Watson, Srikrishna Sridhar, Karla P. Vega, Shawn R. Scully, Thorsten Gernoth, Onur C Hamsici
  • Patent number: 11379957
    Abstract: Methods, systems, and devices for head wearable display devices are described. A device may capture a set of images over a set of orientations using a set of cameras positioned on an outward facing surface of the device. The set of images include a first subset of images captured by a first camera and a second subset of images captured by a second camera. The device may detect a set of facial features in each of the first subset of images and the second subset of images, and measure a set of inter-pupillary distances over the set of orientations based on the set of facial features in each of the first and second subset of images. The device may determine an inter-pupillary distance parameter based on aggregating the set of inter-pupillary distances over the set of orientations. The device may calibrate based on the inter-pupillary distance parameter.
    Type: Grant
    Filed: February 23, 2021
    Date of Patent: July 5, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Ramesh Chandrasekhar, Abhijeet Bisain
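    Illustrative sketch (not from the patent): the aggregation step. Given 3D pupil positions triangulated from the two outward-facing cameras at each orientation, compute per-orientation inter-pupillary distances and reduce them to a single parameter; the abstract only says "aggregating", so the median used here is an assumption, as are the toy measurements.
      import numpy as np

      def interpupillary_distances(left_pupils_3d, right_pupils_3d):
          """Per-orientation IPD from triangulated 3D pupil positions, shape (n_orientations, 3)."""
          return np.linalg.norm(np.asarray(left_pupils_3d) - np.asarray(right_pupils_3d), axis=1)

      def ipd_parameter(left_pupils_3d, right_pupils_3d):
          """Aggregate the per-orientation measurements into one calibration parameter."""
          return float(np.median(interpupillary_distances(left_pupils_3d, right_pupils_3d)))

      # toy usage: measurements jittering around a 63 mm ground truth
      rng = np.random.default_rng(1)
      left = rng.normal([-0.0315, 0.0, 0.5], 0.0005, (20, 3))
      right = rng.normal([+0.0315, 0.0, 0.5], 0.0005, (20, 3))
      print(ipd_parameter(left, right))   # ~0.063 m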
  • Patent number: 11380037
    Abstract: A method for generating a virtual operating object is provided for an electronic device. The method includes obtaining a target portrait picture on a configuration interface of a target application on the electronic device; obtaining a first picture region of the target portrait picture in which a first part is present, and a first part feature of the first part; determining a target part matching the first picture region in a pre-established feature image library, the target part being a part of a to-be-generated target virtual operating object in the target application; determining a target feature parameter matching the first part feature in a value range of a target part feature of the target part pre-recorded in the feature image library; and generating the target part in the target application according to the target feature parameter.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: July 5, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Chang Guo
  • Patent number: 11372518
    Abstract: An augmented reality system includes processors and storage devices storing instructions. The instructions configure the processors to perform operations. The operations include identifying a writing object in a video feed being displayed in an augmented reality viewer, identifying a tip of the writing object based on a contour of the writing object, and tracking movements of the tip in the augmented reality viewer. The operations may also include generating a virtual file storing the tracked movements, and generating a text file by performing an image recognition operation that associates the tracked movements stored in the virtual file with one or more characters.
    Type: Grant
    Filed: June 3, 2020
    Date of Patent: June 28, 2022
    Assignee: Capital One Services, LLC
    Inventors: Joshua Edwards, Jacob Learned, Eric Loucks
  • Patent number: 11375173
    Abstract: An image capture device having multiple image sensors with overlapping fields of view aligns the image sensors based on images captured by the image sensors. A pixel shift is identified between the images. Based on the identified pixel shift, a calibration is applied to one or more of the image sensors. To determine the pixel shift, a processor applies correlation methods including edge matching. Calibrating the image sensors may include adjusting a read window on an image sensor. The pixel shift can also be used to determine a time lag, which can be used to synchronize subsequent image captures.
    Type: Grant
    Filed: October 2, 2020
    Date of Patent: June 28, 2022
    Assignee: GoPro, Inc.
    Inventors: Timothy Macmillan, Scott Patrick Campbell, David A. Newman, Yajie Sun
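    Illustrative sketch (not from the patent): estimating the pixel shift between the overlapping regions and converting it to a time lag. The abstract mentions edge matching; FFT-based phase correlation is substituted here, and the rolling-shutter line readout time is an assumed value.
      import numpy as np

      def pixel_shift(overlap_a, overlap_b):
          """Estimate the (row, col) shift of crop b relative to crop a via phase correlation."""
          fa, fb = np.fft.fft2(overlap_a), np.fft.fft2(overlap_b)
          cross_power = np.conj(fa) * fb
          cross_power /= np.maximum(np.abs(cross_power), 1e-12)
          peak = np.unravel_index(np.argmax(np.abs(np.fft.ifft2(cross_power))), overlap_a.shape)
          dims = np.array(overlap_a.shape, dtype=float)
          shift = np.array(peak, dtype=float)
          return np.where(shift > dims / 2, shift - dims, shift)   # wrap large shifts to negative

      def time_lag(row_shift, line_readout_s=29.6e-6):
          """Convert a vertical pixel shift into an approximate capture-time lag (assumed line time)."""
          return row_shift * line_readout_s

      # toy usage: the second crop is the first rolled down by 3 rows
      a = np.random.default_rng(2).random((64, 64))
      b = np.roll(a, 3, axis=0)
      print(pixel_shift(a, b))          # ~[3., 0.]
      print(time_lag(3))                # ~8.9e-05 s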
  • Patent number: 11373325
    Abstract: Some embodiments of the invention provide a novel method for training a multi-layer node network to reliably determine depth based on a plurality of input sources (e.g., cameras, microphones, etc.) that may be arranged with deviations from an ideal alignment or placement. Some embodiments train the multi-layer network using a set of inputs generated with random misalignments incorporated into the training set. In some embodiments, the training set includes (i) a synthetically generated training set based on a three-dimensional ground truth model as it would be sensed by a sensor array from different positions and with different deviations from ideal alignment and placement, and/or (ii) a training set generated by a set of actual sensor arrays augmented with an additional sensor (e.g., additional camera or time of flight measurement device such as lidar) to collect ground truth data.
    Type: Grant
    Filed: October 2, 2019
    Date of Patent: June 28, 2022
    Assignee: PERCEIVE CORPORATION
    Inventors: Andrew Mihal, Steven Teig
  • Patent number: 11368663
    Abstract: An apparatus comprises a determiner (305) which determines a first-eye and a second eye view pose. A receiver (301) receives a reference first-eye image with associated depth values and a reference second-eye image with associated depth values, the reference first-eye image being for a first-eye reference pose and the reference second-eye image being for a second-eye reference pose. A depth processor (311) determines a reference depth value, and modifiers (307) generate modified depth values by reducing a difference between the received depth values and the reference depth value by an amount that depends on a difference between the second or first-eye view pose and the second or first-eye reference pose. A synthesizer (303) synthesizes an output first-eye image for the first-eye view pose by view shifting the reference first-eye image and an output second-eye image for the second-eye view pose by view shifting the reference second-eye image based on the modified depth values.
    Type: Grant
    Filed: October 23, 2019
    Date of Patent: June 21, 2022
    Assignee: Koninklijke Philips N.V.
    Inventor: Christiaan Varekamp
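    Illustrative sketch (not from the patent): the depth modification rule read from the abstract, i.e. shrinking each depth value toward the reference depth by an amount that grows with the distance between the current view pose and the reference pose. The specific linear ramp and the 10 cm "full effect" distance are assumptions.
      import numpy as np

      def modify_depth(depth_values, reference_depth, view_pose, reference_pose,
                       full_effect_at=0.10):
          """Reduce the difference between depth values and the reference depth by an amount
          that depends on how far the eye's view pose has drifted from its reference pose."""
          drift = np.linalg.norm(np.asarray(view_pose) - np.asarray(reference_pose))
          # weight 0 -> keep original depth, weight 1 -> collapse onto the reference depth plane
          weight = np.clip(drift / full_effect_at, 0.0, 1.0)
          return reference_depth + (1.0 - weight) * (np.asarray(depth_values) - reference_depth)

      # usage: a 2 cm drift keeps 80% of the depth relief; a 10 cm drift flattens it completely
      depth = np.array([[1.8, 2.0], [2.2, 2.4]])
      print(modify_depth(depth, reference_depth=2.0, view_pose=[0.02, 0, 0], reference_pose=[0, 0, 0]))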
  • Patent number: 11368666
    Abstract: A control is performed such that the speed of change of the virtual viewpoint, which is changed in accordance with acceptance of an input according to a specific user operation during playback of the virtual viewpoint image at a first playback speed, becomes lower than the speed of change of the virtual viewpoint which is changed in accordance with acceptance of an input according to the specific user operation during playback of the virtual viewpoint image at a second playback speed higher than the first playback speed.
    Type: Grant
    Filed: December 24, 2020
    Date of Patent: June 21, 2022
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Keigo Yoneda
  • Patent number: 11361513
    Abstract: This application describes a head-mounted display (HMD) for use in virtual-reality (VR) environments. The systems and methods described herein may determine information about a real-world environment surrounding the user, a location of the user within the real-world environment, and/or a pose of the user within the real-world environment. Such information may allow the HMD to display images of the real-world environment in a pass-through manner and without distracting the user from the VR environment. In some instances, the HMD may pass through images of the real-world environment based on one or more triggering events.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: June 14, 2022
    Assignee: Valve Corporation
    Inventors: Charles N. Lohr, Gordon Wayne Stoll
  • Patent number: 11356619
    Abstract: Embodiments of this application disclose methods, systems, and devices for video synthesis. In one aspect, a method comprises obtaining a plurality of frames corresponding to source image information of a first to-be-synthesized video, each frame of the source image information including a respective source image and a corresponding source motion key point. The method also comprises obtaining a plurality of frames corresponding to target image information of a second to-be-synthesized video. For each frame of the plurality of frames corresponding to the target image information of the second to-be-synthesized video, the method comprises fusing a respective source image from the first to-be-synthesized video, a corresponding source motion key point, and a respective target motion key point corresponding to the frame using a pre-trained video synthesis model, and generating a respective output image in accordance with the fusing. The method further comprises repeating the fusing and the generating steps for the second to-be-synthesized video to produce a synthesized video.
    Type: Grant
    Filed: April 23, 2021
    Date of Patent: June 7, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Haozhi Huang, Kun Cheng, Chun Yuan, Wei Liu
  • Patent number: 11350073
    Abstract: The present invention discloses a disparity image stitching and visualization method based on multiple pairs of binocular cameras. A calibration algorithm is used to solve the positional relationship between binocular cameras, and prior information is used to solve a homography matrix between images; internal parameters and external parameters of the cameras are used to perform camera coordinate system transformation of depth images. Because the graph cut algorithm has high time complexity that depends on the number of nodes in a graph, the present invention divides the images into layers, and solutions are obtained layer by layer and iterated; then the homography matrix is used to perform image coordinate system transformation of the depth images, and a stitching seam is synthesized to realize seamless panoramic depth image stitching; finally, depth information of a disparity image is superimposed on a visible light image.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: May 31, 2022
    Assignee: DALIAN UNIVERSITY OF TECHNOLOGY
    Inventors: Xin Fan, Risheng Liu, Zhuoxiao Li, Wei Zhong, Zhongxuan Luo
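    Illustrative sketch (not from the patent): the final warp-merge-overlay steps with OpenCV, assuming the homography and the coordinate-transformed depth images already exist. A simple minimum over the overlap replaces the patent's layered graph-cut seam search, and zero is assumed to mark invalid depth.
      import cv2
      import numpy as np

      def stitch_depth_pair(depth_ref, depth_other, H_other_to_ref):
          """Warp the second depth image into the reference image plane and merge the overlap."""
          h, w = depth_ref.shape
          warped = cv2.warpPerspective(depth_other, H_other_to_ref, (w, h),
                                       flags=cv2.INTER_NEAREST)        # nearest keeps depth values valid
          panorama = np.where(depth_ref > 0, depth_ref, warped)        # fill holes from the other view
          overlap = (depth_ref > 0) & (warped > 0)
          panorama[overlap] = np.minimum(depth_ref[overlap], warped[overlap])
          return panorama

      def overlay_on_visible(visible_bgr, depth_panorama, alpha=0.5):
          """Superimpose the depth information on the visible-light image for visualization."""
          norm = cv2.normalize(depth_panorama, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
          colored = cv2.applyColorMap(norm, cv2.COLORMAP_JET)
          return cv2.addWeighted(visible_bgr, 1.0 - alpha, colored, alpha, 0.0)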
  • Patent number: 11348257
    Abstract: Aspects of the present disclosure relate to systems, devices and methods for performing a surgical step or surgical procedure with visual guidance using an optical head mounted display. Aspects of the present disclosure relate to systems, devices and methods for displaying, placing, fitting, sizing, selecting, aligning, moving a virtual implant on a physical anatomic structure of a patient and, optionally, modifying or changing the displaying, placing, fitting, sizing, selecting, aligning, moving, for example based on kinematic information.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: May 31, 2022
    Inventor: Philipp K. Lang
  • Patent number: 11343093
    Abstract: An example method includes, in response to receiving a byte array including process data, determining whether auxiliary data is to be transmitted from a field device based on a counter, the auxiliary data including an encryption key identifier and an initialization vector, when auxiliary data is to be transmitted, transmitting a first data packet including the auxiliary data to the remote device, and determining a value for a source bit based on a type of connection between the field device and the remote device, the source bit and the counter included in associated data. The method further includes generating a nonce value based on the source bit and the initialization vector, encrypting a payload including the byte array based on the encryption key identifier and the nonce value, and transmitting a second data packet to the remote device, the second data packet including the associated data and the encrypted payload.
    Type: Grant
    Filed: February 8, 2019
    Date of Patent: May 24, 2022
    Assignee: Fisher Controls International LLC
    Inventor: Kenneth William Junk
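    Illustrative sketch (not from the patent): one plausible packet construction using Python's cryptography package, where a 12-byte AES-GCM nonce is assembled from the source bit, the initialization vector, and the counter, and the source bit plus counter travel as associated data so the receiver can rebuild the same nonce. The field layout is an assumption, not the patent's wire format.
      import os
      import struct
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM

      def build_nonce(source_bit: int, init_vector: bytes, counter: int) -> bytes:
          """12-byte AES-GCM nonce: 1 source byte + 7-byte IV fragment + 4-byte counter."""
          return bytes([source_bit & 1]) + init_vector[:7] + struct.pack(">I", counter)

      def build_packet(key: bytes, init_vector: bytes, source_bit: int, counter: int,
                       process_bytes: bytes) -> bytes:
          """Encrypt the byte-array payload and prepend the cleartext associated data."""
          associated = bytes([source_bit & 1]) + struct.pack(">I", counter)
          nonce = build_nonce(source_bit, init_vector, counter)
          ciphertext = AESGCM(key).encrypt(nonce, process_bytes, associated)
          return associated + ciphertext

      # usage: field device side, wired connection (source_bit=0), 17th packet since the key exchange
      key = AESGCM.generate_key(bit_length=128)
      iv = os.urandom(7)                       # shared earlier in an "auxiliary data" packet
      packet = build_packet(key, iv, source_bit=0, counter=17, process_bytes=b"\x01\x02\x03\x04")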
  • Patent number: 11334178
    Abstract: Systems and methods to enact machine-based, substantially simultaneous, two-handed interactions with one or more displayed virtual objects. Bimanual interactions may be implemented by combining an ability to specify one or more locations on a touch-sensitive display using one or more digits of a first hand with an ability to monitor a portable, handheld controller manipulated by the other hand. Alternatively or in addition, pointing by the first hand to the one or more locations on a display may be enhanced by a stylus or other pointing device. The handheld controller may be tracked within camera-acquired images by following camera-trackable controller components and/or by acquiring measurements from one or more embedded inertial measurement units (IMUs). Optionally, one or more switches or sensors may be included within the handheld controller, operable by one or more digits of the second hand to enable alternative virtual object display and/or menu selections during bimanual interactions.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: May 17, 2022
    Assignee: KINOO, Inc.
    Inventors: Lewis James Marggraff, Nelson George Publicover
  • Patent number: 11335063
    Abstract: Described herein are methods and systems for generating multiple maps during object scanning for 3D object reconstruction. A sensor device captures RGB images and depth maps of objects in a scene. A computing device receives the RGB images and the depth maps from the sensor device. The computing device creates a first map using at least a portion of the depth maps, a second map using at least a portion of the depth maps, and a third map using at least a portion of the depth maps. The computing device finds key point matches among the first map, the second map, and the third map. The computing device performs bundle adjustment on the first map, the second map, and the third map using the matched key points to generate a final map. The computing device generates a 3D mesh of the object using the final map.
    Type: Grant
    Filed: December 30, 2020
    Date of Patent: May 17, 2022
    Assignee: VanGogh Imaging, Inc.
    Inventors: Ken Lee, Jun Yin, Craig Cambias
  • Patent number: 11334938
    Abstract: A continuous virtual fitting system enables continuous virtual fitting and custom configuration of products and prototypes by processing and storing virtual fitting images with placeholder configurations in virtual fitting catalogs, and by creating automatic virtual fittings of variations of the products and prototypes using the virtual fitting catalogs.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: May 17, 2022
    Inventor: Grace Tang
  • Patent number: 11330193
    Abstract: An imaging device for imaging of a local area surrounding the imaging device. The imaging device includes a lens assembly, a filtering element and a detector. The lens assembly is configured to receive light from a local area surrounding the imaging device and to direct at least a portion of the received light to the detector. The filtering element is placed in the imaging device within the lens assembly such that light is incident at a surface of the filtering element within a range of angles determined by a design range of angles at which the filtering element is designed to filter light. The detector is configured to capture image(s) of the local area including the filtered light. The imaging device can be integrated into a depth camera assembly for determining depth information of object(s) in the local area based on the captured image(s).
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: May 10, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Andrew Matthew Bardagjy, Joseph Duggan, Cina Hazegh, Fei Liu, Mark Timothy Sullivan, Simon Morris Shand Weiss
  • Patent number: 11330204
    Abstract: Examples are described of automatic exposure timing synchronization. An imaging system includes a first image sensor configured to capture a first image according to a first exposure timing, including by exposing first region of interest (ROI) image data at the first image sensor for a first ROI exposure time period. Based on the first exposure timing, the imaging system sets a second exposure timing for a second image sensor to capture a second image. Capture of the second image according to the second exposure timing includes exposure of second ROI image data at the second image sensor for a second ROI exposure time period. The second exposure timing may be set so that the start of the second ROI exposure time period aligns with the start of the first ROI exposure time period, and/or so that the first and second ROI exposure time periods overlap.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: May 10, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Jeyaprakash Soundrapandian, Ramesh Ramaswamy, Sureshnaidu Laveti, Rajakumar Govindaram
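    Illustrative sketch (not from the patent): the timing arithmetic implied by the abstract, choosing the second sensor's frame start so that its ROI exposure window begins together with (and therefore overlaps) the first sensor's. Line times and ROI rows are made-up numbers.
      def roi_exposure_start(frame_start_us, roi_first_row, line_time_us):
          """Time at which a rolling-shutter sensor begins exposing its region of interest."""
          return frame_start_us + roi_first_row * line_time_us

      def second_sensor_frame_start(first_frame_start_us, roi_row_1, line_time_1_us,
                                    roi_row_2, line_time_2_us):
          """Pick the second sensor's frame start so both ROI exposure windows start together."""
          target = roi_exposure_start(first_frame_start_us, roi_row_1, line_time_1_us)
          return target - roi_row_2 * line_time_2_us

      # usage: sensor 1 starts at t=0 with a 10 us line time and its ROI at row 400;
      # sensor 2 has a 15 us line time and its ROI at row 200, so it must start 1000 us later
      print(second_sensor_frame_start(0.0, roi_row_1=400, line_time_1_us=10.0,
                                      roi_row_2=200, line_time_2_us=15.0))   # 1000.0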
  • Patent number: 11321858
    Abstract: A distance image generation device includes a light projection unit for projecting reference light onto a subject, a light reception unit having a plurality of two-dimensionally arrayed pixels, an optical system for guiding light from the subject to the light reception unit, an influence calculation means for calculating, based on an amount of light received by a target pixel and peripheral pixels thereof among the plurality of pixels, an influence of optical phenomena on the target pixel and the peripheral pixels, an impact calculation means for calculating the impact exerted by the peripheral pixels on the target pixel based on the influence, and a distance image generation means for generating a distance image of the subject based on the impact.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: May 3, 2022
    Assignee: FANUC CORPORATION
    Inventors: Yuuki Takahashi, Atsushi Watanabe, Minoru Nakamura
  • Patent number: 11320894
    Abstract: Apparatus, a method and a computer program are provided. The apparatus includes circuitry for causing rendering of mediated reality content to a user, wherein the mediated reality content includes virtual visual content rendered on a display of a hovering drone. The apparatus also includes circuitry for determining a real location of the user in real space. The apparatus further includes circuitry for dynamically adjusting a real location of the hovering drone, relative to the determined real location of the user, based at least in part on at least one characteristic of the mediated reality content rendered to the user.
    Type: Grant
    Filed: June 18, 2019
    Date of Patent: May 3, 2022
    Assignee: Nokia Technologies Oy
    Inventors: Jussi Leppanen, Mikko-Ville Laitinen, Arto Lehtiniemi, Antti Eronen
  • Patent number: 11318954
    Abstract: A movable carrier auxiliary system includes an environment detecting device, a state detecting device, and a control device. The environment detecting device includes at least one image capturing module and an operation module. The image capturing module captures an environment image in a traveling direction of the movable carrier. The operation module detects whether there is at least one of a target carrier and a lane marking in the environment image captured in the traveling direction, and generates a detection signal accordingly. The state detecting device detects a moving state of the movable carrier and generates a state signal. The control device continuously receives the detection signal and the state signal, and controls the movable carrier to follow the target carrier or the lane marking according to the detection signal and the state signal upon receiving the detection signal indicating that there is the target carrier or the lane marking in the environment image.
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: May 3, 2022
    Assignee: ABILITY OPTO-ELECTRONICS TECHNOLOGY CO., LTD.
    Inventors: Yeong-Ming Chang, Chien-Hsun Lai, Yao-Wei Liu