3-D or Stereo Imaging Analysis Patents (Class 382/154)
  • Patent number: 11120613
    Abstract: To achieve a balance between responsiveness of image display to movement of a viewing point and image quality. Reference viewing points are set with respect to a space containing an object to be displayed, and images of the space as viewed from each reference viewing point are created as reference images. When the pixel values of display images from a virtual camera are determined, reference images in which the point on the object represented by the pixel in question appears are selected, and the values of those pixels are combined using a rule based on, among other factors, the positional relationship of the reference viewing points to the virtual camera.
    Type: Grant
    Filed: February 9, 2018
    Date of Patent: September 14, 2021
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Masakazu Suzuoki, Yuki Karasawa
  • Patent number: 11122249
    Abstract: Systems and methods are disclosed that dynamically and laterally shift each virtual object displayed by an augmented reality headset by a respective distance as the respective virtual object is displayed to change virtual depth from a first virtual depth to a second virtual depth. The respective distance may be determined based on a lateral distance between a first convergence vector of a user's eye with the respective virtual object at the first virtual depth and a second convergence vector of the user's eye with the respective virtual object at the second virtual depth along the display, and may be based on an interpupillary distance. In this manner, display of the virtual object may be adjusted such that the gazes of the user's eyes may converge where the virtual object appears to be.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: September 14, 2021
    Assignee: Universal City Studios LLC
    Inventors: Yu-Jen Lin, Patrick John Goergen, Martin Evan Graham
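The convergence geometry described in this abstract can be illustrated with a simplified similar-triangles model: for an eye offset laterally from center, the on-screen intersection of its gaze ray toward a centered object moves as the object's virtual depth changes. This is a hedged reconstruction for intuition only, not the patent's method; the function and the screen-intersection formula are assumptions.

```python
def lateral_shift(ipd, screen_dist, depth_from, depth_to):
    """Horizontal shift (same units as ipd) of the left eye's gaze-ray
    intersection with a display plane at screen_dist, when a centered
    object's virtual depth changes from depth_from to depth_to.
    Assumes a simple pinhole/similar-triangles model."""
    half = ipd / 2.0
    # Left eye at x = -ipd/2; its ray to a centered point at depth d
    # crosses the display plane at x = -(ipd/2) * (1 - screen_dist/d).
    x_from = -half * (1.0 - screen_dist / depth_from)
    x_to = -half * (1.0 - screen_dist / depth_to)
    return x_to - x_from

# Example: 63 mm interpupillary distance, display plane at 1 m,
# virtual depth changing from 2 m to 0.5 m.
shift = lateral_shift(63.0, 1000.0, 2000.0, 500.0)
```

With no depth change the shift is zero, as expected; bringing the object closer than the display plane shifts the convergence point outward.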
  • Patent number: 11120526
    Abstract: A mobile device can implement a neural network-based domain transfer scheme to transfer an image from a first domain appearance to a second domain appearance. The domain transfer scheme can be configured to detect an object in the image, apply an effect to the image, and blend the image using color space adjustments and blending schemes to generate a realistic result image. The scheme can further be configured to execute efficiently on the resource-constrained device by removing operational layers based on the resources available on the mobile device.
    Type: Grant
    Filed: April 5, 2019
    Date of Patent: September 14, 2021
    Assignee: Snap Inc.
    Inventors: Sergey Demyanov, Aleksei Podkin, Aleksei Stoliar, Vadim Velicodnii, Fedor Zhdanov
  • Patent number: 11117262
    Abstract: One embodiment can provide an intelligent robotic system. The intelligent robotic system can include at least one multi-axis robotic arm, at least one gripper attached to the multi-axis robotic arm for picking up a component, a machine vision system comprising at least a three-dimensional (3D) surfacing-imaging module for detecting 3D pose information associated with the component, and a control module configured to control movements of the multi-axis robotic arm and the gripper based on the detected 3D pose of the component.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: September 14, 2021
    Assignee: EBOTS INC.
    Inventors: Kai C. Yung, Zheng Xu, Jianming Fu
  • Patent number: 11122247
    Abstract: A depth map generation device capable of correcting occlusion includes at least two image capture pairs and a depth map generator. The at least two image capture pairs are used to capture a plurality of images. The depth map generator is coupled to the image capture pairs for generating a first depth map and a second depth map according to the plurality of images, wherein when the first depth map includes a first occlusion region and a first non-occlusion region, the depth map generator corrects the first occlusion region according to the second depth map.
    Type: Grant
    Filed: March 27, 2018
    Date of Patent: September 14, 2021
    Assignee: eYs3D Microelectronics, Co.
    Inventor: Chi-Feng Lee
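A minimal sketch of the correction step described above, assuming the two depth maps are already co-registered on the same pixel grid and the occlusion region is available as a boolean mask. All names are hypothetical illustrations, not the patent's implementation.

```python
import numpy as np

def correct_occlusion(depth1, depth2, occlusion_mask):
    """Replace values of the first depth map inside the occlusion
    region with values from the co-registered second depth map."""
    corrected = depth1.copy()
    corrected[occlusion_mask] = depth2[occlusion_mask]
    return corrected

d1 = np.array([[1.0, 2.0], [0.0, 4.0]])   # 0.0 marks an occluded pixel
d2 = np.array([[1.1, 2.1], [3.0, 4.1]])
mask = d1 == 0.0
fixed = correct_occlusion(d1, d2, mask)
```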
  • Patent number: 11120392
    Abstract: A system and method for calibrating location and orientation of a directional light source relative to a field of view of an optical device includes a directional light source that directs light to points on a virtual grid overlaying the field of view of the optical device. The optical device captures an image for each point on the virtual grid at which the directional light source directs light. A light dot is located in a plurality of captured images. The location and orientation of the directional light source are calibrated relative to the field of view of the optical device based on coordinates of each located light dot and on relative coordinates of the optical device.
    Type: Grant
    Filed: June 11, 2019
    Date of Patent: September 14, 2021
    Assignee: POSITION IMAGING, INC.
    Inventors: Drew Anthony Schena, Guohua Min
  • Patent number: 11120624
    Abstract: A three-dimensional head portrait generating method executes on an electronic device. The three-dimensional head portrait generating method establishes a three-dimensional head portrait model with a plurality of feature points according to front face information, wherein feature points form a plurality of first grids on the three-dimensional head portrait model; maps a first part of the feature points of the three-dimensional head portrait model to a left face image to form a plurality of second grids on the left face image; maps a second part of the feature points of the three-dimensional head portrait model to a right face image to form a plurality of third grids on the right face image; and superimposes the left face image and the right face image onto the three-dimensional head portrait model according to a correspondence among the first grids, the second grids and the third grids, to generate a three-dimensional head portrait.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: September 14, 2021
    Assignee: ASUSTEK COMPUTER INC.
    Inventors: Guan-De Lee, Hao-Yuan Kuo
  • Patent number: 11110602
    Abstract: A robot control device that controls a robot and includes a processor which extracts a contour of a target based on an image of the target captured by an imaging device, generates a point sequence corresponding to the contour, and converts coordinates of the point sequence into coordinates in a robot coordinate system. Further, a robot control device that controls a robot includes a processor which extracts a contour of a target based on an image of the target captured by an imaging device and a predetermined instruction, generates a point sequence corresponding to the contour, and converts coordinates of the point sequence into coordinates in a robot coordinate system.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: September 7, 2021
    Inventors: Seiji Aiso, Yukihiro Yamaguchi
  • Patent number: 11107229
    Abstract: An image processing method and apparatus is disclosed. The image processing method includes receiving an input image and estimating a depth of a target based on a position, a size, and a class of the target in the input image.
    Type: Grant
    Filed: January 2, 2019
    Date of Patent: August 31, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Lin Ma, Wonhee Lee, Chun Wang, Guangwei Wang, Minsu Ahn, Tianhao Gao, Sung Hoon Hong, Zhihua Liu
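Estimating depth from a target's position, size, and class can be illustrated with the classic pinhole relation Z = f·H/h, where H is a canonical real-world size for the detected class and h its image size in pixels. This is a sketch of the general idea only; the patented method may instead use a learned model, and the class-height table here is invented for illustration.

```python
# Illustrative canonical object heights per class, in meters (assumed values)
CLASS_HEIGHTS = {"person": 1.7, "car": 1.5}

def estimate_depth(focal_px, bbox_height_px, class_name,
                   class_heights=CLASS_HEIGHTS):
    """Pinhole-camera depth estimate: Z = f * H_real / h_image."""
    return focal_px * class_heights[class_name] / bbox_height_px

# A 1.7 m person spanning 170 px, with a 1000 px focal length -> 10 m
z = estimate_depth(1000.0, 170.0, "person")
```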
  • Patent number: 11107268
    Abstract: The techniques described herein relate to methods, apparatus, and computer readable media for efficiently processing data of initial correspondence assignments, e.g., for three-dimensional reconstruction of an object. In some aspects, the system includes a processor configured to perform the acts of receiving a first set of images of a scene and a second set of images of the scene, determining a first pixel fingerprint based on the first set of images and a second pixel fingerprint based on the second set of images, generating a first binary pixel fingerprint based on the first pixel fingerprint and a second binary pixel fingerprint based on the second pixel fingerprint, and determining whether there exists a stereo correspondence between the first pixel fingerprint and the second pixel fingerprint at least in part based on comparing the first binary pixel fingerprint and the second binary pixel fingerprint.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: August 31, 2021
    Assignee: Cognex Corporation
    Inventors: Marcus Große, Martin Schaffer, Simon Willeke, Bastian Harendt
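A toy sketch of the binary-fingerprint comparison: per-pixel intensity sequences (one value per image in the set) are binarized, and a stereo correspondence is declared when the two bit strings are close in Hamming distance. The thresholding rule and distance bound are illustrative assumptions, not the patent's exact construction.

```python
def binary_fingerprint(values, threshold):
    """Binarize a per-pixel sequence of intensities (one value per
    image in the set) into a bit tuple."""
    return tuple(1 if v >= threshold else 0 for v in values)

def hamming(a, b):
    """Number of bit positions in which two fingerprints differ."""
    return sum(x != y for x, y in zip(a, b))

def is_correspondence(fp1, fp2, max_dist=1):
    """Declare a stereo correspondence when the binary fingerprints
    differ in at most max_dist positions."""
    return hamming(fp1, fp2) <= max_dist

left = binary_fingerprint([10, 200, 35, 180], threshold=100)
right = binary_fingerprint([12, 190, 40, 175], threshold=100)
match = is_correspondence(left, right)
```

Comparing compact bit strings instead of raw pixel fingerprints is what makes the correspondence search cheap: Hamming distance reduces to an XOR and a population count.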
  • Patent number: 11103664
    Abstract: Systems and methods permit generation of a digital scan of a user's face such as for obtaining of a patient respiratory mask, or component(s) thereof, based on the digital scan. The method may include: receiving video data comprising a plurality of video frames of the user's face taken from a plurality of angles relative to the user's face, generating a three-dimensional representation of a surface of the user's face based on the plurality of video frames, receiving scale estimation data associated with the received video data, the scale estimation data indicative of a relative size of the user's face, and scaling the digital three-dimensional representation of the user's face based on the scale estimation data. In some aspects, the scale estimation data may be derived from motion information collected by the same device that collects the scan of the user's face.
    Type: Grant
    Filed: October 3, 2018
    Date of Patent: August 31, 2021
    Inventors: Simon Michael Lucey, Benjamin Peter Johnston, Priyanshu Gupta, Tzu-Chin Yu
  • Patent number: 11107280
    Abstract: In one embodiment, a method includes by one or more computing devices, accessing an image including a hand of a user of a head-mounted display. The method includes generating, from at least the image, a virtual object representation of the hand. The virtual object representation is defined in a virtual environment. The method includes rendering, based on the virtual object representation and at least one other virtual object in the virtual environment, an image of the virtual environment from a viewpoint of the user. The image includes a set of pixels that corresponds to a portion of the virtual object representation that is visible from the viewpoint of the user. The method includes providing, to a set of light emitters of the head-mounted display, instructions to display the image. The set of pixels in the image causes the light emitters at one or more positions to be unilluminated.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: August 31, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Steve John Clohset, Warren Andrew Hunt
  • Patent number: 11107247
    Abstract: In a calibration chart device 20, a marker whose infrared-ray radiation amount has a unimodal distribution along its moving direction is moved sequentially in a first direction and then in a second, different direction, while an infrared camera IRC shoots plural infrared images at the respective movement positions. From these images, a peak detector 32 detects, for each pixel, the position of the marker at which the pixel value is maximized. A calibration processor 36 calculates a camera parameter by using the per-pixel marker positions detected by the peak detector 32. With this configuration, calibration of the infrared camera can be performed with ease.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: August 31, 2021
    Assignee: SONY CORPORATION
    Inventor: Hideki Oyaizu
  • Patent number: 11104345
    Abstract: Methods, systems, and media for determining characteristics of roads are provided. In some embodiments, the method comprises: receiving, at a first time point, first camera information from a camera associated with a vehicle; identifying a first position of a feature of an object in front of the vehicle based on the first camera information; receiving, at an additional time point, additional camera information from the camera; identifying an updated position of the feature of the object in front of the vehicle based on the additional camera information; determining a relative motion of the feature of the object in front of the vehicle based on the first position and the updated position; and determining a characteristic of a road the vehicle is on based on the relative motion of the feature of the object in front of the vehicle.
    Type: Grant
    Filed: April 18, 2018
    Date of Patent: August 31, 2021
    Assignee: Rivian IP Holdings, LLC
    Inventors: Paul Theodosis, Sabarish Gurusubramanian
  • Patent number: 11106203
    Abstract: A method for generating a first person view (FPV) of an environment includes, with aid of one or more processors individually or collectively, analyzing stereoscopic video data of the environment to determine environmental information and generating augmented stereoscopic video data of the environment by fusing the stereoscopic video data and the environmental information.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: August 31, 2021
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Zhicong Huang, Cong Zhao, Shuo Yang, Yannan Wu, Kang Yang, Guyue Zhou
  • Patent number: 11099645
    Abstract: A method for rendering computer graphics based on saccade detection is provided. One embodiment of the method includes rendering a computer simulated scene for display to a user, detecting an onset of a saccade that causes saccadic masking in an eye movement of the user viewing the computer simulated scene, and reducing a computing resource used for rendering frames of the computer simulated scene during at least a portion of a duration of the saccade. Systems perform similar steps, and non-transitory computer readable storage mediums each storing one or more computer programs are also provided.
    Type: Grant
    Filed: March 3, 2020
    Date of Patent: August 24, 2021
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Dominic Saul Mallinson
  • Patent number: 11100641
    Abstract: Methods for estimating plant age are provided. A first dataset comprising a first plurality of images of a first plurality of plants is obtained, including for each plant of the first plurality of plants, one or more first plant features. A second dataset comprising, for each plant of the first plurality of plants, a respective second plant location and plant age is obtained. The one or more first plant features and plant age for plants in the first plurality of plants are used to train a model for plant age determination. A third dataset comprising a second plurality of images of a second agricultural plot is obtained, including for a second plurality of plants in the second agricultural plot one or more corresponding second plant features. Ages for plants in the second plurality of plants are estimated by inputting one or more respective second plant features into the trained model.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: August 24, 2021
    Assignee: Aerobotics (Pty) Ltd
    Inventors: Michael Malahe, Benjamin Chaim Meltzer, Stuart Allan Van Der Veen, Garth Samuel Dominic Wasson
  • Patent number: 11100661
    Abstract: A method for depth mapping includes receiving optical radiation reflected from multiple points on an object and processing the received optical radiation to generate depth data including multiple candidate depth coordinates for each of a plurality of pixels and respective measures of confidence associated with the candidate depth coordinates. One of the candidate depth coordinates is selected at each of the plurality of the pixels responsively to the respective measures of confidence. A depth map of the object is output, including the selected one of the candidate depth coordinates at each of the plurality of the pixels.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: August 24, 2021
    Assignee: APPLE INC.
    Inventors: Alexander Shpunt, Gerard Medioni, Daniel Cohen, Erez Sali, Ronen Deitch
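The per-pixel selection step described above reads like an argmax over the confidence measures; a minimal NumPy sketch under that assumption (array shapes and names are hypothetical, not from the patent):

```python
import numpy as np

def select_depths(candidates, confidences):
    """candidates, confidences: (H, W, K) arrays holding K candidate
    depth values per pixel and their associated confidence measures.
    Returns the (H, W) depth map keeping, at each pixel, the candidate
    with the highest confidence."""
    best = np.argmax(confidences, axis=-1)      # (H, W) winning index
    h, w = np.indices(best.shape)
    return candidates[h, w, best]

cand = np.array([[[1.0, 5.0], [2.0, 6.0]]])     # 1x2 pixels, 2 candidates
conf = np.array([[[0.9, 0.1], [0.2, 0.8]]])
depth = select_depths(cand, conf)
```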
  • Patent number: 11100806
    Abstract: A multi-spectral vehicular system for providing pre-collision alerts, comprising: two pairs of stereoscopic infrared (IR) and visible light (VL) sensors, each of which provides acquired image streams from a mutual field of view, synchronized to provide stereoscopic vision; a data fusion module for mutually processing the data streams, to detect objects within the field of view and calculate distances to detected objects; and a cellular-based communication module for allowing communication between the sensors and the mobile phones/infotainment systems of the vehicle. The module runs a dedicated background application adapted to monitor the vicinity of the vehicle to detect other vehicles having a similar system; calculate the speed and heading azimuth of each of the other vehicles; and provide alerts to the driver whenever another vehicle having a similar system is on a collision path with the vehicle, based on the calculation and on the speed of the vehicle.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: August 24, 2021
    Assignee: FORESIGHT AUTOMOTIVE LTD.
    Inventors: Haim Siboni, Dror Elbaz, Roman Shklyar, Elazar Elkin
  • Patent number: 11095955
    Abstract: A method for delivering an interactive video is provided, including delivering a first video clip of the interactive video in a first loop, and upon receiving a first input during delivery of the first video clip, delivering a first exit sequence of the interactive video, the first exit sequence including a first exit video clip.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: August 17, 2021
    Assignee: FLAVOURWORKS LTD
    Inventors: Pavle Mihajlovic, Jack Attridge
  • Patent number: 11094137
    Abstract: The disclosed subject matter is directed to employing machine learning models configured to predict 3D data from 2D images using deep learning techniques to derive 3D data for the 2D images. In some embodiments, a system is described comprising a memory that stores computer executable components, and a processor that executes the computer executable components stored in the memory. The computer executable components comprise a reception component configured to receive two-dimensional images, and a three-dimensional data derivation component configured to employ one or more three-dimensional data from two-dimensional data (3D-from-2D) neural network models to derive three-dimensional data for the two-dimensional images.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: August 17, 2021
    Assignee: Matterport, Inc.
    Inventors: David Alan Gausebeck, Matthew Tschudy Bell, Waleed K. Abdulla, Peter Kyuhee Hahn
  • Patent number: 11094113
    Abstract: A system for modeling a roof structure comprising an aerial imagery database and a processor in communication with the aerial imagery database. The aerial imagery database stores a plurality of stereoscopic image pairs and the processor selects at least one stereoscopic image pair among the plurality of stereoscopic image pairs and related metadata from the aerial imagery database based on a geospatial region of interest. The processor identifies a target image and a reference image from the at least one stereoscopic pair and calculates a disparity value for each pixel of the identified target image to generate a disparity map. The processor generates a three dimensional point cloud based on the disparity map, the identified target image and the identified reference image. The processor optionally generates a texture map indicative of a three-dimensional representation of the roof structure based on the generated three dimensional point cloud.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: August 17, 2021
    Inventors: Joseph L. Mundy, Bryce Zachary Porter, Ryan Mark Justus, Francisco Rivas
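Generating a point cloud from a disparity map typically relies on the stereo triangulation relation Z = f·B/d together with pinhole back-projection of each pixel. The sketch below illustrates those standard relations; the patent's exact camera model and metadata handling may differ, and all names here are assumptions.

```python
def disparity_to_points(disparity, focal_px, baseline, cx, cy):
    """Back-project a disparity map (list of rows of disparities in
    pixels) into 3-D points using the standard stereo relations:
        Z = f * B / d,  X = (u - cx) * Z / f,  Y = (v - cy) * Z / f.
    Pixels with non-positive disparity are skipped."""
    points = []
    for v, row in enumerate(disparity):
        for u, d in enumerate(row):
            if d <= 0:
                continue
            z = focal_px * baseline / d
            x = (u - cx) * z / focal_px
            y = (v - cy) * z / focal_px
            points.append((x, y, z))
    return points

# 2x2 disparity map; f = 100 px, baseline = 0.5 m, principal point (0.5, 0.5)
pts = disparity_to_points([[10.0, 0.0], [25.0, 50.0]], 100.0, 0.5, 0.5, 0.5)
```

Larger disparities map to nearer points; the zero-disparity pixel (no match) is dropped rather than triangulated.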
  • Patent number: 11087543
    Abstract: A crowd-sourced modeling system to perform operations that include: receiving image data that comprises image attributes; accessing a 3D model based on at least the image attributes of the image data, wherein the 3D model comprises a plurality of parts that collectively depict an object or environment; identifying a change in the object or environment based on a comparison of the image data with the plurality of parts of the 3D model, the change corresponding to a part of the 3D model from among the plurality of parts; and generating an update to the part of the 3D model based on the image attributes of the image data.
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: August 10, 2021
    Assignee: Snap Inc.
    Inventors: Piers Cowburn, Isac Andreas Müller Sandvik, Qi Pan, David Li
  • Patent number: 11089144
    Abstract: Head-mounted display systems and methods of operation that allow users to couple and decouple a portable electronic device, such as a handheld portable electronic device, with a separate head-mounted device (e.g., temporarily integrating the separate devices into a single unit) are disclosed. The portable electronic device may be physically coupled to the head-mounted device such that the portable electronic device can be worn on the user's head. The portable electronic device may be operatively coupled to the head-mounted device such that the two devices can communicate and operate with one another. Each device may be allowed to extend its features and/or services to the other device for the purpose of enhancing, increasing and/or eliminating redundant functions between the head-mounted device and the portable electronic device.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: August 10, 2021
    Assignee: Apple Inc.
    Inventor: Quin C. Hoellwarth
  • Patent number: 11080880
    Abstract: This document describes machine vision systems and methods for determining the locations of target elements. The described machine vision system captures images of target elements and uses information gleaned from the captured target elements to determine their locations.
    Type: Grant
    Filed: November 17, 2017
    Date of Patent: August 3, 2021
    Assignee: MAKER TRADING PTE LTD
    Inventor: Chris Liu
  • Patent number: 11080554
    Abstract: Embodiments provide techniques, including systems and methods, for processing imaging data to identify an installed component. Embodiments include a component identification system that is configured to receive imaging data including an installed component, extract features of the installed component from the imaging data, and search a data store of components for matching reference components that match those features. A relevance score may be determined for each of the reference components based on a similarity between the image and a plurality of reference images in a component model of each of the plurality of reference components. At least one matching reference component may be identified by comparing each relevance score to a threshold relevance score and matching component information may be provided to an end-user for each matching reference component.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: August 3, 2021
    Assignee: LOMA LINDA UNIVERSITY
    Inventors: Montry Suprono, Robert Walter
  • Patent number: 11080526
    Abstract: A method includes classifying low-resolution pixels of a low-resolution satellite image of a geographic area to form an initial classification map and selecting at least one physically-consistent classification map of the low-resolution pixels based on the initial classification map. A water level associated with at least one of the physically-consistent classification maps is then used to identify a set of high-resolution pixels representing a perimeter of water in the geographic area.
    Type: Grant
    Filed: August 14, 2018
    Date of Patent: August 3, 2021
    Assignee: Regents of the University of Minnesota
    Inventors: Ankush Khandelwal, Anuj Karpatne, Vipin Kumar
  • Patent number: 11080876
    Abstract: A method and system for processing camera images is presented. The system receives a first depth map generated based on information sensed by a first type of depth-sensing camera, and receives a second depth map generated based on information sensed by a second type of depth-sensing camera. The first depth map includes a first set of pixels that indicate a first set of respective depth values. The second depth map includes a second set of pixels that indicate a second set of respective depth values. The system identifies a third set of pixels of the first depth map that correspond to the second set of pixels of the second depth map, identifies one or more empty pixels from the third set of pixels, and updates the first depth map by assigning to each empty pixel a respective depth value based on the second depth map.
    Type: Grant
    Filed: October 30, 2019
    Date of Patent: August 3, 2021
    Assignee: MUJIN, INC.
    Inventors: Russell Islam, Xutao Ye
  • Patent number: 11081142
    Abstract: Exemplary embodiments relate to the creation of a media effect index for group video conversations. Media effect application (e.g., in the form of graphical overlays, filters, sounds, etc.) may be tracked in a timeline during a chat session. The resulting index may be used to create a highlights reel, which may serve as an index into a live show or may be used to determine the best time to insert materials into a recording of the conversation. The index may be used to automatically detect events in the video feed, to allow viewers to skip ahead to exciting moments (e.g., represented by clusters of applications of particular types of media effects), to determine where each participant spoke in a discussion, or to provide a common “watch together” experience while multiple users watch a common video. An analysis of the index may be used for research or consumer testing.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: August 3, 2021
    Assignee: FACEBOOK, INC.
    Inventors: Stephane Taine, Brendan Benjamin Aronoff, Jason Duane Clark
  • Patent number: 11080532
    Abstract: A highlight processing method includes: obtaining a frame sequence of frames each having image contents associated with at least one object, wherein object pose estimation is performed upon each frame of the frame sequence to generate an object pose estimation result for each frame; and determining at least one of a start point and an end point of a highlight interval, wherein comparison of the object pose estimation results of different frames is involved in determining at least one of the start point and the end point of the highlight interval.
    Type: Grant
    Filed: August 19, 2019
    Date of Patent: August 3, 2021
    Assignee: MEDIATEK INC.
    Inventors: Shih-Jung Chuang, Yan-Che Chuang, Chun-Nan Li, Yu-Hsuan Huang, Chih-Chung Chiang
  • Patent number: 11082566
    Abstract: A chart for calibrating a system of multiple cameras, the chart comprising: a background; an array of dots contrasting the background, wherein the array of dots are arranged in rows and columns, wherein the array of dots comprise a first dot array, and a second dot array, wherein the first dot array fully occupies a first region of evenly spaced dots with a first dot density, the second dot array fully occupies a second region of evenly spaced dots with a second dot density, and wherein the second region is enclosed within the first region; a group of first markers in the first region, a group of second markers in the second region, and a third marker at the center of the chart, wherein each second marker is closer to the third marker than each first marker.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: August 3, 2021
    Assignee: OmniVision Technologies, Inc.
    Inventors: Shan Xu, Xinting Gao, Bin Chen, Ye Tao, Guansong Liu, Lu Chang
  • Patent number: 11076138
    Abstract: Provided are a projection system, a projection apparatus, and a calibration method for a displayed image thereof.
    Type: Grant
    Filed: May 12, 2020
    Date of Patent: July 27, 2021
    Assignee: Coretronic Corporation
    Inventors: Je-Fu Cheng, Lei-Chih Chang
  • Patent number: 11074675
    Abstract: Systems, devices, media, and methods are presented for generating texture models for objects within a video stream. The systems and methods access a set of images as the set of images are being captured at a computing device. The systems and methods determine, within a portion of the set of images, an area of interest containing an eye and extract an iris area from the area of interest. The systems and methods segment a sclera area within the area of interest and generate a texture for the eye based on the iris area and the sclera area.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: July 27, 2021
    Assignee: Snap Inc.
    Inventors: Chen Cao, Wen Zhang, Menglei Chai, Linjie Luo
  • Patent number: 11074705
    Abstract: Provided is an information processing device including an acquisition unit that acquires a first captured image, a second captured image, and a distance to a subject, and a derivation unit that derives an imaging position distance, which is the distance between the first imaging position and the second imaging position. The derivation is based on: a plurality of pixel coordinates specifying more than three pixels that are present in the same planar region as the emission position irradiated with the directional light beam in real space and that correspond to the same real-space positions in each of the first and second captured images acquired by the acquisition unit; emission position coordinates derived on the basis of the acquired distance; the focal length of an imaging lens; and the dimensions of imaging pixels.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: July 27, 2021
    Assignee: FUJIFILM CORPORATION
    Inventor: Tomonori Masuda
  • Patent number: 11073961
    Abstract: Provided is a mobile terminal which allows pieces of furniture to be virtually arranged. A mobile terminal according to one embodiment of the present invention comprises: a wireless communication unit which is capable of communicating with an external server or an external device; a display unit for displaying an execution screen of a certain application; and a control unit, wherein the execution screen at least comprises: a first area for displaying a first image corresponding to a certain area; a second area for displaying information on each of a plurality of pieces of furniture which can virtually be arranged on the first image; and a third area which includes a chat room for exchanging opinions related to the virtual arrangement of the pieces of furniture on the first image, with a user of at least one predetermined external device on which the certain application is installed.
    Type: Grant
    Filed: July 22, 2020
    Date of Patent: July 27, 2021
    Assignee: LG ELECTRONICS INC.
    Inventors: Euikyeom Kim, Kyungtae Oh, Seungju Choi, Yoonjung Son
  • Patent number: 11069154
    Abstract: Methods and devices for manipulating an image are described. The method comprises receiving image data, the image data including a first image obtained from a first camera and a second image obtained from a second camera, the first camera and the second camera being oriented in a common direction; identifying one or more boundaries of an object in the image data by analyzing the first image and the second image; and displaying a manipulated image based on the image data, wherein the manipulated image includes manipulation of at least a portion of the first image based on boundaries of the object.
    Type: Grant
    Filed: April 6, 2020
    Date of Patent: July 20, 2021
    Assignee: BlackBerry Limited
    Inventor: Steven Henry Fyke
  • Patent number: 11068215
    Abstract: A non-transitory computer-readable storage medium storing computer-readable instructions for an information processing apparatus having a display and a user interface is provided. The computer-readable instructions cause the information processing apparatus to control the display to first display a sheet image and a usable condition image; in response to receiving an editing operation designating a predetermined position in one of the sheet image and the usable condition image, specify the corresponding position in the other of the two images; control the display to subsequently display the sheet image and the usable condition image edited as instructed by the editing operation, or correspondingly to image editing in the image containing the predetermined position; and generate image data composing the edited sheet image and output the generated image data externally.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: July 20, 2021
    Assignee: BROTHER KOGYO KABUSHIKI KAISHA
    Inventor: Masayuki Ishibashi
  • Patent number: 11070781
    Abstract: A method for transforming extended video data for display in virtual reality processes digital extended video data for display on a center screen and two auxiliary screens of a real extended video cinema. The method includes accessing, by a computer executing a rendering application, data that defines virtual screens including a center screen and auxiliary screens, wherein tangent lines to each of the auxiliary screens at their respective centers of area intersect with a tangent line to the center screen at its center of area at equal angles in a range of 75 to 105 degrees. The method includes preparing virtual extended video data at least in part by rendering the digital extended video on corresponding ones of the virtual screens; and saving the virtual extended video data in a computer memory. A corresponding playback method and apparatus display the processed data in virtual reality.
    Type: Grant
    Filed: August 2, 2019
    Date of Patent: July 20, 2021
    Assignee: Warner Bros. Entertainment Inc.
    Inventors: Michael Zink, Mercedes Christine Cardenas
  • Patent number: 11069134
    Abstract: Methods, systems, and computer program products for removing unused portions of a 3D mesh representation of an object may include generating a first mesh representation of the object, the first mesh representation including a plurality of polygons, respective ones of the polygons including at least three vertices and at least three edges, wherein respective ones of the plurality of polygons are associated with a precision value that indicates an extent to which the respective ones of the plurality of polygons in the first mesh representation match the object, and adjusting the first mesh representation of the object to create a second mesh representation of the object by removing, from the first mesh representation, polygons of the plurality of polygons that are associated with precision values that have not been modified from an initial precision value.
    Type: Grant
    Filed: August 31, 2017
    Date of Patent: July 20, 2021
    Assignee: Sony Group Corporation
    Inventors: Pal Szasz, Johannes Elg, Fredrik Olofsson, Lars Novak, Fredrik Mattisson
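The pruning step this abstract describes can be sketched in a few lines. This is a minimal illustration, not the patented implementation; the `Polygon` type and the `INITIAL_PRECISION` sentinel are assumed names for exposition.

```python
from dataclasses import dataclass
from typing import List, Tuple

INITIAL_PRECISION = 0.0  # assumed sentinel meaning "never matched against the object"

@dataclass
class Polygon:
    vertices: Tuple[int, int, int]        # indices into a shared vertex array
    precision: float = INITIAL_PRECISION  # extent to which this polygon matches the object

def prune_unused(mesh: List[Polygon]) -> List[Polygon]:
    """Second mesh representation: keep only polygons whose precision
    value has been modified from the initial value."""
    return [p for p in mesh if p.precision != INITIAL_PRECISION]

mesh = [Polygon((0, 1, 2), 0.92), Polygon((1, 2, 3)), Polygon((2, 3, 4), 0.55)]
pruned = prune_unused(mesh)  # drops the middle, never-updated polygon
```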
  • Patent number: 11068754
    Abstract: Systems and methods are described for generating an image-based prediction model, where a computing device may obtain a set of 3D images from a 3D image data source. Each of the 3D images can have 3D point cloud data and a Distification technique can be applied to the 3D point cloud data of each 3D image to generate output feature vector(s). The output feature vector(s) may then be used to train and generate the image-based prediction model.
    Type: Grant
    Filed: April 2, 2019
    Date of Patent: July 20, 2021
    Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
    Inventors: Elizabeth Flowers, Puneit Dua, Eric Balota, Shanna L. Phillips
  • Patent number: 11067695
    Abstract: A method for 3D printing an object, based on a 3D printable model of the object, includes scanning, by a first LiDAR sensor of a plurality of LiDAR sensors, a portion of the object while the object is being printed by a printer head. The method also includes generating an image of at least the portion of the object based on scanning the portion, generating a comparison by comparing the image with the 3D printable model, and sending a feedback signal that adjusts the printer head based on the comparison.
    Type: Grant
    Filed: March 16, 2018
    Date of Patent: July 20, 2021
    Assignee: Konica Minolta Laboratory U.S.A., Inc.
    Inventor: Jun Amano
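The feedback loop in this abstract (scan, compare with the 3D printable model, adjust the printer head) could take many forms; a simple proportional correction is one plausible sketch. The proportional form and the `gain` value here are illustrative assumptions, not taken from the patent.

```python
def head_correction(scanned_mm: float, model_mm: float, gain: float = 0.5) -> float:
    """Feedback signal for the printer head: signed deviation between the
    LiDAR-scanned layer height and the 3D printable model, scaled by a gain."""
    return gain * (model_mm - scanned_mm)

# scanned layer 0.18 mm vs. model target 0.20 mm -> correction of about +0.01 mm
correction = head_correction(scanned_mm=0.18, model_mm=0.20)
```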
  • Patent number: 11061687
    Abstract: There is provided a program generating apparatus including a generating unit and a genetic processing unit. The generating unit is configured to generate tree structures each representing an image classification program. Each of the tree structures has a first level group and a second level group. Elements of nodes in the first level group are selected from amongst image filters each used to apply preprocessing to an input image. An element of a node in the second level group is selected from amongst setting programs each used to set a different value as a control parameter for generating a classifier based on information obtained by execution of the elements selected for the nodes in the first level group. The genetic processing unit is configured to output, using genetic programming, a tree structure with a fitness score exceeding a predetermined threshold based on the tree structures.
    Type: Grant
    Filed: April 4, 2018
    Date of Patent: July 13, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Hiroaki Okamoto, Tsuyoshi Nagato, Tetsuo Koezuka
  • Patent number: 11064193
    Abstract: A method for decoding a data stream representative of an image sequence. At least one current block of a current image in the image sequence is encoded using a predictor block of a reference image, the predictor block being identified in the reference image via location information. An information item enabling the reference image to be identified from a set of reference images is obtained. When the reference image satisfies a predetermined criterion, the location information of the predictor block is decoded using a first decoding mode, otherwise the location information of the predictor block is decoded using a second decoding mode, the first and second decoding modes including at least a different decoding parameter. The current block is then reconstructed from the predictor block.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: July 13, 2021
    Assignee: ORANGE
    Inventors: Felix Henry, Joel Jung, Bappaditya Ray
  • Patent number: 11060853
    Abstract: A three-dimensional sensor system includes three cameras, a projector, and a processor. The projector simultaneously projects at least two linear patterns on the surface of an object. The three cameras synchronously capture a first two-dimensional (2D) image, a second 2D image, and a third 2D image of the object, respectively. The processor extracts a first set and a second set of 2D lines from the at least two linear patterns on the first 2D image and the second 2D image, respectively; generates a candidate set of three-dimensional (3D) points from the first set and the second set of 2D lines; and selects, from the candidate set of 3D points, an authentic set of 3D points that matches a projection contour line of the object surface by: performing data verification on the candidate set of 3D points using the third 2D image, and filtering the candidate set of 3D points.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: July 13, 2021
    Assignee: ScanTech (Hangzhou) Co., Ltd.
    Inventors: Jun Zheng, Shangjian Chen
  • Patent number: 11062209
    Abstract: A method for training a neural network includes receiving a plurality of images and, for each individual image of the plurality of images, generating a training triplet including a subset of the individual image, a subset of a transformed image, and a homography based on the subset of the individual image and the subset of the transformed image. The method also includes, for each individual image, generating, by the neural network, an estimated homography based on the subset of the individual image and the subset of the transformed image, comparing the estimated homography to the homography, and modifying the neural network based on the comparison.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: July 13, 2021
    Assignee: Magic Leap, Inc.
    Inventors: Daniel DeTone, Tomasz Malisiewicz, Andrew Rabinovich
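The triplet generation this abstract describes (image subset, transformed-image subset, and the homography relating them) can be sketched by perturbing the corners of a random patch and solving for the induced homography. This is a simplified illustration under assumed parameters (`patch`, `jitter`); the direct linear transform below is a standard technique, not necessarily the patent's method.

```python
import numpy as np

rng = np.random.default_rng(7)

def dlt_homography(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """3x3 homography mapping src -> dst via the direct linear transform
    (four point pairs; the solution is the SVD null-space vector)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def training_triplet(image: np.ndarray, patch: int = 64, jitter: int = 16):
    """One triplet: a patch's corner coordinates, randomly perturbed corners
    (standing in for the transformed-image subset), and their homography."""
    h, w = image.shape[:2]
    x = int(rng.integers(jitter, w - patch - jitter))
    y = int(rng.integers(jitter, h - patch - jitter))
    corners = np.array([[x, y], [x + patch, y],
                        [x + patch, y + patch], [x, y + patch]], dtype=float)
    perturbed = corners + rng.uniform(-jitter, jitter, size=(4, 2))
    return corners, perturbed, dlt_homography(corners, perturbed)
```

A network's estimated homography would then be compared against the returned ground-truth `H` to drive the weight update.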
  • Patent number: 11062518
    Abstract: A method for displaying a mixed reality image, including provision of a display assembly including a camera and a display, acquisition of a first image by the camera according to actual image capturing characteristics, the first image being an image of a scene in the field of view of a user, extraction, from the first image, of actual illumination and position characteristics, selection of virtual elements to be integrated into the first image, modification of the virtual elements according to the actual image capturing, illumination and position characteristics, integration of the modified virtual elements in the first image to obtain a second image, and display of the second image on the display.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: July 13, 2021
    Assignee: STEREOLABS
    Inventors: Cécile Schmollgruber, Edwin Azzam, Olivier Braun, Pierre Yver
  • Patent number: 11062471
    Abstract: Stereo matching generates a disparity map indicating pixel offsets between matched points in a stereo image pair. A neural network may be used to generate disparity maps in real time by matching image features in stereo images using only 2D convolutions. The proposed method is faster than 3D convolution-based methods, with only a slight accuracy loss and higher generalization capability. A 3D efficient cost aggregation volume is generated by combining cost maps for each disparity level. Different disparity levels correspond to different amounts of shift between pixels in the left and right image pair. In general, each disparity level is inversely proportional to a different distance from the viewpoint.
    Type: Grant
    Filed: May 6, 2020
    Date of Patent: July 13, 2021
    Assignee: NVIDIA Corporation
    Inventors: Yiran Zhong, Wonmin Byeon, Charles Loop, Stanley Thomas Birchfield
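The cost-volume construction described in this abstract (one cost map per disparity level, stacked into a volume) can be sketched with plain array shifts. This is a minimal single-channel illustration using absolute difference as the matching cost; the patented method operates on learned feature maps.

```python
import numpy as np

def cost_volume(left: np.ndarray, right: np.ndarray, max_disp: int) -> np.ndarray:
    """Stack one cost map per disparity level into a (D, H, W) volume.
    Level d shifts the right image d pixels and scores absolute difference;
    columns with no valid match stay at +inf."""
    h, w = left.shape
    vol = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        vol[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
    return vol

def winner_takes_all(vol: np.ndarray) -> np.ndarray:
    """Disparity map: per pixel, the level with the lowest cost."""
    return vol.argmin(axis=0)
```

Since larger disparities correspond to nearer points, the resulting map can be inverted into relative depth.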
  • Patent number: 11064096
    Abstract: Video processing, including: generating first tracking information using a first tracking system coupled to a camera which moves during a video sequence forming a shot including multiple frames, wherein the first tracking information includes information about six degrees of freedom motion of the camera synchronized to the multiple frames in the shot; generating second tracking information using a second tracking system coupled to the camera which moves during the video sequence, wherein the second tracking information includes information about six degrees of freedom motion of the camera synchronized to the multiple frames in the shot; generating, by a tracking tool, a timeline with a first track for the first tracking information and a second track for the second tracking information, wherein the tracking tool is coupled to the first tracking system and the second tracking system, and receives the first tracking information and the second tracking information.
    Type: Grant
    Filed: August 11, 2020
    Date of Patent: July 13, 2021
    Assignees: Sony Corporation, Sony Pictures Entertainment, Inc.
    Inventor: Felix Sauermann
  • Patent number: 11051780
    Abstract: An embodiment of a method includes providing a first result list indicating a plurality of first anatomic structures and indicating, for each respective first anatomic structure of the plurality of first anatomic structures, a corresponding first severity indicator; providing a second result list indicating a plurality of second anatomic structures and indicating, for each respective second anatomic structure, a corresponding second severity indicator; providing a relationship matrix indicating a level of interrelatedness between the first anatomic structures and the second anatomic structures; and generating, based on the first result list, the second result list and the relationship matrix provided, a concordance visualization indicating a respective level of concordance between at least one of the first anatomic structures and the corresponding first severity indicator, and indicating a respective level of concordance between at least one of the second anatomic structures and the corresponding second severity indicator.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: July 6, 2021
    Assignee: SIEMENS HEALTHCARE GMBH
    Inventors: Puneet Sharma, Ulrich Hartung, Chris Schwemmer, Ruth J. Soenius, Dominik Neumann
  • Patent number: 11057603
    Abstract: Provided is a binocular camera depth calibration method, a binocular camera depth calibration device, a binocular camera depth calibration system and a storage medium. The method includes: acquiring a plurality of groups of images of a 2D code calibration board at different positions so as to acquire disparity maps of the 2D code calibration board; acquiring a 2D code template image of a target region; matching the 2D code template image, so as to determine and store matching region information; determining a position of the 2D code template image in each disparity map in accordance with the matching region information, and calculating an average disparity value of the 2D code template image at the position; acquiring a plurality of groups of average disparity values, and calculating a final disparity value; and calibrating a depth of a binocular camera.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: July 6, 2021
    Assignee: Beijing Smarter Eye Technology Co. Ltd.
    Inventors: Qiwei Xie, Xinliang Wang, An Jiang, Yuan Hao, Jian Li
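The averaging and calibration steps in this abstract reduce to two small computations: the mean disparity over the matched 2D-code region, and the standard pinhole stereo relation Z = f·B/d that converts the final disparity into metric depth. The region format and parameter values below are illustrative assumptions.

```python
def average_disparity(disparity_map, region):
    """Mean disparity inside the matched 2D-code region (x, y, w, h)."""
    x, y, w, h = region
    window = [row[x:x + w] for row in disparity_map[y:y + h]]
    return sum(sum(r) for r in window) / (w * h)

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo relation Z = f * B / d; focal length in pixels,
    baseline in metres, returning depth in metres."""
    return focal_px * baseline_m / disparity_px

# e.g. 700 px focal length, 0.12 m baseline, 42 px final disparity -> about 2.0 m
depth = depth_from_disparity(42, 700, 0.12)
```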