Patents Examined by Daniel C Chang
  • Patent number: 11308576
    Abstract: In accordance with implementations of the subject matter described herein, a solution for visual stylization of stereoscopic images is proposed. In the solution, a first feature map for a first source image and a second feature map for a second source image are extracted. The first and second source images correspond to first and second views of a stereoscopic image, respectively. A first unidirectional disparity from the first source image to the second source image is determined based on the first and second source images. First and second target images having a specified visual style are generated by processing the first and second feature maps based on the first unidirectional disparity. Through the solution, the disparity between the two source images of a stereoscopic image is taken into account when performing the visual style transfer, thereby maintaining the stereoscopic effect in the stereoscopic image consisting of the target images.
    Type: Grant
    Filed: January 8, 2019
    Date of Patent: April 19, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Lu Yuan, Gang Hua, Jing Liao, Dongdong Chen
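    Illustrative sketch: a minimal Python example of the disparity-guided consistency idea described in the abstract above, in which the features of one view are warped toward the other view so that corresponding pixels are stylized alike. The function names, nearest-neighbour sampling, and blending weight are assumptions, not taken from the patent.
      import numpy as np

      def warp_features_with_disparity(features, disparity):
          """Warp an (H, W, C) feature map toward the other view using a per-pixel
          horizontal disparity map (H, W), assumed here to be expressed in the
          target view's coordinates; nearest-neighbour sampling keeps it short."""
          h, w, _ = features.shape
          ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
          src_x = np.clip(np.rint(xs - disparity).astype(int), 0, w - 1)
          return features[ys, src_x]

      def disparity_consistent_features(feat_first, feat_second, disparity, weight=0.5):
          """Blend the second view's features with the first view's features warped
          by the unidirectional disparity, so both stylized views stay aligned."""
          warped_first = warp_features_with_disparity(feat_first, disparity)
          return weight * warped_first + (1.0 - weight) * feat_second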
  • Patent number: 11301981
    Abstract: There are provided a method of vehicle inspection and a system thereof, the method comprising: obtaining a plurality of sets of images capturing a plurality of segments of the surface of a vehicle at a plurality of time points; generating, for each time point, a 3D patch using a set of images capturing a corresponding segment at the time point, giving rise to a plurality of 3D patches; estimating 3D transformations of the plurality of 3D patches based on a relative movement between the imaging devices and the vehicle; and registering the plurality of 3D patches using the estimated 3D transformations, thereby giving rise to a composite 3D point cloud of the vehicle. The composite 3D point cloud is usable for reconstructing a 3D mesh and/or 3D model of the vehicle from which light reflections, present in at least some of the plurality of sets of images, are eliminated.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: April 12, 2022
    Assignee: UVEYE LTD.
    Inventors: Amir Hever, Dvir Paravi, Ilya Grinshpoun, Ohad Hever
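    Illustrative sketch: a short Python example of the registration step described in the abstract above, where each per-time-point 3D patch is moved by its estimated rigid transform and the aligned patches are merged into one composite point cloud. The 4x4 matrix representation and the function names are assumptions, not taken from the patent.
      import numpy as np

      def apply_rigid_transform(points, transform):
          """Apply a 4x4 homogeneous rigid transform to an (N, 3) array of points."""
          homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
          return (homogeneous @ transform.T)[:, :3]

      def build_composite_cloud(patches, transforms):
          """Register every per-time-point 3D patch into a common vehicle frame and
          merge the aligned patches into a single composite point cloud."""
          aligned = [apply_rigid_transform(p, t) for p, t in zip(patches, transforms)]
          return np.vstack(aligned)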
  • Patent number: 11301968
    Abstract: Captured images of a scene may include depictions of objects moving within the scene. The portions of the images depicting the moving objects may be identified by aligning the images and analyzing the changes in pixel values of the aligned images. For the portions of the images depicting the moving objects, the pixel values may be replaced with mean, mode, and/or median values that approximate the values that would have been captured without the moving objects, and one or more images without the depiction of moving objects may be generated.
    Type: Grant
    Filed: August 26, 2020
    Date of Patent: April 12, 2022
    Assignee: GoPro, Inc.
    Inventors: Marc Lebrun, Maxim Karpushin, Nicolas Rahmouni
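    Illustrative sketch: a compact Python example of the compositing step described in the abstract above; after the frames are aligned, a per-pixel median (mean or mode work similarly) across the stack approximates the static background, which removes transient moving objects. The function name and dtype handling are illustrative assumptions.
      import numpy as np

      def remove_moving_objects(aligned_frames):
          """aligned_frames: sequence of (H, W, C) images already aligned to a common
          reference. The per-pixel median across the stack approximates the value the
          static scene would have produced, suppressing transient moving objects."""
          stack = np.stack(aligned_frames).astype(np.float32)
          clean_plate = np.median(stack, axis=0)
          return clean_plate.astype(np.uint8)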
  • Patent number: 11282164
    Abstract: Systems and methods of video inpainting for autonomous driving are disclosed. For example, the method stitches a multiplicity of depth frames into a 3D map, where one or more objects in the depth frames have previously been removed. The method further projects the 3D map onto a first image frame to generate a corresponding depth map, where the first image frame includes a target inpainting region. For each target pixel within the target inpainting region of the first image frame, based on the corresponding depth map, the method further maps the target pixel within the target inpainting region of the first image frame to a candidate pixel in a second image frame. The method further determines a candidate color to fill the target pixel. The method further performs Poisson image editing on the first image frame to achieve color consistency at a boundary and between inside and outside of the target inpainting region of the first image frame.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: March 22, 2022
    Assignees: BAIDU USA LLC, BAIDU.COM TIMES TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Miao Liao, Feixiang Lu, Dingfu Zhou, Sibo Zhang, Ruigang Yang
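    Illustrative sketch: a simplified Python example of the pixel-mapping step described in the abstract above; a target pixel with known depth is back-projected to 3D using the first camera's intrinsics and reprojected into the second frame to find the candidate pixel whose color can fill it. The camera matrices and names are assumptions, and the Poisson image editing step is omitted.
      import numpy as np

      def map_target_pixel(u, v, depth, K, pose_1_to_2):
          """Map pixel (u, v) of the first image frame, with depth taken from the
          corresponding depth map, to its candidate location in the second frame.
          K: shared 3x3 intrinsic matrix; pose_1_to_2: 4x4 camera-1-to-camera-2 transform."""
          ray = np.linalg.inv(K) @ np.array([u, v, 1.0])      # back-project to a ray
          point_cam1 = ray * depth                            # 3D point in camera-1 frame
          point_cam2 = (pose_1_to_2 @ np.append(point_cam1, 1.0))[:3]
          proj = K @ point_cam2                               # project into frame 2
          return proj[0] / proj[2], proj[1] / proj[2]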
  • Patent number: 11232540
    Abstract: With respect to two images acquired from two video images that include a mutually overlapping area, an image transformation matrix for mapping their coordinate systems is sequentially generated. Coordinate transformation of at least one of the images is performed using the generated image transformation matrix. A composite image is created by overlaying the two images, with the at least one image subjected to the coordinate transformation. The currently used image transformation matrix and the newly generated image transformation matrix are compared with each other. If the two image transformation matrices are similar, the coordinate transformation continues to be performed using the currently used image transformation matrix. If the two image transformation matrices are dissimilar, the coordinate transformation is performed using an image transformation matrix corrected with the newly generated image transformation matrix.
    Type: Grant
    Filed: September 11, 2020
    Date of Patent: January 25, 2022
    Assignee: MURAKAMI CORPORATION
    Inventor: Atsushi Hayami
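    Illustrative sketch: a small Python example of the matrix-comparison logic described in the abstract above, where the newly estimated transformation is adopted only if it differs noticeably from the one currently in use. The similarity test (Frobenius norm on scale-normalized homographies), the threshold, and the blending used as the "correction" are assumptions, not taken from the patent.
      import numpy as np

      def choose_transform(current, new, threshold=0.05):
          """Return the 3x3 transformation matrix to use for the next composite image.
          Both homographies are scale-normalized so equivalent transforms compare equal."""
          cur = current / current[2, 2]
          nxt = new / new[2, 2]
          if np.linalg.norm(cur - nxt) < threshold:
              return current              # similar: keep using the current matrix
          return 0.5 * cur + 0.5 * nxt    # dissimilar: correct with the new estimate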
  • Patent number: 11227402
    Abstract: A velocity measuring device includes an event sensor, a ranging sensor, and a controller. The event sensor can detect a first image frame of an object along a plane at a first time point and a second image frame of the object at a second time point. The ranging sensor can detect a first depth of the object along a depth direction, substantially perpendicular to the plane, at the first time point, and a second depth of the object along the depth direction at the second time point. The controller can obtain a first-dimensional velocity and a second-dimensional velocity along the plane according to the first image frame, the second image frame, the first depth and the second depth, and obtain a third-dimensional velocity along the depth direction according to the first depth or the second depth.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: January 18, 2022
    Assignee: iCatch Technology, Inc.
    Inventor: Jian-An Chen
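    Illustrative sketch: a small Python example of the velocity computation described in the abstract above; the in-plane velocity follows from the pixel displacement between the two image frames converted to metric units with depth and focal length, while the depth-direction velocity follows from the change in measured depth. The pinhole-scaling assumption and all names are illustrative, not taken from the patent.
      def estimate_velocity(p1, p2, depth1, depth2, t1, t2, focal_px):
          """p1, p2: (x, y) pixel positions of the object at times t1 and t2.
          depth1, depth2: ranging-sensor depths (meters) at t1 and t2.
          focal_px: focal length in pixels, used to convert pixel shifts to meters."""
          dt = t2 - t1
          # Pinhole model: lateral displacement (m) ~= pixel shift * depth / focal length.
          vx = (p2[0] - p1[0]) * depth1 / focal_px / dt
          vy = (p2[1] - p1[1]) * depth1 / focal_px / dt
          # Velocity along the depth direction from the two range measurements.
          vz = (depth2 - depth1) / dt
          return vx, vy, vz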
  • Patent number: 11222433
    Abstract: Provided are a method and apparatus for calculating three-dimensional coordinates using photographic images, in which a plurality of photographic images are analyzed to calculate the three-dimensional coordinate of a point commonly marked on the images. By using the method and apparatus with photographic images captured by a camera, three-dimensional coordinates of arbitrary points marked on the photographic images can be easily calculated.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: January 11, 2022
    Assignee: CUPIX, INC.
    Inventors: SeockHoon Bae, Jun Young Park
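    Illustrative sketch: a brief Python example of the underlying triangulation; given the projection matrices of two photographs and the pixel coordinates of the same point in each, a direct linear transform (DLT) solve recovers its 3D coordinate. The two-view restriction and names are simplifying assumptions; the patent covers a plurality of images.
      import numpy as np

      def triangulate(P1, P2, pt1, pt2):
          """P1, P2: 3x4 camera projection matrices of the two photographs.
          pt1, pt2: (x, y) pixel coordinates of the commonly marked point in each image.
          Returns the 3D coordinate via a least-squares (DLT) solution."""
          A = np.vstack([
              pt1[0] * P1[2] - P1[0],
              pt1[1] * P1[2] - P1[1],
              pt2[0] * P2[2] - P2[0],
              pt2[1] * P2[2] - P2[1],
          ])
          _, _, vt = np.linalg.svd(A)
          X = vt[-1]
          return X[:3] / X[3]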
  • Patent number: 11217100
    Abstract: Provided are an electronic device and method for assisting with driving of a vehicle, the electronic device including: a sensing unit configured to sense a driving state of at least one external vehicle on an entry scheduled lane that the vehicle is to enter by changing lanes; a processor configured to determine an entry possible region on the entry scheduled lane based on the sensed driving state of the at least one external vehicle and determine an entry condition for entry of the vehicle into the entry possible region; and an outputter configured to output information about the entry possible region and the entry condition.
    Type: Grant
    Filed: February 2, 2018
    Date of Patent: January 4, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jin-woo Yoo, Jung-gap Kuk, A-ron Baik, Min-sung Jang, Jung-un Lee
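    Illustrative sketch: one possible Python reading of the abstract above, in which the sensed positions and speeds of external vehicles on the entry scheduled lane define gaps, a sufficiently long gap is treated as the entry possible region, and the speed needed to reach it is reported as the entry condition. The gap threshold, time budget, and formulas are assumptions, not taken from the patent.
      def find_entry_possible_region(lane_vehicles, min_gap=20.0):
          """lane_vehicles: list of (position_m, speed_mps) for external vehicles on the
          entry scheduled lane, sorted by position along the lane. Returns the first gap
          (rear_position, front_position, front_speed) long enough to enter, or None."""
          for (rear_pos, _), (front_pos, front_speed) in zip(lane_vehicles, lane_vehicles[1:]):
              if front_pos - rear_pos >= min_gap:
                  return rear_pos, front_pos, front_speed
          return None

      def entry_condition(region, ego_position, time_budget=3.0):
          """Entry condition for the found region: the speed needed to reach the middle
          of the gap within the time budget, plus the speed of the vehicle ahead that
          should be matched after the lane change."""
          rear_pos, front_pos, front_speed = region
          required_speed = (0.5 * (rear_pos + front_pos) - ego_position) / time_budget
          return {"target_speed": required_speed, "match_speed": front_speed}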
  • Patent number: 11210796
    Abstract: An imaging method includes determining a target position in an imaging frame of an image capturing mechanism connected with a positioning mechanism, outputting a capturing instruction to control the image capturing mechanism to capture a scene including a human face after the target position is determined, detecting the human face from the captured scene, and outputting a control instruction to position the detected human face at the target position by moving the positioning mechanism to move the image capturing mechanism.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: December 28, 2021
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Xuyang Feng, Cong Zhao, Junfeng Yu, Jie Qian
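    Illustrative sketch: a minimal Python example of the control loop described in the abstract above; the offset between the detected face center and the chosen target position in the imaging frame is turned into a proportional move command for the positioning mechanism. The gain and the move interface are assumptions, not taken from the patent.
      def centering_command(face_bbox, target_xy, gain=0.1):
          """face_bbox: (x, y, w, h) of the detected face in the captured scene.
          target_xy: desired position of the face center in the imaging frame.
          Returns (pan_step, tilt_step) to feed to the positioning mechanism so that
          the detected face moves toward the target position."""
          face_cx = face_bbox[0] + face_bbox[2] / 2.0
          face_cy = face_bbox[1] + face_bbox[3] / 2.0
          error_x = target_xy[0] - face_cx
          error_y = target_xy[1] - face_cy
          # Proportional control: larger offsets produce larger corrective moves.
          return gain * error_x, gain * error_y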
  • Patent number: 11205252
    Abstract: A method and a device for enhancing brightness and contrast of a video image are provided. In the method, an inflection point, a truncation point, a maximum value of a brightness component, and a minimum value of the brightness component of an image frame to be processed are determined based on the brightness component of the image frame to be processed, a piecewise linear function is determined based on the inflection point, and brightness and contrast enhancement processing is performed on the image frame to be processed based on the piecewise linear function. Compared with the brightness and contrast enhancement method in conventional art, the method and device of the application can achieve better brightness and contrast enhancement effects.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: December 21, 2021
    Assignee: SHENZHEN LONTIUM SEMICONDUCTOR TECHNOLOGY CO., LTD.
    Inventors: Xu Sun, Jixing Ye, Wenhan Yin, Ligang Hu, Shiyong Liang, Changfang Yue, Rongliang Yu
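    Illustrative sketch: a condensed Python example of the enhancement described in the abstract above; a piecewise linear mapping is built from the frame's brightness statistics (the inflection and truncation points plus the minimum and maximum) and applied as a lookup table to the brightness component. The particular segment slopes are assumptions, not taken from the patent.
      import numpy as np

      def piecewise_linear_lut(y_min, inflection, truncation, y_max):
          """Build a 256-entry lookup table (assumes y_min < inflection < truncation < 255).
          Brightness is stretched linearly below and above the inflection point and
          clipped to y_max beyond the truncation point."""
          xs = np.arange(256, dtype=np.float32)
          lut = np.interp(xs,
                          [0, y_min, inflection, truncation, 255],
                          [0, y_min, 0.5 * (y_min + y_max), y_max, y_max])
          return np.clip(lut, 0, 255).astype(np.uint8)

      # Applying the piecewise linear function to the brightness (luma) component:
      # enhanced_luma = piecewise_linear_lut(y_min, inflection, truncation, y_max)[luma]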
  • Patent number: 11200691
    Abstract: Systems and methods for optical sensing, visualization and detection in media (e.g., turbid media; turbid water; fog; non-turbid media). A light source and an image sensor are positioned in turbid media or external to the turbid media with the light source within a field of view of the image sensor array. Temporal optical signals are transmitted through the turbid media via the light source and multiple perspective video sequence frames are acquired via the image sensor array of light propagating through the turbid media. A three-dimensional image is reconstructed from each frame and the reconstructed three-dimensional images are combined to form a three-dimensional video sequence. The transmitted optical signals are detected from the three-dimensional video sequence by applying a multi-dimensional signal detection scheme.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: December 14, 2021
    Assignee: UNIVERSITY OF CONNECTICUT
    Inventors: Bahram Javidi, Satoru Komatsu, Adam Markman
  • Patent number: 11182905
    Abstract: Introduced here are computer programs and associated computer-implemented techniques for finding the correspondence between sets of graphical elements that share a similar structure. In contrast to conventional approaches, this approach can leverage the similar structure to discover how two sets of graphical elements are related to one another without the relationship needing to be explicitly specified. To accomplish this, a graphics editing platform can employ one or more algorithms designed to encode the structure of graphical elements using a directed graph and then compute element-to-element correspondence between different sets of graphical elements that share a similar structure.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: November 23, 2021
    Assignee: ADOBE INC.
    Inventors: Hijung Shin, Holger Winnemoeller, Wilmot Li
  • Patent number: 11176378
    Abstract: An image receiving unit receives an input of an image set owned by a first user, an image analyzing unit analyzes each image included in the image set, and a tag information setting unit sets tag information items to be assigned to each image based on the analysis result of each image. For each image, a tag information assigning unit assigns a tag information item to the image as main tag information when the ratio of the number of appearances of that tag information item to the total number of appearances of all tag information items assigned to all the images included in the image set is equal to or greater than a first threshold value and equal to or less than a second threshold value.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: November 16, 2021
    Assignee: FUJIFILM Corporation
    Inventor: Masaya Usuki
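    Illustrative sketch: a short Python example of the selection rule described in the abstract above; a candidate tag is promoted to main tag information for an image when the ratio of its appearance count to the total appearances of all tags across the image set falls between two thresholds. The names and threshold values are illustrative assumptions.
      from collections import Counter

      def assign_main_tags(image_tags, lower=0.02, upper=0.30):
          """image_tags: dict mapping image id -> list of candidate tag items.
          Returns dict mapping image id -> tags whose set-wide appearance ratio lies
          in [lower, upper], i.e. tags that are neither too rare nor too generic."""
          counts = Counter(tag for tags in image_tags.values() for tag in tags)
          total = sum(counts.values()) or 1
          return {
              image_id: [t for t in tags if lower <= counts[t] / total <= upper]
              for image_id, tags in image_tags.items()
          }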
  • Patent number: 11170521
    Abstract: In an exemplary process for determining a position of an object in a computer-generated reality environment using an eye gaze, a user uses their eyes to interact with user interface objects displayed on an electronic device. A first direction of gaze is determined for a first eye of a user detected via the one or more cameras, and a second direction of gaze is determined for a second eye of the user detected via the one or more cameras. A convergence point of the first and second directions of gaze is determined, and a distance between a position of the user and a position of an object in the computer-generated reality environment is determined based on the convergence point. A task is performed based on the determined distance between the position of the user and the position of the object in the computer-generated reality environment.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: November 9, 2021
    Assignee: Apple Inc.
    Inventors: Mohamed Selim Ben Himane, Anselm Grundhöfer
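    Illustrative sketch: a compact Python example of the convergence computation described in the abstract above; each eye provides a gaze ray (origin and direction), the point of closest approach of the two rays stands in for the convergence point, and the distance to the object follows directly. The closest-approach formulation is an assumption, not taken from the patent.
      import numpy as np

      def convergence_point(o1, d1, o2, d2):
          """o1, o2: 3D origins of the two gaze rays (eye positions); d1, d2: gaze
          directions. Returns the midpoint of the segment of closest approach
          between the two rays, used as the estimated convergence point."""
          d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
          w0 = o1 - o2
          a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
          d, e = d1 @ w0, d2 @ w0
          denom = a * c - b * b
          if abs(denom) < 1e-9:            # nearly parallel gaze rays: no finite convergence
              t1, t2 = 0.0, e / c
          else:
              t1 = (b * e - c * d) / denom
              t2 = (a * e - b * d) / denom
          return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

      def gaze_distance(user_position, point):
          """Distance between the user's position and the estimated convergence point."""
          return float(np.linalg.norm(point - user_position))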
  • Patent number: 11151729
    Abstract: The accuracy of estimating the position of a mobile entity is improved, even while traveling or when there is an error in the calibration, by utilizing: a mobile entity; an imaging device provided in the mobile entity; and an information processing device that determines a first movement amount by which a detection point corresponding to the same object has moved, on the basis of a first image and a second image acquired by the imaging device, and a second movement amount by which the mobile entity has moved while the first image and the second image were acquired, determines the accuracy of recognizing the detection point acquired by the imaging device on the basis of the first movement amount and the second movement amount, and estimates the position of the mobile entity on the basis of the accuracy of recognition and position information that pertains to the detection point.
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: October 19, 2021
    Assignee: HITACHI AUTOMOTIVE SYSTEMS, LTD.
    Inventors: Alex Masuo Kaneko, Kenjiro Yamamoto, Shinya Ootsuji, Shigenori Hayase
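    Illustrative sketch: one possible Python reading of the accuracy check described in the abstract above; the movement of a detection point implied by the two images is compared with the mobile entity's own movement over the same interval, and their agreement becomes a recognition-accuracy weight used when fusing the detection point into the position estimate. The weighting function and the blending step are assumptions, not taken from the patent.
      import numpy as np

      def recognition_accuracy(first_movement, second_movement, scale=1.0):
          """first_movement: displacement of the detection point estimated from the first
          and second images. second_movement: displacement of the mobile entity over the
          same interval. Returns a weight in (0, 1] that drops as the two disagree."""
          discrepancy = np.linalg.norm(np.asarray(first_movement) - np.asarray(second_movement))
          return float(np.exp(-discrepancy / scale))

      def weighted_position_update(prior_position, landmark_position, accuracy):
          """Blend the position implied by the detection point with the prior estimate,
          trusting the detection in proportion to its recognition accuracy."""
          prior = np.asarray(prior_position, dtype=float)
          landmark = np.asarray(landmark_position, dtype=float)
          return (1.0 - accuracy) * prior + accuracy * landmark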
  • Patent number: 11151735
    Abstract: A deformation processing support system acquires target shape data of a work having a reference line; acquires intermediate shape data from the work in an intermediate shape having a reference line marked thereon; and overlaps the two sets of data on each other by aligning the reference lines relative to each other, to calculate a necessary deformation amount of the work based on a difference between the overlapped data. To align the reference lines with each other, first and second alignment axes of the same length, calculated for the respective reference lines, are superimposed on each other. Subsequently, the intermediate shape data is rotated relative to the target shape data around the first alignment axis.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: October 19, 2021
    Assignee: KAWASAKI JUKOGYO KABUSHIKI KAISHA
    Inventors: Atsuki Nakagawa, Shinichi Nakano, Naohiro Nakamura
  • Patent number: 11113833
    Abstract: An object detection system includes a depth image detector and a moving object extractor. The depth image detector detects a depth image of an external environment. The moving object extractor extracts a desired moving object from the depth image. The moving object extractor registers, in advance, a depth image captured while the moving object to be extracted is absent in a memory as a background, and extracts only pixels whose current depth is on a nearer side than the depth of the background as candidates for pixels corresponding to the moving object to be extracted.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: September 7, 2021
    Assignee: KONICA MINOLTA, INC.
    Inventor: Shunsuke Takamura
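    Illustrative sketch: a minimal Python example of the extraction rule described in the abstract above; a background depth image registered while no moving object is present is compared with the current depth image, and only pixels nearer than the background (beyond a small margin) are kept as moving-object candidates. The margin value and missing-depth handling are assumptions.
      import numpy as np

      def extract_moving_object(current_depth, background_depth, margin=0.05):
          """current_depth, background_depth: (H, W) depth images in meters.
          Returns a boolean mask of pixels whose current depth is on the nearer side
          of the registered background, i.e. candidate moving-object pixels."""
          valid = (current_depth > 0) & (background_depth > 0)   # ignore missing depth
          return valid & (current_depth < background_depth - margin)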
  • Patent number: 11113851
    Abstract: A method for correcting image data from a differential phase contrast imaging system is provided. Data comprising distorted data due to spatial variation is obtained. The data is corrected by correcting the distorted data.
    Type: Grant
    Filed: July 17, 2019
    Date of Patent: September 7, 2021
    Assignee: The Board of Trustees of the Leland Stanford Junior University
    Inventors: Ching-wei Chang, Lambertus Hesselink
  • Patent number: 11107226
    Abstract: A system includes sensors and a tracking subsystem. The subsystem tracks first and second objects in a space. Following a collision event between the first and second objects, a top-view image of the first object is received from a first sensor. Based on the top-view image, a first descriptor is determined for the first object. The first descriptor is associated with an observable characteristic of the first object. If criteria are not satisfied for distinguishing the first object from the second object based on the first descriptor, a third descriptor is determined for the first object. The third descriptor is generated by an artificial neural network configured to identify objects in top-view images. The tracking subsystem uses the third descriptor to assign an identifier to the first object.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: August 31, 2021
    Assignee: 7-ELEVEN, INC.
    Inventors: Shahmeer Ali Mirza, Sailesh Bharathwaaj Krishnamurthy, Madan Mohan Chinnam, Crystal Maung
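    Illustrative sketch: a high-level Python example of the fallback logic described in the abstract above; a cheap descriptor is tried first after the collision event, and only if it cannot clearly separate the candidates is a neural-network descriptor used to assign the identifier. The concrete descriptors, margin criterion, and data layout are assumptions, not taken from the patent.
      import numpy as np

      def reidentify(query_simple, query_nn, candidates, margin=0.2):
          """candidates: dict mapping object id -> (simple_descriptor, nn_descriptor)
          stored for the tracked objects. query_simple / query_nn: the same two
          descriptors computed from the top-view image of the object to re-identify.
          Returns the object id assigned to the query."""
          distances = {oid: np.linalg.norm(query_simple - simple)
                       for oid, (simple, _) in candidates.items()}
          ranked = sorted(distances.items(), key=lambda kv: kv[1])
          # Criterion: the best match must beat the runner-up by a clear margin.
          if len(ranked) == 1 or ranked[1][1] - ranked[0][1] > margin:
              return ranked[0][0]
          # Otherwise fall back to the neural-network (third) descriptor.
          return min(candidates,
                     key=lambda oid: np.linalg.norm(query_nn - candidates[oid][1]))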
  • Patent number: 11100671
    Abstract: An image generation apparatus includes an image acquisition part that acquires a wide angle image group and a telephoto image group in which a subject is imaged while changing a position of an imaging apparatus, the wide angle image group being captured by the imaging apparatus including an imaging optical system having a wide angle optical system and a telephoto optical system with a common optical axis, and the telephoto image group being captured at the same time as the wide angle image group; a composition information acquisition part that analyzes the acquired wide angle image group and acquires composition information to be used for compositing the telephoto image group; and a composite image generation part that generates an image in which the telephoto image group is composited, by using the composition information, information related to the focal lengths of the wide angle optical system and the telephoto optical system, and the telephoto image group.
    Type: Grant
    Filed: October 16, 2019
    Date of Patent: August 24, 2021
    Assignee: FUJIFILM Corporation
    Inventor: Shuji Ono