3-D or Stereo Imaging Analysis Patents (Class 382/154)
  • Patent number: 10212409
    Abstract: The present invention provides a depth generation method. The method includes obtaining a left two-dimensional (2D) image and a right two-dimensional image, each having a first image resolution; scaling the left 2D image and the right 2D image to obtain a scaled left 2D image and a scaled right 2D image, each having a second image resolution; and generating an output depth map based on the scaled left 2D image and the scaled right 2D image.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: February 19, 2019
    Assignee: BOE TECHNOLOGY GROUP CO., LTD
    Inventors: Xingxing Zhao, Jibo Zhao
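A minimal, hypothetical sketch of the idea in patent 10212409 above: both stereo views are scaled down and a brute-force sum-of-absolute-differences block matcher produces a disparity map, which stands in for the output depth map. Everything here (function names, scale factor, block size) is illustrative and not taken from the patent.

```python
import numpy as np

def downscale(img, factor):
    """Naive box downscaling by an integer factor (illustrative only)."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    return img[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor).mean(axis=(1, 3))

def disparity_map(left, right, max_disp=16, block=5):
    """Brute-force SAD block matching between a scaled left/right pair."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = np.argmin(costs)
    return disp

# Scale both views (first resolution -> second resolution), then estimate disparity.
left = np.random.rand(120, 160)    # placeholder left 2D image
right = np.roll(left, -4, axis=1)  # placeholder right view with a 4-pixel shift
depth_proxy = disparity_map(downscale(left, 2), downscale(right, 2))
```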
  • Patent number: 10210660
    Abstract: An image processing system is designed to generate a canvas view that has a smooth transition between binocular views and monocular views. Initially, the image processing system receives top/bottom images and side images of a scene and calculates offsets to generate synthetic side images for a user's left and right views. To realize a smooth transition between binocular and monocular views, the image processing system first warps the top/bottom images onto the corresponding synthetic side images to generate warped top/bottom images, which realizes a smooth transition in terms of shape. The image processing system then morphs the warped top/bottom images onto the corresponding synthetic side images to generate blended images for the left and right eye views. Based on the blended images, the image processing system creates the canvas view, which transitions smoothly between binocular and monocular views in terms of both image shape and color.
    Type: Grant
    Filed: April 11, 2016
    Date of Patent: February 19, 2019
    Assignee: Facebook, Inc.
    Inventors: Brian Keith Cabral, Forrest Samuel Briggs
  • Patent number: 10210618
    Abstract: Within examples, object image masking is provided. An example method includes receiving a depth mask of an object; projecting the depth mask of the object onto an image of the object in a background so as to generate a depth image of the object in the background; determining portions of the depth image of the object in the background that are representative of the object and portions that are representative of the background; determining a foreground mask of the object based on the portions of the depth image that are representative of the object; and using the foreground mask of the object to identify portions of the image representative of the object.
    Type: Grant
    Filed: December 22, 2014
    Date of Patent: February 19, 2019
    Assignee: Google LLC
    Inventors: James Joseph Kuffner, James Robert Bruce, Ken Conley, Arshan Poursohi
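A rough sketch of the masking flow described in patent 10210618 above, assuming the depth mask has already been projected into the frame of the background image; the threshold, array shapes, and names are made up for illustration.

```python
import numpy as np

def foreground_mask(depth_image, background_depth, tolerance=0.05):
    """Label pixels whose measured depth is clearly in front of the background."""
    valid = depth_image > 0                          # zero depth = no measurement
    in_front = depth_image < (background_depth - tolerance)
    return valid & in_front

def apply_mask(image, mask):
    """Keep only the pixels identified as belonging to the object."""
    out = np.zeros_like(image)
    out[mask] = image[mask]
    return out

# Synthetic example: a flat background 2 m away and an object at about 1 m.
depth = np.full((100, 100), 2.0)
depth[40:60, 40:60] = 1.0
rgb = np.random.randint(0, 255, (100, 100, 3), dtype=np.uint8)
object_pixels = apply_mask(rgb, foreground_mask(depth, background_depth=2.0))
```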
  • Patent number: 10212412
    Abstract: A method of increasing the photographing speed of a photographing device that captures an image through a combination of two or more photographing devices and generates and provides an image using the captured image. An RGB image obtaining device and a depth image obtaining device alternately perform photographing to obtain images. A second depth image and a second RGB image, respectively corresponding to a first RGB image and a first depth image obtained by the alternate photographing, are synthesized and output, effectively doubling the photographing speed.
    Type: Grant
    Filed: March 9, 2016
    Date of Patent: February 19, 2019
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Ju Yong Chang, Jin Seo Kim, Hee Kwon Kim, Soon Chan Park, Ji Young Park, Kwang Hyun Shim, Moon Wook Ryu, Ho Wook Jang, Hyuk Jeong
  • Patent number: 10204454
    Abstract: Image georegistration method and system. An imaging sensor acquires a sensor-image of a scene. Imaging parameters of the acquired sensor-image are obtained, the imaging parameters including at least the detected 3D position and orientation of the imaging sensor when acquiring the sensor-image, as detected using a location measurement unit. A model-image of the scene is generated from a textured 3D geographic model, the model-image representing a texture-based 2D image of the scene as acquired in the 3D model using the imaging parameters. The sensor-image and the model-image are compared and the discrepancies between them determined. An updated 3D position and orientation of the imaging sensor is determined in accordance with the discrepancies. The updated position and orientation may be used to display supplementary content overlaid on the sensor-image in relation to a selected location on the sensor-image, or for determining the geographic location coordinates of a scene element.
    Type: Grant
    Filed: May 28, 2015
    Date of Patent: February 12, 2019
    Assignee: ELBIT SYSTEMS LAND AND C4I LTD.
    Inventors: Benny Goldman, Eli Haham
  • Patent number: 10204400
    Abstract: An imaging unit images an object through an imaging optical system to acquire image data. A depth map acquiring unit acquires information on the depth distribution of the object as depth map data, whose resolution is lower than that of the captured image data. When shaping the depth map based on the image data of the object, a depth map shaping unit references the image data so as to conform to the resolution of the depth map.
    Type: Grant
    Filed: June 22, 2016
    Date of Patent: February 12, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventor: Masaaki Matsuoka
  • Patent number: 10204402
    Abstract: Disclosed are a drone-mounted imaging hyperspectral geometric correction method and system, comprising: collecting position and attitude information from the drone's low-precision POS sensor in real time; based on the position and attitude information, resolving the precise photography-center position and attitude information of a digital photograph and generating a DEM of the area covered by the photograph; based on the precise photography-center position and attitude information, correcting the position and attitude data corresponding to the multiple imaging hyperspectral scan lines between the photography centers of adjacent digital photographs, thereby obtaining high-precision linear-array position and attitude information for the scan lines; and, based on the high-precision linear-array position and attitude information and the DEM, establishing a collinearity equation and generating a hyperspectral image.
    Type: Grant
    Filed: January 23, 2014
    Date of Patent: February 12, 2019
    Assignee: BEIJING RESEARCH CENTER FOR INFORMATION TECHNOLOGY IN AGRICULTURE
    Inventors: Guijun Yang, Chunjiang Zhao, Haiyang Yu, Xiaodong Yang, Xingang Xu, Xiaohe Gu, Haikuan Feng, Hao Yang, Hua Yan
  • Patent number: 10198633
    Abstract: A solar power measurement method is provided. A method may include determining an azimuth of a reference roof edge relative to an orientation of an aerial image of a structure. The method may include capturing at least one spherical image at at least one determined measurement location proximate the structure. Further, the method may include determining a relative azimuth of the reference roof edge from a downward view of a lower hemisphere of the at least one image. In addition, the method may include determining an orientation of an upper hemisphere of the at least one image based on the azimuth of the reference roof edge, the relative azimuth of the reference roof edge, and a known tilt of a roof edge of the structure. Furthermore, the method may include calculating shading conditions for a time period for known sun positions during the time period based on the orientation of the upper hemisphere of the at least one spherical image.
    Type: Grant
    Filed: October 26, 2016
    Date of Patent: February 5, 2019
    Assignee: Vivint Solar, Inc.
    Inventors: Roger L. Jungerman, Mark Galli, Judd Reed, Willard S. MacDonald
  • Patent number: 10200804
    Abstract: Embodiments of the present invention relate to video content assisted audio object extraction. A method of audio object extraction from channel-based audio content is disclosed. The method comprises extracting at least one video object from video content associated with the channel-based audio content, and determining information about the at least one video object. The method further comprises extracting from the channel-based audio content an audio object to be rendered as an upmixed audio signal based on the determined information. Corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: February 24, 2016
    Date of Patent: February 5, 2019
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Lianwu Chen, Xuejing Sun, Lie Lu
  • Patent number: 10200667
    Abstract: A recorder creates an encoded data stream comprising an encoded video stream and an encoded graphics stream, the video stream comprising an encoded 3D (three-dimensional) video object and the graphics stream comprising at least a first encoded segment and a second encoded segment, where the first segment comprises 2D (two-dimensional) graphics data and the second segment comprises a depth map for the 2D graphics data. A graphics decoder decodes the first and second encoded segments to form respective first and second decoded sequences, which are output separately to a 3D display unit. The 3D display unit combines the first and second decoded sequences and renders the combination as a 3D graphics image overlaying a 3D video image simultaneously rendered from a 3D video object decoded from the encoded 3D video object.
    Type: Grant
    Filed: April 29, 2014
    Date of Patent: February 5, 2019
    Assignee: KONINKLIJKE PHILIPS N.V.
    Inventors: Darwin He, Li Hong, Philip Steven Newton
  • Patent number: 10194077
    Abstract: The invention relates to a method for operating a camera assembly, in which a first camera and a second camera capture images (36, 42). Respective fields of view of the two cameras overlap at least in a partial region (24). At least in an image (36) captured by the first camera, at least one contamination region (38) including a plurality of pixels is detected within the partial region (24). Thereupon, data values specifying the respective transparency of the pixels in the at least one contamination region (38) are varied with respect to respective reference values of transparency, wherein those reference values increase in the partial region (24) towards an edge (28) of the respective images upon superimposition of the images. Furthermore, the invention relates to a camera assembly.
    Type: Grant
    Filed: February 9, 2017
    Date of Patent: January 29, 2019
    Assignee: Connaught Electronics Ltd.
    Inventors: Michael Burke, Patrick Eoghan Denny
  • Patent number: 10192311
    Abstract: A structured light active sensing system may be configured to transmit and receive codewords to generate a depth map by analyzing disparities between the locations of the transmitted and received codewords. To determine the locations of received codewords, an image of the projected codewords is identified, from which one or more codeword boundaries are detected. The codeword boundaries may be detected based upon a particular codeword bit of each codeword. Each detected codeword boundary may be constrained from overlapping with other detected codeword boundaries, such that no pixel of the received image is associated with more than one codeword boundary.
    Type: Grant
    Filed: August 5, 2016
    Date of Patent: January 29, 2019
    Assignee: QUALCOMM Incorporated
    Inventors: Yunke Pan, Stephen Michael Verrall
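Once a received codeword boundary has been located, depth typically follows from triangulating the disparity between the transmitted and received codeword positions. A hedged sketch of that final step for patent 10192311 above; the baseline and focal length constants are invented, not from the patent.

```python
def depth_from_codeword(tx_col, rx_col, baseline_m=0.08, focal_px=1400.0):
    """Triangulate depth from the column disparity of one matched codeword.

    tx_col: column at which the codeword was projected
    rx_col: column at which the codeword was detected in the camera image
    """
    disparity = tx_col - rx_col
    if disparity <= 0:
        return None  # no valid depth for this codeword
    return baseline_m * focal_px / disparity

# Example: a codeword projected at column 512 and detected at column 498.
print(depth_from_codeword(512, 498))  # ~8 m under the assumed baseline/focal length
```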
  • Patent number: 10191265
    Abstract: An image generation apparatus includes a plurality of irradiators, and a control circuit. The control circuit performs an operation including generating an in-focus image of an object in each of a plurality of predetermined focal planes, extracting a contour of at least one or more cross sections of the object represented in the plurality of in-focus images, generating at least one or more circumferences based on the contour of the at least one or more cross sections, generating a sphere image in the form of a three-dimensional image of at least one or more spheres, each sphere having one of the circumferences, generating a synthetic image by processing the sphere image such that a cross section appears, and displaying the resultant synthetic image on a display.
    Type: Grant
    Filed: March 6, 2017
    Date of Patent: January 29, 2019
    Assignee: Panasonic Intellectual Property Management Co., Ltd.
    Inventors: Yumiko Kato, Taichi Sato, Yoshihide Sawada
  • Patent number: 10192145
    Abstract: A method of providing a set of feature descriptors configured to be used in matching an object in an image of a camera is provided. The method includes: a) providing at least two images of a first object; b) extracting in at least two of the images at least one feature from the respective image, c) providing at least one descriptor for an extracted feature, and storing the descriptors; d) matching descriptors in the first set of descriptors; e) computing a score parameter based on the result of the matching process; f) selecting at least one descriptor based on its score parameter; g) adding the selected descriptor(s) to a second set of descriptors; and h) updating the score parameter of descriptors in the first set based on a selection process and to the result of the matching process.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: January 29, 2019
    Assignee: Apple Inc.
    Inventors: Mohamed Selim Ben Himane, Daniel Kurz, Thomas Olszamowski
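Steps d) through h) of patent 10192145 above amount to a greedy selection loop: score each descriptor in the first set by how well it matches the others, move the best-scoring descriptor into the second set, and update the remaining scores. The sketch below captures that loop with an invented scoring and update rule; it is not the patent's exact procedure.

```python
import numpy as np

def select_descriptors(descriptors, k=10, match_radius=0.5):
    """Greedily pick k descriptors that match many others in the first set."""
    d = np.asarray(descriptors, dtype=np.float32)
    dist = np.linalg.norm(d[:, None, :] - d[None, :, :], axis=-1)
    scores = (dist < match_radius).sum(axis=1).astype(np.float32)  # matches per descriptor
    selected = []
    for _ in range(min(k, len(d))):
        best = int(np.argmax(scores))
        selected.append(best)
        # Lower the score of descriptors already covered by the chosen one.
        scores -= (dist[best] < match_radius).astype(np.float32)
        scores[best] = -np.inf
    return selected

# Example: 200 random 32-dimensional descriptors extracted from several views.
second_set = select_descriptors(np.random.rand(200, 32), k=10)
```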
  • Patent number: 10192328
    Abstract: A method of computing statistical weights for a computed tomography (CT) iterative reconstruction process is provided. The method includes obtaining detector count data from a CT scan of an object; calculating variance data based on the count data and an electronic noise variance; transforming the calculated variance data to obtain statistical weight data; and performing the CT iterative reconstruction process using the statistical weight data and raw projection data to obtain a reconstructed CT image.
    Type: Grant
    Filed: October 24, 2013
    Date of Patent: January 29, 2019
    Assignee: Toshiba Medical Systems Corporation
    Inventors: Alexander A. Zamyatin, Daxin Shi, Thomas Labno
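The weight computation in patent 10192328 above can be approximated compactly: under a Poisson-plus-electronic-noise model, the variance of each detector count is roughly the count itself plus the electronic noise variance, and the statistical weight is the inverse of that variance. A sketch under that assumption; the patent's actual transform may differ.

```python
import numpy as np

def statistical_weights(counts, electronic_noise_var=10.0, eps=1e-6):
    """Per-ray weights for iterative CT reconstruction: w = 1 / var(count)."""
    variance = np.maximum(counts, 0.0) + electronic_noise_var  # Poisson + electronic noise
    return 1.0 / (variance + eps)

# Photon-starved rays receive much smaller weights than well-exposed rays.
print(statistical_weights(np.array([5.0, 500.0, 50000.0])))
```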
  • Patent number: 10187589
    Abstract: A system and method for mixing a scene with a virtual scenario. An image capturing unit is arranged to capture at least one image so as to cover the scene from a first viewpoint. An image representation generation unit is arranged to generate at least one image representation based on the captured image. A game engine unit is arranged to generate a virtual scenario. An image processing unit is arranged to adapt the at least one image representation based on the generated virtual scenario so as to provide a virtual video sequence.
    Type: Grant
    Filed: December 19, 2008
    Date of Patent: January 22, 2019
    Assignee: SAAB AB
    Inventors: Måns Hagström, Ulf Erlandsson, Johan Borg, Folke Isaksson, Ingmar Andersson, Adam Tengblad
  • Patent number: 10183398
    Abstract: A point cloud system has two separate sets of points, each set captured from a different point of view, creating data with potentially occluded points in the point cloud. An accelerated close-sister-point approach is used to determine which occluded points can be removed: looking out from an assumed non-occluded point, finding the closest point in the other set of points, and then looking back into the first set (or jumping to the closest non-occluded point and looking back); if this second sister point is close to the initial point, it is a close sister.
    Type: Grant
    Filed: September 14, 2016
    Date of Patent: January 22, 2019
    Assignee: SKUR, Inc.
    Inventors: Adam Cohen, James Creasy, Alan Gushurst
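The "close sister" test in patent 10183398 above can be prototyped with two nearest-neighbour lookups, one in each direction, using k-d trees. This is an interpretive sketch rather than the patented procedure, and the distance threshold is arbitrary.

```python
import numpy as np
from scipy.spatial import cKDTree

def close_sisters(points_a, points_b, max_dist=0.05):
    """For each point in set A, hop to its nearest point in set B and back to A.

    If the round trip lands close to where it started, the B point is a
    'close sister' and the A point is unlikely to be occluded in B's view.
    """
    tree_a = cKDTree(points_a)
    tree_b = cKDTree(points_b)
    _, idx_b = tree_b.query(points_a)          # A -> nearest neighbour in B
    _, idx_a = tree_a.query(points_b[idx_b])   # that B point -> back into A
    round_trip = np.linalg.norm(points_a - points_a[idx_a], axis=1)
    return round_trip < max_dist               # True = has a close sister

a = np.random.rand(1000, 3)
b = a + np.random.normal(scale=0.01, size=a.shape)  # second scan, slightly perturbed
has_sister = close_sisters(a, b)
```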
  • Patent number: 10187378
    Abstract: An authentication server 2 stores, for each of one or more objects to be registered, unique pattern information of a surface of the object and a personal identification number in a database in association with each other. The server acquires unique pattern information of a surface of an object to be authenticated, together with a personal identification number, as part of an authentication request; extracts from the database the unique pattern information stored in association with the personal identification number related to the authentication request; and determines whether the extracted unique pattern information includes unique pattern information corresponding to that of the authentication request.
    Type: Grant
    Filed: August 5, 2015
    Date of Patent: January 22, 2019
    Assignee: FUJI XEROX CO., LTD.
    Inventor: Kensuke Ito
  • Patent number: 10183659
    Abstract: A camera is configured to be mounted facing a front of a vehicle. A computer is programmed to receive first and second images from the camera, determine a height of an obstacle located to the front of the vehicle using at least the first and second images, and, based at least in part on the height of the obstacle, send an instruction via a communications bus to a component controller to control a speed of the vehicle.
    Type: Grant
    Filed: November 24, 2014
    Date of Patent: January 22, 2019
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventor: Aaron L. Mills
  • Patent number: 10176644
    Abstract: Simulating a 3D audio environment, including receiving a visual representation of an object at a location in a scene, wherein the location represents a point in 3D space, receiving a sound element, and binding the sound element to the visual representation of the object such that a characteristic of the sound element is dynamically modified coincident with a change in location in the scene of the visual representation of the object in 3D space.
    Type: Grant
    Filed: June 7, 2015
    Date of Patent: January 8, 2019
    Assignee: Apple Inc.
    Inventors: Thomas Goossens, Sebastien Metrot
  • Patent number: 10178303
    Abstract: A process is provided for guiding a capture device (e.g., smartphone, tablet, drone, etc.) to capture a series of images of a building. Images are captured as the camera device moves around the building—taking a plurality of images (e.g., video) from multiple angles and distances. Quality of the image may be determined to prevent low quality images from being captured or to provide instructions on how to improve the quality of the image capture. The series of captured images are uploaded to an image processing system to generate a 3D building model that is returned to the user. The returned 3D building model may incorporate scaled measurements of building architectural elements and may include a dataset of measurements for one or more architectural elements such as siding (e.g., aluminum, vinyl, wood, brick and/or paint), windows, doors or roofing.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: January 8, 2019
    Assignee: HOVER INC.
    Inventors: William Castillo, Derek Halliday, Manish Upendran
  • Patent number: 10169674
    Abstract: A vehicle type recognition method based on a laser scanner is provided, the method comprising the steps of: detecting that a vehicle to be checked has entered a recognition area; causing a laser scanner to move relative to the vehicle; scanning the vehicle column by column with the laser scanner, and storing and splicing the data of each scanned column to form a three-dimensional image of the vehicle, wherein a lateral width value is specified for each single column of data; specifying a height difference threshold; and determining the height difference between the height at the lowest position of the vehicle in the data of column N and the height at the lowest position of the vehicle in the data of a specified number of columns preceding and/or succeeding column N.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: January 1, 2019
    Assignee: NUCTECH COMPANY LIMITED
    Inventors: Shangmin Sun, Yanwei Xu, Qiang Li, Weifeng Yu, Yu Hu
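The column-by-column check in patent 10169674 above reduces to comparing the lowest measured height in one scan column with the lowest heights in neighbouring columns. A toy sketch with an invented data layout (one array holding the minimum height per column):

```python
import numpy as np

def height_jumps(column_min_heights, window=3, height_diff_threshold=0.3):
    """Flag columns whose lowest point differs sharply from nearby columns.

    column_min_heights: lowest measured height per scanned column (metres)
    window: number of preceding/succeeding columns used for comparison
    """
    h = np.asarray(column_min_heights, dtype=np.float32)
    flags = np.zeros(len(h), dtype=bool)
    for n in range(window, len(h) - window):
        neighbours = np.concatenate([h[n - window:n], h[n + 1:n + 1 + window]])
        flags[n] = np.abs(h[n] - neighbours.mean()) > height_diff_threshold
    return flags

# A mostly flat underside with one abrupt step (e.g., cab-to-trailer transition).
profile = np.concatenate([np.full(50, 0.4), np.full(50, 1.2)])
print(np.where(height_jumps(profile))[0])
```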
  • Patent number: 10169843
    Abstract: A processing system selectively renders pixels or blocks of pixels of an image and leaves some pixels or blocks of pixels unrendered to conserve resources. The processing system generates a motion vector field to identify regions of an image having moving areas. The processing system uses a rendering processor to identify as regions of interest those units having little to no motion, based on the motion vector field, and a large amount of edge activity, and to minimize the probability of unrendered pixels, or “holes”, in these regions. To avoid noticeable patterns, the rendering processor applies a probability map to determine the possible locations of holes, assigning to each unit a probability indicating the percentage of pixels within the unit that will be holes, and assigning a lower probability to units identified as regions of interest.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: January 1, 2019
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Ihab Amer, Guennadi Riguer, Ruijin Wu, Skyler J. Saleh, Boris Ivanovic, Gabor Sines
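The probability-map idea in patent 10169843 above can be illustrated in a few lines: each block's hole probability is lowered where motion is small and edge activity is high (a region of interest), and pixels are then left unrendered according to that probability. The weighting below is a guess at the spirit of the scheme, not the patented formula.

```python
import numpy as np

def hole_probability(motion_mag, edge_activity, base_prob=0.3, roi_prob=0.05):
    """Per-block probability that a pixel inside the block is left unrendered."""
    roi = (motion_mag < 0.1) & (edge_activity > 0.5)   # static, detailed blocks
    prob = np.full(motion_mag.shape, base_prob, dtype=np.float32)
    prob[roi] = roi_prob                               # regions of interest: few holes
    return prob

def sample_holes(prob_map, block=8, rng=None):
    """Expand per-block probabilities to a per-pixel 'skip rendering' mask."""
    rng = np.random.default_rng() if rng is None else rng
    per_pixel = np.kron(prob_map, np.ones((block, block), dtype=np.float32))
    return rng.random(per_pixel.shape) < per_pixel

motion = np.random.rand(30, 40)   # per-block motion magnitude from the vector field
edges = np.random.rand(30, 40)    # per-block edge activity
holes = sample_holes(hole_probability(motion, edges))
```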
  • Patent number: 10166923
    Abstract: An image generation device for referencing a correspondence relationship and generating a line-of-sight-converted image from a captured image of an in-vehicle camera mounted to a vehicle is provided. The image generation device includes a first region updating unit that, upon sensing deviation of at least one of the mounting position and the mounting angle of the in-vehicle camera and calculating a new mounting position and mounting angle, updates a correspondence relationship of a predetermined first region in the line-of-sight-converted image in accordance with the new mounting position and mounting angle, and a second region updating unit that, upon satisfaction of a predetermined updating condition after updating the correspondence relationship of the first region, updates the correspondence relationship for a second region in the line-of-sight-converted image in accordance with the new mounting position and mounting angle.
    Type: Grant
    Filed: September 24, 2015
    Date of Patent: January 1, 2019
    Assignee: DENSO CORPORATION
    Inventors: Hitoshi Tanaka, Youji Morishita, Muneaki Matsumoto
  • Patent number: 10165258
    Abstract: A canvas generation system generates a canvas view of a scene based on a set of original camera views depicting the scene, for example to recreate a scene in virtual reality. Canvas views can be generated based on a set of synthetic views generated from a set of original camera views. Synthetic views can be generated, for example, by shifting and blending relevant original camera views based on an optical flow across multiple original camera views. An optical flow can be generated using an iterative method which individually optimizes the optical flow vector for each pixel of a camera view and propagates changes in the optical flow to neighboring optical flow vectors.
    Type: Grant
    Filed: April 11, 2016
    Date of Patent: December 25, 2018
    Assignee: Facebook, Inc.
    Inventors: Brian Keith Cabral, Forrest Samuel Briggs, Albert Parra Pozo, Peter Vajda
  • Patent number: 10165262
    Abstract: An image processing device (10) comprises the following: an image acquisition unit (1) for acquiring an image in which markers for calibration are captured; an edge detection unit (2) for detecting the edges of the markers in the image; a polygon generating unit (3) for estimating a plurality of straight lines on the basis of the edges and generating a virtual polygon region that is surrounded by the plurality of straight lines, the generation being carried out in the image in a region thereof including regions other than those where markers are installed; and a camera parameter calculation unit (4) for calculating camera parameters on the basis of the characteristic amount, with respect to the image, of the virtual polygon region and the characteristic amount, with respect to real space, of the virtual polygon region.
    Type: Grant
    Filed: May 19, 2014
    Date of Patent: December 25, 2018
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Koji Arata, Wataru Nakai, Yuko Arai
  • Patent number: 10163000
    Abstract: A method and corresponding apparatus include extracting a movement trajectory feature of an object from an input video. The method and corresponding apparatus also include coding the extracted movement trajectory feature, and determining a type of a movement of the object based on the coded movement trajectory feature.
    Type: Grant
    Filed: January 14, 2016
    Date of Patent: December 25, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kuanhong Xu, Ya Lu, Zhilan Hu, Hongwei Zhang, Jae-Joon Han, Wonjun Kim
  • Patent number: 10165168
    Abstract: Ambiguous portions of an image which have fewer photons of a reflected light signal detected than required to determine depth can be classified as being dark (i.e., reflecting too few photons to derive depth) and/or far (i.e., beyond a range of a camera) based at least in part on expected depth and reflectivity values. Expected depth and reflectivity values for the ambiguous portions of the image may be determined by analyzing a model of an environment created by previously obtained images and depth and reflectivity values. The expected depth and reflectivity values may be compared to calibrated values for a depth sensing system to classify the ambiguous portions of the image as either dark or far based on the actual photon count detected for the image.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: December 25, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael John Schoenberg, Michael Bleyer, Christopher S. Messer, Denis Demandolx
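The classification step in patent 10165168 above can be sketched as a comparison between the photon count a calibrated sensor would be expected to return for the modeled depth and reflectivity and the count actually observed. The thresholds and the inverse-square falloff below are placeholders, not the patent's calibration.

```python
def classify_ambiguous(observed_photons, expected_depth_m, expected_reflectivity,
                       max_range_m=8.0, calib_photons_at_1m=5000.0, min_photons=50.0):
    """Label an ambiguous pixel as 'far', 'dark', or 'valid'."""
    if expected_depth_m > max_range_m:
        return "far"                       # model says it lies beyond sensor range
    # Expected return under an inverse-square falloff for the calibrated sensor.
    expected = calib_photons_at_1m * expected_reflectivity / (expected_depth_m ** 2)
    if expected < min_photons or observed_photons < min_photons:
        return "dark"                      # surface reflects too few photons
    return "valid"

print(classify_ambiguous(observed_photons=12, expected_depth_m=2.0,
                         expected_reflectivity=0.02))   # -> "dark"
```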
  • Patent number: 10163220
    Abstract: Described is a system for compensating for ego-motion during video processing. The system generates an initial estimate of camera ego-motion of a moving camera for consecutive image frame pairs of a video of a scene using a projected correlation method, the camera being configured to capture the video from a moving platform. An optimal estimate of camera ego-motion is generated using the initial estimate as input to a valley search method or an alternate line search method. All independently moving objects in the scene are then detected using this hybrid method, with superior performance compared to existing methods at lower computational cost.
    Type: Grant
    Filed: May 2, 2017
    Date of Patent: December 25, 2018
    Assignee: HRL Laboratories, LLC
    Inventors: Yongqiang Cao, Narayan Srinivasa, Shankar R. Rao
  • Patent number: 10157503
    Abstract: Systems and methods are disclosed for recommending products or services by receiving a three-dimensional (3D) model of one or more products; performing motion tracking and understanding an environment with points or planes using an accelerometer sensor, and estimating light or color in the environment using a single video camera in a mobile phone without a depth sensor; acquiring sensor data from sensors and optimizing features extracted from each image and the sensor data, where a feature conveys data unique to the image at a specific pixel location; and projecting the product into the environment.
    Type: Grant
    Filed: May 5, 2018
    Date of Patent: December 18, 2018
    Inventor: Bao Tran
  • Patent number: 10156909
    Abstract: Provided are a gesture recognition device, a gesture recognition method and an information processing device for making it possible to quickly recognize a gesture of a user. The gesture recognition device includes a motion information generator that generates body part motion information by performing detection and tracking of the body part, a prediction processor that makes a first comparison of comparing the generated body part motion information with previously stored pre-gesture motion model information and generates a prediction result regarding a pre-gesture motion on the basis of a result of the first comparison, and a recognition processor that makes a second comparison of comparing the generated body part motion information with previously stored gesture model information and generates a result of recognition of the gesture represented by a motion of the detected body part on the basis of the prediction result and a result of the second comparison.
    Type: Grant
    Filed: April 15, 2016
    Date of Patent: December 18, 2018
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventors: Masashi Kamiya, Yudai Nakamura
  • Patent number: 10157474
    Abstract: A 3D recording device (1) is provided including an image recording device (2), a distance measuring device (3), and an output unit (5). An image processing device (4) is used to calculate, for a sequence (8) of images (9, 10) recorded in different poses (15, 16, 17), groups (29, 30, 31, 32) of image elements (18, 19, 20, 21, 25, 26, 27, 28) corresponding to each other and to determine for each group (29, 30, 31, 32) a three-dimensional position indication (48, 49, 50) and to scale the three-dimensional position indication (48, 49, 50) with the aid of distance information (42) measured by the distance measuring device (3).
    Type: Grant
    Filed: June 2, 2014
    Date of Patent: December 18, 2018
    Assignee: Testo AG
    Inventors: Jan-Friso Evers-Senne, Martin Stratmann, Patrick Zahn
  • Patent number: 10152803
    Abstract: A method of estimating a disparity in a multiview image display apparatus includes performing image scaling on an image frame based on a resolution corresponding to the image frame; determining at least one from among a search range and precision of a matching block for the scaled image frame according to the resolution; and estimating a disparity of the image frame by using the at least one from among the search range and the precision of the matching block.
    Type: Grant
    Filed: June 19, 2015
    Date of Patent: December 11, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Aron Baik
  • Patent number: 10145703
    Abstract: Provided herein is a control method of an electronic apparatus. The control method of an electronic apparatus includes: determining a position of a vehicle that is being operated; detecting information of a guidance point positioned in front of the determined position of the vehicle by a predetermined distance using a map data; generating an object indicating the guidance point using the information of the guidance point; and outputting the generated object through augmented reality.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: December 4, 2018
    Assignee: THINKWARE CORPORATION
    Inventors: Ho Hyung Cho, Suk Pil Ko
  • Patent number: 10145170
    Abstract: A system for reducing sunlight shining into a vehicle includes a window having an array of liquid crystals switchable between a transparent state and a shaded state. The system also includes an eye position sensor for detecting a location of eyes of a driver and an inertial measurement unit (IMU) for detecting a current heading of the vehicle. The system also includes an electronic control unit (ECU) that may determine a current location of the sun relative to the vehicle based on the current heading of the vehicle and a current time of day. The ECU may also select an area of the window to be shaded in order to reduce an amount of sunlight reaching the eyes of the driver and control liquid crystals within the selected area of the window to switch from the transparent state to the shaded state.
    Type: Grant
    Filed: July 10, 2017
    Date of Patent: December 4, 2018
    Assignee: TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC.
    Inventors: Yuichi Ochiai, Katsumi Nagata, Akira Sasaki
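The geometric core of patent 10145170 above is small: subtract the vehicle heading from the sun's azimuth to get the sun's bearing relative to the car, then shade the window region lying between that direction and the driver's eyes. A hedged arithmetic sketch follows; computing the sun azimuth itself (from time of day and location) is assumed to come from an ephemeris lookup and is not shown.

```python
def relative_sun_azimuth(sun_azimuth_deg, vehicle_heading_deg):
    """Sun bearing relative to the vehicle, wrapped to [-180, 180) degrees."""
    return (sun_azimuth_deg - vehicle_heading_deg + 180.0) % 360.0 - 180.0

def window_to_shade(rel_azimuth_deg):
    """Very coarse choice of which window the sunlight enters through."""
    if -45.0 <= rel_azimuth_deg < 45.0:
        return "windshield"
    if 45.0 <= rel_azimuth_deg < 135.0:
        return "right side window"
    if -135.0 <= rel_azimuth_deg < -45.0:
        return "left side window"
    return "rear window"

# Sun at azimuth 250 deg, vehicle heading 180 deg -> sunlight from the right.
rel = relative_sun_azimuth(250.0, 180.0)
print(rel, window_to_shade(rel))   # 70.0 right side window
```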
  • Patent number: 10146043
    Abstract: An image processing device includes: an image acquiring unit configured to acquire a plurality of images of different imaging fields of view of the same subject; an image selector configured to select, from the acquired images in which a common region of a predetermined size is set at a common position in each image, a plurality of image pairs, each pair being a combination of images in which a subject image in the common region of one image corresponds to a subject image in a region other than the common region of the other image; a correction gain calculating unit configured to calculate a correction gain for performing shading correction; and an image correcting unit configured to correct shading produced in a correction-target image using the correction gain calculated by the correction gain calculating unit.
    Type: Grant
    Filed: January 12, 2016
    Date of Patent: December 4, 2018
    Assignee: OLYMPUS CORPORATION
    Inventor: Toshihiko Arai
  • Patent number: 10147105
    Abstract: A system and a process are disclosed to analyze images and predict personality to enhance business outcomes by analyzing colors predominant in images selected, posted, or liked by a person, determining color values for the predominant colors in the images, weighting the color values, and, based on the weighted color values, deriving one or more personality attributes according to a particular psychological orientation.
    Type: Grant
    Filed: October 29, 2016
    Date of Patent: December 4, 2018
    Assignee: DOTIN LLC
    Inventors: Ganesh Iyer, Roman Samarev, Sanjeev Ukhalkar
  • Patent number: 10146298
    Abstract: Enhanced handheld screen-sensing pointing, in which a handheld device captures a camera image of one or more fiducials rendered by a display device, and a position or an angle of the one or more fiducials in the captured camera image is determined. A position on the display device that the handheld device is aimed towards is determined based at least on the determined position or angle of the one or more fiducials in the camera image, and an application is controlled based on the determined position on the display device.
    Type: Grant
    Filed: October 12, 2015
    Date of Patent: December 4, 2018
    Assignee: QUALCOMM Incorporated
    Inventor: Evan Hildreth
  • Patent number: 10140753
    Abstract: A system, apparatus and method of obtaining data from a 2D image in order to determine the 3D shape of objects appearing in said 2D image, said 2D image having distinguishable epipolar lines, said method comprising: (a) providing a predefined set of types of features, giving rise to feature types, each feature type being distinguishable according to a unique bi-dimensional formation; (b) providing a coded light pattern comprising multiple appearances of said feature types; (c) projecting said coded light pattern on said objects such that the distance between epipolar lines associated with substantially identical features is less than the distance between corresponding locations of two neighboring features; (d) capturing a 2D image of said objects having said projected coded light pattern projected thereupon, said 2D image comprising reflected said feature types; and (e) extracting: (i) said reflected feature types according to the unique bi-dimensional formations; and (ii) locations of said reflected feature types.
    Type: Grant
    Filed: June 13, 2016
    Date of Patent: November 27, 2018
    Assignee: MANTIS VISION LTD.
    Inventors: Eyal Gordon, Gur Arie Bittan
  • Patent number: 10134137
    Abstract: Apparatuses, methods, systems, and program products are disclosed for reducing storage using commonalities. One or more features that are common among each of a plurality of images is determined. One or more background images are generated based on the one or more common features. The one or more background images are used to recreate each of the plurality of images. One or more common features are modified in each image of the plurality of images prior to saving each image. Each of the plurality of images with the modified features is a foreground image.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: November 20, 2018
    Assignee: Lenovo (Singapore) PTE. LTD.
    Inventors: Grigori Zaitsev, Russell Speight VanBlon, Jianbang Zhang
  • Patent number: 10133830
    Abstract: A system and method is provided for scaling and constructing a multi-dimensional (e.g., 3D) building model using ground-level imagery. Ground-level imagery is used to identify architectural elements that have known architectural standard dimensional ratios. Dimensional ratios of architectural elements in the multi-dimensional building model (unscaled) are compared with known architectural standard dimensional ratios to scale and construct an accurate multi-dimensional building model.
    Type: Grant
    Filed: January 29, 2016
    Date of Patent: November 20, 2018
    Assignee: HOVER INC.
    Inventors: Vineet Bhatawadekar, Shaohui Sun, Ioannis Pavlidis, Adam J. Altman
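Scaling an unscaled model against a known architectural standard reduces to a single ratio: if an element whose real-world size is standardized (for example, a door assumed to be roughly 2.03 m tall) measures some number of model units, that ratio converts every other model measurement. A toy version of the idea in patent 10133830 above; the standard values are assumptions, not the patent's catalogue.

```python
# Assumed real-world sizes of standardized architectural elements (metres).
STANDARD_HEIGHTS_M = {"door": 2.03, "brick_course": 0.086}

def scale_factor(element_type, model_units):
    """Metres per model unit, derived from one recognized architectural element."""
    return STANDARD_HEIGHTS_M[element_type] / model_units

# A door in the unscaled model measures 3.5 model units tall.
metres_per_unit = scale_factor("door", 3.5)   # ~0.58 m per model unit
print(17.2 * metres_per_unit)                 # estimated height of a 17.2-unit wall
```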
  • Patent number: 10127712
    Abstract: A virtual view of a scene may be generated through the use of various systems and methods. In one exemplary method, from a tiled array of cameras, image data may be received. The image data may depict a capture volume comprising a scene volume in which a scene is located. A viewing volume may be defined. A virtual occluder may be positioned at least partially within the capture volume such that a virtual window of the virtual occluder is between the viewing volume and the scene. A virtual viewpoint within the viewing volume may be selected. A virtual view may be generated to depict the scene from the virtual viewpoint.
    Type: Grant
    Filed: July 8, 2016
    Date of Patent: November 13, 2018
    Assignee: Google LLC
    Inventor: Trevor Carothers
  • Patent number: 10129455
    Abstract: An auto-focus method includes: at a same moment, collecting a first image of a first object using a first image shooting unit and collecting a second image of the first object using a second image shooting unit; calculating M pieces of first depth information for M identical feature point pairs in corresponding areas of the first image and the second image; determining whether the confidence of the M pieces of first depth information is greater than a threshold; when it is, obtaining focusing depth information according to N pieces of the first depth information; obtaining a target position of a first lens of the first image shooting unit according to the focusing depth information; and controlling the first lens to move to the target position.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: November 13, 2018
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Cheng Du, Jin Wu, Wei Luo, Bin Deng
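The flow in patent 10129455 above can be paraphrased numerically: triangulate a depth for each matched feature pair, keep the result only if enough of those depths agree (the confidence test), and map the agreed depth to a lens position. A simplified sketch; the confidence rule and the lens model are invented stand-ins.

```python
import numpy as np

def feature_depths(disparities_px, baseline_m=0.012, focal_px=1500.0):
    """Depth per matched feature pair from its pixel disparity."""
    d = np.asarray(disparities_px, dtype=np.float32)
    return baseline_m * focal_px / np.clip(d, 1e-3, None)

def focusing_depth(depths_m, agreement=0.1, min_fraction=0.6):
    """Return the consensus depth if enough features agree with it, else None."""
    median = np.median(depths_m)
    consistent = np.abs(depths_m - median) < agreement * median
    if consistent.mean() < min_fraction:     # confidence below threshold
        return None
    return float(np.median(depths_m[consistent]))

def lens_target_position(depth_m, steps_per_diopter=120.0):
    """Toy lens model: motor steps proportional to 1/depth (diopters)."""
    return steps_per_diopter / depth_m

disparities = np.array([29.5, 30.2, 30.0, 31.0, 5.0])   # one outlier pair
depth = focusing_depth(feature_depths(disparities))
if depth is not None:
    print(lens_target_position(depth))
```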
  • Patent number: 10119808
    Abstract: Systems and methods in accordance with embodiments of the invention estimate depth from projected texture using camera arrays that include at least two two-dimensional arrays of cameras, each comprising several cameras; an illumination system configured to illuminate a scene with a projected texture; a processor; and memory containing an image processing pipeline application and an illumination system controller application. The illumination system controller application directs the processor to control the illumination system to illuminate a scene with a projected texture. The image processing pipeline application directs the processor to: utilize the illumination system controller application to control the illumination system to illuminate a scene with a projected texture; capture a set of images of the scene illuminated with the projected texture; and determine depth estimates for pixel locations in an image from a reference viewpoint using at least a subset of the set of images.
    Type: Grant
    Filed: November 18, 2014
    Date of Patent: November 6, 2018
    Assignee: FotoNation Limited
    Inventors: Kartik Venkataraman, Jacques Duparré
  • Patent number: 10113694
    Abstract: The present invention relates to a safety photoelectric barrier for monitoring a protective field and to a corresponding method. A safety photoelectric barrier (100) comprises a single-sided transceiver bar with a housing (102), a plurality of transceiver modules (104) each having a radiation emitting unit (112) for emitting radiation towards a reference target (108), a radiation detecting unit (114) for detecting radiation incident on the transceiver module (104), and a signal processing unit for evaluating the detected radiation regarding a distance information and an intensity information and for generating a binary output signal indicating the presence or absence of an object within the protective field. A controller module (126) evaluates the binary output signals and generates a safety signal in response to the evaluated output signals. The radiation detecting unit comprises at least a first and a second photosensitive element (114) for redundantly evaluating the distance and intensity information.
    Type: Grant
    Filed: April 12, 2016
    Date of Patent: October 30, 2018
    Assignee: Rockwell Automation Safety AG
    Inventors: Eric Lutz, Carl Meinherz, Martin Hardegger
  • Patent number: 10115182
    Abstract: The present invention discloses a depth map super-resolution processing method, including: firstly, respectively acquiring a first original image (S1) and a second original image (S2) and a low resolution depth map (d) of the first original image (S1); secondly, 1) dividing the low resolution depth map (d) into multiple depth image blocks; 2) respectively performing the following processing on the depth image blocks obtained in step 1); 21) performing super-resolution processing on a current block with multiple super-resolution processing methods, to obtain multiple high resolution depth image blocks; 22) obtaining new synthesized image blocks by using an image synthesis technology; 23) upon matching and judgment, determining an ultimate high resolution depth image block; and 3) integrating the high resolution depth image blocks of the depth image blocks into one image according to positions of the depth image blocks in the low resolution depth map (d).
    Type: Grant
    Filed: July 21, 2016
    Date of Patent: October 30, 2018
    Assignee: GRADUATE SCHOOL AT SHENZHEN, TSINGHUA UNIVERSITY
    Inventors: Lei Zhang, Xiangyang Ji, Yangguang Li, Yongbing Zhang, Xin Jin, Haoqian Wang, Guijin Wang
  • Patent number: 10116922
    Abstract: Disclosed herein are methods, devices, and non-transitory computer readable media that relate to stereoscopic image creation. A camera captures an initial image at an initial position. A target displacement from the initial position is determined for a desired stereoscopic effect, and an instruction is provided that specifies a direction in which to move the camera from the initial position. While the camera is in motion, an estimated displacement from the initial position is calculated. When the estimated displacement corresponds to the target displacement, the camera automatically captures a candidate image. An acceptability analysis is performed to determine whether the candidate image has acceptable image quality and acceptable similarity to the initial image. If the candidate image passes the acceptability analysis, a stereoscopic image is created based on the initial and candidate images.
    Type: Grant
    Filed: October 7, 2016
    Date of Patent: October 30, 2018
    Assignee: Google LLC
    Inventors: Jonathan Huang, Samuel Kvaalen, Peter Bradshaw
  • Patent number: 10109055
    Abstract: A machine vision system and method use captured depth data to improve the identification of a target object in a cluttered scene. A 3D-based object detection and pose estimation (ODPE) process is used to determine pose information of the target object. The system uses three different segmentation processes in sequence, each subsequent segmentation process producing larger segments, in order to produce a plurality of segmentation hypotheses, each of which is expected to contain a large portion of the target object in the cluttered scene. Each segmentation hypothesis is used to mask the 3D point clouds of the captured depth data, and each masked region is individually submitted to the 3D-based ODPE.
    Type: Grant
    Filed: November 21, 2016
    Date of Patent: October 23, 2018
    Assignee: SEIKO EPSON CORPORATION
    Inventors: Liwen Xu, Joseph Chi Tai Lam, Alex Levinshtein
  • Patent number: 10104359
    Abstract: A disparity value deriving device includes a calculator configured to calculate costs of candidates for a corresponding region in a comparison image that corresponds to a reference region in a reference image, based on luminance values of the regions. The device also includes a changer configured to change a cost exceeding a threshold to a value higher than the threshold; a synthesizer configured to synthesize a cost of a candidate for a corresponding region for one reference region after the change and a cost of a candidate for a corresponding region for another reference region after the change; and a deriving unit configured to derive a disparity value based on a position of the one reference region and a position of the corresponding region in the comparison image for which the cost after the synthesis is smallest.
    Type: Grant
    Filed: December 12, 2014
    Date of Patent: October 16, 2018
    Assignee: RICOH COMPANY, LIMITED
    Inventors: Kiichiroh Saitoh, Yoshikazu Watanabe, Soichiro Yokota, Ryohsuke Tamura, Wei Zhong
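The key step in patent 10104359 above is clamping: any matching cost above a threshold is replaced with a value higher than the threshold before costs from neighbouring reference regions are combined, which keeps occluded or textureless regions from dominating the synthesis. A bare-bones sketch of clamp-then-aggregate for one row of pixels; the aggregation here is a plain neighbour sum, not the patent's exact synthesis.

```python
import numpy as np

def clamp_costs(costs, threshold, penalty):
    """Replace any cost exceeding the threshold with a fixed higher value."""
    out = costs.copy()
    out[out > threshold] = penalty           # penalty is chosen above the threshold
    return out

def synthesize_and_pick(cost_volume, threshold=40.0, penalty=48.0):
    """cost_volume[x, d]: matching cost of disparity candidate d at position x."""
    clamped = clamp_costs(cost_volume, threshold, penalty)
    # Combine each position's costs with those of its left/right neighbours.
    padded = np.pad(clamped, ((1, 1), (0, 0)), mode="edge")
    synthesized = padded[:-2] + padded[1:-1] + padded[2:]
    return np.argmin(synthesized, axis=1)    # disparity with the smallest synthesized cost

volume = np.random.rand(50, 16) * 60.0       # 50 positions, 16 disparity candidates
print(synthesize_and_pick(volume))
```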
  • Patent number: 10102761
    Abstract: A route prediction unit estimates a route of an object of interest with respect to a target object based on collision avoidance models. A collision risk estimation unit calculates collision risks between the object of interest and target object for each collision avoidance model. A collision deciding unit decides the presence or absence of a collision from the collision risks and feeds back a collision avoidance model correction value to the route prediction unit when it is determined that the collision occurs. A collision avoidance route selector selects any of the plurality of collision avoidance models in which the absence of collision is decided by the collision deciding unit, and selects a route of the collision avoidance model as a route for avoiding the collision between the objects. The route prediction unit performs a new route prediction using the collision avoidance model correction value.
    Type: Grant
    Filed: April 10, 2014
    Date of Patent: October 16, 2018
    Assignee: Mitsubishi Electric Corporation
    Inventors: Yuki Takabayashi, Hiroshi Kameda