3-D or Stereo Imaging Analysis Patents (Class 382/154)
  • Patent number: 10146043
    Abstract: An image processing device includes: an image acquiring unit configured to acquire a plurality of images of different imaging fields of view on the same subject; an image selector configured to select, from the plurality of images acquired, in which a common region of a predetermined size is set at a common position in the individual images, a plurality of image pairs that are combinations of images in which a subject image in the common region in one image corresponds to a subject image in a region other than the common region in another image; a correction gain calculating unit configured to calculate a correction gain for performing shading correction; and an image correcting unit configured to correct shading produced in a correction-target image, using the correction gain calculated by the correction gain calculating unit.
    Type: Grant
    Filed: January 12, 2016
    Date of Patent: December 4, 2018
    Assignee: OLYMPUS CORPORATION
    Inventor: Toshihiko Arai
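    A minimal sketch of the shading-correction idea described in the abstract above: when the same subject content appears inside the common region of one image and outside it in another, the brightness ratio between the two patches can serve as a correction gain. The patch pairing, the median-based gain estimate, and the function names below are illustrative assumptions, not the patented procedure.

```python
import numpy as np

def relative_shading_gain(patch_common, patch_offaxis, eps=1e-6):
    """Gain that maps the off-axis patch back to the common-region brightness.
    Both patches are assumed to show the same subject content, so any
    systematic brightness difference is attributed to shading."""
    a = np.median(patch_common.astype(np.float64))
    b = np.median(patch_offaxis.astype(np.float64))
    return float(a / max(b, eps))

def correct_shading(image, gain_map):
    """Apply a per-pixel correction gain (e.g. interpolated from sparse estimates)."""
    out = image.astype(np.float64) * gain_map
    return np.clip(out, 0, 255).astype(image.dtype)

# Toy example: an image whose right half is darkened by 20% "shading".
flat = np.full((64, 64), 200, dtype=np.uint8)
shaded = flat.copy()
shaded[:, 32:] = (shaded[:, 32:] * 0.8).astype(np.uint8)

gain = relative_shading_gain(shaded[:, :32], shaded[:, 32:])   # about 1.25
gain_map = np.ones(shaded.shape, dtype=np.float64)
gain_map[:, 32:] = gain
restored = correct_shading(shaded, gain_map)                   # right half back to ~200
```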
  • Patent number: 10147105
    Abstract: A system and a process are disclosed to analyze images and predict personality to enhance business outcomes by analyzing colors predominant in images selected, posted, or liked by a person, determining color values for the predominant colors in the images, weighting the color values, and, based on the weighted color values, deriving one or more personality attributes according to a particular psychological orientation.
    Type: Grant
    Filed: October 29, 2016
    Date of Patent: December 4, 2018
    Assignee: DOTIN LLC
    Inventors: Ganesh Iyer, Roman Samarev, Sanjeev Ukhalkar
  • Patent number: 10146298
    Abstract: Enhanced handheld screen-sensing pointing, in which a handheld device captures a camera image of one or more fiducials rendered by a display device, and a position or an angle of the one or more fiducials in the captured camera image is determined. A position on the display device that the handheld device is aimed towards is determined based at least on the determined position or angle of the one or more fiducials in the camera image, and an application is controlled based on the determined position on the display device.
    Type: Grant
    Filed: October 12, 2015
    Date of Patent: December 4, 2018
    Assignee: QUALCOMM Incorporated
    Inventor: Evan Hildreth
  • Patent number: 10145170
    Abstract: A system for reducing sunlight shining into a vehicle includes a window having an array of liquid crystals switchable between a transparent state and a shaded state. The system also includes an eye position sensor for detecting a location of eyes of a driver and an inertial measurement unit (IMU) for detecting a current heading of the vehicle. The system also includes an electronic control unit (ECU) that may determine a current location of the sun relative to the vehicle based on the current heading of the vehicle and a current time of day. The ECU may also select an area of the window to be shaded in order to reduce an amount of sunlight reaching the eyes of the driver and control liquid crystals within the selected area of the window to switch from the transparent state to the shaded state.
    Type: Grant
    Filed: July 10, 2017
    Date of Patent: December 4, 2018
    Assignee: TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC.
    Inventors: Yuichi Ochiai, Katsumi Nagata, Akira Sasaki
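    A rough sketch of the geometric step in the abstract above: the ECU compares the sun's azimuth (which would come from a solar-position model driven by the current time of day, not reproduced here) with the vehicle heading from the IMU to decide which window area to darken. The bearing convention and the coarse region mapping are assumptions for illustration only.

```python
def sun_relative_azimuth(solar_azimuth_deg, vehicle_heading_deg):
    """Sun bearing relative to the vehicle's nose, wrapped to [-180, 180)."""
    return (solar_azimuth_deg - vehicle_heading_deg + 180.0) % 360.0 - 180.0

def select_shaded_region(rel_azimuth_deg):
    """Very coarse mapping from relative sun bearing to a window area to darken."""
    if -45.0 <= rel_azimuth_deg <= 45.0:
        return "windshield"
    return "left_window" if rel_azimuth_deg < 0 else "right_window"

# Sun at azimuth 200 deg, vehicle heading 170 deg -> sun 30 deg right of the nose.
print(select_shaded_region(sun_relative_azimuth(200.0, 170.0)))   # windshield
```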
  • Patent number: 10145703
    Abstract: Provided herein is a control method of an electronic apparatus. The control method of an electronic apparatus includes: determining a position of a vehicle that is being operated; detecting information of a guidance point positioned in front of the determined position of the vehicle by a predetermined distance using map data; generating an object indicating the guidance point using the information of the guidance point; and outputting the generated object through augmented reality.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: December 4, 2018
    Assignee: THINKWARE CORPORATION
    Inventors: Ho Hyung Cho, Suk Pil Ko
  • Patent number: 10140753
    Abstract: A system, apparatus and method of obtaining data from a 2D image in order to determine the 3D shape of objects appearing in said 2D image, said 2D image having distinguishable epipolar lines, said method comprising: (a) providing a predefined set of types of features, giving rise to feature types, each feature type being distinguishable according to a unique bi-dimensional formation; (b) providing a coded light pattern comprising multiple appearances of said feature types; (c) projecting said coded light pattern on said objects such that the distance between epipolar lines associated with substantially identical features is less than the distance between corresponding locations of two neighboring features; (d) capturing a 2D image of said objects having said projected coded light pattern projected thereupon, said 2D image comprising said reflected feature types; and (e) extracting: (i) said reflected feature types according to the unique bi-dimensional formations; and (ii) locations of said reflected feature types.
    Type: Grant
    Filed: June 13, 2016
    Date of Patent: November 27, 2018
    Assignee: MANTIS VISION LTD.
    Inventors: Eyal Gordon, Gur Arie Bittan
  • Patent number: 10133830
    Abstract: A system and method is provided for scaling and constructing a multi-dimensional (e.g., 3D) building model using ground-level imagery. Ground-level imagery is used to identify architectural elements that have known architectural standard dimensional ratios. Dimensional ratios of architectural elements in the multi-dimensional building model (unscaled) are compared with known architectural standard dimensional ratios to scale and construct an accurate multi-dimensional building model.
    Type: Grant
    Filed: January 29, 2016
    Date of Patent: November 20, 2018
    Assignee: HOVER INC.
    Inventors: Vineet Bhatawadekar, Shaohui Sun, Ioannis Pavlidis, Adam J. Altman
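    A toy illustration of scaling from standard architectural ratios, as described in the abstract above: an unscaled element whose width/height ratio matches a catalogued standard is used to convert model units to metres. The two-entry catalogue, the tolerance, and the function name are hypothetical.

```python
# Hypothetical catalogue of standard architectural elements: (width_m, height_m).
STANDARD_ELEMENTS = {
    "entry_door": (0.91, 2.03),
    "garage_door": (4.88, 2.13),
}

def identify_and_scale(width_units, height_units, tol=0.15):
    """Match an unscaled element against standard width/height ratios and return
    the matched element name plus a metres-per-model-unit scale factor, or None."""
    measured_ratio = width_units / height_units
    for name, (w_m, h_m) in STANDARD_ELEMENTS.items():
        standard_ratio = w_m / h_m
        if abs(measured_ratio - standard_ratio) / standard_ratio < tol:
            return name, h_m / height_units
    return None

# A door measured as 1.8 x 4.0 model units (ratio 0.45) matches the entry door
# (ratio 0.448), giving roughly 0.51 metres per model unit.
print(identify_and_scale(1.8, 4.0))
```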
  • Patent number: 10134137
    Abstract: Apparatuses, methods, systems, and program products are disclosed for reducing storage using commonalities. One or more features that are common to each of a plurality of images are determined. One or more background images are generated based on the one or more common features. The one or more background images are used to recreate each of the plurality of images. One or more common features are modified in each image of the plurality of images prior to saving each image. Each of the plurality of images with the modified features is a foreground image.
    Type: Grant
    Filed: October 27, 2016
    Date of Patent: November 20, 2018
    Assignee: Lenovo (Singapore) PTE. LTD.
    Inventors: Grigori Zaitsev, Russell Speight VanBlon, Jianbang Zhang
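    A simple sketch of the storage-reduction idea above, assuming the shared background is approximated by a per-pixel median across the images and each stored foreground is the residual against that background; the actual generation and modification steps in the patent are not specified here.

```python
import numpy as np

def split_common_background(images):
    """One shared background (per-pixel median) plus per-image foreground residuals."""
    stack = np.stack([im.astype(np.int16) for im in images])
    background = np.median(stack, axis=0).astype(np.int16)
    foregrounds = [im - background for im in stack]   # small residuals compress well
    return background, foregrounds

def recreate(background, foreground):
    """Recreate one original image from the shared background and its foreground."""
    return np.clip(background + foreground, 0, 255).astype(np.uint8)
```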
  • Patent number: 10127712
    Abstract: A virtual view of a scene may be generated through the use of various systems and methods. In one exemplary method, from a tiled array of cameras, image data may be received. The image data may depict a capture volume comprising a scene volume in which a scene is located. A viewing volume may be defined. A virtual occluder may be positioned at least partially within the capture volume such that a virtual window of the virtual occluder is between the viewing volume and the scene. A virtual viewpoint within the viewing volume may be selected. A virtual view may be generated to depict the scene from the virtual viewpoint.
    Type: Grant
    Filed: July 8, 2016
    Date of Patent: November 13, 2018
    Assignee: Google LLC
    Inventor: Trevor Carothers
  • Patent number: 10129455
    Abstract: An auto-focus method including, at a same moment, collecting a first image of a first object using a first image shooting unit, collecting a second image of the first object using a second image shooting unit, calculating M pieces of first depth information of M same feature point pairs in corresponding areas in the first image and the second image, determining whether confidence of the M pieces of first depth information is greater than a threshold, obtaining focusing depth information according to N pieces of first depth information in the M pieces of first depth information when the confidence of the M pieces of first depth information is greater than the threshold, obtaining a target position of a first lens of the first image shooting unit according to the focusing depth information, and controlling the first lens to move to the target position.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: November 13, 2018
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Cheng Du, Jin Wu, Wei Luo, Bin Deng
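    A compressed sketch of the flow described above: disparities of matched feature point pairs give depths (Z = f·B/d), a simple agreement test stands in for the confidence check, and a thin-lens relation stands in for the target lens position. The specific confidence criterion, thresholds, and lens model are assumptions rather than the patented ones.

```python
import numpy as np

def stereo_depths(disparities_px, focal_px, baseline_m):
    """Depth for each matched feature point pair: Z = f * B / d."""
    d = np.asarray(disparities_px, dtype=np.float64)
    return focal_px * baseline_m / np.maximum(d, 1e-6)

def focusing_depth(depths_m, spread_tol=0.15, confidence_thresh=0.6):
    """Return a focus depth only when enough depths agree; this agreement test is
    a stand-in for the patent's confidence check, which is not specified here."""
    depths_m = np.asarray(depths_m, dtype=np.float64)
    med = np.median(depths_m)
    inliers = np.abs(depths_m - med) / med < spread_tol
    if inliers.mean() < confidence_thresh:
        return None                        # fall back to another focusing strategy
    return float(np.median(depths_m[inliers]))

def lens_image_distance(object_dist_m, focal_len_m):
    """Thin-lens image distance, used here as a proxy for the target lens position."""
    return focal_len_m * object_dist_m / (object_dist_m - focal_len_m)

# Example: disparities of matched points, 1000 px focal length, 12 mm baseline.
depths = stereo_depths([24.0, 25.0, 23.5, 24.5], focal_px=1000.0, baseline_m=0.012)
z = focusing_depth(depths)
if z is not None:
    print(lens_image_distance(z, focal_len_m=0.004))   # where to drive the lens
```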
  • Patent number: 10119808
    Abstract: Systems and methods in accordance with embodiments of the invention estimate depth from projected texture using camera arrays that include at least two two-dimensional arrays of cameras, each having several cameras; an illumination system configured to illuminate a scene with a projected texture; a processor; and memory containing an image processing pipeline application and an illumination system controller application. In addition, the illumination system controller application directs the processor to control the illumination system to illuminate a scene with a projected texture. Furthermore, the image processing pipeline application directs the processor to: utilize the illumination system controller application to control the illumination system to illuminate a scene with a projected texture; capture a set of images of the scene illuminated with the projected texture; and determine depth estimates for pixel locations in an image from a reference viewpoint using at least a subset of the set of images.
    Type: Grant
    Filed: November 18, 2014
    Date of Patent: November 6, 2018
    Assignee: FotoNation Limited
    Inventors: Kartik Venkataraman, Jacques Duparré
  • Patent number: 10115182
    Abstract: The present invention discloses a depth map super-resolution processing method, including: firstly, respectively acquiring a first original image (S1) and a second original image (S2) and a low resolution depth map (d) of the first original image (S1); secondly, 1) dividing the low resolution depth map (d) into multiple depth image blocks; 2) respectively performing the following processing on the depth image blocks obtained in step 1); 21) performing super-resolution processing on a current block with multiple super-resolution processing methods, to obtain multiple high resolution depth image blocks; 22) obtaining new synthesized image blocks by using an image synthesis technology; 23) upon matching and judgment, determining an ultimate high resolution depth image block; and 3) integrating the high resolution depth image blocks of the depth image blocks into one image according to positions of the depth image blocks in the low resolution depth map (d).
    Type: Grant
    Filed: July 21, 2016
    Date of Patent: October 30, 2018
    Assignee: GRADUATE SCHOOL AT SHENZHEN, TSINGHUA UNIVERSITY
    Inventors: Lei Zhang, Xiangyang Ji, Yangguang Li, Yongbing Zhang, Xin Jin, Haoqian Wang, Guijin Wang
  • Patent number: 10113694
    Abstract: The present invention relates to a safety photoelectric barrier for monitoring a protective field and to a corresponding method. A safety photoelectric barrier (100) comprises a single-sided transceiver bar with a housing (102), a plurality of transceiver modules (104) each having a radiation emitting unit (112) for emitting radiation towards a reference target (108), a radiation detecting unit (114) for detecting radiation incident on the transceiver module (104), and a signal processing unit for evaluating the detected radiation regarding a distance information and an intensity information and for generating a binary output signal indicating the presence or absence of an object within the protective field. A controller module (126) evaluates the binary output signals and generates a safety signal in response to the evaluated output signals. The radiation detecting unit comprises at least a first and a second photosensitive element (114) for redundantly evaluating the distance and intensity information.
    Type: Grant
    Filed: April 12, 2016
    Date of Patent: October 30, 2018
    Assignee: Rockwell Automation Safety AG
    Inventors: Eric Lutz, Carl Meinherz, Martin Hardegger
  • Patent number: 10116922
    Abstract: Disclosed herein are methods, devices, and non-transitory computer readable media that relate to stereoscopic image creation. A camera captures an initial image at an initial position. A target displacement from the initial position is determined for a desired stereoscopic effect, and an instruction is provided that specifies a direction in which to move the camera from the initial position. While the camera is in motion, an estimated displacement from the initial position is calculated. When the estimated displacement corresponds to the target displacement, the camera automatically captures a candidate image. An acceptability analysis is performed to determine whether the candidate image has acceptable image quality and acceptable similarity to the initial image. If the candidate image passes the acceptability analysis, a stereoscopic image is created based on the initial and candidate images.
    Type: Grant
    Filed: October 7, 2016
    Date of Patent: October 30, 2018
    Assignee: Google LLC
    Inventors: Jonathan Huang, Samuel Kvaalen, Peter Bradshaw
  • Patent number: 10109055
    Abstract: A machine vision system and method uses captured depth data to improve the identification of a target object in a cluttered scene. A 3D-based object detection and pose estimation (ODPE) process is used to determine pose information of the target object. The system uses three different segmentation processes in sequence, where each subsequent segmentation process produces larger segments, in order to produce a plurality of segment hypotheses, each of which is expected to contain a large portion of the target object in the cluttered scene. Each segmentation hypothesis is used to mask 3D point clouds of the captured depth data, and each masked region is individually submitted to the 3D-based ODPE.
    Type: Grant
    Filed: November 21, 2016
    Date of Patent: October 23, 2018
    Assignee: SEIKO EPSON CORPORATION
    Inventors: Liwen Xu, Joseph Chi Tai Lam, Alex Levinshtein
  • Patent number: 10104359
    Abstract: A disparity value deriving device includes a calculator configured to calculate costs of candidates for a corresponding region in a comparison image that corresponds to a reference region in a reference image, based on luminance values of the regions. The device also includes a changer configured to change a cost exceeding a threshold to a value higher than the threshold; a synthesizer configured to synthesize a cost of a candidate for a corresponding region for one reference region after the change and a cost of a candidate for a corresponding region for another reference region after the change; and a deriving unit configured to derive a disparity value based on a position of the one reference region and a position of the corresponding region in the comparison image for which the cost after the synthesis is smallest.
    Type: Grant
    Filed: December 12, 2014
    Date of Patent: October 16, 2018
    Assignee: RICOH COMPANY, LIMITED
    Inventors: Kiichiroh Saitoh, Yoshikazu Watanabe, Soichiro Yokota, Ryohsuke Tamura, Wei Zhong
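    A one-row sketch of the cost handling described above: per-candidate matching costs from luminance differences, costs exceeding a threshold changed to a higher fixed value, a crude synthesis with neighbouring reference regions, and the disparity taken at the smallest synthesized cost. The window of one pixel, the neighbour scheme, and the numeric thresholds are illustrative assumptions.

```python
import numpy as np

def matching_costs(ref_row, cmp_row, max_disp):
    """Absolute-luminance-difference cost for each pixel and candidate disparity.
    Candidates that fall outside the comparison row keep a large default cost.
    Assumes the rows are longer than max_disp."""
    w = ref_row.shape[0]
    costs = np.full((w, max_disp + 1), 255.0)
    for d in range(max_disp + 1):
        costs[d:, d] = np.abs(ref_row[d:].astype(np.float64) - cmp_row[:w - d])
    return costs

def derive_disparities(ref_row, cmp_row, max_disp=16, threshold=40.0, penalty=60.0):
    costs = matching_costs(ref_row, cmp_row, max_disp)
    costs[costs > threshold] = penalty          # change a cost exceeding the threshold
    synthesized = costs.copy()                  # crude synthesis with the two
    synthesized[1:] += costs[:-1]               # horizontally neighbouring
    synthesized[:-1] += costs[1:]               # reference regions
    return np.argmin(synthesized, axis=1)       # disparity with the smallest cost
```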
  • Patent number: 10102761
    Abstract: A route prediction unit estimates a route of an object of interest with respect to a target object based on collision avoidance models. A collision risk estimation unit calculates collision risks between the object of interest and target object for each collision avoidance model. A collision deciding unit decides the presence or absence of a collision from the collision risks and feeds back a collision avoidance model correction value to the route prediction unit when it is determined that the collision occurs. A collision avoidance route selector selects any of the plurality of collision avoidance models in which the absence of collision is decided by the collision deciding unit, and selects a route of the collision avoidance model as a route for avoiding the collision between the objects. The route prediction unit performs a new route prediction using the collision avoidance model correction value.
    Type: Grant
    Filed: April 10, 2014
    Date of Patent: October 16, 2018
    Assignee: Mitsubishi Electric Corporation
    Inventors: Yuki Takabayashi, Hiroshi Kameda
  • Patent number: 10095945
    Abstract: An object recognition ingestion system is presented. The object ingestion system captures image data of objects, possibly in an uncontrolled setting. The image data is analyzed to determine if one or more a priori known canonical shape objects match the object represented in the image data. The canonical shape object also includes one or more reference PoVs indicating perspectives from which to analyze objects having the corresponding shape. An object ingestion engine combines the canonical shape object along with the image data to create a model of the object. The engine generates a desirable set of model PoVs from the reference PoVs, and then generates recognition descriptors from each of the model PoVs. The descriptors, image data, model PoVs, or other contextually relevant information are combined into key frame bundles having sufficient information to allow other computing devices to recognize the object at a later time.
    Type: Grant
    Filed: October 18, 2016
    Date of Patent: October 9, 2018
    Assignee: Nant Holdings IP, LLC
    Inventors: Kamil Wnuk, David McKinnon, Jeremi Sudol, Bing Song, Matheen Siddiqui
  • Patent number: 10095930
    Abstract: A monitoring method and system are disclosed. In one embodiment, a method includes monitoring the health of a person in a home. The method includes capturing a video sequence from a first camera disposed within the home, including capturing two-dimensional image data for the video sequence; receiving depth data corresponding to the two-dimensional data, and associating the depth data with the video sequence as metadata; setting a plurality of events to monitor associated with the person, the events defined to include actions captured from the first camera, at least a first event including the person's body being in a particular bodily position; and performing video content analysis on the video sequence to determine whether the events have occurred.
    Type: Grant
    Filed: May 10, 2016
    Date of Patent: October 9, 2018
    Assignee: AVIGILON FORTRESS CORPORATION
    Inventors: Peter L. Venetianer, Gary W. Myers, Zhong Zhang
  • Patent number: 10096116
    Abstract: The present invention provides a method and an apparatus for real time object segmentation of 3D image data based on local feature correspondences between a plurality of views. In order to reduce the computational effort of object segmentation of 3D image data, the segmentation process is performed based on correspondences relating to local features of the image data and a depth map. In this way, computational effort can be significantly reduced and the image segmentation can be carried out very fast.
    Type: Grant
    Filed: December 12, 2012
    Date of Patent: October 9, 2018
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Giovanni Cordara, Imed Bouazizi, Lukasz Kondrad
  • Patent number: 10092373
    Abstract: The present invention discloses orthodontic treatment planning which enables designing the desired smile line with the help of a computer workstation. A new lip trace feature enables cutting out the portion of a photograph corresponding to the area inside the lips. The 3D model of the teeth is then completely visible when overlaid with the facial photograph. One can use this feature to superimpose the patient photo over the 3-D model of the teeth to see how much intrusion or extrusion is needed to design the smile line. A three-dimensional model of the dentition, obtained by scanning the teeth, is used along with the patient's two-dimensional facial photograph to design the desired smile line for the patient by means of software instructions in the workstation.
    Type: Grant
    Filed: January 1, 2014
    Date of Patent: October 9, 2018
    Assignee: OraMetrix, Inc.
    Inventors: Charles L Abraham, Phillip Getto, Peer Sporbert, Markus Kaufmann
  • Patent number: 10088318
    Abstract: Techniques are provided which may be implemented using various methods and/or apparatuses in a mobile device to provide cradle insensitive inertial navigation. An example method for determining alignment changes between a first body frame and a second body frame according to the disclosure includes obtaining one or more images from an image capture device associated with the first body frame in response to detecting a change in alignment between the first body frame and the second body frame, determining a compensation information based on an analysis of the one or more images, and determining a position based on one or more inertial sensors and the compensation information.
    Type: Grant
    Filed: August 27, 2015
    Date of Patent: October 2, 2018
    Assignee: QUALCOMM Incorporated
    Inventor: Wentao Zhang
  • Patent number: 10089739
    Abstract: A method of image processing in a structured light imaging system is provided that includes receiving a captured image of a scene, wherein the captured image is captured by a camera of a projector-camera pair, and wherein the captured image includes a binary pattern projected into the scene by the projector, applying a filter to the rectified captured image to generate a local threshold image, wherein the local threshold image includes a local threshold value for each pixel in the rectified captured image, and extracting a binary image from the rectified captured image wherein a value of each location in the binary image is determined based on a comparison of a value of a pixel in a corresponding location in the rectified captured image to a local threshold value in a corresponding location in the local threshold image.
    Type: Grant
    Filed: June 4, 2014
    Date of Patent: October 2, 2018
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventor: Vikram VijayanBabu Appia
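    A minimal version of the binary-image extraction described above, assuming the local threshold image is a box-filtered mean of the (already rectified) captured image plus a small bias; the particular filter used in the patent is not specified here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def extract_binary_pattern(rectified, window=31, bias=5.0):
    """Per-pixel local threshold (local mean + bias), then compare each pixel
    of the rectified captured image against its own threshold value."""
    img = rectified.astype(np.float64)
    local_threshold = uniform_filter(img, size=window) + bias
    return (img > local_threshold).astype(np.uint8)
```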
  • Patent number: 10091490
    Abstract: In one implementation, a system for using a scan recommendation includes a receiver engine to receive a plurality of pictures of a three-dimensional (3D) object from a scanner, a model engine to generate a 3D model of the 3D object by aligning the plurality of pictures of the 3D object, an analysis engine to analyze the 3D model for a volume, a shape, and a color of the 3D object, wherein the volume, the shape, and the color analysis is used to generate scan recommendations, and a display engine to display information relating to the scan recommendations based on the volume, the shape, and the color analysis of the 3D model of the 3D object.
    Type: Grant
    Filed: October 8, 2015
    Date of Patent: October 2, 2018
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Divya Sharma, Daniel R. Tretter, Diogo Strube de Lima, Ilya Gerasimets
  • Patent number: 10089519
    Abstract: An image processing apparatus includes a plurality of correlation processing units configured to perform correlation relating to search evaluation in parallel, a plurality of search frame processing units configured to clip from an input image a partial image to be input to the plurality of correlation processing units according to a designated processing frame, and a search frame determination unit configured to determine the arrangement of the processing frame in a vertical direction based on the number of correlation processing units and the size of the processing frame in the vertical direction. The arrangement of the processing frame in the vertical direction is determined such that the product of an interval between the processing frames in the vertical direction and the number of correlation processing units is equal to or larger than the size of the processing frame in the vertical direction.
    Type: Grant
    Filed: May 20, 2016
    Date of Patent: October 2, 2018
    Assignee: Canon Kabushiki Kaisha
    Inventor: Yasushi Ohwa
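    The arrangement rule quoted above (interval between processing frames multiplied by the number of correlation processing units must be at least the frame height) reduces to a small calculation; the helper below simply solves it for the minimum interval. Names and the example values are illustrative.

```python
def min_vertical_interval(num_correlation_units, frame_height_px):
    """Smallest vertical spacing between processing frames that still satisfies
    interval * num_units >= frame_height."""
    return -(-frame_height_px // num_correlation_units)   # ceiling division

# Four parallel correlation units and a 64-pixel-high frame -> at least 16 rows apart.
print(min_vertical_interval(4, 64))
```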
  • Patent number: 10085008
    Abstract: The present disclosure relates to an image processing apparatus and method which can improve convenience of a service using a multi-viewpoint image. An image processing apparatus includes an encoding unit which encodes multi-viewpoint image data which is data of a multi-viewpoint image composed of a plurality of images with mutually different viewpoints, and an association processing unit which associates sequence information showing a relative position relationship of each viewpoint of the multi-viewpoint image, and eye number information showing a number of viewpoints of the multi-viewpoint image, with multi-viewpoint image encoding data in which the multi-viewpoint image data has been encoded by the encoding unit.
    Type: Grant
    Filed: September 2, 2014
    Date of Patent: September 25, 2018
    Assignee: SONY CORPORATION
    Inventors: Kengo Hayasaka, Hironori Mori, Katsuhisa Ito
  • Patent number: 10082245
    Abstract: The present invention relates to a safety photoelectric barrier for monitoring a protective field and to a corresponding method. A safety photoelectric barrier (100) comprises a single-sided transceiver bar with a housing (102), a plurality of transceiver modules (104) each having a radiation emitting unit (112) for emitting radiation towards a reference target (108), a radiation detecting unit (114) for detecting radiation incident on the transceiver module (104), and a signal processing unit for evaluating the detected radiation regarding a distance information and an intensity information and for generating a binary output signal indicating the presence or absence of an object within the protective field. A controller module (126) evaluates the binary output signals and generates a safety signal in response to the evaluated output signals. The radiation detecting unit comprises at least a first and a second photosensitive element (114) for redundantly evaluating the distance and intensity information.
    Type: Grant
    Filed: April 12, 2016
    Date of Patent: September 25, 2018
    Assignee: Rockwell Automation Safety AG
    Inventors: Eric Lutz, Carl Meinherz, Martin Hardegger
  • Patent number: 10082671
    Abstract: A head-mounted display includes: a distance measurement unit that measures a distance to a target present in a predetermined range; an imaging unit that images the predetermined range and acquires different preliminary information from the measured distance; and a distance decision unit that decides a distance to the target included in a second region based on the preliminary information acquired in regard to the target included in a measured first region when the first region and the second region different from the first region are included in the predetermined range in which the target is imaged.
    Type: Grant
    Filed: January 7, 2016
    Date of Patent: September 25, 2018
    Assignee: SEIKO EPSON CORPORATION
    Inventor: Naoto Aruga
  • Patent number: 10076703
    Abstract: A system for interfacing with an interactive program is provided, including: a computing device for executing the interactive program, the interactive program being stored on a memory, the computing device enabling user control and input; a display device for rendering image content associated with an interactive program, the display device being configured to be attached to the user; wherein the computing device is configured to receive data from an image capture device to determine and track a position of the display device; wherein the computing device is configured to define at least two interactive zones, each interactive zone being defined by a spatial region configured to change a function of the display device when the display device is moved between the at least two interactive zones; and wherein the computing device is configured to set the function of the display device.
    Type: Grant
    Filed: November 30, 2016
    Date of Patent: September 18, 2018
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Xiaodong Mao, Noam Rimon
  • Patent number: 10078913
    Abstract: Method for capturing an environment with objects, using a 3D camera, wherein the images of the cameras captured at different moments in time are used to generate 3D models, and wherein accuracy values are assigned to segments of the models allowing efficient refining of the models using the accuracy values.
    Type: Grant
    Filed: April 9, 2015
    Date of Patent: September 18, 2018
    Assignee: Alcatel Lucent
    Inventors: Patrice Rondao Alface, Vinay Namboodiri, Maarten Aerts
  • Patent number: 10075695
    Abstract: An information processing method and device are disclosed. The information processing method is applied to an information processing device in which a 3D map and a spatial topological structure management-based feature library created in advance for a certain environment are contained, and different users in the certain environment can determine their location. The method includes acquiring a first image taken by a first user; extracting one or more first feature points in the first image to obtain first feature descriptors; obtaining 3D locations of the first feature points based on 3D location of the first user, the first image, and the feature library; determining feature descriptors to be updated based on 3D location of the first user, the 3D locations of the first feature points, the first feature descriptors corresponding to the first feature points, and existing feature descriptors in the feature library; and updating the feature library.
    Type: Grant
    Filed: December 3, 2014
    Date of Patent: September 11, 2018
    Assignee: LENOVO (BEIJING) CO., LTD.
    Inventor: Hao Shen
  • Patent number: 10074026
    Abstract: A vehicle type recognition method based on a laser scanner is provided, the method includes detecting that a vehicle to be checked has entered into a recognition area; causing a laser scanner to move relative to the vehicle to be checked; scanning the vehicle to be checked using the laser scanner on a column-by-column basis, and storing and splicing data of each column obtained by scanning to form a three-dimensional image of the vehicle to be checked, wherein a lateral width value is specified for each single column of data; specifying a height difference threshold; and determining a height difference between the height at the lowest position of the vehicle to be checked in data of column N and the height at the lowest position of the vehicle to be checked in data of a specified number of columns preceding and/or succeeding column N.
    Type: Grant
    Filed: December 17, 2015
    Date of Patent: September 11, 2018
    Assignee: NUCTECH COMPANY LIMITED
    Inventors: Shangmin Sun, Yanwei Xu, Qiang Li, Weifeng Yu, Yu Hu
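    A small sketch of the height-difference test described above, assuming each scanned column has already been reduced to the lowest vehicle height it contains; columns where that height jumps by more than the threshold (for instance at a tractor/trailer gap) are flagged. The single-neighbour comparison and the names are simplifications of the patented multi-column comparison.

```python
import numpy as np

def columns_with_height_steps(column_lowest_heights_m, threshold_m):
    """Indices of columns whose lowest vehicle height differs from the preceding
    column by more than the specified height-difference threshold."""
    h = np.asarray(column_lowest_heights_m, dtype=np.float64)
    steps = np.abs(np.diff(h))
    return np.nonzero(steps > threshold_m)[0] + 1
```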
  • Patent number: 10070577
    Abstract: A method of using an unmanned agricultural robot to generate an anticipatory geospatial data map of the positions of annual crop rows planted within a perimeter of an agricultural field, the method including the step of creating a geospatial data map of an agricultural field by plotting actual annual crop row positions in a portion of the geospatial data map that corresponds to a starting point observation window, and filling in a remainder of the geospatial data map with anticipated annual crop row positions corresponding to the annual crop rows outside of the starting point observation window, and refining the geospatial data map by replacing the anticipated annual crop row positions with measured actual annual crop row positions when an unexpected obstacle is encountered.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: September 11, 2018
    Assignee: RowBot Systems LLC
    Inventor: Kent Cavender-Bares
  • Patent number: 10068339
    Abstract: An image processing device includes an acquiring unit that acquires plural images in which an object is captured from multiple directions; a calculation unit that calculates a value representing a quality of the images; a determining unit that determines a process for measuring a surface shape of the object depending on the quality of the images; and an execution unit that executes the process which is determined.
    Type: Grant
    Filed: September 24, 2015
    Date of Patent: September 4, 2018
    Assignee: TOPPAN PRINTING CO., LTD.
    Inventors: Hiroki Unten, Tatsuya Ishii
  • Patent number: 10066958
    Abstract: A user terminal device is provided. The user terminal device includes a display configured to display a map screen, a detector configured to sense a user manipulation, and a controller configured to, when a location and a direction are determined by the user manipulation on the map screen, display at least one photo image corresponding to the location and the direction of the user manipulation on the map screen.
    Type: Grant
    Filed: August 19, 2014
    Date of Patent: September 4, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jong-woo Jung, In-sik Myung, Taik-heon Rhee, Dong-bin Cho
  • Patent number: 10068385
    Abstract: Techniques are provided for generation of synthetic 3-dimensional object image variations for training of recognition systems. An example system may include an image synthesizing circuit configured to synthesize a 3D image of the object (including color and depth image pairs) based on a 3D model. The system may also include a background scene generator circuit configured to generate a background for each of the rendered image variations. The system may further include an image pose adjustment circuit configured to adjust the orientation and translation of the object for each of the variations. The system may further include an illumination and visual effect adjustment circuit configured to adjust illumination of the object and the background for each of the variations, and to further adjust visual effects of the object and the background for each of the variations based on application of simulated camera parameters.
    Type: Grant
    Filed: December 15, 2015
    Date of Patent: September 4, 2018
    Assignee: Intel Corporation
    Inventors: Amit Bleiweiss, Chen Paz, Ofir Levy, Itamar Ben-Ari, Yaron Yanai
  • Patent number: 10060728
    Abstract: A three-dimensional object-measurement device comprising: a laser beam irradiation unit which irradiates a laser beam; a focal length changing unit which changes a focal length of the laser beam; an imaging unit which images a reflected light which is a reflection of the laser beam on an object, and generates image data; a control unit which changes the focal length by controlling the focal length changing unit, irradiates the same point on an object to be measured with the laser beam a plurality of times while varying the focal length, and makes the imaging unit generate the plurality of image data; and a distance calculation unit which calculates a distance to the object to be measured by processing the plurality of image data.
    Type: Grant
    Filed: April 5, 2013
    Date of Patent: August 28, 2018
    Assignee: NEC CORPORATION
    Inventor: Fujio Okumura
  • Patent number: 10062097
    Abstract: Devices, systems, and methods include a three-dimensional (3D) scanning element, an electronic data storage configured to store a database including fields for 3D scan data and demographic information, a processor, and a user interface. In an example, the processor obtains 3D scan data of a body part of a subject from the 3D scanning element, analyzes the 3D scan data for incomplete regions, generates a composite 3D image of 3D scan data from the database based on similarities of demographic information, and overlays composite 3D image regions corresponding to incomplete regions on the 3D scan data.
    Type: Grant
    Filed: May 31, 2016
    Date of Patent: August 28, 2018
    Assignee: NIKE, Inc.
    Inventor: Christopher Andon
  • Patent number: 10063838
    Abstract: An information processor includes: a similarity data generation portion generating similarity data that represents the calculated similarity to the image in the reference block in association with a position within the search range; a result evaluation portion detecting a position with a maximum similarity value for each piece of the similarity data and screening the detection result by making a given evaluation of the similarity; a depth image generation portion finding a parallax for each of the reference blocks using the detection result validated as a result of screening, calculating a position of a subject in a depth direction on a basis of the parallax, and generating a depth image by associating the position of the subject in the depth direction with an image plane; and an output information generation section performing given information processing on a basis of the subject position in a three-dimensional space using the depth image and outputting the result of information processing.
    Type: Grant
    Filed: July 17, 2015
    Date of Patent: August 28, 2018
    Assignees: Sony Corporation, Sony Interactive Entertainment Inc.
    Inventors: Hiroyuki Segawa, Akio Ohba, Tetsugo Inada, Hirofumi Okamoto, Atsushi Kimura, Takashi Kohashi, Takashi Itou
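    A block-matching sketch in the spirit of the abstract above: a normalized cross-correlation similarity evaluated over a horizontal search range, a screening test that keeps only clear peaks, and depth computed from the surviving parallax as Z = f·B/d. The similarity measure, screening thresholds, and function names are assumptions, not the patented evaluation.

```python
import numpy as np

def block_similarity(ref_block, cand_block):
    """Zero-mean normalised cross-correlation as the similarity measure."""
    a = ref_block - ref_block.mean()
    b = cand_block - cand_block.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9
    return float((a * b).sum() / denom)

def best_parallax(ref, cmp_img, y, x, block=8, search=32,
                  min_score=0.5, min_margin=0.05):
    """Scan a horizontal search range and keep the peak only if it passes a simple
    screening test (high enough and clearly above the runner-up).
    Assumes the block fits inside both images at the given coordinates."""
    ref_block = ref[y:y + block, x:x + block].astype(np.float64)
    scores = []
    for d in range(search + 1):
        if x - d < 0:
            break
        cand = cmp_img[y:y + block, x - d:x - d + block].astype(np.float64)
        scores.append(block_similarity(ref_block, cand))
    if len(scores) < 2:
        return None
    scores = np.array(scores)
    order = np.argsort(scores)
    best, runner_up = scores[order[-1]], scores[order[-2]]
    if best < min_score or best - runner_up < min_margin:
        return None                      # screened out as unreliable
    return int(order[-1])                # parallax in pixels

def depth_from_parallax(parallax_px, focal_px, baseline_m):
    return focal_px * baseline_m / max(parallax_px, 1e-6)
```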
  • Patent number: 10063833
    Abstract: A method of controlling a stereo convergence and an image processor using the method are provided. The method includes: detecting objects from a stereo image; grouping the detected objects into one or more groups according to setup specification information; and moving at least one of left and right eye images of a stereo image of the grouping or a setup area including the grouping in a horizontal or vertical direction or in the horizontal and vertical directions.
    Type: Grant
    Filed: July 30, 2014
    Date of Patent: August 28, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Irina Kim
  • Patent number: 10062005
    Abstract: A method of matching images A and B of the same scene taken at different locations in the scene is provided by matching correspondence points in the images by evaluating pixel characteristics from nearby regions using a constellation of image chips and utilizing joint information across multiple resolution levels in a probability framework. Since each image chip is small, each chip in one image potentially can be matched with a number of chips in the other image. The accumulation of evidence (probability) over all image chips within the constellation over multiple resolution levels reduces the ambiguity. The use of a constellation of image chips removes the requirement, present in most visual point matching techniques, to use special feature points (e.g. corner points) as the correspondence points.
    Type: Grant
    Filed: March 17, 2015
    Date of Patent: August 28, 2018
    Assignee: Teledyne Scientific & Imaging, LLC
    Inventor: Austin I. Eliazar
  • Patent number: 10057562
    Abstract: A canvas generation system generates a canvas view of a scene based on a set of original camera views depicting the scene, for example to recreate a scene in virtual reality. Canvas views can be generated based on a set of synthetic views generated from a set of original camera views. Synthetic views can be generated, for example, by shifting and blending relevant original camera views based on an optical flow across multiple original camera views. An optical flow can be generated using an iterative method which individually optimizes the optical flow vector for each pixel of a camera view and propagates changes in the optical flow to neighboring optical flow vectors.
    Type: Grant
    Filed: April 11, 2016
    Date of Patent: August 21, 2018
    Assignee: Facebook, Inc.
    Inventors: Brian Keith Cabral, Forrest Samuel Briggs, Albert Parra Pozo, Peter Vajda
  • Patent number: 10048687
    Abstract: A system for operating a mobile platform using distance and speed information and methods for making and using same. The system includes a time-of-flight sensor and an ultrasound sensor for measuring a distance between the mobile platform and an obstacle and a processor for integrating the sensed measurements and controlling the mobile platform. The distance measured by the time-of-flight sensor can be determined via a phase shift method. The time-of-flight sensor can also be used to measure a speed of the mobile platform by imaging a reference point at different times and using stereopsis to ascertain the displacement. Distances and speeds measured using the time-of-flight and ultrasound sensors can be integrated to improve measurement accuracy. The systems and methods are suitable for use in controlling any type of mobile platform, including unmanned aerial vehicles, and can advantageously be applied to avoid collisions between the mobile platform and the obstacle.
    Type: Grant
    Filed: June 9, 2016
    Date of Patent: August 14, 2018
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Cong Zhao, Ketan Tang, Guyue Zhou
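    The phase-shift distance mentioned above reduces to a one-line formula for a continuous-wave time-of-flight sensor; the sketch below also notes the unambiguous range that comes with it. Parameter names are illustrative.

```python
import math

C = 299_792_458.0   # speed of light, m/s

def tof_distance(phase_shift_rad, modulation_hz):
    """Continuous-wave time-of-flight distance: d = c * dphi / (4 * pi * f_mod).
    Unambiguous only up to c / (2 * f_mod)."""
    return C * phase_shift_rad / (4.0 * math.pi * modulation_hz)

# A quarter-cycle phase shift at 10 MHz modulation corresponds to roughly 3.75 m
# (the unambiguous range at 10 MHz is about 15 m).
print(tof_distance(math.pi / 2, 10e6))
```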
  • Patent number: 10048763
    Abstract: Disclosed herein are techniques for scaling and translating gestures such that the applicable gestures for control may vary depending on the user's distance from a gesture-based system. The techniques for scaling and translation may take the varying distances from which a user interacts with components of the gesture-based system, such as a computing environment or capture device, into consideration with respect to defining and/or recognizing gestures. In an example embodiment, the physical space is divided into virtual zones of interaction, and the system may scale or translate a gesture based on the zones. A set of gesture data may be associated with each virtual zone such that gestures appropriate for controlling aspects of the gesture-based system may vary throughout the physical space.
    Type: Grant
    Filed: August 18, 2014
    Date of Patent: August 14, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Otto G. Berkes, Steven Bathiche, John Clavin, Ian LeGrow, Joseph Reginald Scott Molnar
  • Patent number: 10049265
    Abstract: Methods and apparatus to monitor environments are disclosed. An example method includes triggering a two-dimensional recognition analysis, in connection with a first frame of data, on two-dimensional data points representative of an object detected in an environment, the triggering based on satisfying a first trigger event, the first trigger event being one of (1) a distance between the object and a sensor of the environment satisfying a threshold distance, or (2) elapsing of a time interval. In response to determining that the object is recognized as a person in the first frame, triggering the two-dimensional recognition analysis in connection with a second frame, the second frame subsequent to the first frame, the two-dimensional recognition analysis of the second frame performed on two-dimensional data points representative of a location in the second frame corresponding to the location of the person in the first frame.
    Type: Grant
    Filed: November 18, 2016
    Date of Patent: August 14, 2018
    Assignee: The Nielsen Company (US), LLC
    Inventors: Morris Lee, Alejandro Terrazas
  • Patent number: 10051286
    Abstract: Devices and methods for generating a three-dimensional video stream starting from a sequence of video images. The sequence includes a first view (V0), at least one second view (V1) of a scene, and a depth map (D0) of said first view (V0), or a disparity map of said at least one second view (V1) with respect to the first view (V0). At least one occlusion image (O1) including the occluded pixels of said second view (V1) is obtained by starting from said depth map (D0) or from said disparity map. A compacted occlusion image (OC1) is generated by spatially repositioning said occluded pixels of said at least one occlusion image (O1), so as to move said pixels closer to one another. The three-dimensional video stream may include said first view (V0), said depth map (D0) or said disparity map, and said at least one compacted occlusion image (OC1).
    Type: Grant
    Filed: May 7, 2013
    Date of Patent: August 14, 2018
    Assignee: S.I.SV.EL SOCIETA' ITALIANA PER LO SVILUPPO DELL'ELETTRONICA S.P.A.
    Inventors: Marco Grangetto, Maurizio Lucenteforte
  • Patent number: 10049454
    Abstract: According to examples of the presently disclosed subject matter, an active triangulation system includes an active triangulation setup and a calibration module. The active triangulation setup includes a projector and a sensor. The projector is configured to project a structured light pattern that includes a repeating structure of a plurality of unique feature types and a plurality of markers distributed in the projected structured light pattern, where an epipolar distance between any two epipolar lines which are associated with an appearance in the image of any two respective markers is greater than a distance between any two distinguishable epipolar lines. The sensor is configured to capture an image of a reflected portion of the projected structured light. The calibration module is configured to determine an epipolar field for the active triangulation setup according to locations of the markers in the image, and to calibrate the active triangulation setup.
    Type: Grant
    Filed: June 24, 2015
    Date of Patent: August 14, 2018
    Assignee: MANTIS VISION LTD.
    Inventors: Michael Slutsky, Yonatan Samet, Eyal Gordon
  • Patent number: 10043100
    Abstract: Techniques are disclosed for generating logical sensors for an image driver. The image driver monitors values corresponding to at least a first feature in one or more regions of a first image in a stream of images received by a first sensor. The image driver identifies at least a first correlation between at least a first and second value of the monitored values. The image driver generates a logical sensor based on the identified correlations. The logical sensor samples one or more features corresponding to the identified correlation from a second image in the stream of images.
    Type: Grant
    Filed: April 5, 2016
    Date of Patent: August 7, 2018
    Assignee: Omni AI, Inc.
    Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb
  • Patent number: 10037335
    Abstract: Methods and systems related to the detection of 3-D video content are disclosed herein. Specifically, a video image file may be analyzed in order to determine if it contains 3-D stereoscopic video content. An assumption is made regarding the possible 3-D format of the video image file. The assumption could be that the video frame includes a left portion and a right portion where each portion contains respective stereoscopic image perspectives. Image analysis algorithms could be used to determine if the left and right portions are sufficiently similar to confirm the assumption. If so, an indication could be carried out that could include a change to metadata or a similar change to associated video image file information. If the left and right portions of the video frame are not sufficiently similar, another analysis may be performed to test a different 3-D file format assumption.
    Type: Grant
    Filed: July 2, 2015
    Date of Patent: July 31, 2018
    Assignee: GOOGLE LLC
    Inventors: Samuel Kvaalen, Jonathan Huang, Peter Bradshaw
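    A minimal version of the similarity test described above for the side-by-side assumption: split the frame into left and right halves and check whether they are correlated strongly enough to be two stereoscopic views. The correlation measure and threshold are stand-ins; the abstract does not commit to a particular image analysis algorithm.

```python
import numpy as np

def looks_side_by_side(frame, min_correlation=0.8):
    """Split a frame into left/right halves and test whether they are similar
    enough to be the two views of a side-by-side stereoscopic layout."""
    h, w = frame.shape[:2]
    left = frame[:, : w // 2].astype(np.float64).ravel()
    right = frame[:, w // 2 : 2 * (w // 2)].astype(np.float64).ravel()
    corr = np.corrcoef(left, right)[0, 1]
    return bool(corr >= min_correlation)

# The same test could be repeated on top/bottom halves for the other common layout.
```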
  • Patent number: 10033928
    Abstract: Images may be obtained using a moving camera comprised of two or more rigidly mounted image sensors. Camera motion may change camera orientation when different portions of an image are captured. Pixel acquisition time may be determined based on image exposure duration and position of the pixel in the image array (pixel row index). Orientation of the sensor at the pixel acquisition time instance may be determined. Image transformation may be performed wherein a given portion of the image may be associated with a respective transformation characterized by the corrected sensor orientation. In some implementations of panoramic image acquisition, multiple source images may be transformed to, e.g., an equirectangular plane, using sensor orientation that is corrected for the time of pixel acquisition. Use of orientation correction may improve quality of stitching by, e.g., reducing contrast of border areas between portions of the transformed image obtained by different image sensors.
    Type: Grant
    Filed: October 29, 2015
    Date of Patent: July 24, 2018
    Assignee: GoPro, Inc.
    Inventor: Antoine Meler
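    A small sketch of the per-row timing correction described above: the acquisition time of a pixel is offset from the frame start by its row index times the readout duration, and the sensor orientation at that instant is interpolated from orientation samples. Linear interpolation of a single yaw angle is a simplification of the actual orientation handling.

```python
import numpy as np

def pixel_acquisition_time(frame_start_s, row_index, num_rows, readout_s):
    """Rolling-shutter acquisition time of a pixel, assuming rows are read out
    linearly over `readout_s` seconds after the frame start."""
    return frame_start_s + (row_index / num_rows) * readout_s

def orientation_at(t, sample_times_s, sample_yaw_deg):
    """Sensor orientation at time t, linearly interpolated from orientation samples."""
    return float(np.interp(t, sample_times_s, sample_yaw_deg))

# Row 540 of a 1080-row frame read out over 10 ms, captured while panning:
t = pixel_acquisition_time(0.0, 540, 1080, 0.010)
print(orientation_at(t, [0.0, 0.010], [0.0, 3.0]))     # about 1.5 degrees of pan
```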