Range Or Distance Measuring Patents (Class 382/106)
  • Patent number: 10268926
    Abstract: The present application discloses a method and an apparatus for processing point cloud data. The method in an embodiment comprises: presenting a to-be-labeled point cloud frame and a camera image formed by photographing the same scene at the same moment as the point cloud frame; determining, in response to a user's operation of selecting an object in the point cloud frame, an area encompassing the selected object in the point cloud frame; projecting the area from the point cloud frame onto the camera image to obtain a projected area in the camera image; and adding a mark in the projected area so that the user can label the selected object in the point cloud frame according to the mark indicating the object in the camera image. This implementation can assist labeling personnel in rapidly and correctly labeling an object in a point cloud frame.
    Type: Grant
    Filed: January 19, 2017
    Date of Patent: April 23, 2019
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Kaiwen Feng, Zhuo Chen, Bocong Liu, Yibing Liang, Yu Ma, Haifeng Wang
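The projection step in patent 10268926 above, mapping a selected 3-D region of the point cloud into the camera image to obtain the projected area, reduces to a standard pinhole projection. A minimal sketch in Python, assuming a known intrinsic matrix K and a LiDAR-to-camera rotation/translation; all calibration values below are hypothetical:

```python
import numpy as np

def project_region_to_image(points_lidar, K, R, t):
    """Project 3-D points (N x 3, LiDAR frame) into the image and return the
    2-D bounding box of the projected area: (u_min, v_min, u_max, v_max)."""
    pts_cam = points_lidar @ R.T + t          # LiDAR frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]      # keep points in front of the camera
    homog = pts_cam @ K.T                     # pinhole projection
    uv = homog[:, :2] / homog[:, 2:3]
    return (*uv.min(axis=0), *uv.max(axis=0))

# Hypothetical calibration values, for illustration only.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                                 # LiDAR-to-camera rotation
t = np.array([0.0, -0.1, 0.05])               # LiDAR-to-camera translation (m)

selected_points = np.random.rand(200, 3) * [2.0, 1.0, 1.0] + [0.0, 0.0, 8.0]
print(project_region_to_image(selected_points, K, R, t))
```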
  • Patent number: 10256923
    Abstract: Provided are a method and a device for generating a MIMO test signal configured to test the performance of a MIMO wireless terminal. With the method, a plurality of space propagation matrices of the MIMO testing system are acquired by performing a phase shift transformation on a plurality of calibration matrices of the MIMO testing system; the target space propagation matrix whose isolation degree meets the preset condition is determined according to the maximum amplitude value of the elements in each inverse matrix of the plurality of space propagation matrices; and the transmitting signal for test is generated by calculation from the pre-calculated throughput testing signal and the target calibration matrix corresponding to the target space propagation matrix.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: April 9, 2019
    Assignee: GENERAL TEST SYSTEMS INC.
    Inventors: Yihong Qi, Wei Yu, Penghui Shen
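The selection step described in patent 10256923 above, applying phase shifts to a calibration matrix and keeping the candidate whose inverse has the best-behaved element amplitudes, might be sketched as follows. The phase-shift set, the 2x2 matrix size, and the acceptance threshold are illustrative assumptions, not values from the patent:

```python
import itertools
import numpy as np

def candidate_propagation_matrices(calib, phases):
    """Apply per-port phase shifts to a 2x2 calibration matrix, yielding
    candidate space propagation matrices."""
    for p1, p2 in itertools.product(phases, repeat=2):
        shift = np.diag([np.exp(1j * p1), np.exp(1j * p2)])
        yield calib @ shift

def pick_target_matrix(calib, phases, max_inverse_amplitude=2.0):
    """Pick the candidate whose inverse has the smallest maximum element
    amplitude (used here as a proxy for isolation), if it meets the preset
    condition."""
    best, best_amp = None, np.inf
    for cand in candidate_propagation_matrices(calib, phases):
        amp = np.abs(np.linalg.inv(cand)).max()
        if amp < best_amp:
            best, best_amp = cand, amp
    return (best, best_amp) if best_amp <= max_inverse_amplitude else (None, best_amp)

calib = np.array([[1.0 + 0.1j, 0.3], [0.25j, 0.9]])       # illustrative calibration matrix
phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)
target, amp = pick_target_matrix(calib, phases)
print(amp)
```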
  • Patent number: 10242504
    Abstract: A head-mounted display device includes an image display having an optical element to transmit light from an outside scene, a display element to display an image, a camera, a memory configured to store data of a marker image, and one or more processors. The one or more processors display an image on the image display and derive at least one of a camera parameter of the camera and a spatial relationship, based at least on an image that is captured by the camera in a condition that allows a user to visually perceive that the marker image displayed by the image display and a real marker corresponding to the marker image align at least partially with each other. The real marker includes a first set of feature elements within a rectangle, and the marker image includes a second set of feature elements.
    Type: Grant
    Filed: September 6, 2018
    Date of Patent: March 26, 2019
    Assignee: SEIKO EPSON CORPORATION
    Inventors: Jia Li, Guoyi Fu
  • Patent number: 10229502
    Abstract: A depth detection apparatus is described which has a memory and computation logic. The memory stores frames of raw time-of-flight sensor data received from a time-of-flight sensor, the frames having been captured by a time-of-flight camera in the presence of motion such that different ones of the frames were captured using different locations of the camera and/or with different locations of an object in a scene depicted in the frames. The computation logic has functionality to compute a plurality of depth maps from the stream of frames, whereby each frame of raw time-of-flight sensor data contributes to more than one depth map.
    Type: Grant
    Filed: February 3, 2016
    Date of Patent: March 12, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Amit Adam, Sebastian Nowozin, Omer Yair, Shai Mazor, Michael Schober
  • Patent number: 10222876
    Abstract: A system includes a first sensor configured to measure a location of a first device, a second sensor configured to measure an orientation of a second device, a display, and a processor. The processor is configured to control the first sensor to start a first measurement, calculate distances between the location of the first device and each of a plurality of installation locations associated with each of a plurality of objects, each of the plurality of objects being arranged virtually at each of a plurality of installation locations in a real space, control the second sensor to start a second measurement when a distribution of the distances indicates that any of the plurality of installation locations of the plurality of objects is included in a given range from the first device, and control the display to display an object according to results of the first measurement and the second measurement.
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: March 5, 2019
    Assignee: FUJITSU LIMITED
    Inventor: Susumu Koga
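A minimal sketch of the triggering logic in patent 10222876 above: the second (orientation) measurement starts only when the distribution of distances shows that some object's installation location falls within a given range of the device. The coordinates and range value are illustrative assumptions:

```python
import math

def distances_to_installations(device_xyz, installation_points):
    """Distances from the measured device location to each installation location."""
    return [math.dist(device_xyz, p) for p in installation_points]

def should_start_orientation_measurement(device_xyz, installation_points, given_range=5.0):
    """Start the second (orientation) measurement when any installation location
    lies within the given range of the first (location) measurement."""
    return any(d <= given_range for d in distances_to_installations(device_xyz, installation_points))

installations = [(12.0, 3.0, 0.0), (2.5, 1.0, 0.0), (40.0, -7.0, 0.0)]
print(should_start_orientation_measurement((0.0, 0.0, 0.0), installations))  # True: ~2.7 m away
```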
  • Patent number: 10207410
    Abstract: A system and apparatus for navigating and tracking a robotic platform includes a non-contact velocity sensor module set positioned on the robotic platform for measuring a velocity of the robotic platform relative to a target surface. The non-contact velocity sensor module set may include a coherent light source that is emitted towards the target surface and reflected back to the coherent light source. Measuring the change in intensity of the reflected coherent light may be used to determine the velocity of the robotic platform based on its relationship with the principles of a Doppler frequency shift. A communication unit may also be utilized to transmit data collected from the non-contact velocity sensor set to a computer for data processing. A computer is then provided on the robotic platform to process data collected from the non-contact velocity sensor set.
    Type: Grant
    Filed: September 15, 2016
    Date of Patent: February 19, 2019
    Assignee: Physical Optics Corporation
    Inventors: Paul Shnitser, David Miller, Christopher Thad Ulmer, Volodymyr Romanov, Victor Grubsky
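The velocity recovery in patent 10207410 above rests on the laser Doppler relation: light of wavelength λ reflected from a surface moving at speed v along the beam is shifted by f_d = 2v/λ. A small sketch with illustrative numbers:

```python
def velocity_from_doppler_shift(doppler_shift_hz, wavelength_m):
    """Relative speed along the beam from the measured Doppler frequency shift,
    using f_d = 2*v/lambda  =>  v = f_d * lambda / 2."""
    return doppler_shift_hz * wavelength_m / 2.0

# Illustrative numbers: a 1550 nm source and a 1.29 MHz measured shift
# correspond to roughly 1 m/s along the beam.
print(velocity_from_doppler_shift(1.29e6, 1550e-9))  # ~1.0 m/s
```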
  • Patent number: 10205859
    Abstract: Mounting device 11 captures an image of reference mark 25 at a first position at which mounting head 22 is stopped under first imaging conditions, and captures a first image of component 60 under second imaging conditions. Then, mounting device 11 moves mounting head 22 and captures an image of reference mark 25 at a second position at which mounting head 22 is stopped under the first imaging conditions, and captures a second image of component 60 under the second imaging conditions. Further, mounting device 11 generates an image of component 60 picked up by mounting head 22 based on the positional relationship of reference mark 25, using the first image and the second image.
    Type: Grant
    Filed: July 6, 2015
    Date of Patent: February 12, 2019
    Assignee: FUJI CORPORATION
    Inventors: Masafumi Amano, Kazuya Kotani
  • Patent number: 10198830
    Abstract: An information processing apparatus includes a correlation unit that correlates distance information, indicating a distance to an emission position of electromagnetic waves emitted in a shooting direction of a plurality of image pickup units, with a first pixel in a first image that constitutes the images taken by the image pickup units, the distance information being obtained based on reflected waves of the electromagnetic waves and the first pixel corresponding to the emission position of the electromagnetic waves, and a generation unit that generates a parallax image by using the distance information correlated with the first pixel for parallax computation of pixels in a second image that constitutes the images.
    Type: Grant
    Filed: December 22, 2014
    Date of Patent: February 5, 2019
    Assignee: Ricoh Company, Ltd.
    Inventors: Hiroyoshi Sekiguchi, Soichiro Yokota, Shuichi Suzuki, Shin Aoki, Haike Guan, Jun Yoshida, Mitsuru Nakajima, Hideomi Fujimoto
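The link patent 10198830 above draws between a measured distance and parallax computation is the usual stereo relation d = f·B/Z (disparity d in pixels, focal length f in pixels, baseline B, depth Z). A sketch that seeds disparity values at the pixels correlated with the emission positions; the calibration values and returns are assumptions:

```python
def disparity_from_distance(distance_m, focal_px, baseline_m):
    """Stereo disparity (pixels) implied by a measured distance: d = f*B/Z."""
    return focal_px * baseline_m / distance_m

def seed_disparity_map(width, height, lidar_hits, focal_px=800.0, baseline_m=0.12):
    """Build a sparse disparity map from (u, v, distance) returns that were
    correlated with pixels of the first (reference) image."""
    disparity = [[None] * width for _ in range(height)]
    for u, v, dist in lidar_hits:
        disparity[v][u] = disparity_from_distance(dist, focal_px, baseline_m)
    return disparity

hits = [(320, 240, 12.0), (100, 250, 30.0)]       # illustrative returns
d = seed_disparity_map(640, 480, hits)
print(d[240][320], d[250][100])                    # 8.0 px, 3.2 px
```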
  • Patent number: 10191183
    Abstract: PROBLEM: Efficiently generating a digital terrain model (DTM) having high elevation accuracy and high point density, suitable for controlling pavement milling machines during road repairs. SOLUTION: A combination of motorized levelling and a Stop-Go mobile laser scanning system, including a train of three vehicles which are at a standstill during measurements and move in unison between measurements. The middle vehicle carries a laser scanner, an elevation sight, and a GNSS receiver. The front and rear vehicles each carry a levelling rod; the front vehicle also carries a GNSS receiver. During a measurement cycle, the laser scanner generates a point cloud, while the GNSS positions of the middle and front vehicles and the elevations at the respective positions of the front and rear vehicles are determined. After each measurement cycle, the vehicle train moves until the rear vehicle halts at the previous GNSS position of the front vehicle, and so on. When all measurement cycles are completed, the collected data is integrated and transformed into a DTM.
    Type: Grant
    Filed: July 8, 2015
    Date of Patent: January 29, 2019
    Assignee: R.O.G. s.r.o.
    Inventors: Marek Prikryl, Lukas Kutil, Vitezslav Obr
  • Patent number: 10192332
    Abstract: A method of controlling display of object data includes calculating distances from a terminal to the positions of multiple items of the object data, determining, by a processor, an area based on the distribution of the calculated distances, and displaying object data associated with a position in the determined area on a screen.
    Type: Grant
    Filed: March 17, 2016
    Date of Patent: January 29, 2019
    Assignee: FUJITSU LIMITED
    Inventor: Susumu Koga
  • Patent number: 10194079
    Abstract: A vehicle surveillance system is disclosed and includes a plurality of image capturing units, an image processing unit, and a display unit for monitoring a position of at least one target around a vehicle and measuring a distance between the at least one target and the vehicle. The vehicle surveillance system utilizes a space domain determination module, a time domain determination module, and a ground surface elimination module to transform original images of the target around the vehicle into a bird's-eye-view panorama, and further detects variation of an optical pattern incident onto the target through a light beam emitted by a light source, so as to effectively remind the driver of the on-going vehicle of the position of the target and the distance, and to detect and capture, in real time, an image of any person approaching the vehicle, thereby achieving driving safety and securing lives and personal property.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: January 29, 2019
    Assignee: H.P.B. OPTOELECTRONIC CO., LTD.
    Inventors: Hsuan-Yueh Hsu, Szu-Hong Wang
  • Patent number: 10180729
    Abstract: A human machine interface (HMI) that minimizes the number of gestures required for operation control, in which user-intended operation commands are accurately recognized by dividing a vehicle interior into a plurality of regions. The HMI receives an input of a gesture according to each region and controls any one device according to the gesture. Convenience for the user is improved because the gesture may be performed with minimal region restriction: when the user performs a gesture in a boundary portion between two regions, or even across multiple regions, the user's intention is recognized by identifying the operation state and operation pattern of the electronic device designated for each region.
    Type: Grant
    Filed: December 8, 2014
    Date of Patent: January 15, 2019
    Assignee: Hyundai Motor Company
    Inventor: Hyungsoon Park
  • Patent number: 10169680
    Abstract: Techniques for identifying and labeling distinct objects within 3-D images of environments in which vehicles operate, to thereby generate training data used to train models that autonomously control and/or operate vehicles, are disclosed. A 3-D image may be presented from various perspective views (in some cases, dynamically), and/or may be presented with a corresponding 2-D environment image in a side-by-side and/or a layered manner, thereby allowing a user to more accurately identify groups/clusters of data points within the 3-D image that represent distinct objects. Automatic identification/delineation of various types of objects depicted within 3-D images, automatic labeling of identified/delineated objects, and automatic tracking of objects across various frames of a 3-D video are disclosed. A user may modify and/or refine any automatically generated information. Further, at least some of the techniques described herein are equally applicable to 2-D images.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: January 1, 2019
    Assignee: LUMINAR TECHNOLOGIES, INC.
    Inventors: Prateek Sachdeva, Dmytro Trofymov
  • Patent number: 10169678
    Abstract: Techniques for identifying and labeling distinct objects within 3-D images of environments in which vehicles operate, to thereby generate training data used to train models that autonomously control and/or operate vehicles, are disclosed. A 3-D image may be presented from various perspective views (in some cases, dynamically), and/or may be presented with a corresponding 2-D environment image in a side-by-side and/or a layered manner, thereby allowing a user to more accurately identify groups/clusters of data points within the 3-D image that represent distinct objects. Automatic identification/delineation of various types of objects depicted within 3-D images, automatic labeling of identified/delineated objects, and automatic tracking of objects across various frames of a 3-D video are disclosed. A user may modify and/or refine any automatically generated information. Further, at least some of the techniques described herein are equally applicable to 2-D images.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: January 1, 2019
    Assignee: LUMINAR TECHNOLOGIES, INC.
    Inventors: Prateek Sachdeva, Dmytro Trofymov
  • Patent number: 10168523
    Abstract: An image generating system generates a focal image of a target object on a virtual focal plane located between a plurality of illuminators and an image sensor. For each of a plurality of pixels constituting the focal image, and for each of the positions of the plurality of illuminators, the system calculates the position of a target point, namely the point at which the straight line connecting the position of the pixel on the focal plane and the position of the illuminator intersects the light receiving surface of the image sensor; calculates the luminance value of the target point in the image captured under that illuminator, on the basis of the position of the target point; and applies the luminance value of the target point to the luminance value of the pixel. The focal image of the target object on the focal plane is generated by using the result of applying the luminance values at each of the plurality of pixels.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: January 1, 2019
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventors: Yumiko Kato, Yoshihide Sawada, Masahiro Iwasaki, Yasuhiro Mukaigawa, Hiroyuki Kubo
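The per-pixel, per-illuminator geometry in patent 10168523 above amounts to intersecting the line from the illuminator through the focal-plane pixel with the sensor plane and sampling that illuminator's captured image there. A compact sketch, assuming the sensor lies in the plane z = 0, the focal plane at z = z_focal, the illuminators above it, and a simple metres-to-pixels mapping; all of these are illustrative assumptions:

```python
import numpy as np

def refocus(images, illuminator_positions, focal_plane_xy, z_focal, metres_per_px):
    """Synthesise a focal image: for each focal-plane pixel and each illuminator,
    intersect the illuminator->pixel line with the sensor plane (z = 0), sample
    that illuminator's captured image at the target point, and average."""
    h, w = focal_plane_xy.shape[:2]
    out = np.zeros((h, w))
    for img, (ix, iy, iz) in zip(images, illuminator_positions):
        # P(t) = illuminator + t * (pixel - illuminator); z = 0 at
        # t = iz / (iz - z_focal), which gives the target point on the sensor.
        t = iz / (iz - z_focal)
        target_x = ix + t * (focal_plane_xy[..., 0] - ix)
        target_y = iy + t * (focal_plane_xy[..., 1] - iy)
        # Nearest-neighbour sampling (bilinear would be smoother).
        col = np.clip((target_x / metres_per_px).round().astype(int), 0, img.shape[1] - 1)
        row = np.clip((target_y / metres_per_px).round().astype(int), 0, img.shape[0] - 1)
        out += img[row, col]
    return out / len(images)

# Illustrative setup: 4 illuminators 10 mm above the sensor, focal plane at 2 mm.
imgs = [np.random.rand(480, 640) for _ in range(4)]
illums = [(0.001, 0.001, 0.01), (0.005, 0.001, 0.01), (0.001, 0.004, 0.01), (0.005, 0.004, 0.01)]
xs, ys = np.meshgrid(np.linspace(0, 0.006, 640), np.linspace(0, 0.0045, 480))
focal_xy = np.stack([xs, ys], axis=-1)
print(refocus(imgs, illums, focal_xy, z_focal=0.002, metres_per_px=1e-5).shape)
```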
  • Patent number: 10165247
    Abstract: An image device with an image defocus function includes an image capture unit, a depth map generation unit, and a processor. The image capture unit captures an image corresponding to an object. The depth map generation unit generates a depth map corresponding to the object. The processor determines an integration block corresponding to each pixel image of the image according to a depth of the depth map corresponding to that pixel image and a predetermined depth corresponding to that pixel image, utilizes the integration block to generate a defocus color pixel value corresponding to that pixel image, and outputs a defocus image corresponding to the image according to the defocus color pixel values corresponding to all pixel images of the image, wherein each pixel image of the image corresponds to a pixel of the image capture unit.
    Type: Grant
    Filed: May 16, 2016
    Date of Patent: December 25, 2018
    Assignee: eYs3D Microelectronics, Co.
    Inventor: Chi-Feng Lee
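One way to realise the per-pixel integration block of patent 10165247 above is to give each pixel a box-blur radius that grows with the gap between its depth-map value and the predetermined in-focus depth. The radius mapping below is an illustrative choice, not the patent's:

```python
import numpy as np

def defocus(image, depth_map, focus_depth, px_per_depth_unit=3.0, max_radius=8):
    """Blur each pixel over an integration block whose size grows with
    |depth - focus_depth|; pixels at the focus depth stay sharp."""
    h, w = depth_map.shape
    out = np.zeros_like(image, dtype=float)
    radii = np.clip(np.abs(depth_map - focus_depth) * px_per_depth_unit, 0, max_radius).astype(int)
    for y in range(h):
        for x in range(w):
            r = radii[y, x]
            block = image[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = block.mean(axis=(0, 1))   # average colour over the block
    return out

img = np.random.rand(120, 160, 3)
depth = np.tile(np.linspace(0.5, 3.0, 160), (120, 1))  # depth increases left to right
print(defocus(img, depth, focus_depth=1.0).shape)      # (120, 160, 3)
```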
  • Patent number: 10157312
    Abstract: A method may comprise obtaining image data of a plurality of digital image frames captured of an assumed eye having an iris and a pupil while illuminating the assumed eye from different directions; and automatically detecting a characteristic feature of the iris on the basis of the image data of at least two of the plurality of digital image frames.
    Type: Grant
    Filed: October 8, 2015
    Date of Patent: December 18, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jyri Rauhala, Pertti Husu
  • Patent number: 10157452
    Abstract: Images may be acquired by cameras and processed to detect the presence of objects. Described are techniques to use rectified images for further processing such as object detection, object identification, and so forth. In one implementation, an image from a camera is processed to produce a rectified image having an apparent perspective from overhead. For example, the rectified image may be an image that appears to have been obtained from a virtual camera that is above the shelf and having a field-of-view that looks downward. The rectified image may be an orthonormal projection of pixels in the acquired image, relative to a plane of the surface upon which the objects are resting.
    Type: Grant
    Filed: September 28, 2015
    Date of Patent: December 18, 2018
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Joseph Patrick Tighe, Vuong Van Le, Gerard Guy Medioni
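The rectification in patent 10157452 above, re-projecting the acquired image so it appears to come from a virtual camera looking straight down at the surface, is a planar homography. A sketch using OpenCV; the four marked surface corners are illustrative values:

```python
import cv2
import numpy as np

def rectify_to_overhead(image, surface_corners_px, out_size_px=(400, 600)):
    """Warp the acquired image so the plane defined by four surface corners
    (top-left, top-right, bottom-right, bottom-left in the source image)
    maps to an axis-aligned rectangle, i.e. an apparent overhead view."""
    w, h = out_size_px
    dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    H = cv2.getPerspectiveTransform(np.float32(surface_corners_px), dst)
    return cv2.warpPerspective(image, H, (w, h))

img = np.zeros((720, 1280, 3), dtype=np.uint8)                 # stand-in for a camera frame
corners = [(420, 300), (860, 310), (1000, 650), (300, 640)]    # illustrative shelf corners
print(rectify_to_overhead(img, corners).shape)                 # (600, 400, 3)
```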
  • Patent number: 10140147
    Abstract: Intelligently assisted IoT endpoint devices are disclosed. For example, an endpoint device determined to have an active network connection redirects input/output data from a physical sensor to a network interface. First redirected data of the input/output data is received, over a network, by a virtual sensor of a virtualized endpoint service and then output to a virtualized endpoint engine, resulting in conversion into first converted data, which is sent to an endpoint control service. The virtualized endpoint engine receives a first command from the endpoint control service and then sends the first command to a virtual interactive element controller, which converts the first command into a second command compatible with an interactive element of the endpoint device; the interactive element performs a task after receiving the second command. The virtual sensor then receives second redirected data, different from the first redirected data, collected by the physical sensor as a result of the task.
    Type: Grant
    Filed: February 16, 2017
    Date of Patent: November 27, 2018
    Assignee: Sanctum Solutions Inc.
    Inventor: Noel Shepard Stephens
  • Patent number: 10140652
    Abstract: Methods for generating and sharing a virtual body model of a person, created from a small number of measurements and a single photograph, combined with one or more images of garments. The virtual body model is a realistic representation of the user's body and is used for visualizing photo-realistic fit visualizations of garments, hairstyles, make-up, and/or other accessories. The virtual garments are created from layers based on photographs of a real garment taken from multiple angles. Furthermore, the virtual body model is used in multiple embodiments of manual and automatic garment, make-up, and hairstyle recommendations, such as from channels, friends, and fashion entities. The virtual body model is sharable, for example, for visualization of and comments on looks. It also enables users to buy garments that fit other users, suitable for gifts or the like.
    Type: Grant
    Filed: May 27, 2016
    Date of Patent: November 27, 2018
    Assignee: METAIL LIMITED
    Inventors: Tom Adeyoola, Nick Brown, Nikki Trott, Edward Herbert, Duncan Robertson, Jim Downing, Nick Day, Robert Boland, Tom Boucher, Joe Townsend, Edward Clay, Tom Warren, Anoop Unadkat, Yu Chen
  • Patent number: 10139218
    Abstract: An image processing apparatus and an image processing method that allow a user to easily use distance information of a subject of a captured image for image processing are disclosed. The disclosed image processing apparatus generates, from the captured image, a plurality of images, which respectively correspond to ranges of individual subject distances, based on the distance information of the subject. Furthermore, the image processing apparatus selectably displays the generated plurality of images, and applies image processing to the selected image.
    Type: Grant
    Filed: January 26, 2016
    Date of Patent: November 27, 2018
    Assignee: Canon Kabushiki Kaisha
    Inventor: Takahiro Matsushita
  • Patent number: 10115183
    Abstract: Embodiments of the present disclosure are directed to methods and systems for displaying an image on a user interface. The methods and systems include components, modules, and so on for determining a minimum feature width of the image and determining a distance field of each region associated with the image. The distance field of each region may be based on the minimum feature width. A filter threshold associated with the distance field is then determined, and the image is output using the determined filter threshold.
    Type: Grant
    Filed: September 2, 2014
    Date of Patent: October 30, 2018
    Assignee: Apple Inc.
    Inventors: Jacques P. Gasselin de Richebourg, Domenico P. Porcino, Nathaniel C. Begeman
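A small sketch of the distance-field idea in patent 10115183 above: compute a signed distance field for a binary region, then reconstruct the region by thresholding the field, with the threshold biased so features near the minimum feature width are not lost. The width-to-threshold mapping and the use of scipy's Euclidean distance transform are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_field(mask):
    """Signed distance in pixels: positive inside the region, negative outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return inside - outside

def render_with_threshold(sdf, min_feature_width_px):
    """Reconstruct the region from its distance field; biasing the threshold
    slightly negative keeps features near the minimum width from vanishing."""
    threshold = -0.25 * min_feature_width_px     # illustrative bias
    return sdf > threshold

mask = np.zeros((64, 64), dtype=bool)
mask[20:44, 30:33] = True                        # a 3-px-wide stroke
sdf = signed_distance_field(mask)
print(render_with_threshold(sdf, min_feature_width_px=3).sum() >= mask.sum())
```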
  • Patent number: 10102427
    Abstract: A method of biometric recognition is provided. Multiple images of the face or other non-iris region and the iris of an individual are acquired. If the multiple images are determined to form an expected sequence of images, the face and iris images are associated together. A single camera preferably acquires both the iris and face images by changing at least one of the zoom, position, or dynamic range of the camera. The dynamic range can be adjusted by at least one of adjusting the gain settings of the camera, adjusting the exposure time, and/or adjusting the illuminator brightness. The expected sequence determination can be made by determining whether the accumulated motion vectors of the multiple images are consistent with an expected set of motion vectors and/or by ensuring that the iris remains in the field of view of all of the multiple images.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: October 16, 2018
    Assignee: Eyelock LLC
    Inventors: Keith J. Hanna, Hector T. Hoyos
  • Patent number: 10094670
    Abstract: Aspects of the disclosure relate generally to condensing sensor data for transmission and processing. For example, laser scan data including location, elevation, and intensity information may be collected along a roadway. This data may be sectioned into quanta representing some period of time during which the laser sweeps through a portion of its field of view. The data may also be filtered spatially to remove data outside of a threshold quality area. The data within the threshold quality area for a particular quantum may be projected onto a two-dimensional grid of cells. For each cell of the two-dimensional grid, a computer evaluates the cell to determine a set of characteristics for it. The sets of characteristics for all of the cells of the two-dimensional grid for the particular quantum are then sent to a central computing system for further processing.
    Type: Grant
    Filed: October 21, 2014
    Date of Patent: October 9, 2018
    Assignee: Waymo LLC
    Inventors: Andrew Hughes Chatham, Michael Steven Montemerlo, Daniel Trawick Egnor
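The per-quantum condensation step of patent 10094670 above, projecting points onto a two-dimensional grid and summarising each cell, can be sketched like this; the cell size and the particular statistics kept per cell are illustrative assumptions:

```python
import collections

def condense_quantum(points, cell_size_m=0.5):
    """Project (x, y, elevation, intensity) points onto a 2-D grid and keep a
    compact set of characteristics per cell instead of the raw points."""
    cells = collections.defaultdict(list)
    for x, y, z, intensity in points:
        cells[(int(x // cell_size_m), int(y // cell_size_m))].append((z, intensity))
    summary = {}
    for key, hits in cells.items():
        zs = [h[0] for h in hits]
        summary[key] = {
            "count": len(hits),
            "min_elev": min(zs),
            "max_elev": max(zs),
            "mean_intensity": sum(h[1] for h in hits) / len(hits),
        }
    return summary            # this is what would be sent to the central system

pts = [(1.2, 0.3, 0.02, 35), (1.3, 0.4, 0.05, 40), (5.0, 2.1, 1.8, 12)]
print(condense_quantum(pts))
```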
  • Patent number: 10088307
    Abstract: The invention provides a surveying instrument which comprises a light emitting element for emitting a distance measuring light, a distance measuring light projecting unit for projecting the distance measuring light, a light receiving unit for receiving a reflected distance measuring light, a photodetection element for receiving the reflected distance measuring light and producing a photodetection signal, and a distance measuring unit for performing a distance measurement based on a light receiving result from the photodetection element. The instrument further comprises a first optical axis deflecting unit disposed on a projection optical axis of the distance measuring light for deflecting an optical axis of the distance measuring light at a deflection angle as required and in a direction as required, and a second optical axis deflecting unit disposed on a light receiving optical axis for deflecting the reflected distance measuring light at the same deflection angle and in the same direction as the first optical axis deflecting unit.
    Type: Grant
    Filed: February 9, 2016
    Date of Patent: October 2, 2018
    Assignee: Kabushiki Kaisha TOPCON
    Inventors: Fumio Ohtomo, Kaoru Kumagai
  • Patent number: 10083524
    Abstract: Methods and supporting systems calculate a three-dimensional orientation of a camera based on images of objects within the camera's field of view. A camera (such as a camera within a mobile phone) captures two-dimensional video content including a human body and at least one additional object, and a frame of the video content at a first time is assigned as an anchor frame. The human body within the anchor frame is modeled by assigning points of movement to a set of body elements. A subsequent frame of the video content is received in which the human body has moved, and a translation function is derived that calculates the three-dimensional position of each of the body elements based on the two-dimensional movements of the body elements between the anchor frame and the subsequent frame. Using the translation function, a three-dimensional relationship of the camera and the additional object is calculated.
    Type: Grant
    Filed: April 20, 2018
    Date of Patent: September 25, 2018
    Assignee: Octi
    Inventors: Abed Malti, Justin Fuisz
  • Patent number: 10081376
    Abstract: The present application involves a railroad track asset surveying system comprising an image capture sensor, a location determining system, and an image processor. The image capture sensor is mounted to a railroad vehicle. The location determining system holds images captured by the image capture sensor. The image processor includes an asset classifier and an asset status analyzer. The asset classifier detects an asset in one or more captured images and classifies the detected asset by assigning an asset type to the detected asset from a predetermined list of asset types according to one or more features in the captured image. The asset status analyzer identifies an asset status characteristic and compares the identified status characteristic to a predetermined asset characteristic so as to evaluate a deviation therefrom.
    Type: Grant
    Filed: December 29, 2015
    Date of Patent: September 25, 2018
    Inventor: Sameer Singh
  • Patent number: 10068127
    Abstract: An apparatus for automatic detection of the face in a given image and localization of the eye region, which is the target for iris recognition, is provided. The apparatus includes an image capturing unit collecting an image of a user, and a control unit extracting a characteristic vector from the image of the user and fitting the extracted vector to a Pseudo 2D Hidden Markov Model (HMM) for detecting the face and facial features of the user; an operating method thereof is also provided.
    Type: Grant
    Filed: December 19, 2014
    Date of Patent: September 4, 2018
    Assignee: IRIS ID, INC.
    Inventors: Monalisa Mazumdar, Ravindra Gadde, Sehhwan Jung
  • Patent number: 10068141
    Abstract: An automatic operation vehicle that automatically performs an operation in an operation area is provided. The vehicle includes an image analysis unit configured to extract a marker included in an image captured by a camera provided on the automatic operation vehicle. If a situation has occurred, during a movement of the automatic operation vehicle in a constant direction, in which the marker that was previously extracted from the image captured by the camera can no longer be extracted by the image analysis unit, the image analysis unit determines whether the marker extracted before the occurrence of the situation and the marker extracted after the elimination of the situation are the same marker.
    Type: Grant
    Filed: March 16, 2017
    Date of Patent: September 4, 2018
    Assignee: Honda Motor Co., Ltd.
    Inventors: Yasuyuki Shiromizu, Tsutomu Mizoroke, Makoto Yamanaka, Yoshihisa Hirose, Minami Kigami, Kensei Yamashita
  • Patent number: 10060740
    Abstract: A distance detection device calculates a pixel cost, for every reference pixel, based on a difference between reference pixel information in a reference image and comparative pixel information in a comparative image while switching the reference and comparative images. The device calculates a parallax cost, for every reference pixel, representing a cost regarding a change-amount of the parallax as a coordinate difference between a reference pixel and a comparative pixel when the reference image is switched. The device calculates, for every reference pixel, a combination of the reference pixel and a comparative pixel having a minimum total cost, where the total cost is the sum of the pixel cost and the parallax cost. The device obtains a corresponding-point relationship between each reference pixel and its corresponding comparative pixel, and calculates a distance to an object in each captured image based on that relationship.
    Type: Grant
    Filed: November 20, 2015
    Date of Patent: August 28, 2018
    Assignee: DENSO CORPORATION
    Inventors: Kazuhisa Ishimaru, Noriaki Shirai
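The cost structure of patent 10060740 above, a per-pixel matching cost plus a parallax cost that penalises disparity changes between neighbouring reference pixels, minimised jointly, is essentially the classic dynamic-programming stereo formulation along a scanline. A one-dimensional sketch with illustrative penalty values, not the patent's exact scheme:

```python
import numpy as np

def match_scanline(ref_row, cmp_row, max_disp=16, smooth_penalty=0.2):
    """For each reference pixel choose the comparative pixel (disparity) that
    minimises pixel cost + parallax cost, accumulated along the scanline."""
    n = len(ref_row)
    # Pixel cost: absolute intensity difference for each candidate disparity.
    pix_cost = np.full((n, max_disp), np.inf)
    for d in range(max_disp):
        valid = np.arange(d, n)
        pix_cost[valid, d] = np.abs(ref_row[valid] - cmp_row[valid - d])
    # Parallax cost: penalise the change of disparity between neighbours.
    total = pix_cost.copy()
    for x in range(1, n):
        trans = total[x - 1][None, :] + smooth_penalty * np.abs(
            np.arange(max_disp)[:, None] - np.arange(max_disp)[None, :])
        total[x] += trans.min(axis=1)
    # Simplified read-out: disparity with the lowest accumulated cost at each
    # pixel (a full implementation would backtrack through the DP table).
    return total.argmin(axis=1)

ref = np.array([10, 10, 50, 50, 50, 10, 10, 10, 10, 10], dtype=float)
cmp_ = np.roll(ref, -2)                    # comparative row shifted by 2 pixels
print(match_scanline(ref, cmp_, max_disp=4))
```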
  • Patent number: 10041793
    Abstract: Some embodiments of the invention include a method for providing a 3D point cloud using a geodetic surveying instrument. Some embodiments also include an image capturing unit. A method according to some embodiments may include scanning a surrounding with the surveying instrument according to a defined scanning region at least partly covering the object, generating a scanning point cloud corresponding to the scanning region with reference to a surveying coordinate system which is defined by the surveying instrument, and generating a first image on the side of the surveying instrument covering a region basically corresponding to the scanning region, the first image representing a reference image whose pose is known with reference to the surveying coordinate system due to the position and orientation of the surveying instrument when acquiring the data the first image is based on.
    Type: Grant
    Filed: April 19, 2016
    Date of Patent: August 7, 2018
    Assignee: HEXAGON TECHNOLOGY CENTER GMBH
    Inventors: Bernhard Metzler, Alexander Velizhev, Thomas Fidler
  • Patent number: 10037602
    Abstract: A method and apparatus for segmenting an image are provided. The method may include the steps of clustering pixels from one of a plurality of images into one or more segments, determining one or more unstable segments changing by more than a predetermined threshold from a prior image of the plurality of images, determining one or more segments transitioning from an unstable to a stable segment, determining depth for one or more of the segments that have changed by more than the predetermined threshold, determining depth for one or more of the transitioning segments, and combining the determined depth for the one or more unstable segments and the one or more transitioning segments with a predetermined depth of all segments changing by less than the predetermined threshold from the prior image of the plurality of images.
    Type: Grant
    Filed: April 23, 2016
    Date of Patent: July 31, 2018
    Assignee: Edge 3 Technologies, Inc.
    Inventors: Tarek El Dokor, Jordan Cluster
  • Patent number: 10032087
    Abstract: The orientation of imagery relative to a compass bearing may be determined based on the position of the sun or other information relating to celestial bodies captured in the image.
    Type: Grant
    Filed: August 18, 2014
    Date of Patent: July 24, 2018
    Assignee: Google LLC
    Inventors: Andrew Gallagher, Shuchang Zhou, Zhiheng Wang
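A toy version of the idea in patent 10032087 above: if an ephemeris gives the sun's true azimuth at the capture time and place, and the sun is detected at some horizontal pixel offset from the image centre, the camera's compass bearing follows from the horizontal field of view (using a simple linear pixel-to-angle approximation). All inputs below are assumed, illustrative values:

```python
def camera_bearing_deg(sun_azimuth_deg, sun_px_x, image_width_px, horizontal_fov_deg):
    """Compass bearing of the optical axis: the sun's known azimuth minus the
    angular offset of the detected sun from the image centre."""
    px_offset = sun_px_x - image_width_px / 2.0
    angle_offset = px_offset * horizontal_fov_deg / image_width_px
    return (sun_azimuth_deg - angle_offset) % 360.0

# Sun detected 200 px right of centre in a 1920-px-wide, 70-degree-FOV image,
# while the ephemeris says the sun's azimuth is 210 degrees.
print(camera_bearing_deg(210.0, 1160.0, 1920.0, 70.0))   # ~202.7 degrees
```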
  • Patent number: 10007991
    Abstract: A system for detecting a change in object positioning by processing images by detecting a first entity tagged with a first visually unique identifier, and detecting a second entity tagged with a second visually unique identifier distinguishable from the first visually unique identifier. The system receives a first image, from a first image capturing device, containing the first visually unique identifier and the second visually unique identifier. The system analyzes the first image to determine a distance between the first visually unique identifier and the second visually unique identifier. The system receives a second image containing the first visually unique identifier. The system analyzes the second image to determine a location of the second visually unique identifier relative to the first visually unique identifier to form a distance assessment. Based on the distance assessment, the system determines a change in proximity between the first entity and the second entity.
    Type: Grant
    Filed: January 29, 2016
    Date of Patent: June 26, 2018
    Assignee: International Business Machines Corporation
    Inventors: Bruce H. Hyre, Chiao-Fe Shu, Yun Zhai
  • Patent number: 10008001
    Abstract: A method and an apparatus for measuring depth information are provided. In the method, a structured light with a scan pattern is projected by a light projecting device to scan at least one object. Reflected light from the object is detected by a light sensing device, and depth information of each object is calculated according to a deformation of a reflective pattern of the reflected light. Then, images of the object are captured by an image capturing device and used to obtain location information of each object. At least one moving object is found among the objects according to a change of the location information. Finally, at least one of a scan area, a scan frequency, a scan resolution and the scan pattern of the structured light and an order for processing data obtained from scanning is adjusted so as to calculate the depth information of each object.
    Type: Grant
    Filed: August 9, 2016
    Date of Patent: June 26, 2018
    Assignee: Wistron Corporation
    Inventor: Yao-Tsung Chang
  • Patent number: 9997072
    Abstract: In a driving assistance apparatus, a detector is configured to detect whether or not a leading vehicle has started moving, where the leading vehicle is a vehicle just in front of an own vehicle and the own vehicle is a vehicle carrying the apparatus. A notifier is configured to, if it is detected by the detector that the leading vehicle has started moving, provide a notification that the leading vehicle has started moving. An inhibitor is configured to, when the own vehicle has approached a stop point along a road, at which every vehicle must stop before passing therethrough, inhibit the notifier from providing the notification that the leading vehicle has started moving.
    Type: Grant
    Filed: January 18, 2017
    Date of Patent: June 12, 2018
    Assignee: DENSO CORPORATION
    Inventor: Jian Hu
  • Patent number: 9996770
    Abstract: A model containing a plurality of model edges is acquired, and a position that matches the model is searched for within a search image. Within the search image, a plurality of edge extraction regions, each including one of the plurality of model edges when the model is moved to the searched position, are set, and an arithmetic operation of extracting an edge in each of the plurality of edge extraction regions is performed. If there is an edge extraction region from which no edge could be extracted, the model edge of the model moved to the searched position that is located in that edge extraction region is incorporated into an edge set, and fitting is performed on the edge set.
    Type: Grant
    Filed: July 26, 2016
    Date of Patent: June 12, 2018
    Assignee: Canon Kabushiki Kaisha
    Inventor: Masaki Ishii
  • Patent number: 9986232
    Abstract: A method for distance calibration of machine vision optical targets and associated observing imaging sensors using separation distances between discrete surface measurement points. First, a spatial position for each surface point is established from a series of observations by the associated observing imaging sensor of the optical targets mounted to a calibration fixture. Second, measures of a linear separation distance between particular pairs of surface points are identified independently from the machine vision observations. All resulting spatial positions and measurements are conveyed to a processing system configured with a set of software instructions for carrying out distance calibration calculations to establish a set of refined distance calibration parameters associated with each combination of optical target and observing imaging sensor. During subsequent use of the machine vision vehicle service system, the parameters are retrieved and utilized to improve vehicle measurement precision.
    Type: Grant
    Filed: April 24, 2015
    Date of Patent: May 29, 2018
    Assignee: Hunter Engineering Company
    Inventor: Mark S. Shylanski
  • Patent number: 9977782
    Abstract: A system, method, and device for creating an environment and sharing an experience using a plurality of mobile devices having a conventional camera and a depth camera employed near a point of interest. In one form, random crowdsourced images, depth information, and associated metadata are captured near said point of interest. Preferably, the images include depth camera information. A wireless network communicates with the mobile devices to accept the images, depth information, and metadata to build and store a 3D model of the point of interest. Users connect to this experience platform to view the 3D model from a user selected location and orientation and to participate in experiences with, for example, a social network.
    Type: Grant
    Filed: June 17, 2015
    Date of Patent: May 22, 2018
    Inventors: Charles D. Huston, Chris Coleman
  • Patent number: 9971556
    Abstract: An image processing apparatus for coupling to an imaging apparatus that generates a captured image covering substantially a 360-degree field of view and for transmitting an output image to an image forming apparatus includes a setting unit configured to select a type of a polyhedron that is to be constructed by folding a development printed on a medium according to the output image, a converter configured to convert coordinates in the captured image into coordinates in the development that is to be printed on the medium and folded to construct the polyhedron, such that a zenith in the captured image is printed at a topmost point of the polyhedron constructed by folding the development printed on the medium, and an image generator configured to generate the output image based on the converted coordinates.
    Type: Grant
    Filed: March 8, 2017
    Date of Patent: May 15, 2018
    Assignee: Ricoh Company, Ltd.
    Inventors: Aiko Ohtsuka, Atsushi Funami
  • Patent number: 9967535
    Abstract: Various features relating to reducing and/or eliminating noise from images are described. In some embodiments depth based denoising is used on images captured by one or more camera modules, based on depth information of a scene area and optical characteristics of the one or more camera modules used to capture the images. In some embodiments, by taking into consideration the camera module optics and the depth of the object included in the image portion, a maximum expected frequency can be determined, and the image portion is then filtered to reduce or remove frequencies above the maximum expected frequency. In this way noise can be reduced or eliminated from image portions captured by one or more camera modules. The optical characteristics of different camera modules may be different. In some embodiments a maximum expected frequency is determined on a per camera module and depth basis.
    Type: Grant
    Filed: September 18, 2015
    Date of Patent: May 8, 2018
    Assignee: Light Labs Inc.
    Inventors: Rajiv Laroia, Nitesh Shroff, Sapna A Shroff
  • Patent number: 9958267
    Abstract: An apparatus and a method for dual mode depth measurement are provided. The apparatus is used for measuring a depth information of a specular surface in a depth from defocus (DFD) mode or measuring a depth information of a textured surface in a depth from focus (DFF) mode. The apparatus includes a light source, a controller, a processor, a lighting optical system, an imaging optical system, a beam splitter and a camera. The controller is for switching between the depth from defocus mode and the depth from focus mode. The lighting optical system is used to focus a light from the light source on an object surface in the depth from defocus mode, and the lighting optical system is used to illuminate the object surface with a uniform irradiance in the depth from focus mode.
    Type: Grant
    Filed: December 21, 2015
    Date of Patent: May 1, 2018
    Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Ludovic Angot, Kun-Lung Tseng, Yi-Heng Chou
  • Patent number: 9955141
    Abstract: A portable three-dimensional scanner includes at least two image sensing units and a depth map generation unit. When the portable three-dimensional scanner is moved around an object, a first image sensing unit and a second image sensing unit of the at least two image sensing units capture a plurality of first images comprising the object and a plurality of second images comprising the object, respectively. When the first image sensing unit captures each first image of the plurality of first images, a corresponding distance exists between the portable three-dimensional scanner and the object. The depth map generation unit generates a corresponding depth map according to the each first image and a corresponding second image. A plurality of depth maps generated by the depth map generation unit, the plurality of first images, and the plurality of second images are used for generating a color three-dimensional scan result corresponding to the object.
    Type: Grant
    Filed: April 28, 2015
    Date of Patent: April 24, 2018
    Assignee: eYs3D Microelectronics, Co.
    Inventors: Chao-Chun Lu, Wen-Kuo Lin
  • Patent number: 9953618
    Abstract: Systems and methods for performing localization and mapping with a mobile device are disclosed. In one embodiment, a method for performing localization and mapping with a mobile device includes identifying geometric constraints associated with a current area at which the mobile device is located, obtaining at least one image of the current area captured by at least a first camera of the mobile device, obtaining data associated with the current area via at least one of a second camera of the mobile device or a sensor of the mobile device, and performing localization and mapping for the current area by applying the geometric constraints and the data associated with the current area to the at least one image.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: April 24, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Mahesh Ramachandran, Ashwin Swaminathan
  • Patent number: 9955147
    Abstract: Methods and apparatus for implementing user controlled zoom operations during a stereoscopic, e.g., 3D, presentation are described. While viewing a 3D presentation of a scene environment, a user may switch to a zoom mode allowing the user to zoom in on a particular portion of the environment being displayed. In order to maintain the effect of being physically present at the event, and also to reduce the risk of making the user sick from sudden non-real-world-like changes in views of the environment, the user, in response to initiating a zoom mode of operation, is presented with a view which is the same as or similar to that which might be expected as the result of looking through a pair of binoculars. In some embodiments the restriction in view is achieved by applying masks to enlarged versions of the left and right eye views to be displayed.
    Type: Grant
    Filed: January 28, 2016
    Date of Patent: April 24, 2018
    Assignee: NextVR Inc.
    Inventors: David Cole, Alan McKay Moss
  • Patent number: 9947098
    Abstract: A solution for generating a 3D representation of an object in a scene is provided. A depth map representation of the object is combined with a reflectivity map representation of the object to generate the 3D representation of the object. The 3D representation of the object provides more complete and accurate information of the object. An image of the object is illuminated by structured light and is captured. Pattern features rendered in the captured image of the object are analyzed to derive a depth map representation and a reflectivity map representation of the illuminated object. The depth map representation provides depth information while the reflectivity map representation provides surface information (e.g., reflectivity) of the illuminated object. The 3D representation of the object can be enhanced with additional illumination projected onto the object and additional images of the object.
    Type: Grant
    Filed: May 13, 2016
    Date of Patent: April 17, 2018
    Assignee: Facebook, Inc.
    Inventors: Nitay Romano, Nadav Grossinger
  • Patent number: 9942534
    Abstract: An image processing system includes an acquisition unit, a candidate value estimation unit, a cutoff frequency acquisition unit and a candidate value modification unit. The acquisition unit is configured to acquire an image of a sample taken via an optical system. The candidate value estimation unit is configured to estimate a candidate value of a 3D shape of the sample based on the image. The cutoff frequency acquisition unit is configured to acquire a cutoff frequency of the optical system based on information of the optical system. The candidate value modification unit is configured to perform at least one of data correction and data interpolation for the candidate value based on the cutoff frequency and calculate a modified candidate value.
    Type: Grant
    Filed: December 19, 2012
    Date of Patent: April 10, 2018
    Assignee: OLYMPUS CORPORATION
    Inventor: Nobuyuki Watanabe
  • Patent number: 9940717
    Abstract: Techniques related to geometric camera self-calibration quality assessment.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: April 10, 2018
    Assignee: Intel Corporation
    Inventor: Oliver Grau
  • Patent number: 9928430
    Abstract: Methods and a system for detecting an object are provided. In one embodiment, a method includes: receiving, by a processor, image data from a single camera, the image data representing an image of a scene; determining, by the processor, stixel data from the image data; detecting, by the processor, an object based on the stixel data; and selectively generating, by the processor, an alert signal based on the detected object.
    Type: Grant
    Filed: March 30, 2016
    Date of Patent: March 27, 2018
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventor: Dan Levi
  • Patent number: 9924912
    Abstract: Disclosed herein are a method and apparatus for operating an X-ray image processing system that creates a panoramic image based on three-dimensional (3D) Computed Tomography (CT) image data and displays the panoramic image on the display unit, receives, through the input unit, a selection of a part of an object to be measured in the panoramic image, and calculates an actual 3D length of the part of the object based on depth information of the CT image data and displays the actual 3D length on the display unit. The part of the object can be selected by the user to measure the length of a desired part.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: March 27, 2018
    Assignees: VATECH Co., Ltd., VATECH EWOO Holdings Co., Ltd.
    Inventors: Se Yeol Im, Dong Wan Seo, Tae Hee Han