Range Or Distance Measuring Patents (Class 382/106)
  • Patent number: 11117535
    Abstract: Aspects of the present disclosure involve projecting an interactive scene onto a surface from a projecting object. In one particular embodiment, the interactive scene is projected from a vehicle and may be utilized by the vehicle to provide a scene or image that a user may interact with through various gestures detected by the system. In addition, the interactive scene may be customized to one or more preferences determined by the system, such as user preferences, system preferences, or preferences obtained through feedback from similar systems. Based on one or more user inputs (such as user gestures received at the system), the projected scene may be altered or new scenes may be projected. In addition, some aspects of the vehicle (such as unlocking the doors or starting the motor) may be controlled through the interactive scene and the detected gestures of the users.
    Type: Grant
    Filed: August 18, 2017
    Date of Patent: September 14, 2021
    Inventors: Daniel E. Potter, Bivin J. Varghese, Christopher P. Child, Mira S. Misra, Clarisse Mazuir, Malcolm J. Northcott, Albert J. Golko, Daniel J. Reetz, Matthew E. Last, Thaddeus Stefanov-Wagner, Christopher J. Sataline, Michael A. Cretella, Collin J. Palmer
  • Patent number: 11120118
    Abstract: Examples of techniques for location validation for authentication are disclosed. In one example implementation according to aspects of the present disclosure, a computer-implemented method includes presenting, by a processing device, a location-based security challenge to a user. The method further includes responsive to presenting the location-based security challenge to the user, receiving, by the processing device, media from the user. The method further includes validating, by the processing device, the media received from the user against the location-based security challenge to determine whether the user is located at an authorized location. The method further includes responsive to determining that the user is located at an authorized location, authenticating, by the processing device, the user to grant access for the user to a resource.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: September 14, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Mark E. Maresh, Colm Nolan, Juan F. Vargas, Michael J. Whitney
  • Patent number: 11111785
    Abstract: A method and a device for acquiring three-dimensional coordinates of ore based on a mining process are disclosed. The method includes: obtaining a two-dimensional coordinate of the ore by using a YOLACT algorithm and an NMS algorithm to obtain a prediction mask map, obtaining depth information of the ore based on a color map and an infrared depth map, and combining the two-dimensional coordinate with the depth information to obtain a three-dimensional coordinate of the ore.
    Type: Grant
    Filed: March 2, 2020
    Date of Patent: September 7, 2021
    Assignee: Wuyi University
    Inventors: Junying Zeng, Xuhua Li, Chuanbo Qin, Kaitian Wei, Fan Wang, Xiaowei Jiang, Weizhao He, Junhua He
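The final step of the ore-coordinate patent above, combining a detected 2D pixel coordinate with a depth value to obtain a 3D coordinate, can be sketched with the standard pinhole back-projection. The intrinsic parameters below (fx, fy, cx, cy) and the function name are illustrative, not taken from the patent.

```python
# Sketch: back-projecting a 2D pixel plus a depth value to a 3D point
# with the standard pinhole camera model.

def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Convert pixel (u, v) and its depth into camera-frame (X, Y, Z)."""
    x = (u - cx) * depth / fx  # lateral offset scales with depth
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the principal point projects straight down the optical axis:
point = pixel_to_3d(u=320, v=240, depth=2.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```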
  • Patent number: 11112531
    Abstract: Provided is a method of creating a longitudinal section along an arbitrary line from three-dimensional point group data of terrain or a structure, and a survey data processing device and a survey system for the same. The method includes (a): setting an arbitrary longitudinal section creation line by sequentially designating a plurality of interval designation points on an X-Y plane of three-dimensional point group data (X, Y, Z), (b): projecting a Z point surveyed between a start point and an end point of a certain interval among a plurality of intervals defined by the interval designation points, onto a vertical virtual plane including the longitudinal section creation line, corresponding to (X, Y) coordinates of the longitudinal section creation line, and (c): performing the step (b) for all of the intervals.
    Type: Grant
    Filed: July 12, 2018
    Date of Patent: September 7, 2021
    Assignee: TOPCON CORPORATION
    Inventor: Ryosuke Miyoshi
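Step (b) of the longitudinal-section patent above, projecting surveyed points onto a vertical plane through the section-creation line, amounts to taking each point's distance along the line together with its elevation. A minimal sketch under that reading, with illustrative names and data:

```python
import math

# Sketch: project surveyed (X, Y, Z) points onto the vertical plane through
# a section line, yielding (distance-along-line, elevation) profile pairs.

def project_onto_section(points, start, end):
    """Map each (x, y, z) point to (s, z), where s is distance along the line."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)
    ux, uy = dx / length, dy / length  # unit direction of the section line
    return [((p[0] - start[0]) * ux + (p[1] - start[1]) * uy, p[2])
            for p in points]

profile = project_onto_section([(5.0, 0.0, 12.3)], start=(0.0, 0.0), end=(10.0, 0.0))
```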
  • Patent number: 11113548
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating object detection predictions from a neural network. In some implementations, an input characterizing a first region of an environment is obtained. The input includes a projected laser image generated from a three-dimensional laser sensor reading of the first region, a camera image patch generated from a camera image of the first region, and a feature vector of features characterizing the first region. The input is processed using a high precision object detection neural network to generate a respective object score for each object category in a first set of one or more object categories. Each object score represents a respective likelihood that an object belonging to the object category is located in the first region of the environment.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: September 7, 2021
    Assignee: Waymo LLC
    Inventors: Zhaoyin Jia, Ury Zhilinsky, Yun Jiang, Yimeng Zhang
  • Patent number: 11107235
    Abstract: Systems and methods for identifying data suitable for mapping are provided. In some aspects, the method includes receiving one or more images acquired in an area of interest, and selecting at least two ground control points within a field of view of the one or more images. The method also includes determining perceived locations for the at least two ground control points using the one or more images, and computing pairwise distances between the perceived locations and predetermined locations of the at least two ground control points. The method further includes comparing corresponding pairwise distances to identify differences therebetween, and determining a suitability of the one or more images for mapping based on the comparison.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: August 31, 2021
    Assignee: HERE Global B.V.
    Inventors: Jeff Connell, Anish Mittal, David Johnston Lawlor
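The suitability test in the HERE patent above compares pairwise distances between perceived and surveyed ground-control-point locations. A minimal sketch of that comparison; the tolerance value and 2D points are illustrative assumptions:

```python
import math

# Sketch: compare pairwise distances between perceived ground-control-point
# locations and their surveyed locations; a large discrepancy suggests the
# imagery is distorted and unsuitable for mapping.

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def suitable_for_mapping(perceived, known, tol=0.5):
    n = len(perceived)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(dist(perceived[i], perceived[j]) - dist(known[i], known[j])) > tol:
                return False
    return True

# Perceived points are merely translated, so pairwise distances match:
ok = suitable_for_mapping([(0, 0), (3, 4)], [(10, 10), (13, 14)])
```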
  • Patent number: 11100662
    Abstract: According to one embodiment, an image processing apparatus includes a memory and one or more hardware processors electrically coupled to the memory. The one or more hardware processors acquire a first image of an object including a first shaped blur and a second image of the object including a second shaped blur. The first image and the second image are acquired by capturing at a time through a single image-forming optical system. The one or more hardware processors acquire distance information to the object based on the first image and the second image, with a statistical model that has learnt previously.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: August 24, 2021
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Nao Mishima, Takayuki Sasaki
  • Patent number: 11100344
    Abstract: Systems and methods to perform image-based three-dimensional (3D) lane detection involve obtaining known 3D points of one or more lane markings in an image including the one or more lane markings. The method includes overlaying a grid of anchor points on the image. Each of the anchor points is a center of i concentric circles. The method also includes generating an i-length vector and setting an indicator value for each of the anchor points based on the known 3D points as part of a training process of a neural network, and using the neural network to obtain 3D points of one or more lane markings in a second image.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: August 24, 2021
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Dan Levi, Noa Garnett
  • Patent number: 11087036
    Abstract: A method, device and system for automatically deriving stationing zones for an electronic measuring or marking device in a worksite environment. The method includes querying a database (DB) for construction plan information for the worksite environment and acquiring worksite-task-information of a worksite-task to be executed. The worksite-task-information includes spatial points in the construction plan which have to be measured and/or marked to accomplish the worksite-task. The method also comprises acquiring at least coarse 3D data of the actual real-world worksite environment, and merging the at least coarse 3D data with the construction plan information to form an actual state model of the worksite environment. At least one stationing zone within the actual state model is then calculated automatically, the stationing zone including at least one stationing location from which the spatial points can be measured or marked by the device without obstruction.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: August 10, 2021
    Assignee: LEICA GEOSYSTEMS AG
    Inventors: Bernd Möller, Thomas Ammer
  • Patent number: 11086015
    Abstract: A system of generating a three-dimensional (3D) scan of an environment includes multiple 3D scanners including a first 3D scanner at respective first and second positions. The system further includes a controller coupled to the 3D scanners. The first 3D scanner acquires a first set of 3D coordinates, the first set of 3D coordinates having a first portion. The second 3D scanner acquires a second set of 3D coordinates, the second set of 3D coordinates having a second portion. The first portion and the second portion are simultaneously transmitted to the controller by the first 3D scanner and the second 3D scanner respectively, while the first set of 3D coordinates and the second set of 3D coordinates are being acquired. The controller registers the first portion and the second portion to each other while the first set of 3D coordinates and the second set of 3D coordinates are being acquired.
    Type: Grant
    Filed: June 23, 2020
    Date of Patent: August 10, 2021
    Assignee: FARO TECHNOLOGIES, INC.
    Inventors: Daniel Pompe, Manuel Caputo, José Gerardo Gómez Méndez, Zia ul Azam, Louis Bergmann, Daniel Flohr, Oliver Zweigle
  • Patent number: 11080536
    Abstract: An image processing device is provided with a communication device and a processor. The processor is configured to: acquire a first video obtained by imaging outside scenery of a first vehicle; when the processor detects that a second vehicle appears in the first video, implement image processing that degrades visibility of a first image area corresponding to at least a part of the second vehicle in the first video; and when the processor detects that the second vehicle appears in the first video and then a specific part of the second vehicle appears, end the image processing with respect to the first image area and implement image processing that degrades visibility of a second image area corresponding to the specific part of the second vehicle in the first video.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: August 3, 2021
    Assignee: Toyota Jidosha Kabushiki Kaisha
    Inventor: Kazuya Nishimura
  • Patent number: 11069049
    Abstract: A division line detection device includes: a processor configured to detect a division line candidate pixel from an image acquired by a camera mounted on a vehicle, set first reliability for a division line candidate pixel whose value representing likelihood that a lane division line is represented is equal to or more than a predetermined threshold value; set second reliability lower than the first reliability for a division line candidate pixel whose value is less than the predetermined threshold value; correct to the first reliability, when a first predetermined number or more of the division line candidate pixels are located on a first scan line having one end at a vanishing point of the image, the second reliability of each division line candidate pixel on the first scan line; and detect a lane division line based on the division line candidate pixels having the first reliability.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: July 20, 2021
    Assignees: TOYOTA JIDOSHA KABUSHIKI KAISHA, DENSO CORPORATION
    Inventors: Takahiro Sota, Jia Sun, Masataka Yokota
  • Patent number: 11069073
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for on-shelf merchandise detection are provided. One of the methods includes: obtaining a plurality of depth images associated with a shelf from a first camera; obtaining a plurality of images from one or more second cameras associated with each of a plurality of tiers of the shelf; detecting motions of a user's hand comprising reaching into and moving away from the shelf; determining one of the tiers of the shelf associated with the detected motions, a first point of time associated with reaching into the shelf, and a second point of time associated with moving away from the shelf; identifying a first image captured before the first point in time and a second image captured after the second point in time; and comparing the first image and the second image to determine one or more changes to merchandise.
    Type: Grant
    Filed: May 15, 2020
    Date of Patent: July 20, 2021
    Assignee: ADVANCED NEW TECHNOLOGIES CO., LTD.
    Inventors: Xiaobo Zhang, Zhangjun Hou, Xudong Yang, Xiaodong Zeng
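The frame-selection step in the shelf-monitoring patent above, picking one image captured before the hand reaches into the shelf and one captured after it moves away, can be sketched as a search over timestamped frames. The frame representation and timestamps here are illustrative assumptions:

```python
# Sketch: given frames as (timestamp, image_id) pairs sorted by timestamp,
# pick the last frame before the hand reaches in and the first frame after
# it moves away, for before/after merchandise comparison.

def pick_before_after(frames, t_reach, t_away):
    before = max((f for f in frames if f[0] < t_reach),
                 key=lambda f: f[0], default=None)
    after = min((f for f in frames if f[0] > t_away),
                key=lambda f: f[0], default=None)
    return before, after

frames = [(0.0, "a"), (1.0, "b"), (2.0, "c"), (3.0, "d")]
before, after = pick_before_after(frames, t_reach=1.5, t_away=2.5)
```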
  • Patent number: 11069077
    Abstract: Systems and methods are disclosed including a computerized system, comprising: a computer system running image display and analysis software that when executed by the computer system causes the computer system to: display an oblique image; reference positional data for the oblique image; create a ground plane for the oblique image, the ground plane comprising a plurality of facets, the facets conforming to a topography of an area captured within the oblique image; receive a selection of at least two pixels within the oblique image; and calculate a desired measurement using the selection of the at least two pixels and the ground plane, the desired measurement taking into account changes within the topography of the area captured within the oblique image.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: July 20, 2021
    Assignee: Pictometry International Corp.
    Inventors: Stephen L. Schultz, Frank D. Giuffrida, Robert L. Gray, Charles Mondello
  • Patent number: 11064151
    Abstract: An image processing unit identifies the shape of an obstacle that is identified from an area that appears in a peripheral image based on an image captured by a camera. The shape of the obstacle includes at least a tilt of a section of the obstacle in a road-surface direction. The section of the obstacle faces a vehicle. The image processing unit generates a superimposed image in which a mark image that is generated as a pattern that indicates the identified obstacle is superimposed onto a position that corresponds to the obstacle in the peripheral image. At this time, the image processing unit variably changes properties of the mark image based on the tilt of the obstacle identified by an obstacle identifying unit. The image processing unit then displays the generated superimposed image on a display apparatus.
    Type: Grant
    Filed: April 25, 2017
    Date of Patent: July 13, 2021
    Assignee: DENSO CORPORATION
    Inventors: Yu Maeda, Taketo Harada, Mitsuyasu Matsuura, Hirohiko Yanagawa, Muneaki Matsumoto
  • Patent number: 11052544
    Abstract: A safety device for machine tools, such as a panel-sizing circular saw or an edge-gluing machine, with a machining tool used for machining a workpiece supplied to the machine tool, comprises a detection device and a hazard reduction device. The detection device is designed to detect a hazardous situation of an operator of the machine tool and the hazard reduction device is designed to initiate a safety measure to reduce the risk of injury to the operator when a hazard signal characterizing a hazardous situation of the operator is received.
    Type: Grant
    Filed: October 30, 2019
    Date of Patent: July 6, 2021
    Assignee: Altendorf GmbH
    Inventors: Karl-Friedrich Schröder, Andreas Neufeld, Julia Pohle
  • Patent number: 11055865
    Abstract: A control unit recognizes a direction accepted by an operation unit in a case in which an image acquisition mode is set in an image acquisition device. An image acquisition condition in the image acquisition mode is defined by first information and second information. The first information represents a speed or a distance at which the imaging visual field is changed. The second information represents a timing at which images used for restoration of the three-dimensional shape are acquired. The control unit causes a visual field changing unit to change the imaging visual field at the speed in the recognized direction or change the imaging visual field by the distance in the recognized direction. The control unit acquires at least two images at the timings from an imaging unit.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: July 6, 2021
    Assignee: OLYMPUS CORPORATION
    Inventor: Yohei Sakamoto
  • Patent number: 11048375
    Abstract: The present disclosure relates to methods and systems for providing a multimodal 3D object interaction to let the user interact with 3D digital object in a natural and realistic way.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: June 29, 2021
    Assignee: ALIBABA GROUP HOLDING LIMITED
    Inventors: Paul Fu, Xiaohu Dong, Rui Yang, Chen Zhao, Jin Ryong Kim, Xiangchao Huang, Stephanie Chan, Yu Qin, Puhe Liang, Shenli Yuan
  • Patent number: 11042769
    Abstract: This invention relates to information processing systems and methods in a workplace environment. More particularly, the invention relates to systems and methods for displaying information for use by human users in a workplace environment. Such methods and systems may include an augmented reality mobile device application with voice interactive and other features including user-selectable buttons. Such methods and systems provide rich real-time information to the user via composited media content, overlay imagery, and acoustic speakers. Composited media content may include interactive maps, calendaring functions, navigation information, and tools to assist with management of assignment information. The augmented reality methods and systems facilitate access to various locations and resources at a given workplace campus.
    Type: Grant
    Filed: April 11, 2019
    Date of Patent: June 22, 2021
    Assignee: PRO Unlimited Global Solutions, Inc.
    Inventors: Ted Sergott, Brad Martin
  • Patent number: 11023706
    Abstract: A measurement system includes a first distance calculation unit that searches for a corresponding region, indicating a same array as an array of codes indicated by a predetermined number of reference patterns included in a unit region set in the projection pattern, from a set of the codes, and calculates a distance from an irradiation reference surface of the projection pattern to each portion of the object on the basis of a search result of the corresponding region, and a second distance calculation unit that attempts to estimate a distance for the defective portion for which the first distance calculation unit is not able to calculate the distance by reconstructing an incomplete code corresponding to the defective portion using peripheral information in the input image.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: June 1, 2021
    Assignee: OMRON Corporation
    Inventors: Hitoshi Nakatsuka, Toyoo Iida
  • Patent number: 11016176
    Abstract: A method, device and system for generating a transformation function, mapping position detections of objects in a scene, detected with a positioning device, to a graphical representation of the scene. The teachings enable the positions detected by the positioning device to be mapped to the graphical representation of the monitored scene without the need for previous references to geographical coordinate systems for the positioning device and the graphical representation of the scene. Virtually any type of image may hence be used as a graphical representation of the scene, even hand-drawn sketches.
    Type: Grant
    Filed: April 4, 2018
    Date of Patent: May 25, 2021
    Assignee: Axis AB
    Inventors: Aras Papadelis, Peter Henricsson, Mikael Göransson
  • Patent number: 11017552
    Abstract: A measurement method and apparatus are provided. The measurement method is applicable to an image acquisition device, and includes: acquiring image data to generate an image data file (S101); capturing an object to be measured in an image corresponding to the image data file (S102); obtaining a first distance between a horizontal line going through a lowest point of the object to be measured in the image and a horizontal line going through a center point of the image (S103); and calculating a second distance between the object to be measured and the image acquisition device based on the first distance, an installation height of the image acquisition device and a pitch angle of the image acquisition device (S104). Compared with the related art, the image acquisition device can measure the distance to an object to be measured while achieving relatively low production cost and easy installation. As a result, actual demands can be better satisfied.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: May 25, 2021
    Assignee: HANGZHOU HIKVISION DIGITAL TECHNOLOGY CO., LTD
    Inventors: Xiaowang Cai, Fei Xiao, Hai Yu, Meng Fan
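The geometry behind step S104 above is the classic single-camera ground-plane ranging setup: a camera mounted at height H, pitched down by an angle theta, sees a ground point d pixels below the image center (with focal length f in pixels) at ground distance H / tan(theta + atan(d / f)). A sketch under that standard model; the exact formula in the patent may differ, and all numbers below are illustrative:

```python
import math

# Sketch: estimate ground distance to an object from its pixel offset below
# the image center, the camera's installation height, and its pitch angle.

def ground_distance(d_pixels, height_m, pitch_rad, focal_pixels):
    angle = pitch_rad + math.atan(d_pixels / focal_pixels)
    return height_m / math.tan(angle)

# Camera 2 m up, pitched 45 degrees down; a point imaged exactly at the
# image center lies 2 m away along the ground:
d = ground_distance(d_pixels=0.0, height_m=2.0, pitch_rad=math.pi / 4,
                    focal_pixels=800.0)
```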
  • Patent number: 11017595
    Abstract: Improved techniques for performing object segmentation are disclosed. Surface reconstruction (SR) data corresponding to an environment is accessed. This SR data is used to generate a detailed three-dimensional (3D) representation of the environment. The SR data is also used to infer a high-level 3D structural representation of the environment. The high-level 3D structural representation is inferred using machine learning that is performed on the surface reconstruction data to identify a structure of the environment. The high-level 3D structural representation is then cut from the detailed 3D representation. This cutting process generates a clutter mesh comprising objects that remain after the cut and that are distinct from the structure. Object segmentation is then performed on the remaining objects to identify those objects.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: May 25, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Yuri Pekelny, Rahul Sawhney, Muhammad Jabir Kapasi, Szymon P. Stachniak, Michelle Lynn Brook
  • Patent number: 11017253
    Abstract: Provided are a liveness detection method and apparatus, an electronic device and a storage medium. The method includes: in a case of satisfying a liveness detection starting condition, acquiring (S110) an image frame in a video in real time; recognizing (S120) at least two organ regions of a user in the image frame, and updating a feature value set corresponding to each recognized organ region according to a feature value calculated based on the recognized organ region corresponding to the feature value set; and performing (S130) a liveness detection on the user according to data features in a combination set formed by at least two feature value sets corresponding to the at least two organ regions and extremum conditions respectively corresponding to the at least two feature value sets.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: May 25, 2021
    Assignee: Beijing Bytedance Network Technology Co., Ltd.
    Inventor: Xu Wang
  • Patent number: 11002538
    Abstract: A distance measurement device includes a first acquisition unit configured to acquire distance information on the basis of a plurality of images captured at different viewpoints, a second acquisition unit configured to acquire correction information of the distance information on the basis of a plurality of images captured at a timing different from the plurality of images used by the first acquisition unit, and a correction unit configured to correct the distance information on the basis of the correction information.
    Type: Grant
    Filed: October 18, 2018
    Date of Patent: May 11, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Kazuya Nobayashi
  • Patent number: 10991106
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for on-shelf merchandise detection are provided. One of the methods includes: obtaining a plurality of depth images associated with a shelf from a first camera; obtaining a plurality of images from one or more second cameras associated with each of a plurality of tiers of the shelf; detecting motions of a user's hand comprising reaching into and moving away from the shelf; determining one of the tiers of the shelf associated with the detected motions, a first point of time associated with reaching into the shelf, and a second point of time associated with moving away from the shelf; identifying a first image captured before the first point in time and a second image captured after the second point in time; and comparing the first image and the second image to determine one or more changes to merchandise.
    Type: Grant
    Filed: May 15, 2020
    Date of Patent: April 27, 2021
    Assignee: ADVANCED NEW TECHNOLOGIES CO., LTD.
    Inventors: Xiaobo Zhang, Zhangjun Hou, Xudong Yang, Xiaodong Zeng
  • Patent number: 10983530
    Abstract: The present disclosure discloses a method and an Electronic Control Unit (ECU) (101) of an autonomous vehicle for determining an accurate position. The ECU (101) determines a centroid coordinate from Global Positioning System (GPS) points relative to the autonomous vehicle and identifies an approximate location and orientation of the vehicle on a pre-generated map based on the centroid coordinate and Inertial Measurement Unit (IMU) data. The distance and direction of surrounding static infrastructure are identified from the location and orientation of the autonomous vehicle based on road-boundary analysis and data associated with objects adjacent to the autonomous vehicle. A plurality of lidar reflection reference points are identified within the distance and direction of the static infrastructure based on the heading direction of the autonomous vehicle. Positions of the lidar reflection reference points are detected from iteratively selected shift positions from the centroid coordinate.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: April 20, 2021
    Assignee: Wipro Limited
    Inventors: Manas Sarkar, Balaji Sunil Kumar
  • Patent number: 10984235
    Abstract: Sensing of scene-based occurrences is disclosed. In one example, a vision sensor system comprises (1) dedicated computer vision (CV) computation hardware configured to receive sensor data from at least one sensor array and capable of computing CV features using readings from multiple neighboring sensor pixels and (2) a first processing unit communicatively coupled with the dedicated CV computation hardware. The vision sensor system is configured to, in response to processing of the one or more computed CV features indicating a presence of one or more irises in a scene captured by the at least one sensor array, generate data in support of iris-related operations to be performed by a second processing unit and send the generated data to the second processing unit.
    Type: Grant
    Filed: September 22, 2017
    Date of Patent: April 20, 2021
    Assignee: QUALCOMM Incorporated
    Inventors: Evgeni Gousev, Liang Shen, Victor Chan, Edwin Chongwoo Park, Xiaopeng Zhang
  • Patent number: 10979648
    Abstract: A method and a stereoscopic apparatus for determining an exposure time period for capturing images are provided. The apparatus comprises at least one image capturing device for capturing pairs of images and a processor configured to: calculate a texture-signal-to-noise ratio (TSNR) metric based on information derived from a pair of captured images; calculate an image saturation metric based on that pair of captured images; calculate a value for an exposure duration that will be implemented by the at least one image capturing device when another pair of images is captured; and provide the value of the calculated exposure time period to each image capturing device. Each image capturing device is configured to capture at least one image of the target while implementing the respective calculated value of the exposure time period provided to it.
    Type: Grant
    Filed: November 5, 2019
    Date of Patent: April 13, 2021
    Assignee: INUITIVE LTD.
    Inventors: Gilad Adler, Yaron Rashi
  • Patent number: 10977821
    Abstract: In one or more embodiments, a system for calibration between a camera and a ranging sensor comprises a ranging sensor to obtain ranging measurements for a target located at N number of locations with an emitter at M number of rotation positions. The system further comprises a camera to image the target to generate imaging measurements corresponding to the ranging measurements.
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: April 13, 2021
    Assignee: The Boeing Company
    Inventors: Michael B. Schwiesow, Anthony W. Baker
  • Patent number: 10977824
    Abstract: A positioning method and a positioning device are provided. The positioning method includes providing a map database, and obtaining a target image and querying the map database with the target image so as to determine a target coordinate corresponding to the target image.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: April 13, 2021
    Assignee: ACER INCORPORATED
    Inventor: Chia-Cheng Teng
  • Patent number: 10967707
    Abstract: An automatic ventilation system and method for a vehicle are disclosed. The system includes: a ventilator to control a ventilation operation of an indoor space of a second vehicle; an input unit to receive information of automatic ventilation for a first vehicle, the automatic ventilation automatically performed based on a change in CO2 concentration inside the first vehicle; and a controller to determine an initiation time point of an automatic ventilation operation of the second vehicle based on the received automatic ventilation information and to control the ventilator to ventilate the indoor space of the second vehicle at the determined initiation time point, thereby pleasantly ventilating the indoor space of the second vehicle with reference to data related to an automatic ventilation operation of the first vehicle, which includes a CO2 sensor.
    Type: Grant
    Filed: September 14, 2017
    Date of Patent: April 6, 2021
    Assignees: HYUNDAI MOTOR COMPANY, KIA MOTORS CORPORATION
    Inventors: Jang Yong Lee, Mi Seon Kim, Wan Lee, Kang Ju Cha
  • Patent number: 10967506
    Abstract: Example methods and systems for determining 3D scene geometry by projecting patterns of light onto a scene are provided. In an example method, a first projector may project a first random texture pattern having a first wavelength and a second projector may project a second random texture pattern having a second wavelength. A computing device may receive sensor data that is indicative of an environment as perceived from a first viewpoint of a first optical sensor and a second viewpoint of a second optical sensor. Based on the received sensor data, the computing device may determine corresponding features between sensor data associated with the first viewpoint and sensor data associated with the second viewpoint. And based on the determined corresponding features, the computing device may determine an output including a virtual representation of the environment that includes depth measurements indicative of distances to at least one object.
    Type: Grant
    Filed: November 30, 2017
    Date of Patent: April 6, 2021
    Assignee: X Development LLC
    Inventors: Gary Bradski, Kurt Konolige, Ethan Rublee
  • Patent number: 10964042
    Abstract: A detection device including: a detector that detects an object from one viewpoint; a reliability calculator that calculates reliability information on the object at the one viewpoint by using a detection result of the detector; and an information calculator that calculates shape information on the object at the one viewpoint by using the detection result of the detector and the reliability information, and calculates texture information on the object at the one viewpoint by using the detection result. The information calculator generates model information on the object at the one viewpoint based on the shape information and the texture information.
    Type: Grant
    Filed: February 26, 2016
    Date of Patent: March 30, 2021
    Assignee: NIKON CORPORATION
    Inventors: Takeaki Sugimura, Yoshihiro Nakagawa
  • Patent number: 10964040
    Abstract: An operating method includes generating a first depth map by at least a first depth capture device, generating a second depth map by at least a second depth capture device, performing image registration on the first depth map with the second depth map to obtain transformed coordinates in the second depth map corresponding to pixels in the first depth map, and aligning depth data of the first depth map and depth data of the second depth map to generate an optimized depth map according to the transformed coordinates in the second depth map corresponding to the pixels in the first depth map.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: March 30, 2021
    Assignee: ArcSoft Corporation Limited
    Inventors: Hao Sun, Jian-Hua Lin, Chung-Yang Lin
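    The fusion step this abstract describes can be illustrated with a minimal sketch. It assumes, purely for illustration, that the registration step has already produced a 3x3 homography mapping pixel coordinates of the first depth map into the second, and that zero marks an invalid depth; the function name and these conventions are not from the patent:

    ```python
    import numpy as np

    def fuse_depth_maps(depth1, depth2, transform):
        """Fuse two registered depth maps into one optimized map.

        `transform` is a 3x3 homography mapping (col, row, 1) pixel
        coordinates of depth1 into depth2. Where both maps hold a valid
        (non-zero) depth, the values are averaged; otherwise whichever
        valid value exists is kept.
        """
        h, w = depth1.shape
        fused = depth1.copy()
        for r in range(h):
            for c in range(w):
                # Transformed coordinates of this pixel in the second map.
                v = transform @ np.array([c, r, 1.0])
                c2, r2 = int(round(v[0] / v[2])), int(round(v[1] / v[2]))
                if 0 <= r2 < depth2.shape[0] and 0 <= c2 < depth2.shape[1]:
                    d1, d2 = depth1[r, c], depth2[r2, c2]
                    if d1 > 0 and d2 > 0:      # both valid: average
                        fused[r, c] = 0.5 * (d1 + d2)
                    elif d2 > 0:               # only the second map valid
                        fused[r, c] = d2
        return fused
    ```

    With an identity transform the sketch reduces to a per-pixel average of the two maps wherever both are valid.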
  • Patent number: 10939043
    Abstract: An image pickup apparatus capable of performing a zooming function without increasing the thickness of the apparatus, while also obtaining depth information, by using lens elements having different diameters, is provided. The image pickup apparatus includes lens elements, and image pickup regions respectively disposed in correspondence to the lens elements. At least two of the lens elements have different diameters, and at least two of the image pickup regions have different sizes. The smallest image pickup region is disposed with respect to the lens element having the largest diameter, and the largest image pickup region is disposed with respect to the lens element having the smallest diameter.
    Type: Grant
    Filed: September 26, 2018
    Date of Patent: March 2, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Kyong-tae Park
  • Patent number: 10927515
    Abstract: A self-propelled milling machine includes a controller which continuously locates an alterable position of a loading surface and of a slewable transport conveyor relative to a machine frame, or the position of the loading surface relative to the transport conveyor, and automatically controls one or more of the slewing angle, the elevation angle and the conveying speed of the transport conveyor, wherein discharged milling material impinges on pre-calculated points of impingement within the loading surface. The controller determines correction factors for the control parameter(s) as a function of a transverse inclination about the longitudinal central axis of the loading surface, a position angle between the longitudinal central axis of the loading surface and the longitudinal central axis of the transport conveyor or that of the machine frame, and/or the position of the pre-calculated point of impingement relative to an end of the loading surface lying on the longitudinal central axis.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: February 23, 2021
    Inventors: Cyrus Barimani, Christian Berning, Tobias Krista, Bernd Walterscheid
  • Patent number: 10930000
    Abstract: A method includes obtaining a disparity map based on stereoscopic image frames captured by stereoscopic cameras borne on a movable platform, determining a plurality of continuous regions in the disparity map that each includes a plurality of elements having disparity values within a predefined range, identifying a continuous sub-region including one or more elements having a highest disparity value among the elements within each continuous region as an object, and determining a distance between the object and the movable platform using at least the highest disparity value.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: February 23, 2021
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Ang Liu, Pu Xu
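    The final step of this abstract relies on the standard rectified-stereo triangulation relation Z = f * B / d (focal length times baseline over disparity), which is why the highest disparity within a region marks the nearest part of the object. A minimal sketch of that relation (units and parameter names are illustrative, not from the patent):

    ```python
    def disparity_to_distance(disparity_px, focal_length_px, baseline_m):
        """Distance to a point from its stereo disparity.

        For rectified stereo cameras: Z = f * B / d, where f is the focal
        length in pixels, B the baseline in meters, and d the disparity in
        pixels. Larger disparity means a closer point, so the element with
        the highest disparity in a region is the nearest.
        """
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_length_px * baseline_m / disparity_px
    ```

    For example, with a 700 px focal length and a 12 cm baseline, a 42 px disparity corresponds to a distance of 2 m.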
  • Patent number: 10930059
    Abstract: A method and an apparatus for processing a 3D scene are disclosed. A reference image representative of an image of the scene captured under ambient lighting is determined. A texture-free map is determined from said reference image and an input image of the scene. The 3D scene is then processed using the determined texture-free map.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: February 23, 2021
    Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
    Inventors: Salma Jiddi, Gregoire Nieto, Philippe Robert
  • Patent number: 10917627
    Abstract: A method and system for capturing and generating a three-dimensional image of a target using a single camera. The system has at least one light source whose intensity can be adjusted, at least one image recorder that captures images of a target that has a plurality of obstacles, and at least one control unit (not shown in the figures) that controls the light source by increasing or decreasing its intensity within a time period, controls the image recorder so as to capture a plurality of images of said target within said time period, and determines the depth of the obstacles so as to capture and generate a three-dimensional image by comparing the illumination level change of the obstacles between captured images within the time period.
    Type: Grant
    Filed: June 3, 2016
    Date of Patent: February 9, 2021
    Inventor: Utku Buyuksahin
  • Patent number: 10915783
    Abstract: An imaging device may capture images of a scene, where the scene includes retroreflective materials. Where visual images and depth images are captured from a scene, and the depth images have ratios of supersaturated pixels that are less than a predetermined threshold, a location map of the scene is generated or updated based on the depth images. Where the ratios are greater than the predetermined threshold, the location map of the scene is generated or updated based on the visual images. Additionally, where each of a plurality of imaging devices detects concentrations of supersaturated pixels beyond a predetermined threshold or limit within their respective fields of view, an actor present on the scene may be determined to be wearing retroreflective material, or otherwise designated as a source of the supersaturation, and tracked within the scene based on coverage areas that are determined to have excessive ratios of supersaturated pixels.
    Type: Grant
    Filed: December 14, 2018
    Date of Patent: February 9, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Samuel Nathan Hallman, Petko Tsonev, Michael Francis O'Malley, Jayakrishnan Eledath, Jue Wang, Tian Lan
  • Patent number: 10909395
    Abstract: An object detection apparatus is provided with: an imager configured to image surroundings of a subject vehicle and to obtain a surrounding image; an object detector configured to detect an interested-object from the surrounding image and to output first image coordinates, which indicate a position of the detected interested-object on the surrounding image; a calculator configured to associate the interested-object with one or more coordinate points out of a plurality of coordinate points, each of which indicates three-dimensional coordinates of respective one of a plurality of points on a road, on the basis of the first image coordinates and a position of the subject vehicle, and configured to calculate at least one of a position of the interested-object in real space and a distance to the interested-object from the subject vehicle on the basis of the position of the subject vehicle and the associated one or more coordinate points.
    Type: Grant
    Filed: April 5, 2018
    Date of Patent: February 2, 2021
    Assignee: Toyota Jidosha Kabushiki Kaisha
    Inventor: Mineki Soga
  • Patent number: 10891481
    Abstract: Automated detection of features and/or parameters within an ocean environment using image data. In an embodiment, captured image data is received from ocean-facing camera(s) that are positioned to capture a region of an ocean environment. Feature(s) are identified within the captured image data, and parameter(s) are measured based on the identified feature(s). Then, when a request for data is received from a user system, the requested data is generated based on the parameter(s) and sent to the user system.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: January 12, 2021
    Assignee: SURFLINEWAVETRAK, INC.
    Inventor: Benjamin Freeston
  • Patent number: 10885761
    Abstract: In an example, an apparatus includes a first sensor, a second sensor, and an integrated management system. The first sensor is for capturing a first set of images of a calibration target that is placed in a monitored site, wherein the first sensor has a first position in the monitored site, and wherein a physical appearance of the calibration target varies when viewed from different positions within the monitored site. The second sensor is for capturing a second set of images of the calibration target, wherein the second sensor has a second position in the monitored site that is different from the first position. The integrated management system is for determining a positional relationship of the first sensor and the second sensor based on the first set of images, the second set of images, and knowledge of the physical appearance of the calibration target.
    Type: Grant
    Filed: October 3, 2018
    Date of Patent: January 5, 2021
    Assignee: Magik Eye Inc.
    Inventor: Akiteru Kimura
  • Patent number: 10885662
    Abstract: This application provides a depth-map-based ground detection method and apparatus. The method includes: screening first sample points according to depth values of points in a current depth map; determining space coordinates of the first sample points, and determining space heights of the first sample points according to current gravity acceleration information and the space coordinates; screening second sample points in the first sample points according to the space heights; determining an optimal ground equation according to space coordinates of the second sample points; and determining a ground point in the depth map by using the optimal ground equation. Because the accuracy of the sample points that undergo secondary screening is high, the ground detection precision is high. Because this application is not excessively dependent on a depth value, the calculation complexity is low, and operation can be performed on various hardware.
    Type: Grant
    Filed: January 15, 2019
    Date of Patent: January 5, 2021
    Assignee: Beijing Hjimi Technology Co., Ltd
    Inventors: Hang Wang, Zan Sheng, Shuo Li, Xiaojun Zhou, Li Li
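    The "optimal ground equation" step of this abstract amounts to fitting a plane to the screened sample points and then classifying depth-map points by their residual distance to that plane. A minimal sketch, assuming for illustration a least-squares fit of z = a*x + b*y + c (the patent's screening and optimality criteria are not reproduced here, and the tolerance is an invented parameter):

    ```python
    import numpy as np

    def fit_ground_plane(points):
        """Least-squares fit of the plane z = a*x + b*y + c to Nx3 points.

        Returns the coefficients (a, b, c). In the patented method these
        points would be the twice-screened sample points, which is why the
        fit is accurate; here any Nx3 array works.
        """
        A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
        coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
        return coeffs

    def ground_mask(points, coeffs, tol=0.05):
        """Mark points whose height above the fitted plane is within tol."""
        a, b, c = coeffs
        residual = np.abs(points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c))
        return residual < tol
    ```

    Points far above the plane (obstacles) fall outside the tolerance and are excluded from the ground.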
  • Patent number: 10874948
    Abstract: A method of mapping a virtual environment includes: obtaining a first sequence of video images output by a videogame title; obtaining a corresponding sequence of in-game virtual camera positions at which the video images were created; obtaining a corresponding sequence of depth buffer values for a depth buffer used by the videogame whilst creating the video images; and, for each of a plurality of video images and corresponding depth buffer values of the obtained sequences, obtaining mapping points corresponding to a selected predetermined set of depth values corresponding to a predetermined set of positions within a respective video image; where for each pair of depth values and video image positions, a mapping point has a distance from the virtual camera position based upon the depth value, and a position based upon the relative positions of the virtual camera and the respective video image position, thereby obtaining a map dataset of mapping points corresponding to the first sequence of video images.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: December 29, 2020
    Assignee: Sony Interactive Entertainment Europe Limited
    Inventors: Nicholas Anthony Edward Ryan, Hugh Alexander Dinsdale Spencer, Andrew Swann, Simon Andrew St John Brislin, Pritpal Singh Panesar
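    The core geometric idea, a mapping point placed at a distance given by the depth value along a direction given by the camera and the screen position, can be sketched in two dimensions (top-down map). The pinhole-style angle model and all parameter names below are illustrative assumptions, not the patented projection:

    ```python
    import math

    def map_point(cam_x, cam_y, cam_yaw_rad, pixel_x, depth, h_fov_rad, width):
        """Back-project one screen column and its depth-buffer value to a
        top-down map point.

        The ray direction is the camera yaw offset by the column's share of
        the horizontal field of view; the point lies at `depth` along that
        ray from the virtual camera position.
        """
        offset = (pixel_x / (width - 1) - 0.5) * h_fov_rad
        angle = cam_yaw_rad + offset
        return (cam_x + depth * math.cos(angle),
                cam_y + depth * math.sin(angle))
    ```

    Repeating this for a predetermined set of screen positions in every frame of the sequence accumulates the map dataset the abstract describes.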
  • Patent number: 10878590
    Abstract: Stereo image reconstruction can be achieved by fusing a plurality of proposal cost volumes computed from a pair of stereo images, using a predictive model operating on pixelwise feature vectors that include disparity and cost values sparsely sampled from the proposal cost volumes to compute disparity estimates for the pixels within the image.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: December 29, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sudipta Narayan Sinha, Marc André Léon Pollefeys, Johannes Lutz Schönberger
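    A cost volume, the structure this abstract builds on, stores a matching cost for every pixel at every candidate disparity. A minimal sketch using a per-pixel absolute-difference cost and a winner-takes-all readout (the patented method fuses several such volumes through a learned model, which is not reproduced here):

    ```python
    import numpy as np

    def cost_volume(left, right, max_disp):
        """Matching-cost volume for a rectified grayscale stereo pair.

        cost[d, r, c] = |left[r, c] - right[r, c - d]|; columns where the
        shifted pixel falls outside the right image stay at infinity.
        """
        h, w = left.shape
        vol = np.full((max_disp + 1, h, w), np.inf)
        for d in range(max_disp + 1):
            vol[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
        return vol

    def winner_takes_all(vol):
        """Pick, per pixel, the disparity with the lowest cost."""
        return np.argmin(vol, axis=0)
    ```

    Real systems aggregate costs over windows or paths before the readout; the raw per-pixel minimum shown here is only the simplest baseline.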
  • Patent number: 10877155
    Abstract: A survey data processing device includes a panoramic image data receiving unit, a point cloud data receiving unit, a similar part designation receiving unit, and a correspondence relationship determining unit. The panoramic image data receiving unit receives first and second panoramic images that are respectively obtained at a first point of view and a second point of view. The point cloud data receiving unit receives first point cloud data that is obtained by a first laser scanner and receives second point cloud data that is obtained by a second laser scanner. The similar part designation receiving unit receives designation of a part that is the same or similar between the first and second panoramic images. The correspondence relationship determining unit determines a correspondence relationship between the first and second point cloud data on the basis of the first and second point cloud data corresponding to the same or similar part.
    Type: Grant
    Filed: September 18, 2019
    Date of Patent: December 29, 2020
    Assignee: TOPCON CORPORATION
    Inventors: Daisuke Sasaki, Takahiro Komeichi
  • Patent number: 10878621
    Abstract: Exemplary embodiments of the present disclosure provide a method, apparatus and computer readable storage medium for creating a map and positioning a moving entity. A method for creating a map includes acquiring an image acquired when an acquisition entity is moving and location data and point cloud data associated with the image, the location data indicating a location where the acquisition entity is located when the image is acquired, the point cloud data indicating three-dimensional information of the image. The method further includes generating a first element in a global feature layer of the map based on the image and the location data. The method further includes generating a second element in a local feature layer of the map based on the image and the point cloud data, the first element corresponding to the second element.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: December 29, 2020
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventors: Wang Zhou, Miao Yan, Yifei Zhan, Xiong Duan, Changjie Ma, Xianpeng Lang, Yonggang Jin
  • Patent number: 10861165
    Abstract: A method to identify one or more depth-image segments that correspond to a predetermined object type is enacted in a depth-imaging controller operatively coupled to an optical time-of-flight (ToF) camera; it comprises: receiving depth-image data from the optical ToF camera, the depth-image data exhibiting an aliasing uncertainty, such that a coordinate (X, Y) of the depth-image data maps to a periodic series of depth values {Zk}; and labeling, as corresponding to the object type, one or more coordinates of the depth-image data exhibiting the aliasing uncertainty.
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: December 8, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Erroll William Wood, Michael Bleyer, Christopher Douglas Edmonds, Michael Scott Fenton, Mark James Finocchio, John Albert Judnich
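    The aliasing uncertainty this abstract names comes from the ToF camera measuring depth modulo its unambiguous range R = c / (2 * f_mod), so one wrapped measurement z is consistent with the whole periodic series {z + k*R}. A minimal sketch of enumerating that series (parameter names and the depth cutoff are illustrative; the patent's labeling of object-type coordinates is not reproduced):

    ```python
    def candidate_depths(wrapped_m, unambiguous_range_m, max_depth_m):
        """Enumerate the periodic series {Z_k} consistent with one wrapped
        ToF depth measurement.

        Each candidate is wrapped_m + k * R for k = 0, 1, 2, ..., up to the
        scene's assumed maximum depth.
        """
        depths = []
        k = 0
        while wrapped_m + k * unambiguous_range_m <= max_depth_m:
            depths.append(wrapped_m + k * unambiguous_range_m)
            k += 1
        return depths
    ```

    For a 40 MHz modulation frequency R is about 3.75 m, so a wrapped reading of 1.5 m within a 12 m scene yields the candidates 1.5 m, 5.25 m, and 9.0 m; the patented method labels object-type coordinates without first resolving which candidate is correct.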