Range Or Distance Measuring Patents (Class 382/106)
-
Patent number: 10529082
Abstract: A three-dimensional (3D) geometry measurement apparatus includes a projection part, a capturing part that generates a captured image of an object to be measured to which a projection image is projected, an analyzing part that obtains correspondences between projection pixel positions that are pixel positions of the projection image and captured pixel positions that are pixel positions of the captured image, a line identification part that identifies a first epipolar line of the capturing part corresponding to the captured pixel positions or a second epipolar line of the projection part corresponding to the projection pixel positions, a defective pixel detection part that detects defective pixels based on a positional relationship between the projection pixel positions and the first epipolar line or a positional relationship between the projection pixel positions and the second epipolar line, and a geometry identification part.
Type: Grant
Filed: June 8, 2018
Date of Patent: January 7, 2020
Assignee: MITUTOYO CORPORATION
Inventor: Kaoru Miyata
-
Patent number: 10529083
Abstract: A method for estimating the distance of an object from a moving vehicle is provided. The method includes detecting, by a camera module in one or more image frames, an object on a road on which the vehicle is moving. The method includes electronically determining a pair of lane markings associated with the road. The method further includes electronically determining a lane width between the pair of lane markings in an image coordinate of the one or more image frames. The lane width is determined at a location of the object on the road. The method includes electronically determining a real-world distance of the object from the vehicle based at least on the number of pixels corresponding to the lane width in the image coordinate, a pre-defined lane width associated with the road, and at least one camera parameter of the camera module.
Type: Grant
Filed: December 6, 2017
Date of Patent: January 7, 2020
Assignee: Lighmetrics Technologies Pvt. Ltd.
Inventors: Mithun Uliyar, Ravi Shenoy, Soumik Ukil, Krishna A G, Gururaj Putraya, Pushkar Patwardhan
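The geometry behind this kind of estimate can be sketched with a simple pinhole-camera relation: an object of known real-world width that spans fewer pixels is proportionally farther away. The function and parameter names below are illustrative, not taken from the patent.

```python
def estimate_distance_m(lane_width_px, real_lane_width_m, focal_length_px):
    """Pinhole-model distance estimate: a feature of known real width
    real_lane_width_m that appears lane_width_px pixels wide lies at
    roughly focal_length_px * real_width / pixel_width from the camera."""
    return focal_length_px * real_lane_width_m / lane_width_px


# A 3.5 m lane spanning 100 px, with a focal length of 1000 px,
# places the object at about 35 m.
print(estimate_distance_m(100, 3.5, 1000))
```

In practice the pre-defined lane width would come from road-class metadata and the focal length from camera calibration; this sketch only illustrates the scaling.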
-
Patent number: 10509983
Abstract: A technique for efficiently calibrating a camera is provided. Reference laser scan data is obtained by scanning a building 131 by a laser scanner 115, which is fixed on a vehicle 100 and has known exterior orientation parameters, while the vehicle 100 travels. An image of the building 131 is photographed at a predetermined timing by an onboard camera 113. Reference point cloud position data, in which the reference laser scan data is described in a coordinate system defined on the vehicle 100 at the predetermined timing, is calculated based on the trajectory the vehicle 100 has traveled. Matching points are selected between feature points in the reference point cloud position data and in the image. Exterior orientation parameters of the camera 113 are calculated based on relative relationships between the reference point cloud position data and image coordinate values in the image of the matching points.
Type: Grant
Filed: September 2, 2015
Date of Patent: December 17, 2019
Assignee: KABUSHIKI KAISHA TOPCON
Inventors: You Sasaki, Tadayuki Ito
-
Patent number: 10489639
Abstract: Methods, apparatus and systems for recognizing sign language movements using multiple input and output modalities. One example method includes capturing a movement associated with the sign language using a set of visual sensing devices, the set of visual sensing devices comprising multiple apertures oriented with respect to the subject to receive optical signals corresponding to the movement from multiple angles, generating digital information corresponding to the movement based on the optical signals from the multiple angles, collecting depth information corresponding to the movement in one or more planes perpendicular to an image plane captured by the set of visual sensing devices, producing a reduced set of digital information by removing at least some of the digital information based on the depth information, generating a composite digital representation by aligning at least a portion of the reduced set of digital information, and recognizing the movement based on the composite digital representation.
Type: Grant
Filed: January 25, 2019
Date of Patent: November 26, 2019
Assignee: AVODAH LABS, INC.
Inventors: Michael Menefee, Dallas Nash, Trevor Chandler
-
Patent number: 10482848
Abstract: One or more processors receive a map image file. The map image file includes geographical data. One or more processors convert the map image file into a raster map image file. The raster map image file includes one or more first raster images. The one or more first raster images include a first plurality of pixels. One or more processors label one or more of the first plurality of pixels with a first set of geographical coordinates using at least a portion of the geographical data included in the map image file.
Type: Grant
Filed: August 7, 2015
Date of Patent: November 19, 2019
Assignee: International Business Machines Corporation
Inventor: Rui He
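Labeling raster pixels with geographic coordinates typically reduces to an affine transform from pixel indices to longitude/latitude. The sketch below is one minimal way to do that, with illustrative names; real rasters carry this transform in their metadata (e.g. a GeoTIFF geotransform).

```python
def pixel_to_geo(col, row, origin_lon, origin_lat, deg_per_px_x, deg_per_px_y):
    """Map a raster pixel (col, row) to geographic coordinates with a
    simple affine transform. Latitude decreases as the row index grows,
    since image rows run top to bottom while latitude runs south to north."""
    lon = origin_lon + col * deg_per_px_x
    lat = origin_lat - row * deg_per_px_y
    return lon, lat
```

With an origin at (-75.0, 40.0) and 0.01 degrees per pixel, pixel (10, 20) maps to roughly (-74.9, 39.8).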
-
Patent number: 10480934
Abstract: An apparatus sequentially acquires, from a plurality of reference imaging devices for imaging a silhouette imaged with a base imaging device from viewpoints different from a viewpoint of the base imaging device, silhouette existing position information based on the reference imaging devices, and transforms the silhouette existing position information into a common coordinate system, where the silhouette existing position information indicates an existing position of the silhouette. The apparatus detects a silhouette absence range in which the silhouette does not exist, based on a result of comparison of the silhouette existing position information acquired this time and the silhouette existing position information acquired last time, and searches a range in which the silhouette exists, based on the silhouette absence range.
Type: Grant
Filed: December 5, 2018
Date of Patent: November 19, 2019
Assignee: FUJITSU LIMITED
Inventors: Tomonori Kubota, Yasuyuki Murata, Masahiko Toichi
-
Patent number: 10479376
Abstract: A self-driving vehicle (SDV) can operate by analyzing sensor data to autonomously control acceleration, braking, and steering systems of the SDV along a current route. The SDV includes a number of sensors generating the sensor data and a control system to detect conditions relating to the operation of the SDV, such as vehicle speed and local weather, select a set of sensors based on the detected conditions, and prioritize the sensor data generated from the selected set of sensors to control aspects relating to the operation of the SDV.
Type: Grant
Filed: March 23, 2017
Date of Patent: November 19, 2019
Assignee: UATC, LLC
Inventors: Eric Meyhofer, David Rice, Scott Boehmke, Carl Wellington
-
Patent number: 10473766
Abstract: A LiDAR system and scanning method creates a two-dimensional array of light spots. A scan controller causes the array of light spots to move back and forth so as to complete a scan of the scene. The spots traverse the scene in the first dimensional direction and in the second dimensional direction without substantially overlapping points in the scene already scanned by other spots in the array. An arrayed micro-optic projects the light spots. Receiver optics includes an array of optical detection sites. The arrayed micro-optic and the receiver optics are synchronously scanned while maintaining a one-to-one correspondence between light spots in the two-dimensional array and optical detection sites in the receiver optics.
Type: Grant
Filed: June 27, 2017
Date of Patent: November 12, 2019
Assignee: The Charles Stark Draper Laboratory, Inc.
Inventor: Steven J. Spector
-
Patent number: 10466715
Abstract: An apparatus for controlling narrow road driving of a vehicle includes: an image transform unit generating a depth map using depth information of an object in a front image of a road on which the vehicle travels and generating a height map of the front image by transforming the generated depth map; a map analysis unit recognizing the object and calculating a driving allowable area of the road based on the generated height map; a determination unit determining whether the road is a narrow road based on the calculated driving allowable area and, when the road is determined to be the narrow road, determining whether the vehicle is able to pass through the narrow road; and a signal processing unit controlling driving of the vehicle on the narrow road based on the determination of whether the vehicle is able to pass through the narrow road.
Type: Grant
Filed: July 20, 2017
Date of Patent: November 5, 2019
Assignees: Hyundai Motor Company, Kia Motors Corporation, Industry-University Cooperation Foundation Hanyang University
Inventors: Byung Yong You, Jong Woo Lim, Keon Chang Lee
-
Patent number: 10452936
Abstract: Exemplary embodiments are directed to biometric analysis systems generally including one or more illumination sources, a camera, and an analysis module. The illumination sources are configured to illuminate at least a portion of a face of a subject. The camera is configured to capture one or more images of the subject during illumination of the face of the subject. The analysis module is configured to analyze the one or more images captured by the camera to determine an indication of liveliness of the subject and prevent spoofing.
Type: Grant
Filed: July 27, 2017
Date of Patent: October 22, 2019
Assignee: Princeton Identity
Inventors: Barry E. Mapen, David Alan Ackerman, Michael J. Kiernan
-
Patent number: 10440347
Abstract: Depth information can be used to assist with image processing functionality, such as to generate modified image data including foreground image data and background image data having different amounts of blur. In at least some embodiments, depth information obtained from first image data associated with a first sensor and second image data associated with a second sensor, for example, can be used to determine the foreground object and background object(s) for an image or frame of video. The foreground object then can be located in later frames of video or subsequent images. Small offsets of the foreground object can be determined, and the offset accounted for by adjusting the subsequent frames or images and merging the adjusted subsequent frames or images. Such an approach provides modified image data including a foreground object having a first amount of blur (e.g., a lesser amount of blur) and/or background object(s) having a second amount of blur (e.g.
Type: Grant
Filed: January 2, 2017
Date of Patent: October 8, 2019
Assignee: AMAZON TECHNOLOGIES, INC.
Inventor: Dong Zhou
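The core idea of depth-guided blur compositing can be sketched in one dimension: pixels whose depth lies beyond a threshold are treated as background and blurred, while foreground pixels stay sharp. This is a minimal stand-in, assuming a per-pixel depth map is already available; the function name and 3-tap box blur are illustrative, not from the patent.

```python
def composite_blur(pixels, depths, depth_threshold):
    """Keep foreground pixels (depth <= threshold) sharp and replace
    background pixels with a 3-tap box blur of their neighborhood.
    A 1-D stand-in for depth-guided selective blurring."""
    n = len(pixels)
    out = []
    for i, (p, d) in enumerate(zip(pixels, depths)):
        if d <= depth_threshold:
            out.append(p)  # foreground: left untouched
        else:
            lo, hi = max(0, i - 1), min(n, i + 2)
            window = pixels[lo:hi]
            out.append(sum(window) / len(window))  # background: averaged
    return out
```

A real pipeline would blur in two dimensions with a proper kernel and feather the foreground/background boundary; the threshold split here is the simplest possible version.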
-
Patent number: 10434649
Abstract: A workpiece pick up system including: a three-dimensional sensor which is placed at an upper side of a container and which obtains a group of three-dimensional points each of which has height position information in the container, a group creating means which creates a plurality of three-dimensional point groups in each of which adjacent points satisfy a predetermined condition, an exclusion group determining means which determines that one or more three-dimensional point groups which satisfy at least one of a predetermined size reference, a predetermined area reference, and a predetermined length reference are excluded groups, and a workpiece detecting means which obtains a group of detection-purpose three-dimensional points for detecting workpieces by excluding points included in the excluded group from the group of three-dimensional points or the plurality of three-dimensional point groups, and which detects the workpieces to be picked up by using the group of detection-purpose three-dimensional points.
Type: Grant
Filed: February 15, 2018
Date of Patent: October 8, 2019
Assignee: Fanuc Corporation
Inventor: Toshiyuki Ando
-
Patent number: 10430659
Abstract: Embodiments of the present disclosure disclose a method and apparatus for urban road recognition based on a laser point cloud. The method comprises: constructing a corresponding road edge model according to the laser point cloud acquired by a laser sensor; determining a height of a mobile carrier provided with the laser sensor and constructing a corresponding road surface model based on the height and the laser point cloud; eliminating a road surface point cloud and a road edge point cloud in the laser point cloud according to the road edge model and the road surface model, segmenting a remaining laser point cloud using a point cloud segmentation algorithm, and recognizing an object corresponding to a segmenting result.
Type: Grant
Filed: December 8, 2015
Date of Patent: October 1, 2019
Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
Inventors: Yu Jiang, Yang Yan
-
Patent number: 10430047
Abstract: In some examples, an electronic device may reduce the resolution or otherwise downsize content items to conserve storage space on the electronic device. Further, the electronic device may offload full resolution versions of content items that have been downsized, and the full resolution versions may be stored at a cloud storage or other network storage location. Subsequently, if the user, an operating system module, or an application on the electronic device requests a higher resolution version of the downsized content item, the higher resolution version may be downloaded from the network storage to the electronic device. Various techniques may be used for determining a size or resolution of the content item to download from the network storage.
Type: Grant
Filed: August 26, 2015
Date of Patent: October 1, 2019
Assignee: Razer (Asia-Pacific) Pte. Ltd.
Inventors: Michael A. Chan, Justin Quan, Brian Chu, Aanchal Jain
-
Patent number: 10422879
Abstract: A time-of-flight distance measuring device divides a base exposure period into a plurality of sub exposure periods and holds without resetting an electric charge stored in the sub exposure period for a one round period which is one round of the plurality of sub exposure periods. The distance measurement value of short time exposure is acquired during the one round period and the distance measurement value of long time exposure is acquired during a plurality of the one round periods. Both of the distance measurement value of the long time exposure and the distance measurement value of the short time exposure can be acquired from the same pixel. With this, a dynamic range is expanded without being restricted by a receiving state of reflected light, optical design of received light, and an arrangement of pixels.
Type: Grant
Filed: November 12, 2015
Date of Patent: September 24, 2019
Assignee: DENSO CORPORATION
Inventor: Toshiaki Nagai
-
Patent number: 10410054
Abstract: An image processing method causing an image processing device to execute a process including: obtaining a first image and a second image captured at different timings for an identical inspection target by passing through an imaging range of an image sensor row; extracting respective feature points of the first image and the second image; associating the feature points of the first image and the feature points of the second image with each other; estimating a conversion formula to convert the feature point of the second image to the feature point of the first image based on a restraint condition of a quadratic equation, in accordance with respective coordinates of three or more sets of the feature points associated between the first image and the second image; and converting the second image into a third image corresponding to the first image based on the estimated conversion formula.
Type: Grant
Filed: September 29, 2017
Date of Patent: September 10, 2019
Assignee: FUJITSU LIMITED
Inventors: Yusuke Nonaka, Eigo Segawa
-
Patent number: 10401867
Abstract: A method for autonomously controlling a feed mixing vehicle, a vehicle having an autonomous controller, a computer program for carrying out the method, and a control device. The vehicle has a chassis, working elements for carrying out partial tasks, scanners/sensors for transmitting data, and a computer for controlling all the processes. The scanners/sensors acquire spatial data of the surroundings and generate therefrom a 3D map of the current geometry of the surroundings. The current geometry of the surroundings is placed in relationship with an area that is released to be traveled on by the autonomous vehicle, with the result that the navigability of the travel path of the autonomous vehicle is checked, and in the case of detected non-navigability the travel path is adapted autonomously to the requirements of the situational spatial surroundings and is replaced by an alternative travel path.
Type: Grant
Filed: November 17, 2015
Date of Patent: September 3, 2019
Assignee: B. Strautmann & Söhne GmbH u. Co. KG
Inventors: Wolfgang Strautmann, Johannes Marquering, Andreas Trabhardt
-
Patent number: 10403014
Abstract: An image processing apparatus includes a setting unit setting the number of pieces of image data to be selected, an identifying unit identifying, based on a photographing date and time of each piece of image data of an image data group, a photographing period of the image data group, a dividing unit dividing the identified photographing period into a plurality of photographing sections, a selection unit selecting image data from an image data group corresponding to a target photographing section based on predetermined criteria, and a generation unit generating a layout image by arranging an image based on the selected image data, wherein selection of image data is repeated by setting an unselected photographing section as a next target photographing section to select a number of pieces of image data corresponding to the set number, and wherein the number of photographing sections is determined according to the set number.
Type: Grant
Filed: August 3, 2016
Date of Patent: September 3, 2019
Assignee: Canon Kabushiki Kaisha
Inventors: Hiroyasu Kunieda, Masaaki Obayashi, Yoshinori Mizoguchi, Fumitaka Goto, Masao Kato, Maya Kurokawa
-
Patent number: 10399233
Abstract: A robot includes: a base; a robot arm rotatably provided around a rotation axis relative to the base; a mark which rotates in accordance with rotation of the robot arm; a capturing element which captures the mark; a memory which stores a reference image therein; and a determination section which determines a rotation state of the robot arm by template matching by subpixel estimation using the reference image and an image captured by the capturing element, in which a relationship of 2R/B ≤ L/X ≤ 100R/B is satisfied when a viewing field size per one pixel of the capturing element is B, a distance between the rotation axis and the center of the mark is R, the maximum distance between the rotation axis and a tip of the robot arm is L, and repetition positioning accuracy of the tip of the robot arm is X.
Type: Grant
Filed: January 17, 2018
Date of Patent: September 3, 2019
Assignee: Seiko Epson Corporation
Inventor: Daiki Tokushima
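The stated design relationship is easy to check numerically. The sketch below evaluates 2R/B ≤ L/X ≤ 100R/B for a candidate design; the function name is illustrative, and the sample values are made up for demonstration, not taken from the patent.

```python
def rotation_state_resolvable(B, R, L, X):
    """Check the design relationship 2R/B <= L/X <= 100R/B, where
    B is the viewing field size per pixel of the capturing element,
    R the distance from the rotation axis to the center of the mark,
    L the maximum distance from the rotation axis to the arm tip, and
    X the repetition positioning accuracy of the arm tip (same length units)."""
    lower = 2 * R / B
    upper = 100 * R / B
    return lower <= L / X <= upper
```

For example, with B = 1, R = 10 the bounds are [20, 1000], so a design with L/X = 100 satisfies the relationship while one with L/X = 10 does not.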
-
Patent number: 10402676
Abstract: An automated method performed by at least one processor running computer executable instructions stored on at least one non-transitory computer readable medium, comprising: classifying first data points identifying at least one man-made roof structure within a point cloud and classifying second data points associated with at least one of natural structures and ground surface to form a modified point cloud; identifying at least one feature of the man-made roof structure in the modified point cloud; and generating a roof report including the at least one feature.
Type: Grant
Filed: February 9, 2017
Date of Patent: September 3, 2019
Assignee: Pictometry International Corp.
Inventors: Yandong Wang, Frank Giuffrida
-
Patent number: 10393536
Abstract: A probe information storing unit is provided to store therein probe information items of a vehicle, and a voxel storing unit is provided to store therein voxels, which are defined and later described, in association with position information of the respective voxels. The voxels are defined in a three-dimensional space based on map data. The voxel storing unit and the probe information storing unit are referred to, and the probe information items are given, as votes, to the voxels that correspond to position information of the respective probe information items. A statistical process is executed on the probe information items given to each of the voxels, and the process results are associated with the respective voxels.
Type: Grant
Filed: January 29, 2018
Date of Patent: August 27, 2019
Assignee: TOYOTA MAPMASTER INCORPORATED
Inventors: Naoki Kitagawa, Yumiko Yamashita, Yoshihiro Ui
-
Patent number: 10390057
Abstract: Reception-side processing performed in a case where transmission of standard dynamic range video data and transmission of high dynamic range video data coexist in a time sequence is simplified. SDR transmission video data is converted into HDR transmission video data through dynamic range conversion. The SDR transmission video data is the one obtained by performing, on SDR video data, photoelectric conversion in accordance with an SDR photoelectric conversion characteristic. In this case, the conversion is performed on the basis of conversion information for converting a value of conversion data in accordance with the SDR photoelectric conversion characteristic into a value of conversion data in accordance with an HDR photoelectric conversion characteristic. A video stream is obtained by performing encoding processing on HDR transmission video data. A container having a predetermined format and including this video stream is transmitted.
Type: Grant
Filed: February 9, 2016
Date of Patent: August 20, 2019
Assignee: SONY CORPORATION
Inventor: Ikuo Tsukagoshi
-
Patent number: 10384609
Abstract: A vehicle hitch assistance system includes first and second cameras on an exterior of the vehicle and an image processor. The image processor is programmed to identify an object in image data received from the first and second cameras and determine a height and a position of the object using the image data. The system further includes a controller outputting a steering command to a vehicle steering system to selectively guide the vehicle away from or into alignment with the object.
Type: Grant
Filed: June 20, 2017
Date of Patent: August 20, 2019
Assignee: Ford Global Technologies, LLC
Inventors: Yi Zhang, Erick Michael Lavoie
-
Patent number: 10371818
Abstract: A system and method for forming an image of a target with a laser detection and ranging system. The system includes a laser transmitter and an array detector. The method includes transmitting a sequence of laser pulses; forming a plurality of point clouds, each point cloud corresponding to a respective transmitted laser pulse, each point in the point cloud corresponding to a point on a surface of the target; grouping the plurality of point clouds into a plurality of point cloud groups according to a contiguous subset of the sequence of laser pulses; forming a plurality of average point clouds, each of the average point clouds being the average of a respective group of the plurality of point cloud groups; and forming a first estimate of a six-dimensional velocity of the target, including three translational velocity components and three angular velocity components, from the plurality of average point clouds.
Type: Grant
Filed: April 18, 2017
Date of Patent: August 6, 2019
Assignee: RAYTHEON COMPANY
Inventors: Eran Marcus, Vitaliy M. Kaganovich
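The grouping-and-averaging step can be sketched directly: take the per-pulse point clouds in contiguous groups and average corresponding points across each group. This is a minimal illustration, assuming each cloud lists its points in the same order (so points correspond across pulses); names are illustrative.

```python
def average_point_clouds(clouds, group_size):
    """Group contiguous per-pulse point clouds and average them pointwise.
    Each cloud is a list of (x, y, z) tuples, with the i-th point of every
    cloud assumed to hit the same surface point. Averaging suppresses
    per-pulse noise before any downstream velocity estimation."""
    averaged = []
    for start in range(0, len(clouds), group_size):
        group = clouds[start:start + group_size]
        cloud = []
        for pts in zip(*group):  # corresponding points across the group
            cloud.append(tuple(sum(c) / len(group) for c in zip(*pts)))
        averaged.append(cloud)
    return averaged
```

The patent's six-dimensional velocity estimate (three translational plus three angular components) would then be fit across the sequence of averaged clouds; that fitting step is not shown here.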
-
Patent number: 10360449
Abstract: The systems may include dividing a digital map provided by a mapping system into a matrix having a plurality of cells; assigning a cell of the plurality of cells to encompass a geographic region of the digital map; calculating a number of sites of interest in the cell; creating a marker comprising a first count number representing the number of sites of interest in the cell; and sharing the marker with a browser for display on the digital map.
Type: Grant
Filed: December 22, 2016
Date of Patent: July 23, 2019
Assignee: AMERICAN EXPRESS TRAVEL RELATED SERVICES COMPANY, INC.
Inventors: Shivakumar Chandrashekar, Raju Rathi, Yogesh Tayal, Kunal Upadhyay, Purushotham Vunnam
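Counting sites of interest per grid cell is a simple spatial binning problem. The sketch below divides coordinate space into square cells and tallies the sites in each; the function name and cell scheme are illustrative, not from the patent.

```python
from collections import Counter

def count_sites_per_cell(sites, cell_size):
    """Bin (x, y) site coordinates into a grid of square cells of side
    cell_size and count the sites in each occupied cell. The returned
    Counter maps integer cell indices to per-cell counts, the raw
    material for count markers overlaid on a map."""
    return Counter((int(x // cell_size), int(y // cell_size)) for x, y in sites)
```

For instance, three sites at (0.5, 0.5), (0.9, 0.1), and (1.5, 0.5) with a cell size of 1.0 yield a count of 2 for cell (0, 0) and 1 for cell (1, 0).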
-
Patent number: 10360690
Abstract: In accordance with an embodiment, an information processing apparatus includes an image pickup device, a storage device, an operation device, and a processor. The operation device receives selection as to whether to register a feature amount of a reference commodity in a first dictionary file or register a feature amount of an exclusion object in a second dictionary file. The processor extracts a feature amount of an object included in a picked-up image output by the image pickup device. Further, the processor registers the extracted feature amount in the first dictionary file or the second dictionary file in accordance with the selection received by the operation device.
Type: Grant
Filed: December 12, 2017
Date of Patent: July 23, 2019
Assignee: Toshiba TEC Kabushiki Kaisha
Inventor: Hitoshi Iizaka
-
Patent number: 10356380
Abstract: An apparatus and a method for acquiring depth information are disclosed. To acquire depth information, illumination light of which an amount of light has been modulated by a modulation signal is emitted towards a subject, and an image is captured using an image sensor. An image signal is sequentially acquired from a plurality of rows while shifting a phase of the illumination light at a start exposure time of a row belonging to an intermediate region of a pixel array region of the image sensor. Depth information is calculated from image signals acquired during a plurality of frames while shifting a phase of the modulation signal.
Type: Grant
Filed: April 28, 2016
Date of Patent: July 16, 2019
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Jangwoo You, Yonghwa Park, Heesun Yoon, Myungjae Jeon
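In continuous-wave time-of-flight systems like this one, the phase shift between emitted and received modulated light maps to depth. A minimal sketch of the standard relation d = c·φ / (4π·f_mod), with illustrative names (the patent's row-wise phase-shifting scheme is not modeled here):

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_depth_m(phase_rad, mod_freq_hz):
    """Continuous-wave ToF depth: the round trip covers 2d, so a measured
    phase shift phi of the modulation maps to d = c * phi / (4 * pi * f)."""
    return SPEED_OF_LIGHT * phase_rad / (4 * math.pi * mod_freq_hz)
```

At a 20 MHz modulation frequency, a half-cycle phase shift (π radians) corresponds to a depth of about 3.75 m, half of the roughly 7.5 m unambiguous range at that frequency.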
-
Patent number: 10338221
Abstract: A device for extracting depth information according to one embodiment of the present invention comprises: a light outputting unit for outputting IR (InfraRed) light; a light inputting unit for inputting light reflected from an object after being output from the light outputting unit; a light adjusting unit for adjusting the angle of the light so as to radiate the light into a first area including the object, and then for adjusting the angle of the light so as to radiate the light into a second area; and a controlling unit for estimating the motion of the object by using at least one of the light inputted from the first area and the light inputted from the second area.
Type: Grant
Filed: January 28, 2015
Date of Patent: July 2, 2019
Assignee: LG INNOTEK CO., LTD.
Inventors: Myung Wook Lee, Sung Ki Jung, Gi Seok Lee, Kyung Ha Han, Eun Sung Seo, Se Kyu Lee
-
Patent number: 10325357
Abstract: Embodiments of the present application provide image processing methods and apparatus. An image processing method disclosed herein comprises: acquiring, from an image, two regions which have a textural similarity higher than a first value and have different depths; performing frequency-domain conversion on each of the regions, to obtain a frequency-domain signal of each region; and optimizing the image at least according to the frequency-domain signal of each region, the depth of each region, and a focusing distance of the image.
Type: Grant
Filed: November 20, 2015
Date of Patent: June 18, 2019
Assignee: BEIJING ZHIGUGUI TUO TECH CO., LTD.
Inventors: Hanning Zhou, Jia Liu
-
Patent number: 10325381
Abstract: A method of localizing portable apparatus (100) in an environment, the method comprising obtaining captured image data representing an image captured by an imaging device (106) associated with the portable apparatus, and obtaining mesh data representing a 3-dimensional textured mesh of at least part of the environment. The mesh data is processed to generate a plurality of synthetic images, each synthetic image being associated with a pose within the environment and being a simulation of an image that would be captured by the imaging device from that associated pose. The plurality of synthetic images is analyzed to find a said synthetic image similar to the captured image data, and an indication is provided of a pose of the portable apparatus within the environment based on the associated pose of the similar synthetic image.
Type: Grant
Filed: July 17, 2015
Date of Patent: June 18, 2019
Assignee: The Chancellor Masters and Scholars of The University of Oxford
Inventors: William Paul Maddern, Alexander Douglas Stewart, Paul Michael Newman, Geoffrey Michael Pascoe
-
Patent number: 10309774
Abstract: The invention provides a surveying instrument, which comprises a first optical axis deflecting unit disposed on a projection optical axis of a distance measuring light for deflecting an optical axis of the distance measuring light at a deflection angle and in a direction as required, a second optical axis deflecting unit disposed on a light receiving optical axis for deflecting the reflected distance measuring light at the same deflection angle and in the same direction as the first optical axis deflecting unit, and a projecting direction detecting unit for detecting a deflection angle and a deflecting direction by the first optical axis deflecting unit, wherein the distance measuring light is projected through the first optical axis deflecting unit and the reflected distance measuring light is received by the photodetection element through the second optical axis deflecting unit.
Type: Grant
Filed: August 29, 2018
Date of Patent: June 4, 2019
Assignee: Kabushiki Kaisha TOPCON
Inventors: Fumio Ohtomo, Kaoru Kumagai
-
Patent number: 10298913
Abstract: Scanning apparatus includes a base and a gimbal, including a shaft that fits into rotational bearings in the base and is configured to rotate through 360° about a gimbal axis relative to the base. A mirror assembly, fixed to the gimbal, includes a mirror, which is positioned on the gimbal axis and is configured to rotate about a mirror axis perpendicular to the gimbal axis. A transmitter directs pulses of optical radiation toward the mirror, which directs the optical radiation toward a scene. A receiver receives, via the mirror, the optical radiation reflected from the scene and outputs signals in response to the received radiation. Control circuitry drives the gimbal to rotate about the gimbal axis and the mirror to rotate about the mirror axis, and processes the signals output by the receiver in order to generate a three-dimensional map of the scanned area.
Type: Grant
Filed: May 4, 2017
Date of Patent: May 21, 2019
Assignee: APPLE INC.
Inventors: Alexander Shpunt, Yuval Gerson
-
Patent number: 10289911
Abstract: Architecture that detects entrances on building facades. In a first stage, scene geometry is exploited and the multi-dimensional problem is reduced down to a one-dimensional (1D) problem. Entrance hypotheses are generated by considering pairs of locations along lines exhibiting strong gradients in the transverse direction. In a second stage, a rich set of discriminative image features for entrances is explored according to constructed designs, specifically focusing on properties such as symmetry and color consistency, for example. Classifiers (e.g., random forest) are utilized to perform automatic feature selection and entrance classification. In another stage, a joint model is formulated in three dimensions (3D) for entrances on a given facade, which enables the exploitation of physical constraints between different entrances on the same facade in a systematic manner to prune false positives, and thereby select an optimum set of entrances on a given facade.
Type: Grant
Filed: October 23, 2017
Date of Patent: May 14, 2019
Assignee: Uber Technologies, Inc.
Inventors: Jingchen Liu, Vasudev Parameswaran, Thommen Korah, Varsha Hedau, Radek Grzeszczuk, Yanxi Liu
-
Patent number: 10281565
Abstract: A distance measuring device using a TOF (Time of Flight) scheme includes a controller, a light receiver, and a calculator. The controller generates a first exposure signal, a second exposure signal, a third exposure signal, and one particular exposure signal selected from the first, the second, and the third exposure signals. The light receiver performs a first exposing process, a second exposing process, a third exposing process, and a particular exposing process corresponding to the particular exposure signal out of the first, the second, and the third exposing processes. The calculator determines, based on a difference between an exposure amount obtained from the particular exposing process and an exposure amount obtained from an exposing process according to the one of the first, second, and third exposure signals corresponding to the particular exposure signal, whether or not the light emitted from the distance measuring device interferes with light emitted from another distance measuring device.
Type: Grant
Filed: November 22, 2016
Date of Patent: May 7, 2019
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Syoma Takahashi, Haruka Takano, Tomohiko Kanemitsu
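The interference test above boils down to comparing the exposure amount of the repeated ("particular") process with that of the process it duplicates. A minimal sketch, assuming a simple fixed tolerance (the patent does not disclose how the decision threshold is chosen):

```python
def interference_detected(particular_exposure, reference_exposure, tolerance):
    """Flag interference when the repeated exposure differs from its reference.

    Absent interfering light, the particular exposing process should reproduce
    the exposure amount of the process it duplicates; a large difference
    suggests light from another device reached the receiver.
    """
    return abs(particular_exposure - reference_exposure) > tolerance

clean = interference_detected(1002.0, 1000.0, tolerance=10.0)
interfered = interference_detected(1250.0, 1000.0, tolerance=10.0)
```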
-
Patent number: 10268926
Abstract: The present application discloses a method and an apparatus for processing point cloud data. The method in an embodiment comprises: presenting a to-be-labeled point cloud frame and a camera image formed by photographing the same scene at the same moment as the point cloud frame; determining, in response to an operation of selecting an object in the point cloud frame by a user, an area encompassing the selected object in the point cloud frame; projecting the area from the point cloud frame to the camera image, to obtain a projected area in the camera image; and adding a mark in the projected area, for labeling, by the user, the selected object in the point cloud frame according to the mark indicating an object in the camera image. This implementation can assist labeling personnel in rapidly and correctly labeling an object in a point cloud frame.
Type: Grant
Filed: January 19, 2017
Date of Patent: April 23, 2019
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Kaiwen Feng, Zhuo Chen, Bocong Liu, Yibing Liang, Yu Ma, Haifeng Wang
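The projection step above maps a region of the point cloud into the camera image. A minimal pinhole-model sketch, assuming known lidar-to-camera extrinsics (R, t) and a 3×3 intrinsic matrix K; the patent does not disclose its calibration model:

```python
import numpy as np

def project_points(points_lidar, R, t, K):
    """Project 3-D points (one per row) into pixel coordinates.

    R, t: assumed extrinsics taking lidar coordinates into the camera frame.
    K: assumed 3x3 intrinsic matrix. Points behind the camera are not culled here.
    """
    cam = points_lidar @ R.T + t          # into the camera frame
    uvw = cam @ K.T                       # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]       # perspective divide

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
px = project_points(np.array([[0.0, 0.0, 2.0]]), R, t, K)
```

A point on the optical axis projects to the principal point (320, 240), as expected.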
-
Patent number: 10256923
Abstract: Provided are a method and a device for generating a MIMO test signal configured to test the performance of a MIMO wireless terminal. With the method, the plurality of space propagation matrices of the MIMO testing system are acquired by performing phase shift transformation on the plurality of calibration matrices of the MIMO testing system; the target space propagation matrix whose isolation degree meets the preset condition is determined according to the maximum amplitude value of the elements in each inverse matrix of the plurality of space propagation matrices; and the transmitting signal for the test is generated by calculation from the throughput testing signal acquired by pre-calculation and the target calibration matrix corresponding to the target space propagation matrix.
Type: Grant
Filed: March 8, 2018
Date of Patent: April 9, 2019
Assignee: GENERAL TEST SYSTEMS INC.
Inventors: Yihong Qi, Wei Yu, Penghui Shen
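The selection step above scores each candidate propagation matrix by the maximum element amplitude of its inverse. A hedged sketch of that selection, assuming (as an illustration only) that a smaller maximum inverse-element amplitude corresponds to better isolation; the patent's actual preset condition is not disclosed:

```python
import numpy as np

def best_isolation(propagation_matrices):
    """Pick the space propagation matrix whose inverse has the smallest
    maximum element amplitude (used here as a proxy for isolation degree)."""
    scores = [np.abs(np.linalg.inv(m)).max() for m in propagation_matrices]
    return int(np.argmin(scores)), scores

well_conditioned = np.eye(2)
nearly_singular = np.array([[1.0, 0.99], [0.99, 1.0]])
idx, scores = best_isolation([nearly_singular, well_conditioned])
```

The nearly singular matrix has huge inverse elements, so the identity-like channel wins.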
-
Patent number: 10242504
Abstract: A head-mounted display device includes an image display having an optical element to transmit light from an outside scene, a display element to display an image, a camera, a memory configured to store data of a marker image, and one or more processors. The one or more processors display an image on the image display and derive at least one of a camera parameter of the camera and a spatial relationship, based at least on an image that is captured by the camera in a condition that allows a user to visually perceive that the marker image displayed by the image display and a real marker corresponding to the marker image align at least partially with each other. The real marker includes a first set of feature elements within a rectangle, and the marker image includes a second set of feature elements.
Type: Grant
Filed: September 6, 2018
Date of Patent: March 26, 2019
Assignee: SEIKO EPSON CORPORATION
Inventors: Jia Li, Guoyi Fu
-
Patent number: 10229502
Abstract: A depth detection apparatus is described which has a memory and a computation logic. The memory stores frames of raw time-of-flight sensor data received from a time-of-flight sensor, the frames having been captured by a time-of-flight camera in the presence of motion such that different ones of the frames were captured using different locations of the camera and/or with different locations of an object in a scene depicted in the frames. The computation logic has functionality to compute a plurality of depth maps from the stream of frames, whereby each frame of raw time-of-flight sensor data contributes to more than one depth map.
Type: Grant
Filed: February 3, 2016
Date of Patent: March 12, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Amit Adam, Sebastian Nowozin, Omer Yair, Shai Mazor, Michael Schober
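The key property above, each raw frame contributing to more than one depth map, is naturally realized by overlapping windows over the frame stream. A minimal sketch of that indexing, assuming a simple fixed-size sliding window (one of several ways to achieve the stated property; the patent does not prescribe this scheme):

```python
def depth_map_windows(num_frames, window):
    """Index windows for computing depth maps from a stream of raw TOF frames.

    With overlapping windows, every raw frame (except near the stream edges)
    contributes to `window` different depth maps.
    """
    return [list(range(s, s + window))
            for s in range(num_frames - window + 1)]

windows = depth_map_windows(num_frames=5, window=3)
# frame 2 sits in the middle of the stream, so it appears in all three windows
count = sum(1 for w in windows if 2 in w)
```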
-
Patent number: 10222876
Abstract: A system includes a first sensor configured to measure a location of a first device, a second sensor configured to measure an orientation of a second device, a display, and a processor. The processor is configured to control the first sensor to start a first measurement, calculate distances between the location of the first device and each of a plurality of installation locations associated with each of a plurality of objects, each of the plurality of objects being arranged virtually at each of a plurality of installation locations in a real space, control the second sensor to start a second measurement when a distribution of the distances indicates that any of the plurality of installation locations of the plurality of objects is included in a given range from the first device, and control the display to display an object according to results of the first measurement and the second measurement.
Type: Grant
Filed: March 2, 2017
Date of Patent: March 5, 2019
Assignee: FUJITSU LIMITED
Inventor: Susumu Koga
-
Patent number: 10207410
Abstract: A system and apparatus for navigating and tracking a robotic platform includes a non-contact velocity sensor module set positioned on the robotic platform for measuring a velocity of the robotic platform relative to a target surface. The non-contact velocity sensor module set may include a coherent light source that is emitted towards the target surface and reflected back to the coherent light source. Measuring the change in intensity of the reflected coherent light may be used to determine the velocity of the robotic platform based on its relationship with the principles of the Doppler frequency shift. A communication unit may also be utilized to transmit data collected from the non-contact velocity sensor set to a computer for data processing. A computer is then provided on the robotic platform to process data collected from the non-contact velocity sensor set.
Type: Grant
Filed: September 15, 2016
Date of Patent: February 19, 2019
Assignee: Physical Optics Corporation
Inventors: Paul Shnitser, David Miller, Christopher Thad Ulmer, Volodymyr Romanov, Victor Grubsky
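The velocity recovery above rests on the standard Doppler relation for light reflected straight back from a surface: the shift is doubled, so v = f_d · λ / 2. A minimal sketch, with the 1550 nm wavelength and the angle-of-incidence simplification being illustrative assumptions, not values from the patent:

```python
def velocity_from_doppler(freq_shift_hz, wavelength_m):
    """Surface-relative speed from the measured Doppler frequency shift.

    For light reflected directly back from the target surface the shift is
    doubled, so v = f_d * wavelength / 2 (angle of incidence ignored here).
    """
    return freq_shift_hz * wavelength_m / 2.0

# e.g. a 1550 nm source with a 1.29 MHz shift corresponds to about 1 m/s
v = velocity_from_doppler(1.29e6, 1550e-9)
```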
-
Patent number: 10205859
Abstract: Mounting device 11 captures a first image of reference mark 25 at a first position at which mounting head 22 is stopped under first imaging conditions, and captures a first image of component 60 under second imaging conditions. Then, mounting device 11 moves mounting head 22 and captures an image of reference mark 25 at a second position at which mounting head 22 is stopped under the first imaging conditions, and captures a second image of component 60 under the second imaging conditions. Further, mounting device 11 generates an image of component 60 picked up by mounting head 22 based on the positional relationship of reference mark 25 using the first image and the second image.
Type: Grant
Filed: July 6, 2015
Date of Patent: February 12, 2019
Assignee: FUJI CORPORATION
Inventors: Masafumi Amano, Kazuya Kotani
-
Patent number: 10198830
Abstract: An information processing apparatus includes a correlation unit that correlates distance information indicating a distance to an emission position of electromagnetic waves emitted in a shooting direction of a plurality of image pickup units with a first pixel in a first image that constitutes images taken by the image pickup units, the distance information being obtained based on reflected waves of the electromagnetic waves and the first pixel corresponding to the emission position of the electromagnetic waves, and a generation unit that generates a parallax image by using the distance information correlated with the first pixel for parallax computation of pixels in a second image that constitutes the images.
Type: Grant
Filed: December 22, 2014
Date of Patent: February 5, 2019
Assignee: Ricoh Company, Ltd.
Inventors: Hiroyoshi Sekiguchi, Soichiro Yokota, Shuichi Suzuki, Shin Aoki, Haike Guan, Jun Yoshida, Mitsuru Nakajima, Hideomi Fujimoto
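Using a measured distance to seed parallax computation relies on the standard stereo relation d = f · B / Z. A minimal sketch of that conversion, with the focal length and baseline values invented for illustration (the patent does not give its rig parameters):

```python
def disparity_from_distance(distance_m, focal_px, baseline_m):
    """Disparity (in pixels) that a stereo rig with the given focal length and
    baseline would observe for a point at the measured distance."""
    return focal_px * baseline_m / distance_m

# e.g. f = 1000 px, B = 0.2 m; a point ranged at 10 m maps to 20 px disparity
d = disparity_from_distance(10.0, focal_px=1000.0, baseline_m=0.2)
```

Seeding the search for neighbouring pixels with such a value narrows the matching range.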
-
Patent number: 10191183
Abstract: PROBLEM: Efficiently generating a digital terrain model (DTM) having high elevation accuracy and high point density, suitable for controlling pavement milling machines during road repairs. SOLUTION: A combination of motorized levelling and a Stop-Go mobile laser scanning system, including a train of three vehicles which are at a standstill during measurements and move in unison between measurements. The middle vehicle carries a laser scanner, an elevation sight, and a GNSS receiver. The front and rear vehicles each carry a levelling rod; the front vehicle also carries a GNSS receiver. During a measurement cycle, the laser scanner generates a point cloud, while the GNSS positions of the middle and front vehicles and the elevations at the respective positions of the front and rear vehicles are determined. After each measurement cycle, the vehicle train moves until the rear vehicle halts at the previous GNSS position of the front vehicle, and so on. When all measurement cycles are completed, the collected data is integrated and transformed into a DTM.
Type: Grant
Filed: July 8, 2015
Date of Patent: January 29, 2019
Assignee: R.O.G. s.r.o.
Inventors: Marek Prikryl, Lukas Kutil, Vitezslav Obr
-
Patent number: 10194079
Abstract: A vehicle surveillance system is disclosed and includes a plurality of image capturing units, an image processing unit, and a display unit for monitoring a position of at least one target around a vehicle and measuring a distance between the at least one target and the vehicle. The vehicle surveillance system utilizes a space domain determination module, a time domain determination module, and a ground surface elimination module to transform original images of the target around the vehicle into a bird's-eye-view panorama, and further detects variation of an optical pattern incident onto the target through a light beam emitted by a light source. The system thereby effectively informs the driver of the position of and distance to the target, and detects and captures in real time an image of any person approaching the vehicle, improving driving safety and protecting lives and property.
Type: Grant
Filed: December 29, 2016
Date of Patent: January 29, 2019
Assignee: H.P.B. OPTOELECTRONIC CO., LTD.
Inventors: Hsuan-Yueh Hsu, Szu-Hong Wang
-
Patent number: 10192332
Abstract: A method of controlling display of object data includes calculating distances from a terminal to the positions of multiple items of the object data, determining, by a processor, an area based on the distribution of the calculated distances, and displaying object data associated with a position in the determined area on a screen.
Type: Grant
Filed: March 17, 2016
Date of Patent: January 29, 2019
Assignee: FUJITSU LIMITED
Inventor: Susumu Koga
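The abstract above determines a display area from the distribution of the calculated distances. A hedged sketch of one way such an area could be derived, assuming (purely for illustration) a band centred on the median of the distances; the patent does not disclose how its area is computed:

```python
def select_display_band(distances, band_half_width):
    """Pick a distance band centred on the densest part of the distribution
    (approximated here by the median) and keep only the items inside it."""
    ordered = sorted(distances)
    median = ordered[len(ordered) // 2]
    lo, hi = median - band_half_width, median + band_half_width
    return [i for i, d in enumerate(distances) if lo <= d <= hi]

# most items cluster near 5-7 m; the outliers at 95 m and 400 m are dropped
idx = select_display_band([5.0, 95.0, 6.5, 7.0, 400.0], band_half_width=2.0)
```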
-
Patent number: 10180729
Abstract: A human machine interface (HMI) that minimizes the number of gestures for operation control and accurately recognizes user-intended operation commands by dividing a vehicle interior into a plurality of regions. The HMI receives an input of a gesture according to each region, and controls any one device according to the gesture. User convenience is improved because the gesture may be performed with minimal region restriction: even when the user performs a gesture in a boundary portion between two regions, or across multiple regions, the user's intention is recognized by identifying the operation state and operation pattern of the electronic device designated for each region.
Type: Grant
Filed: December 8, 2014
Date of Patent: January 15, 2019
Assignee: Hyundai Motor Company
Inventor: Hyungsoon Park
-
Patent number: 10168523
Abstract: An image generating system that generates a focal image of a target object on a virtual focal plane located between a plurality of illuminators and an image sensor (b) carries out the following (c) through (f) for each of a plurality of pixels constituting the focal image, (c) carries out the following (d) through (f) for each of the positions of the plurality of illuminators, (d) calculates a position of a target point, that is, the point at which a straight line connecting a position of the pixel on the focal plane and a position of the illuminator intersects a light receiving surface of the image sensor, (e) calculates a luminance value of the target point in the image captured under that illuminator on the basis of the position of the target point, (f) applies the luminance value of the target point to the luminance value of the pixel, and (g) generates the focal image of the target object on the focal plane by using the result of applying the luminance value at each of the plurality of pixels.
Type: Grant
Filed: December 22, 2016
Date of Patent: January 1, 2019
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Yumiko Kato, Yoshihide Sawada, Masahiro Iwasaki, Yasuhiro Mukaigawa, Hiroyuki Kubo
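Step (d) above is a line-plane intersection: the line through the illuminator and the focal-plane pixel is intersected with the sensor's light-receiving surface. A minimal sketch, assuming for convenience that the sensor surface is the plane z = 0 (a coordinate choice, not something the patent states):

```python
import numpy as np

def target_point_on_sensor(illuminator, focal_pixel):
    """Intersect the line through the illuminator and a focal-plane pixel with
    the sensor's light-receiving surface, taken here to be the plane z = 0."""
    illuminator = np.asarray(illuminator, dtype=float)
    direction = np.asarray(focal_pixel, dtype=float) - illuminator
    s = -illuminator[2] / direction[2]          # line parameter where z reaches 0
    return illuminator + s * direction

# illuminator 10 mm above the sensor, pixel on a virtual focal plane at z = 4 mm
hit = target_point_on_sensor([0.0, 0.0, 10.0], [1.0, 2.0, 4.0])
```

The luminance sampled at `hit` in that illuminator's captured image is then accumulated into the focal-plane pixel (steps (e) and (f)).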
-
Patent number: 10169678
Abstract: Techniques for identifying and labeling distinct objects within 3-D images of environments in which vehicles operate, to thereby generate training data used to train models that autonomously control and/or operate vehicles, are disclosed. A 3-D image may be presented from various perspective views (in some cases, dynamically), and/or may be presented with a corresponding 2-D environment image in a side-by-side and/or a layered manner, thereby allowing a user to more accurately identify groups/clusters of data points within the 3-D image that represent distinct objects. Automatic identification/delineation of various types of objects depicted within 3-D images, automatic labeling of identified/delineated objects, and automatic tracking of objects across various frames of a 3-D video are disclosed. A user may modify and/or refine any automatically generated information. Further, at least some of the techniques described herein are equally applicable to 2-D images.
Type: Grant
Filed: February 27, 2018
Date of Patent: January 1, 2019
Assignee: LUMINAR TECHNOLOGIES, INC.
Inventors: Prateek Sachdeva, Dmytro Trofymov
-
Patent number: 10169680
Abstract: Techniques for identifying and labeling distinct objects within 3-D images of environments in which vehicles operate, to thereby generate training data used to train models that autonomously control and/or operate vehicles, are disclosed. A 3-D image may be presented from various perspective views (in some cases, dynamically), and/or may be presented with a corresponding 2-D environment image in a side-by-side and/or a layered manner, thereby allowing a user to more accurately identify groups/clusters of data points within the 3-D image that represent distinct objects. Automatic identification/delineation of various types of objects depicted within 3-D images, automatic labeling of identified/delineated objects, and automatic tracking of objects across various frames of a 3-D video are disclosed. A user may modify and/or refine any automatically generated information. Further, at least some of the techniques described herein are equally applicable to 2-D images.
Type: Grant
Filed: February 27, 2018
Date of Patent: January 1, 2019
Assignee: LUMINAR TECHNOLOGIES, INC.
Inventors: Prateek Sachdeva, Dmytro Trofymov
-
Patent number: 10165247
Abstract: An image device with an image defocus function includes an image capture unit, a depth map generation unit, and a processor. The image capture unit captures an image corresponding to an object. The depth map generation unit generates a depth map corresponding to the object. The processor determines an integration block corresponding to each pixel image of the image according to a depth of the depth map corresponding to the pixel image and a predetermined depth corresponding to the pixel image, utilizes the integration block to generate a defocus color pixel value corresponding to the pixel image, and outputs a defocus image corresponding to the image according to the defocus color pixel values corresponding to all pixel images of the image, wherein each pixel image of the image corresponds to a pixel of the image capture unit.
Type: Grant
Filed: May 16, 2016
Date of Patent: December 25, 2018
Assignee: eYs3D Microelectronics, Co.
Inventor: Chi-Feng Lee
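The integration block above grows with the gap between a pixel's measured depth and its predetermined (in-focus) depth, which is essentially a depth-driven box blur. A minimal sketch, with the linear radius rule and the `px_per_unit` scale being illustrative assumptions (the patent does not give its block-size formula):

```python
import numpy as np

def defocus_pixel(image, depth, focus_depth, y, x, px_per_unit=2):
    """Average over an integration block whose radius grows with the gap
    between a pixel's depth and its in-focus depth (a simple box blur)."""
    r = int(abs(depth[y, x] - focus_depth[y, x]) * px_per_unit)
    y0, y1 = max(0, y - r), min(image.shape[0], y + r + 1)
    x0, x1 = max(0, x - r), min(image.shape[1], x + r + 1)
    return image[y0:y1, x0:x1].mean()

img = np.zeros((5, 5))
img[2, 2] = 9.0
depth = np.full((5, 5), 3.0)
focus = np.full((5, 5), 2.0)        # 1 unit out of focus -> radius-2 block
blurred_centre = defocus_pixel(img, depth, focus, 2, 2)
```

A pixel exactly at its predetermined depth gets a radius-0 block and stays sharp, while out-of-focus pixels are averaged over larger neighbourhoods.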