Range Or Distance Measuring Patents (Class 382/106)
  • Patent number: 10692449
    Abstract: An alignment method based on pixel color and an alignment system using the same are disclosed. The method includes: Step S1: retrieving a photoresistance eigenvalue of each subcolor resist illuminated by a light source and setting a pixel threshold according to the photoresistance eigenvalue; Step S2: performing a binarization process on a pixel according to the pixel threshold to obtain a binary pixel; and Step S3: calculating an alignment position according to the binary pixel and aligning the color resist according to the alignment position. The present invention avoids interference from the distribution of metal traces and patterns of a color filter on array (COA) product, improves the stability of measuring the COA product, aligns pixels with different shapes, and edits the computing logic for measured positions. When the shape of the pixel is irregular, appropriate logic is selected to define the measured positions. (An illustrative sketch of steps S1-S3 follows this entry.)
    Type: Grant
    Filed: January 17, 2018
    Date of Patent: June 23, 2020
    Assignee: SHENZHEN CHINA STAR OPTOELECTRONICS SEMICONDUCTOR DISPLAY TECHNOLOGY CO., LTD.
    Inventor: Daobo Yan
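    The abstract above describes a threshold-binarize-locate flow. A minimal Python/NumPy sketch follows, under stated assumptions: the threshold is derived from the measured eigenvalue by a simple margin factor, and the alignment position is taken as the centroid of the binary pixels; the margin, the centroid choice, and all names are illustrative, not taken from the patent.
    ```python
    import numpy as np

    def alignment_position(pixel_values: np.ndarray, eigenvalue: float, margin: float = 0.5):
        """Sketch of the three-step flow: derive a pixel threshold from a measured
        photoresistance eigenvalue (S1), binarize the pixel region (S2), and compute
        an alignment position from the binary pixels (S3). The margin factor and the
        centroid-based position are illustrative assumptions, not the patent's logic."""
        threshold = eigenvalue * margin            # S1: pixel threshold from the eigenvalue
        binary = pixel_values >= threshold         # S2: binarization
        ys, xs = np.nonzero(binary)
        if xs.size == 0:
            return None                            # nothing exceeded the threshold
        return float(xs.mean()), float(ys.mean())  # S3: centroid as the alignment position

    # Example: a bright rectangular resist region on a dark background
    img = np.zeros((20, 20))
    img[5:10, 8:14] = 200.0
    print(alignment_position(img, eigenvalue=200.0))   # -> (10.5, 7.0)
    ```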
  • Patent number: 10682060
    Abstract: A photoacoustic apparatus is used which includes: a receiving element receiving an acoustic wave from an object; a processor generating image data of the inside of the object; a changer changing irradiation positions of light on the object; and a wide-area image acquirer of the object, wherein the processor generates, for each of the irradiation positions, a local-area image of the object corresponding to that irradiation position, and, based on a comparison between a plurality of local-area images obtained for the irradiation positions and a comparison between the plurality of local-area images and the wide-area image, integrates the plurality of local-area images.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: June 16, 2020
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Kazuhiro Miyasa, Ryo Ishikawa
  • Patent number: 10684372
    Abstract: Disclosed are systems, methods, and computer-readable storage media to control a vehicle. In one aspect, a method includes capturing point cloud data representative of a surrounding of an autonomous vehicle with one or more LIDAR sensors, identifying a point in the point cloud data as a non-matching point in response to the point having no corresponding point in a map used to determine a position of the autonomous vehicle, determining whether the non-matching point is to be used in a determination of an overlap score based on one or more comparisons of the point cloud data and the map, determining the overlap score in response to the determining whether the non-matching point is to be used in the determination of the overlap score, determining a position of the autonomous vehicle based on the overlap score and the map, and controlling the autonomous vehicle based on the position. (A toy overlap-score sketch follows this entry.)
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: June 16, 2020
    Assignee: UATC, LLC
    Inventor: Kenneth James Jensen
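    The overlap score itself is not specified in the abstract; the toy version below (flattened to 2-D for brevity) treats a scan point as matching when a map point lies within a small tolerance, drops non-matching points that are far from any map point as a stand-in for the patent's per-point decision, and scores the rest. The tolerances, array shapes, and function names are assumptions.
    ```python
    import numpy as np

    def overlap_score(scan_pts, map_pts, match_tol=0.5, exclude_tol=3.0):
        """Toy overlap score between a LIDAR scan and a point map (both N x 2 arrays).

        A scan point 'matches' when its nearest map point lies within match_tol.
        Non-matching points farther than exclude_tol from any map point are dropped
        from the score entirely, standing in for the patent's per-point decision on
        whether a non-matching point should be used. Both tolerances are assumptions."""
        d = np.linalg.norm(scan_pts[:, None, :] - map_pts[None, :, :], axis=2)
        nearest = d.min(axis=1)                    # distance to the closest map point
        matched = nearest <= match_tol
        used = matched | (nearest <= exclude_tol)  # keep near-misses, drop far outliers
        if not used.any():
            return 0.0
        return matched.sum() / used.sum()

    rng = np.random.default_rng(0)
    map_pts = rng.uniform(0, 50, size=(400, 2))
    scan_pts = np.vstack([map_pts[:100] + rng.normal(0, 0.1, (100, 2)),  # agrees with the map
                          rng.uniform(100, 150, size=(20, 2))])          # clutter far off the map
    print(round(overlap_score(scan_pts, map_pts), 2))   # ~1.0: the clutter is excluded
    ```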
  • Patent number: 10681331
    Abstract: Systems and/or methods for finding, for a given pixel (or sub-pixel location) in an image acquired by the camera, which projector pixel (or more particularly, which projector column) primarily projected the light that was reflected from the object being scanned back to this camera position (e.g., what projector coordinates or projector column coordinate correspond(s) to these camera coordinates).
    Type: Grant
    Filed: February 6, 2018
    Date of Patent: June 9, 2020
    Assignee: MODit 3D, Inc.
    Inventor: James S. Page
  • Patent number: 10663588
    Abstract: A three-dimensional (3D) coordinate measurement device combines tracker and scanner functionality. The tracker function is configured to send light to a retroreflector and determine distance to the retroreflector based on the reflected light. The tracker is also configured to track the retroreflector as it moves, and to determine 3D coordinates of the retroreflector. The scanner is configured to send a beam of light to a point on an object surface and to determine 3D coordinates of the point. In addition, the scanner is configured to adjustably focus the beam of light.
    Type: Grant
    Filed: August 14, 2017
    Date of Patent: May 26, 2020
    Assignee: FARO TECHNOLOGIES, INC.
    Inventors: Kenneth Steffey, Robert E. Bridges, David H. Parker
  • Patent number: 10657647
    Abstract: An image processing system detects changes in objects, such as damage to automobiles, by comparing a base object model, which depicts the object in an expected condition, to one or more target images of the object in the changed condition. The image processing system first processes a target object image to detect one or more predefined landmarks in the target object image and corrects for camera and positional distortions by determining a camera model for the target object image based on the detected landmarks.
    Type: Grant
    Filed: May 20, 2016
    Date of Patent: May 19, 2020
    Assignee: CCC INFORMATION SERVICES
    Inventors: Ke Chen, John L. Haller, Takeo Kanade, Athinodoros S. Georghiades
  • Patent number: 10648795
    Abstract: A distance measuring apparatus measures a distance to a target based on light reflected in response to a launched laser beam. The distance measuring process generates a difference binary image from first and second range images, which are generated in states without and with the target in front of a background, respectively, and which represent distances to each of the range measurement points. From the non-background region of the difference binary image, made up of non-background points, a first region larger than a first threshold is extracted. For each point within the first region, adjacent points on the second range image having close distance values are grouped to extract second regions corresponding to the groups, and a third region smaller than a second threshold is extracted from the second regions; each point within the third region is judged to be edge noise. (A simplified sketch of this flow follows this entry.)
    Type: Grant
    Filed: September 11, 2017
    Date of Patent: May 12, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Satoru Ushijima
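    A simplified sketch of that flow, assuming dense range images as NumPy arrays and using SciPy connected-component labeling with a depth-bin quantization in place of the patent's grouping of adjacent, similar-distance points; all thresholds are illustrative assumptions.
    ```python
    import numpy as np
    from scipy import ndimage

    def edge_noise_mask(bg_range, tgt_range, diff_tol=0.2,
                        min_region_px=50, depth_tol=0.1, max_noise_px=10):
        """Simplified edge-noise test over two dense range images (metres):
        bg_range without the target, tgt_range with the target in front of the
        background. All thresholds are illustrative assumptions."""
        # Difference binary image: pixels whose range changed when the target appeared.
        non_background = np.abs(tgt_range - bg_range) > diff_tol
        # First regions: connected non-background blobs large enough to be the target.
        labels, n = ndimage.label(non_background)
        sizes = ndimage.sum(non_background, labels, index=range(1, n + 1))
        first = np.isin(labels, [i + 1 for i, s in enumerate(sizes) if s > min_region_px])
        # Second regions: within the first region, regroup adjacent pixels whose depths
        # agree (approximated here by quantizing depth and re-labelling per depth bin).
        noise = np.zeros_like(first)
        depth_bins = np.round(tgt_range / depth_tol).astype(int)
        for b in np.unique(depth_bins[first]):
            grp_labels, m = ndimage.label(first & (depth_bins == b))
            grp_sizes = ndimage.sum(grp_labels > 0, grp_labels, index=range(1, m + 1))
            # Third regions: tiny same-depth fragments are judged to be edge noise.
            small = [i + 1 for i, s in enumerate(grp_sizes) if s < max_noise_px]
            noise |= np.isin(grp_labels, small)
        return noise
    ```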
  • Patent number: 10650242
    Abstract: An information processing apparatus includes at least one processor causing the information processing apparatus to act as a first obtainment unit configured to execute processing for obtaining a first feature amount for each of a plurality of frames, a specification unit configured to specify a priority order of frames for obtaining a second feature amount different from the first feature amount based on the first feature amount obtained by the first obtainment unit, a second obtainment unit configured to execute processing for obtaining the second feature amount from a frame in accordance with the priority order, and a selection unit configured to select, based on the second feature amount obtained by the second obtainment unit, an image processing target frame. The number of frames from which the second feature amount is obtained is fewer than the number of the plurality of frames from which the first feature amount is obtained. (A sketch of this two-stage selection follows this entry.)
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: May 12, 2020
    Assignee: Canon Kabushiki Kaisha
    Inventors: Tatsuya Yamamoto, Sammy Chan
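    A minimal sketch of the idea that a cheap first feature ranks all frames and a costly second feature is computed only for the top-ranked ones. The concrete features (mean luminance and a gradient-energy sharpness proxy) and the counts are assumptions chosen for illustration, not the patent's feature amounts.
    ```python
    import numpy as np

    def select_frames(frames, num_expensive=5, num_selected=2):
        """Two-stage selection: a cheap first feature ranks every frame, a costly
        second feature is computed only for the top-ranked frames, and the final
        frames are chosen from those. The features used here (mean luminance and a
        gradient-energy sharpness proxy) are illustrative assumptions."""
        # First feature amount, obtained for all frames.
        first = np.array([f.mean() for f in frames])
        priority = np.argsort(-first)                  # priority order of frames

        def sharpness(f):                              # second feature amount (costly)
            gy, gx = np.gradient(f.astype(float))
            return float((gx ** 2 + gy ** 2).mean())

        candidates = priority[:num_expensive]          # fewer frames than the full set
        second = np.array([sharpness(frames[i]) for i in candidates])
        return candidates[np.argsort(-second)[:num_selected]]

    frames = [np.random.default_rng(i).uniform(0, 255, (120, 160)) for i in range(30)]
    print(select_frames(frames))   # indices of the image processing target frames
    ```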
  • Patent number: 10643364
    Abstract: In some implementations, a system may include a camera, a display, one or more memories, and one or more processors communicatively coupled to the one or more memories. The system may identify a horizontal plane in an image being captured by the camera and presented on the display, may determine a size of the horizontal plane, and may determine that the size of the horizontal plane satisfies a threshold. The system may designate the horizontal plane as a ground plane based on determining that the size of the horizontal plane satisfies the threshold. The system may output an indication that the horizontal plane has been designated as the ground plane.
    Type: Grant
    Filed: January 14, 2020
    Date of Patent: May 5, 2020
    Assignee: Capital One Services, LLC
    Inventors: Geoffrey Dagley, Jason Hoover, Qiaochu Tang, Stephen Wylie, Sunil Vasisht, Micah Price
  • Patent number: 10635946
    Abstract: The present application provides an eyeglass positioning method. The method includes: acquiring a real-time image shot by a shooting apparatus, and extracting a real-time face image from the real-time image using a face recognition algorithm; recognizing whether the real-time face image includes eyeglasses using a predetermined first classifier, and outputting a recognition result; and positioning the eyeglasses in the real-time face image using a predetermined second classifier and outputting a positioning result when the recognition result is that the real-time face image includes the eyeglasses. The present application also provides an electronic apparatus and a computer readable storage medium. The present application adopts two classifiers to detect images in eyeglass regions in the face images, thereby enhancing precision and accuracy of eyeglass detection.
    Type: Grant
    Filed: October 31, 2017
    Date of Patent: April 28, 2020
    Assignee: PING AN TECHNOLOGY (SHENZHEN) CO., LTD.
    Inventor: Lei Dai
  • Patent number: 10627228
    Abstract: The purpose of the present invention is to provide an object detection device conducive to carrying out control in an appropriate manner according to the surrounding environment, with consideration to the accuracy of locations of detected objects. The device is characterized by being provided with: a parallax information generation unit for generating parallax information from a plurality of parallax images acquired from a plurality of imaging units; an object detection unit for detecting objects contained in the parallax images; a location information generation unit for generating location information about the objects, on the basis of the parallax information; and a location accuracy information generation unit for generating location accuracy information pertaining to the accuracy of the location information, on the basis of the condition of generation of the parallax information.
    Type: Grant
    Filed: January 29, 2016
    Date of Patent: April 21, 2020
    Assignee: HITACHI AUTOMOTIVE SYSTEMS, LTD.
    Inventors: Masayuki Takemura, Takuma Osato, Takeshi Shima, Yuji Otsuka
  • Patent number: 10623661
    Abstract: An electronic device is provided. The electronic device includes a housing, a first image sensor configured to provide a first angle of view, a second image sensor configured to provide a second angle of view, and a processor. The processor is configured to obtain a first image having first resolution for a plurality of objects outside the electronic device, the plurality of objects corresponding to the first angle of view, to obtain a second image having second resolution for some objects corresponding to the second angle of view among the plurality of objects, to crop a third image having the second resolution, corresponding to at least part of the some objects, from the second image based on at least a depth-map using the first image and the second image, and to compose the third image with a region corresponding to the at least part of the some objects in the first image.
    Type: Grant
    Filed: August 24, 2017
    Date of Patent: April 14, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seung Won Oh, Jung Sik Park, Jin Kyung Lee
  • Patent number: 10621446
    Abstract: A method of optical flow estimation is provided that includes identifying a candidate matching pixel in a reference image for a pixel in a query image, determining a scaled binary pixel descriptor for the pixel based on binary census transforms of neighborhood pixels corresponding to scaling ratios in a set of scaling ratios, determining a scaled binary pixel descriptor for the candidate matching pixel based on binary census transforms of neighborhood pixels corresponding to scaling ratios in the set of scaling ratios, and determining a matching cost of the candidate matching pixel based on the scaled binary pixel descriptors. (A single-scale census-matching sketch follows this entry.)
    Type: Grant
    Filed: August 23, 2017
    Date of Patent: April 14, 2020
    Assignee: TEXAS INSTRUMENTS INCORPORATED
    Inventors: Hrushikesh Tukaram Garud, Manu Mathew, Soyeb Noormohammed Nagori
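    A sketch of a census-based matching cost, reduced to a single scale for brevity (the patent's descriptor combines census transforms over a set of scaling ratios). The patch radius, image sizes, and function names are assumptions.
    ```python
    import numpy as np

    def census_descriptor(img, y, x, radius=2):
        """Binary census transform of the (2*radius+1)^2 neighbourhood of (y, x):
        one bit per neighbour, set when the neighbour is brighter than the centre."""
        patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
        return (patch > img[y, x]).astype(np.uint8).ravel()

    def matching_cost(query_img, ref_img, q_yx, r_yx, radius=2):
        """Hamming distance between the census descriptors of a query pixel and a
        candidate matching pixel, i.e. a census-based matching cost at one scale."""
        dq = census_descriptor(query_img, *q_yx, radius)
        dr = census_descriptor(ref_img, *r_yx, radius)
        return int(np.count_nonzero(dq != dr))

    rng = np.random.default_rng(1)
    ref = rng.integers(0, 256, (60, 80)).astype(np.uint8)
    qry = np.roll(ref, shift=3, axis=1)                  # scene shifted 3 px to the right
    print(matching_cost(qry, ref, (30, 43), (30, 40)))   # true candidate: cost 0
    print(matching_cost(qry, ref, (30, 43), (30, 50)))   # wrong candidate: larger cost
    ```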
  • Patent number: 10614290
    Abstract: The present invention provides an object position determination circuit including a receiving circuit, a detecting circuit and a calculating circuit. In the operations of the object position determination circuit, the receiving circuit is configured to receive an Nth frame and an (N+M)th frame of an image signal, where N is a positive integer, and M is a positive integer greater than one; the detecting circuit is configured to detect positions of an object in the Nth frame and the (N+M)th frame; and the calculating circuit is configured to generate a position of the object in an (N+M+A)th frame according to the positions of the object in the Nth frame and the (N+M)th frame, wherein A is a positive integer. (A linear-extrapolation sketch follows this entry.)
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: April 7, 2020
    Assignee: Realtek Semiconductor Corp.
    Inventors: Teng-Hsiang Yu, Yen-Hsing Wu
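    One plausible reading of generating the (N+M+A)th position from the Nth and (N+M)th positions is linear extrapolation of the per-frame motion; the patent does not mandate a specific formula, so the function below is only an illustration.
    ```python
    def predict_position(pos_n, pos_n_plus_m, M, A):
        """Predict the object position in frame N+M+A from its positions in frames
        N and N+M by linear extrapolation of the per-frame motion (one plausible
        reading of the abstract; the patent does not mandate a specific formula)."""
        vx = (pos_n_plus_m[0] - pos_n[0]) / M      # motion per frame, x
        vy = (pos_n_plus_m[1] - pos_n[1]) / M      # motion per frame, y
        return (pos_n_plus_m[0] + A * vx, pos_n_plus_m[1] + A * vy)

    # Object at (100, 50) in frame N and at (112, 56) in frame N+3, predicted for N+3+2:
    print(predict_position((100, 50), (112, 56), M=3, A=2))   # (120.0, 60.0)
    ```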
  • Patent number: 10616552
    Abstract: Methods, apparatuses and systems may provide for conducting a quality assessment of a depth localization mode, a color localization mode and an inertia localization mode, and selecting one of the depth localization mode, the color localization mode or the inertia localization mode as an active localization mode based on the quality assessment. Additionally, a pose of a camera may be determined relative to a three-dimensional (3D) environment in accordance with the active localization mode.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: April 7, 2020
    Assignee: Intel Corporation
    Inventors: Daniel J. Mirota, Samer S. Barakat, Haowei Liu, Duc Q. Pham, Mohamed Selim Ben Himane
  • Patent number: 10607096
    Abstract: Embodiments of a Z-dimension user-feedback biometric system are provided. In some embodiments, a camera captures subject images positioned along a plurality of Z-dimension positions, including a normal subject image for a mid-range of a capture volume and one or both of the following: (a) a close subject image for a front of the capture volume and (b) a far subject image for a back of the capture volume. In some embodiments, a processing element can be configured to create a normal display image, as well as a close display image (with a first exaggerated quality) and/or a far display image (with a second exaggerated quality). In some embodiments, the first exaggerated quality can be a positive exaggeration of a quality and the second exaggerated quality can be a negative exaggeration of the quality.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: March 31, 2020
    Assignee: Princeton Identity, Inc.
    Inventors: Barry E. Mapen, John Timothy Green, Kevin Richards, John Amedeo Santini, Jr.
  • Patent number: 10607310
    Abstract: An aerial vehicle may be outfitted with two or more digital cameras that are mounted to a track, a rail or another system for accommodating relative motion between the cameras. A baseline distance between the cameras may be established by repositioning one or more of the cameras. Images captured by the cameras may be processed to recognize one or more objects therein, and to determine ranges to such objects by stereo triangulation techniques. The baseline distances may be varied by moving one or more of the cameras, and ranges to objects may be determined using images captured by the cameras at each of the baseline distances. (A stereo-triangulation sketch follows this entry.)
    Type: Grant
    Filed: October 17, 2017
    Date of Patent: March 31, 2020
    Assignee: Amazon Technologies, Inc.
    Inventor: Conner Riley Thomas
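    The ranging step rests on standard stereo triangulation, Z = f * B / d: for a fixed range, a longer baseline B produces a larger disparity d, which is why repositioning the cameras helps with distant objects. The numbers below are illustrative, not taken from the patent.
    ```python
    def stereo_range(focal_px, baseline_m, x_left_px, x_right_px):
        """Classic stereo triangulation: range Z = f * B / disparity."""
        disparity = x_left_px - x_right_px
        if disparity <= 0:
            raise ValueError("object must appear further left in the left image")
        return focal_px * baseline_m / disparity

    # The same ~100 m object observed at two baseline settings (illustrative numbers):
    print(stereo_range(focal_px=1200, baseline_m=0.5, x_left_px=640, x_right_px=634))  # 100.0 m, 6 px disparity
    print(stereo_range(focal_px=1200, baseline_m=2.0, x_left_px=640, x_right_px=616))  # 100.0 m, 24 px disparity
    ```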
  • Patent number: 10595003
    Abstract: The stereo camera apparatus includes a stereo camera and a first controller configured to detect a target object to be detected based on at least one first region among a plurality of regions located at different positions in a predetermined direction in an image captured by the stereo camera, generate interpolation pixels by performing pixel interpolation based on at least original pixels that constitute an image of the detected object, and detect distance from a reference position to a position of the detected object based on at least the interpolation pixels. As a result, a stereo camera apparatus capable of detecting an object located far from the vehicle with high accuracy while suppressing the processing load, and a vehicle equipped with such an apparatus, can be provided.
    Type: Grant
    Filed: October 23, 2015
    Date of Patent: March 17, 2020
    Assignee: KYOCERA Corporation
    Inventor: Naoto Ohara
  • Patent number: 10591291
    Abstract: A distance measuring device is coupled to a camera and to a rotator for driving the camera to rotate. The camera includes a photo sensor and a lens. The distance measuring device includes a distance obtaining module, an angle obtaining module, and a computing module coupled to the distance obtaining module and the angle obtaining module. The distance obtaining module is configured to obtain an unaligned target image of a target captured by the camera. A projection of the unaligned target image on a reference plane does not overlap with a projection of a center point of the photo sensor on the reference plane. The reference plane is perpendicular to a rotation axis of the camera. The distance obtaining module is further configured to calculate a projection distance between the projection of the unaligned target image and the projection of the center point.
    Type: Grant
    Filed: January 3, 2017
    Date of Patent: March 17, 2020
    Assignees: BOE TECHNOLOGY GROUP CO., LTD., PEKING UNIVERSITY
    Inventors: Yanbing Wu, Xing Zhang, Yi Wang
  • Patent number: 10586456
    Abstract: A system and method for determining car to lane distance is provided. In one aspect, the system includes a camera configured to generate an image, a processor, and a computer-readable memory. The processor is configured to receive the image from the camera, generate a wheel segmentation map representative of one or more wheels detected in the image, and generate a lane segmentation map representative of one or more lanes detected in the image. For at least one of the wheels in the wheel segmentation map, the processor is also configured to determine a distance between the wheel and at least one nearby lane in the lane segmentation map. The processor is further configured to determine a distance between a vehicle in the image and the lane based on the distance between the wheel and the lane.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: March 10, 2020
    Assignee: TuSimple
    Inventor: Panqu Wang
  • Patent number: 10565690
    Abstract: An extraction unit extracts, as candidate pixels of a disturbance object, a predetermined number of pixels (one or more) in ascending order of distance from each distance image acquired by a distance measurement sensor. A calculation unit calculates a feature value indicating a characteristic of temporal change between each candidate pixel in a distance image of a current frame and a corresponding candidate pixel in a distance image of a past frame. A removal unit specifies, as a pixel indicating a disturbance object, a candidate pixel whose feature value calculated by the calculation unit is larger than a predetermined reference feature value, and removes the specified pixel from the distance image of the current frame. (A sketch of this removal step follows this entry.)
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: February 18, 2020
    Assignees: Kobe Steel, Ltd., KOBELCO CONSTRUCTION MACHINERY CO., LTD.
    Inventor: Takashi Hiekata
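    A sketch of that removal step, assuming the temporal feature value is simply the absolute frame-to-frame change in distance; the candidate count, the reference feature value, and the NaN-based removal are all illustrative assumptions.
    ```python
    import numpy as np

    def remove_disturbance(current, previous, num_candidates=20, ref_feature=1.0):
        """Take the num_candidates closest pixels of the current distance image as
        disturbance candidates, use the magnitude of their frame-to-frame change as
        the temporal feature value, and blank out candidates whose feature exceeds
        the reference value. Feature definition and thresholds are assumptions."""
        work = current.copy()
        flat = np.argsort(current, axis=None)[:num_candidates]   # closest pixels first
        ys, xs = np.unravel_index(flat, current.shape)
        feature = np.abs(current[ys, xs] - previous[ys, xs])     # temporal change
        noisy = feature > ref_feature
        work[ys[noisy], xs[noisy]] = np.nan                      # remove disturbance pixels
        return work

    rng = np.random.default_rng(2)
    prev = rng.uniform(5.0, 10.0, (50, 50))            # a quiet scene 5-10 m away
    curr = prev + rng.normal(0, 0.02, prev.shape)
    curr[10, 10] = 0.8                                 # e.g. a raindrop close to the sensor
    print(np.isnan(remove_disturbance(curr, prev)[10, 10]))   # True: flagged and removed
    ```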
  • Patent number: 10546377
    Abstract: An image processing apparatus obtains, for each of a plurality of subjects, a data set including first shape data which indicates a shape of a subject measured in association with the subject in a first state, and second shape data which indicates a shape of the subject measured in association with the subject in a second state, obtains basis data required to express a deformation from the first state to the second state, based on the data sets for the plurality of subjects, and estimates, based on the generated basis data and data indicating a shape of a target subject measured in association with the target subject in the first state, a deformation from the first state to the second state in association with the target subject.
    Type: Grant
    Filed: August 4, 2017
    Date of Patent: January 28, 2020
    Assignee: CANON KABUSHIKI KAISHA
    Inventors: Ryo Ishikawa, Hiroyuki Yamamoto
  • Patent number: 10540778
    Abstract: The systems and methods disclosed herein provide determination of an orientation of a feature towards a reference target. As a non-limiting example, a system consistent with the present disclosure may include a processor, a memory, and a single camera affixed to the ceiling of a room occupied by a person. The system may analyze images from the camera to identify any objects in the room and their locations. Once the system has identified an object and its location, the system may prompt the person to look directly at the object. The camera may then record an image of the user looking at the object. The processor may analyze the image to determine the location of the user's head and, combined with the known location of the object and the known location of the camera, determine the direction that the user is facing. This direction may be treated as a reference value, or “ground truth.” The captured image may be associated with the direction, and the combination may be used as training input into an application.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: January 21, 2020
    Assignee: Intel Corporation
    Inventors: Glen J. Anderson, Giuseppe Raffa, Carl S. Marshall, Meng Shi
  • Patent number: 10540774
    Abstract: A structured light depth sensor for sensing a depth of an object, comprises: a projector for projecting structured lights with different codes to the object; a camera located on one side of the projector and configured for capturing the structured lights reflected by the object; a storage device for storing parameter information of the camera and distance information between the projector and the camera; and a processor electrically connected to the projector, the camera, and the storage device. The processor controls the projector to sequentially project the structured lights with different codes to the object, controls the camera to sequentially capture the structured lights reflected by the object, and calculates the depth of the object. A sensing method adapted for the structured light depth sensor is also provided.
    Type: Grant
    Filed: February 26, 2019
    Date of Patent: January 21, 2020
    Assignees: Interface Technology (ChengDu) Co., Ltd., INTERFACE OPTOELECTRONICS (SHENZHEN) CO., LTD., GENERAL INTERFACE SOLUTION LIMITED
    Inventor: Yi-San Hsieh
  • Patent number: 10534970
    Abstract: A system and method of inspection may include capturing image data by a stereo imaging device. A determination as to whether noise indicative of a transparent or specular object exists in the image data may be made. A report that a transparent or specular object was captured in the image data may be made.
    Type: Grant
    Filed: December 24, 2014
    Date of Patent: January 14, 2020
    Assignee: Datalogic IP Tech S.R.L.
    Inventor: Nicoletta Laschi
  • Patent number: 10529082
    Abstract: A three-dimensional (3D) geometry measurement apparatus includes a projection part, a capturing part that generates a captured image of an object to be measured to which a projection image is projected, an analyzing part that obtains correspondences between projection pixel positions that are pixel positions of the projection image and captured pixel positions that are pixel positions of the captured image, a line identification part that identifies a first epipolar line of the capturing part corresponding to the captured pixel positions or a second epipolar line of the projection part corresponding to the projection pixel positions, a defective pixel detection part that detects defective pixels based on a positional relationship between the projection pixel positions and the first epipolar line or a positional relationship between the projection pixel positions and the second epipolar line, and a geometry identification part.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: January 7, 2020
    Assignee: MITUTOYO CORPORATION
    Inventor: Kaoru Miyata
  • Patent number: 10529083
    Abstract: A method for estimating distance of an object from a moving vehicle is provided. The method includes detecting, by a camera module in one or more image frames, an object on a road on which the vehicle is moving. The method includes electronically determining a pair of lane markings associated with the road. The method further includes electronically determining a lane width between the pair of the lane markings in an image coordinate of the one or more image frames. The lane width is determined at a location of the object on the road. The method includes electronically determining a real world distance of the object from the vehicle based at least on the number of pixels corresponding to the lane width in the image coordinate, a pre-defined lane width associated with the road, and at least one camera parameter of the camera module. (A pinhole-camera sketch of this estimate follows this entry.)
    Type: Grant
    Filed: December 6, 2017
    Date of Patent: January 7, 2020
    Assignee: Lighmetrics Technologies Pvt. Ltd.
    Inventors: Mithun Uliyar, Ravi Shenoy, Soumik Ukil, Krishna A G, Gururaj Putraya, Pushkar Patwardhan
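    Under a pinhole-camera model with a flat road, a lane of known real width W metres that spans w pixels at the object's image row lies at roughly Z = f * W / w metres, where f is the focal length in pixels. The default lane width and focal length below are illustrative, not from the patent.
    ```python
    def object_distance_m(lane_width_px, lane_width_m=3.7, focal_length_px=1400):
        """Pinhole estimate: a lane of real width W metres spanning w pixels at the
        object's image row lies at roughly Z = f * W / w metres (flat road, no lens
        distortion). Default lane width and focal length are illustrative."""
        return focal_length_px * lane_width_m / lane_width_px

    print(round(object_distance_m(lane_width_px=130), 1))   # ~39.8 m
    print(round(object_distance_m(lane_width_px=52), 1))    # ~99.6 m
    ```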
  • Patent number: 10509983
    Abstract: A technique for efficiently calibrating a camera is provided. Reference laser scan data is obtained by scanning a building 131 by a laser scanner 115, which is fixed on a vehicle 100 and has known exterior orientation parameters, while the vehicle 100 travels. An image of the building 131 is photographed at a predetermined timing by an onboard camera 113. Reference point cloud position data, in which the reference laser scan data is described in a coordinate system defined on the vehicle 100 at the predetermined timing, is calculated based on the trajectory the vehicle 100 has traveled. Matching points are selected between feature points in the reference point cloud position data and in the image. Exterior orientation parameters of the camera 113 are calculated based on relative relationships between the reference point cloud position data and image coordinate values in the image of the matching points.
    Type: Grant
    Filed: September 2, 2015
    Date of Patent: December 17, 2019
    Assignee: KABUSHIKI KAISHA TOPCON
    Inventors: You Sasaki, Tadayuki Ito
  • Patent number: 10489639
    Abstract: Methods, apparatus and systems for recognizing sign language movements using multiple input and output modalities. One example method includes capturing a movement associated with the sign language using a set of visual sensing devices, the set of visual sensing devices comprising multiple apertures oriented with respect to the subject to receive optical signals corresponding to the movement from multiple angles, generating digital information corresponding to the movement based on the optical signals from the multiple angles, collecting depth information corresponding to the movement in one or more planes perpendicular to an image plane captured by the set of visual sensing devices, producing a reduced set of digital information by removing at least some of the digital information based on the depth information, generating a composite digital representation by aligning at least a portion of the reduced set of digital information, and recognizing the movement based on the composite digital representation.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: November 26, 2019
    Assignee: AVODAH LABS, INC.
    Inventors: Michael Menefee, Dallas Nash, Trevor Chandler
  • Patent number: 10480934
    Abstract: An apparatus sequentially acquires, from a plurality of reference imaging devices for imaging a silhouette imaged with a base imaging device from viewpoints different from a viewpoint of the base imaging device, silhouette existing position information based on the reference imaging devices, and transforms the silhouette existing position information into a common coordinate system, where the silhouette existing position information indicates an existing position of the silhouette. The apparatus detects a silhouette absence range in which the silhouette does not exist, based on a result of comparison of the silhouette existing position information acquired this time and the silhouette existing position information acquired last time, and searches a range in which the silhouette exists, based on the silhouette absence range.
    Type: Grant
    Filed: December 5, 2018
    Date of Patent: November 19, 2019
    Assignee: FUJITSU LIMITED
    Inventors: Tomonori Kubota, Yasuyuki Murata, Masahiko Toichi
  • Patent number: 10482848
    Abstract: One or more processors receive a map image file. The map image file includes geographical data. One or more processors convert the map image file into a raster map image file. The raster map image file includes one or more first raster images. The one or more first raster images include a first plurality of pixels. One or more processors label one or more of the first plurality of pixels with a first set of geographical coordinates using at least a portion of the geographical data included in the map image file. (A geotransform sketch follows this entry.)
    Type: Grant
    Filed: August 7, 2015
    Date of Patent: November 19, 2019
    Assignee: International Business Machines Corporation
    Inventor: Rui He
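    Labeling raster pixels with geographic coordinates is commonly done with a six-parameter affine geotransform (the GDAL-style convention); the sketch below uses that convention with made-up numbers and is only one way the patent's labeling step could be realized.
    ```python
    def pixel_to_lonlat(row, col, geotransform):
        """Label a raster pixel with geographic coordinates via a six-parameter
        affine geotransform (origin, pixel sizes, rotation terms)."""
        x0, px_w, rot_x, y0, rot_y, px_h = geotransform
        lon = x0 + col * px_w + row * rot_x
        lat = y0 + col * rot_y + row * px_h
        return lon, lat

    # North-up raster: top-left corner at (-74.05, 40.80), 0.001-degree pixels
    gt = (-74.05, 0.001, 0.0, 40.80, 0.0, -0.001)
    print(pixel_to_lonlat(0, 0, gt))       # (-74.05, 40.8): the top-left corner
    print(pixel_to_lonlat(200, 150, gt))   # approximately (-73.9, 40.6)
    ```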
  • Patent number: 10479376
    Abstract: A self-driving vehicle (SDV) can operate by analyzing sensor data to autonomously control acceleration, braking, and steering systems of the SDV along a current route. The SDV includes a number of sensors generating the sensor data and a control system to detect conditions relating to the operation of the SDV, such as vehicle speed and local weather, select a set of sensors based on the detected conditions, and prioritize the sensor data generated from the selected set of sensors to control aspects relating to the operation of the SDV.
    Type: Grant
    Filed: March 23, 2017
    Date of Patent: November 19, 2019
    Assignee: UATC, LLC
    Inventors: Eric Meyhofer, David Rice, Scott Boehmke, Carl Wellington
  • Patent number: 10473766
    Abstract: A LiDAR system and scanning method creates a two-dimensional array of light spots. A scan controller causes the array of light spots to move back and forth so as to complete a scan of the scene. The spots traverse the scene in the first dimensional direction and in the second dimensional direction without substantially overlapping points in the scene already scanned by other spots in the array. An arrayed micro-optic projects the light spots. Receiver optics includes an array of optical detection sites. The arrayed micro-optic and the receiver optics are synchronously scanned while maintaining a one-to-one correspondence between light spots in the two dimensional array and optical detection sites in the receiver optics.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: November 12, 2019
    Assignee: The Charles Stark Draper Laboratory, Inc.
    Inventor: Steven J. Spector
  • Patent number: 10466715
    Abstract: An apparatus for controlling narrow road driving of a vehicle includes: an image transform unit generating a depth map using depth information of an object in a front image of a road on which the vehicle travels and generating a height map of the front image by transforming the generated depth map; a map analysis unit recognizing the object and calculating a driving allowable area of the road based on the generated height map; a determination unit determining whether the road is a narrow road based on the calculated driving allowable area and, when the road is determined to be the narrow road, determining whether the vehicle is able to pass through the narrow road; and a signal processing unit controlling driving of the vehicle on the narrow road based on the determination of whether the vehicle is able to pass through the narrow road.
    Type: Grant
    Filed: July 20, 2017
    Date of Patent: November 5, 2019
    Assignees: Hyundai Motor Company, Kia Motors Corporation, Industry-University Cooperation Foundation Hanyang University
    Inventors: Byung Yong You, Jong Woo Lim, Keon Chang Lee
  • Patent number: 10452936
    Abstract: Exemplary embodiments are directed to biometric analysis systems generally including one or more illumination sources, a camera, and an analysis module. The illumination sources are configured to illuminate at least a portion of a face of a subject. The camera is configured to capture one or more images of the subject during illumination of the face of the subject. The analysis module is configured to analyze the one or more images captured by the camera to determine an indication of liveliness of the subject and prevent spoofing.
    Type: Grant
    Filed: July 27, 2017
    Date of Patent: October 22, 2019
    Assignee: Princeton Identity
    Inventors: Barry E. Mapen, David Alan Ackerman, Michael J. Kiernan
  • Patent number: 10434649
    Abstract: A workpiece pick up system including: a three-dimensional sensor which is placed at an upper side of a container and which obtains a group of three-dimensional points each of which has height position information in the container, a group creating means which creates a plurality of three-dimensional point groups in each of which adjacent points satisfy a predetermined condition, an exclusion group determining means which determines that one or more three-dimensional point groups which satisfy at least one of a predetermined size reference, a predetermined area reference, and a predetermined length reference are excluded groups, and a workpiece detecting means which obtains a group of detection-purpose three-dimensional points for detecting workpieces by excluding points included in the excluded group from the group of three-dimensional points or the plurality of three-dimensional point groups, and which detects the workpieces to be picked up by using the group of detection-purpose three-dimensional points.
    Type: Grant
    Filed: February 15, 2018
    Date of Patent: October 8, 2019
    Assignee: Fanuc Corporation
    Inventor: Toshiyuki Ando
  • Patent number: 10440347
    Abstract: Depth information can be used to assist with image processing functionality, such as to generate modified image data including foreground image data and background image data having different amounts of blur. In at least some embodiments, depth information obtained from first image data associated with a first sensor and second image data associated with a second sensor, for example, can be used to determine the foreground object and background object(s) for an image or frame of video. The foreground object then can be located in later frames of video or subsequent images. Small offsets of the foreground object can be determined, and the offset accounted for by adjusting the subsequent frames or images and merging the adjusted subsequent frames or images. Such an approach provides modified image data including a foreground object having a first amount of blur (e.g., a lesser amount of blur) and/or background object(s) having a second amount of blur (e.g., a greater amount of blur).
    Type: Grant
    Filed: January 2, 2017
    Date of Patent: October 8, 2019
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventor: Dong Zhou
  • Patent number: 10430659
    Abstract: Embodiments of the present disclosure disclose a method and apparatus for urban road recognition based on a laser point cloud. The method comprises: constructing a corresponding road edge model according to the laser point cloud acquired by a laser sensor; determining a height of a mobile carrier provided with the laser sensor and constructing a corresponding road surface model based on the height and the laser point cloud; eliminating a road surface point cloud and a road edge point cloud in the laser point cloud according to the road edge model and the road surface model, segmenting a remaining laser point cloud using a point cloud segmentation algorithm, and recognizing an object corresponding to a segmenting result.
    Type: Grant
    Filed: December 8, 2015
    Date of Patent: October 1, 2019
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventors: Yu Jiang, Yang Yan
  • Patent number: 10430047
    Abstract: In some examples, an electronic device may reduce the resolution or otherwise downsize content items to conserve storage space on the electronic device. Further, the electronic device may offload full resolution versions of content items that have been downsized, and the full resolution versions may be stored at a cloud storage or other network storage location. Subsequently, if the user, an operating system module, or an application on the electronic device requests a higher resolution version of the downsized content item, the higher resolution version may be downloaded from the network storage to the electronic device. Various techniques may be used for determining a size or resolution of the content item to download from the network storage.
    Type: Grant
    Filed: August 26, 2015
    Date of Patent: October 1, 2019
    Assignee: Razer (Asia-Pacific) Pte. Ltd.
    Inventors: Michael A. Chan, Justin Quan, Brian Chu, Aanchal Jain
  • Patent number: 10422879
    Abstract: A time-of-flight distance measuring device divides a base exposure period into a plurality of sub exposure periods and holds without resetting an electric charge stored in the sub exposure period for a one round period which is one round of the plurality of sub exposure periods. The distance measurement value of short time exposure is acquired during the one round period and the distance measurement value of long time exposure is acquired during a plurality of the one round periods. Both of the distance measurement value of the long time exposure and the distance measurement value of the short time exposure can be acquired from the same pixel. With this, a dynamic range is expanded without being restricted by a receiving state of reflected light, optical design of received light, and an arrangement of pixels.
    Type: Grant
    Filed: November 12, 2015
    Date of Patent: September 24, 2019
    Assignee: DENSO CORPORATION
    Inventor: Toshiaki Nagai
  • Patent number: 10410054
    Abstract: An image processing method causing an image processing device to execute a process including: obtaining a first image and a second image captured at different timings for an identical inspection target by passing through an imaging range of an image sensor row; extracting respective feature points of the first image and the second image; associating the feature points of the first image and the feature points of the second image with each other; estimating a conversion formula to convert the feature points of the second image to the feature points of the first image based on a restraint condition of a quadratic equation, in accordance with respective coordinates of three or more sets of the feature points associated between the first image and the second image; and converting the second image into a third image corresponding to the first image based on the estimated conversion formula. (An affine-fit sketch follows this entry.)
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: September 10, 2019
    Assignee: FUJITSU LIMITED
    Inventors: Yusuke Nonaka, Eigo Segawa
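    Three or more matched feature points are enough to determine an affine conversion between two images; the least-squares fit below shows that common case (the patent's particular quadratic restraint condition is not reproduced here, and the point coordinates are synthetic).
    ```python
    import numpy as np

    def fit_affine(src_pts, dst_pts):
        """Least-squares affine transform mapping src -> dst from three or more
        point correspondences. Returns a 2x3 matrix M with dst ~= M @ [x, y, 1]."""
        src = np.asarray(src_pts, float)
        dst = np.asarray(dst_pts, float)
        A = np.hstack([src, np.ones((len(src), 1))])     # one [x, y, 1] row per point
        M, *_ = np.linalg.lstsq(A, dst, rcond=None)      # 3 x 2 solution
        return M.T                                       # 2 x 3 transform

    # Synthetic check: feature points of the second image mapped by a known transform.
    src = np.array([(10.0, 10.0), (200.0, 15.0), (20.0, 180.0), (210.0, 175.0)])
    M_true = np.array([[1.01, 0.02, 3.0],
                       [-0.01, 0.99, 5.0]])
    dst = (M_true @ np.column_stack([src, np.ones(len(src))]).T).T
    print(np.allclose(fit_affine(src, dst), M_true))     # True: the transform is recovered
    ```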
  • Patent number: 10401867
    Abstract: A method for autonomously controlling a feed mixing vehicle, a vehicle having an autonomous controller, a computer program for carrying out the method, and a control device. The vehicle has a chassis, working elements for carrying out partial tasks, scanners/sensors for transmitting data, and a computer for controlling all the processes. The scanners/sensors acquire spatial data of the surroundings and generate therefrom a 3D map of the current geometry of the surroundings. The current geometry of the surroundings is placed in relationship with an area that is released to be traveled on by the autonomous vehicle, with the result that the navigability of the travel path of the autonomous vehicle is checked, and in the case of detected non-navigability the travel path is adapted autonomously to the requirements of the situational spatial surroundings and is replaced by an alternative travel path.
    Type: Grant
    Filed: November 17, 2015
    Date of Patent: September 3, 2019
    Assignee: B. Strautmann & Söhne GmbH u. Co. KG
    Inventors: Wolfgang Strautmann, Johannes Marquering, Andreas Trabhardt
  • Patent number: 10402676
    Abstract: An automated method performed by at least one processor running computer executable instructions stored on at least one non-transitory computer readable medium, comprising: classifying first data points identifying at least one man-made roof structure within a point cloud and classifying second data points associated with at least one of natural structures and ground surface to form a modified point cloud; identifying at least one feature of the man-made roof structure in the modified point cloud; and generating a roof report including the at least one feature.
    Type: Grant
    Filed: February 9, 2017
    Date of Patent: September 3, 2019
    Assignee: Pictometry International Corp.
    Inventors: Yandong Wang, Frank Giuffrida
  • Patent number: 10403014
    Abstract: An image processing apparatus includes a setting unit setting the number of pieces of image data to be selected, an identifying unit identifying, based on a photographing date and time of each piece of image data of an image data group, a photographing period of the image data group, a dividing unit dividing the identified photographing period into a plurality of photographing sections, a selection unit selecting image data from an image data group corresponding to a target photographing section based on predetermined criteria, and a generation unit generating a layout image by arranging an image based on the selected image data, wherein selection of image data is repeated by setting an unselected photographing section as a next target photographing section to select a number of pieces of image data corresponding to the set number, and wherein the number of photographing sections is determined according to the set number.
    Type: Grant
    Filed: August 3, 2016
    Date of Patent: September 3, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventors: Hiroyasu Kunieda, Masaaki Obayashi, Yoshinori Mizoguchi, Fumitaka Goto, Masao Kato, Maya Kurokawa
  • Patent number: 10399233
    Abstract: A robot includes: a base; a robot arm rotatably provided around a rotation axis relative to the base; a mark which rotates in accordance with rotation of the robot arm; a capturing element which captures the mark; a memory which stores a reference image therein; and a determination section which determines a rotation state of the robot arm by template matching by subpixel estimation using the reference image and an image captured by the capturing element, in which a relationship of 2R/B ≤ L/X ≤ 100R/B is satisfied when a viewing field size per one pixel of the capturing element is B, a distance between the rotation axis and the center of the mark is R, the maximum distance between the rotation axis and a tip of the robot arm is L, and repetition positioning accuracy of the tip of the robot arm is X. (A sketch checking this relationship follows this entry.)
    Type: Grant
    Filed: January 17, 2018
    Date of Patent: September 3, 2019
    Assignee: Seiko Epson Corporation
    Inventor: Daiki Tokushima
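    A direct check of the quoted relationship; the numerical values are illustrative, not taken from the patent.
    ```python
    def sizing_ok(B_mm_per_px, R_mm, L_mm, X_mm):
        """Check the quoted relationship 2R/B <= L/X <= 100R/B, where B is the
        viewing field size per pixel, R the axis-to-mark distance, L the maximum
        axis-to-tip distance, and X the repetition positioning accuracy of the tip."""
        lower, upper = 2 * R_mm / B_mm_per_px, 100 * R_mm / B_mm_per_px
        return lower <= L_mm / X_mm <= upper

    # Illustrative numbers: 0.02 mm per pixel, mark 40 mm from the axis, 600 mm reach,
    # 0.03 mm repeatability -> L/X = 20000 against bounds [4000, 200000].
    print(sizing_ok(B_mm_per_px=0.02, R_mm=40, L_mm=600, X_mm=0.03))   # True
    ```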
  • Patent number: 10393536
    Abstract: A probe information storing unit is provided to store therein probe information items of a vehicle, and a voxel storing unit is provided to store therein voxels, which are defined and later described, in association with position information of the respective voxels. The voxels are defined in a three-dimensional space based on map data. The voxel storing unit and the probe information storing unit are referred to, and the probe information items are given, as votes, to the voxels that correspond to position information of the respective probe information items. A statistical process is executed to the probe information items given to each of the voxels, and the process results are associated with the respective voxels.
    Type: Grant
    Filed: January 29, 2018
    Date of Patent: August 27, 2019
    Assignee: TOYOTA MAPMASTER INCORPORATED
    Inventors: Naoki Kitagawa, Yumiko Yamashita, Yoshihiro Ui
  • Patent number: 10390057
    Abstract: Reception-side processing performed in a case where transmission of standard dynamic range video data and transmission of high dynamic range video data coexist in a time sequence is simplified. SDR transmission video data is converted into HDR transmission video data through dynamic range conversion. The SDR transmission video data is the data obtained by performing, on SDR video data, photoelectric conversion in accordance with an SDR photoelectric conversion characteristic. In this case, the conversion is performed on the basis of conversion information for converting a value of conversion data in accordance with the SDR photoelectric conversion characteristic into a value of conversion data in accordance with an HDR photoelectric conversion characteristic. A video stream is obtained by performing encoding processing on the HDR transmission video data. A container having a predetermined format and including this video stream is transmitted.
    Type: Grant
    Filed: February 9, 2016
    Date of Patent: August 20, 2019
    Assignee: SONY CORPORATION
    Inventor: Ikuo Tsukagoshi
  • Patent number: 10384609
    Abstract: A vehicle hitch assistance system includes first and second cameras on an exterior of the vehicle and an image processor. The image processor is programmed to identify an object in image data received from the first and second cameras and determine a height and a position of the object using the image data. The system further includes a controller outputting a steering command to a vehicle steering system to selectively guide the vehicle away from or into alignment with the object.
    Type: Grant
    Filed: June 20, 2017
    Date of Patent: August 20, 2019
    Assignee: Ford Global Technologies, LLC
    Inventors: Yi Zhang, Erick Michael Lavoie
  • Patent number: 10371818
    Abstract: A system and method for forming an image of a target with a laser detection and ranging system. The system includes a laser transmitter and an array detector. The method includes transmitting a sequence of laser pulses; forming a plurality of point clouds, each point cloud corresponding to a respective transmitted laser pulse, each point in the point cloud corresponding to a point on a surface of the target; grouping the plurality of point clouds into a plurality of point cloud groups according to a contiguous subset of the sequence of laser pulses; forming a plurality of average point clouds, each of the average point clouds being the average of a respective group of the plurality of point cloud groups; and forming a first estimate of a six-dimensional velocity of the target, including three translational velocity components and three angular velocity components, from the plurality of average point clouds.
    Type: Grant
    Filed: April 18, 2017
    Date of Patent: August 6, 2019
    Assignee: RAYTHEON COMPANY
    Inventors: Eran Marcus, Vitaliy M. Kaganovich
  • Patent number: 10360449
    Abstract: The systems may include dividing a digital map provided by a mapping system into a matrix having a plurality of cells; assigning a cell of the plurality of cells to encompass a geographic region of the digital map; calculating a number of sites of interest in the cell; creating a marker comprising a first count number representing the number of sites of interest in the cell; and sharing the marker with a browser for display on the digital map. (A grid-binning sketch follows this entry.)
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: July 23, 2019
    Assignee: AMERICAN EXPRESS TRAVEL RELATED SERVICES COMPANY, INC.
    Inventors: Shivakumar Chandrashekar, Raju Rathi, Yogesh Tayal, Kunal Upadhyay, Purushotham Vunnam
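    Counting sites of interest per cell is a 2-D binning problem; the sketch below uses a NumPy 2-D histogram over an assumed bounding box and grid size, which is one simple way to produce the per-cell counts a marker would display.
    ```python
    import numpy as np

    def cell_counts(lons, lats, bounds, n_rows=4, n_cols=4):
        """Divide the mapped region into an n_rows x n_cols matrix of cells and
        count the sites of interest falling in each cell; these counts are what
        each cell's marker would display. Grid size and bounds are assumptions."""
        west, south, east, north = bounds
        counts, _, _ = np.histogram2d(
            lats, lons,
            bins=[n_rows, n_cols],
            range=[[south, north], [west, east]],
        )
        return counts.astype(int)[::-1]        # flip rows so row 0 is the northern edge

    rng = np.random.default_rng(3)
    lons = rng.uniform(-74.05, -73.90, 500)    # 500 sites of interest
    lats = rng.uniform(40.60, 40.80, 500)
    print(cell_counts(lons, lats, bounds=(-74.05, 40.60, -73.90, 40.80)))
    ```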