Range Or Distance Measuring Patents (Class 382/106)
  • Patent number: 11341370
    Abstract: The present disclosure relates to training a machine learning model to classify images. An example method generally includes receiving a training data set including images in a first category and images in a second category. A convolutional neural network (CNN) is trained using the training data set, and a feature map is generated from layers of the CNN based on features of images in the training data set. A first area in the feature map including images in the first category and a second area in the feature map where images in the first category overlap with images in the second category are identified. The first category is split into a first subcategory corresponding to the first area and a second subcategory corresponding to the second area. The CNN is retrained based on the images in the first subcategory, images in the second subcategory, and images in the second category.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: May 24, 2022
    Assignee: International Business Machines Corporation
    Inventors: Peng Ji, Guo Qiang Hu, Yuan Yuan Ding, Jun Zhu, Jing Chang Huang, Sheng Nan Zhu
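    The category-splitting step described in the abstract above can be illustrated with a small, hedged sketch: category-A images whose CNN features sit close to category-B images are moved into an "overlap" subcategory before retraining. The feature extraction, the neighbour count k, and the overlap threshold are illustrative assumptions, not details taken from the patent.
    ```python
    import numpy as np

    def split_overlapping_category(features, labels, k=10, overlap_ratio=0.3):
        """features: (N, D) CNN feature vectors; labels: 0 = first category, 1 = second.

        Returns new labels: 0 = first subcategory (pure first-category images),
        1 = second subcategory (first-category images overlapping the second
        category), 2 = second category.
        """
        new_labels = np.where(labels == 0, 0, 2)
        for i in np.where(labels == 0)[0]:
            dists = np.linalg.norm(features - features[i], axis=1)
            knn = np.argsort(dists)[1:k + 1]        # k nearest neighbours, excluding itself
            if np.mean(labels[knn] == 1) >= overlap_ratio:
                new_labels[i] = 1                   # lies in the overlap area of the feature map
        return new_labels
    ```
    The CNN would then be retrained on the three resulting classes, as the abstract describes.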
  • Patent number: 11332172
    Abstract: In a method of determining a radius or diameter of a train wheel, a camera mounted on a train acquires first and second images (pictures) of first and second objects spaced along a path being traveled by the train. Matches are then determined between the first and second objects appearing in the first and second acquired images and representations (pictures) of the first and second objects appearing in prerecorded images included in a track database that include corresponding first and second geographical locations. A distance L traveled by the train between the first and second geographical locations is determined, as is a sum C of electrical pulses generated by an encoder coupled to the train wheel while the train travels the distance L. Based on the distance L and the sum C, a diameter or radius of the wheel is determined.
    Type: Grant
    Filed: October 9, 2018
    Date of Patent: May 17, 2022
    Assignee: Westinghouse Air Brake Technologies Corporation
    Inventor: James A. Oswald
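    The relationship between L, C, and the wheel size is simple to state. Below is a minimal sketch assuming the encoder emits a known number of pulses P per wheel revolution; P is an assumed calibration constant not mentioned in the abstract.
    ```python
    import math

    def wheel_diameter(distance_L_m, pulse_sum_C, pulses_per_rev_P):
        revolutions = pulse_sum_C / pulses_per_rev_P   # wheel turns over the distance L
        circumference = distance_L_m / revolutions     # metres per revolution
        return circumference / math.pi                 # wheel diameter in metres

    # e.g. 100 m travelled and 36,000 pulses at 1,000 pulses/rev -> about 0.884 m
    print(wheel_diameter(100.0, 36_000, 1_000))
    ```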
  • Patent number: 11330980
    Abstract: A Bowman's Refractive Index (BRI) for quantification of microdistortions in Bowman's Layer (BL) after Small Incision Lenticule Extraction (SMILE) is defined for a patient. BRI is the summation of one or more areas of the OCT image of the anterior edge of Bowman's layer and quantifies the smoothness of the Bowman's layer. The anterior edge of Bowman's layer is segmented into pixels. After segmentation, a third-order polynomial is fitted to the segmented pixels of the edge of Bowman's layer. BRI is calculated by segmentation of the three-dimensional (3-D) OCT image. BRI acts as a marker for mechanical stability and is useful for diagnosis of disease and prognosis of treatments in humans.
    Type: Grant
    Filed: December 8, 2016
    Date of Patent: May 17, 2022
    Assignee: Narayana Nethralaya Foundation
    Inventors: Abhijit Sinha Roy, Rohit Shetty, Bhujang Shetty
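    A hedged sketch of the fit-and-sum step for a single OCT scan follows. The pixel spacing, the units, and the use of the absolute residual area are assumptions; the patent defines BRI over the full 3-D OCT segmentation.
    ```python
    import numpy as np

    def bri_single_scan(x_px, y_px, pixel_size_um=1.0):
        """x_px, y_px: segmented pixel coordinates of the anterior Bowman's-layer edge."""
        coeffs = np.polyfit(x_px, y_px, deg=3)            # third-order polynomial fit to the edge
        residual = np.abs(y_px - np.polyval(coeffs, x_px))
        # area between the edge and its smooth fit, converted from px^2 to um^2
        return np.trapz(residual, x_px) * pixel_size_um ** 2
    ```
    Summing this area over all scans of the volume would give the volume-level index described in the abstract.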
  • Patent number: 11335105
    Abstract: Methods and systems to focus an imager for machine vision applications are disclosed. A disclosed example machine vision method includes: capturing, via an imaging assembly, an image of an indicia appearing within a field of view (FOV) of the imaging assembly; recognizing, via a controller, the indicia as a focus adjustment trigger, the focus adjustment trigger operative to trigger an adjustment of at least one focus parameter associated with the imaging assembly; adjusting the at least one focus parameter based at least in part on the indicia; locking the at least one focus parameter such that the at least one focus parameter remains unaltered for a duration; and responsive to the locking of the at least one focus parameter, capturing, via the imaging assembly, at least one subsequent image of an object of interest.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: May 17, 2022
    Assignee: Zebra Technologies Corporation
    Inventors: Igor Vinogradov, Miroslav Trajkovic, Heng Zhang
  • Patent number: 11326874
    Abstract: A structured light projection optical system for obtaining 3D data of an object surface includes a structured light projection optical part configured to project a plurality of patterns onto an object or a screen, and an imaging optical part configured to obtain 3D data by photographing the patterns being projected from the structured light projection optical part. The structured light projection optical part includes a plurality of light sources, and a plurality of pattern masks. As the plurality of light sources are turned on and off, the pattern mask matches any one of the plurality of light sources illuminating a light, and the plurality of patterns are projected on the object or the screen by the pattern mask. Accordingly, various patterns can be effectively projected, real-time measurement can be easily performed through a quick pattern change, and the accurate 3D data can be obtained.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: May 10, 2022
    Assignee: MEDIT CORP.
    Inventors: Soo bok Lee, Seung Jin Lee, Eun Gil Cho
  • Patent number: 11323635
    Abstract: An imaging device that reduces differences between a bird's eye image and an actually measured distance includes imaging cameras mounted on a ship to capture peripheral images of the ship and combines the peripheral images captured by the imaging cameras to create the bird's eye image as a composite image. The imaging device includes an auxiliary camera adjacent to at least one of the imaging cameras, and a distance calculator that calculates a distance in a lateral direction using the auxiliary camera and the at least one of the imaging cameras adjacent to the auxiliary camera.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: May 3, 2022
    Assignee: YAMAHA HATSUDOKI KABUSHIKI KAISHA
    Inventors: Mitsuaki Kurokawa, Shimpei Fukumoto, Kohei Terada, Hirofumi Amma, Yoshimasa Kinoshita
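    The auxiliary camera paired with its adjacent imaging camera effectively forms a stereo pair, so one plausible way the lateral distance could be computed is ordinary triangulation; the baseline, focal length, and disparity values below are illustrative assumptions, not claim details.
    ```python
    def stereo_distance(focal_px, baseline_m, disparity_px):
        """Distance to a point seen by both cameras, from its pixel disparity."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px

    # e.g. 1400 px focal length, 0.12 m baseline, 28 px disparity -> 6.0 m
    print(stereo_distance(1400.0, 0.12, 28.0))
    ```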
  • Patent number: 11321043
    Abstract: An identification module receives an identification signal that uniquely identifies an object and captures an image of the object. The identification module determines tag information associated with the object from a unique identification signal associated with the object, and displays, to a user, the tag information overlaid on the image of the individual.
    Type: Grant
    Filed: February 15, 2019
    Date of Patent: May 3, 2022
    Assignee: Red Hat, Inc.
    Inventor: Jon Masters
  • Patent number: 11317015
    Abstract: A focus control device has an image sensor that receives light flux passing through a photographing lens and is capable of generating a phase difference detection signal, and executes focus adjustment based on the phase difference detection signal. The focus control device comprises a focus detection circuit that detects ranging data representing defocus amount based on phase difference detection signals of a plurality of focus detection regions set in the region of the image sensor where the light flux is received, and a processor that, with the ranging data corresponding to the plurality of focus detection regions arranged in order from the shortest distance, performs focus adjustment based on the ranging data remaining after excluding ranging data within a specified range of the ranging data representing the closest range.
    Type: Grant
    Filed: January 8, 2020
    Date of Patent: April 26, 2022
    Assignee: Olympus Corporation
    Inventor: Shingo Miyazawa
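    A minimal sketch of the selection logic described in this entry's abstract: the ranging data are ordered from nearest to farthest, readings within a band of the closest value are excluded, and focus is driven from what remains. The band width and the choice to use the nearest remaining value are assumptions.
    ```python
    def select_focus_distance(ranging_data_m, exclusion_band_m=0.05):
        ordered = sorted(ranging_data_m)               # nearest reading first
        closest = ordered[0]
        remaining = [d for d in ordered if d > closest + exclusion_band_m]
        return remaining[0] if remaining else closest  # fall back to the closest reading
    ```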
  • Patent number: 11305158
    Abstract: The present disclosure relates to effective, efficient, and economical methods and systems for improving athletic performance by tracking objects typically thrown or hit by athletes. In particular, the present disclosure relates to a unique configuration of technology wherein an electronic display is located at or behind the target and one or more cameras are positioned to observe the target. Once an object is thrown or hit, one or more cameras may observe and track the object. Further, an electronic display may be used to provide targeting information to an athlete and also to determine the athlete's accuracy.
    Type: Grant
    Filed: March 4, 2019
    Date of Patent: April 19, 2022
    Assignee: SmartMitt, LLC
    Inventor: Thomas R. Frenz
  • Patent number: 11309947
    Abstract: The disclosed computer-implemented method may include (1) establishing a directional wireless link between a first computing device and a second computing device in a first direction, (2) exchanging, over the directional wireless link in the first direction, first data between the first computing device and the second computing device, (3) determining, via a sensor of the first computing device, a change to a position or an orientation of the first computing device, (4) redirecting, based on the change, the directional wireless link to a second direction, and (5) exchanging, over the directional wireless link in the second direction, second data between the first computing device and the second computing device. Various other methods and systems are also disclosed.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: April 19, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Reza Tusi, Ohad Meitav
  • Patent number: 11303864
    Abstract: A method and apparatus are provided for calibrating projector alignment using detected image features projected onto a surface, the apparatus comprising a first processor that detects features in a single frame of video and a second processor that detects features in the corresponding projected image detected by a camera. The second processor matches features of interest in the selected video frame and captured image to correct the projector alignment. The images may be compared using blob detection or corner detection. Unlike prior art ‘off-line’ calibration methods, video frames can be processed continuously (e.g. every frame) or intermittently (e.g. every 10th frame) without interrupting the display of video content.
    Type: Grant
    Filed: September 9, 2020
    Date of Patent: April 12, 2022
    Assignee: CHRISTIE DIGITAL SYSTEMS USA, INC.
    Inventor: Chad Faragher
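    One plausible realisation of the frame-versus-capture comparison uses corner features and a homography. The ORB/RANSAC pipeline below is an illustrative stand-in for the blob or corner detection the abstract mentions, not necessarily what the patent claims.
    ```python
    import cv2
    import numpy as np

    def alignment_homography(video_frame_gray, captured_gray):
        orb = cv2.ORB_create(nfeatures=500)
        kp1, des1 = orb.detectAndCompute(video_frame_gray, None)
        kp2, des2 = orb.detectAndCompute(captured_gray, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        # mapping from intended frame content to what the camera observed on the surface
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H
    ```
    Running this on every frame, or every Nth frame, matches the continuous/intermittent operation the abstract describes.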
  • Patent number: 11302029
    Abstract: An information processing apparatus is connected to a first information acquisition apparatus that acquires first information relating to at least one of a position or a pose of a hand of a target person and a second information acquisition apparatus that acquires second information relating to at least one of the position or the pose of the hand of the target person and different from the first information acquired by the first information acquisition apparatus. The information processing apparatus accepts the first information and the second information from the first and second information acquisition apparatuses, respectively, retains the accepted first and second information in an associated relationship with information of timings at which the first and second information acquisition apparatuses acquire the first and second information, respectively, and extracts pieces of the first and second information acquired at a common timing from the retained first and second information as pair information.
    Type: Grant
    Filed: January 19, 2018
    Date of Patent: April 12, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Kazuyuki Arimatsu, Yoshinori Ohashi
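    A minimal sketch of the pairing step, assuming each retained reading carries its acquisition timestamp; the matching tolerance is an assumption.
    ```python
    def extract_pair_information(first_info, second_info, tolerance_s=0.005):
        """Each argument is a list of (timestamp_s, data) tuples; returns matched pairs."""
        pairs = []
        for t1, d1 in first_info:
            nearest = min(second_info, key=lambda item: abs(item[0] - t1), default=None)
            if nearest is not None and abs(nearest[0] - t1) <= tolerance_s:
                pairs.append((d1, nearest[1]))        # acquired at a common timing
        return pairs
    ```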
  • Patent number: 11293749
    Abstract: Reflected light from the measurement object is received by a plurality of pixel columns arranged in an X2 direction in a light receiving unit 121, and a plurality of light receiving amount distributions is output. One or a plurality of peak candidate positions of light receiving amounts in a Z2 direction is detected by a peak detection unit 1 for each pixel column based on the plurality of light receiving amount distributions. A peak position to be adopted to a profile is selected from the peak candidate positions detected for each pixel column based on a relative positional relationship with a peak position of another pixel column adjacent to the pixel column, and profile data indicating the profile is generated by the profile generation unit 3 based on the selected peak position.
    Type: Grant
    Filed: August 4, 2020
    Date of Patent: April 5, 2022
    Assignee: KEYENCE CORPORATION
    Inventor: Yoshitaka Tsuchida
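    A hedged sketch of the selection rule: for each pixel column, the candidate closest to the peak adopted in the adjacent column is taken into the profile. The greedy left-to-right pass and the seeding of the first column are assumptions about how the relative positional relationship is evaluated.
    ```python
    def build_profile(peak_candidates_per_column):
        """peak_candidates_per_column: list of lists of candidate Z positions, one per column."""
        profile, previous = [], None
        for candidates in peak_candidates_per_column:
            if not candidates:
                profile.append(previous)               # no usable peak in this column
                continue
            if previous is None:
                chosen = candidates[0]                 # seed from the first detected candidate
            else:
                chosen = min(candidates, key=lambda z: abs(z - previous))
            profile.append(chosen)
            previous = chosen
        return profile
    ```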
  • Patent number: 11295309
    Abstract: Embodiments describe an approach for smart lens-based transactions using eye contact. Embodiments comprise identifying a focus angle of a user wearing a smart lens, determining a smart lens-based transaction is occurring based on the focus angle including a second user or a point of sale sensor for a predetermined amount of time, and verifying the focus angle overlaps with a transacting focus angle of the second user or the point of sale sensor. Additionally, embodiments comprise authenticating the user based on biometric security measures on the smart lens, displaying an augmented reality prompt to the user on the smart lens, wherein the augmented reality prompt on the smart lens prompts the user to select a stored payment method and confirm an amount or file associated with the smart lens-based computing event, and executing the smart lens-based transaction based on the verified overlapping focus angles and the user confirmation.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: April 5, 2022
    Assignee: International Business Machines Corporation
    Inventor: Sarbajit K. Rakshit
  • Patent number: 11288831
    Abstract: An information measuring method is used for measuring installation information of a camera disposed over a plane with an installation height and oriented towards the plane with a shooting angle. The information measuring method includes steps of disposing a first reference point and a second reference point on the plane; measuring a first distance and a second distance between the first reference point, the second reference point and a third reference point; capturing an image including the first reference point, the second reference point and the third reference point; analyzing the image to define a first reference line and a second reference line; determining a first angle and a second angle according to the first reference line, the second reference line and a first normal line; and determining the installation height and/or the shooting angle according to the first distance, the second distance, the first angle and the second angle.
    Type: Grant
    Filed: November 12, 2019
    Date of Patent: March 29, 2022
    Assignee: VIVOTEK INC.
    Inventors: Chao-Tan Huang, Cheng-Chieh Liu
  • Patent number: 11276237
    Abstract: This invention relates to information processing systems and methods in a workplace environment. More particularly, the invention relates to systems and methods for displaying information for use by human users in a workplace environment. Such methods and systems may include an augmented reality mobile device application with voice interactive and other features including user-selectable buttons. Such methods and systems provide rich real-time information to the user via composited media content, overlay imagery, and acoustic speakers. Composited media content may include interactive maps, calendaring functions, navigation information, and tools to assist with management of assignment information. The augmented reality methods and systems facilitate access to various locations and resources at a given workplace campus.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: March 15, 2022
    Assignee: PRO UNLIMITED GLOBAL SOLUTIONS, INC.
    Inventors: Ted Sergott, Brad Martin
  • Patent number: 11275288
    Abstract: An adaptive lighting apparatus includes a light source, a spatial light modulator, and processing circuitry. Further, the processing circuitry is configured to drive the spatial light modulator by a modulation signal for irradiating patterns for generating one or more localized illuminations, scan the one or more localized illuminations on the target object based on the patterns, and calculate, in advance, the patterns so that light intensity of the one or more localized illuminations is enhanced on a virtual target located at a predetermined distance and without a scattering medium.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: March 15, 2022
    Assignees: STANLEY ELECTRIC CO., LTD., LINOPTX, LLC
    Inventors: Lin Pang, Tomofumi Yamamuro
  • Patent number: 11270668
    Abstract: A method for detecting an orientation of a screen of a device includes having a two-dimensional (2D) detector array affixed to the device in a fixed orientation relative to the screen, where the 2D detector array includes a sensing area with a plurality of pixels; imaging a scene including a user in a foreground and a background onto the 2D detector array; extracting an information of the scene for each of the plurality of pixels of the sensing area, the information being extracted from the 2D detector array by an image sensor; identifying an asymmetry in a pixelated image of the scene that includes the information of the scene for each of the plurality of pixels of the sensing area; and based on the asymmetry in the image of the scene, determining the orientation of the screen relative to the user.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: March 8, 2022
    Assignee: STMicroelectronics (Research & Development) Limited
    Inventors: Jeffrey M. Raynor, Marek Munko
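    One way the asymmetry could be turned into an orientation decision is to compare the aggregate signal in the four halves of the pixelated image and take the half dominated by the user (the nearer foreground) as the side facing the user; the choice of statistic is an assumption for illustration.
    ```python
    import numpy as np

    def screen_orientation(pixel_grid):
        """pixel_grid: 2-D array of per-pixel scene information from the detector array."""
        h, w = pixel_grid.shape
        halves = {
            "bottom": pixel_grid[h // 2:, :].sum(),
            "top":    pixel_grid[:h // 2, :].sum(),
            "left":   pixel_grid[:, :w // 2].sum(),
            "right":  pixel_grid[:, w // 2:].sum(),
        }
        return max(halves, key=halves.get)             # side of the screen where the user dominates
    ```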
  • Patent number: 11262187
    Abstract: Disclosed is a digital measurement apparatus including a holding unit on which a measurement object to be measured is placed; and a measurement unit for measuring the measurement object by comparing an image of the measurement object placed on the holding unit with a preset reference value, and providing a reference for adjusting the length of a new measurement object based on a measured value of the measurement object. With this configuration, regardless of photographing conditions, by performing calibration using a set reference value, the accuracy of a measured value may be improved.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: March 1, 2022
    Assignee: B&L BIOTECH, INC.
    Inventors: In Whan Lee, Gi Soo Choi
  • Patent number: 11257237
    Abstract: Disclosed herein are optimized techniques for controlling the exposure time or illumination intensity of a depth sensor. Invalid-depth pixels are identified within a first depth map of an environment. For each invalid-depth pixel, a corresponding image pixel is identified in a depth image that was used to generate the first depth map. Multiple brightness intensities are identified from the depth image. Each brightness intensity is categorized as corresponding to either an overexposed or underexposed image pixel. An increased exposure time or illumination intensity or, alternatively, a decreased exposure time or illumination intensity is then used to capture another depth image of the environment. After a second depth map is generated based on the new depth image, portion(s) of the second depth map are selectively merged with the first depth map by replacing the invalid-depth pixels of the first depth map with corresponding valid-depth pixels of the second depth map.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: February 22, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Michael Bleyer, Yuri Pekelny, Raymond Kirk Price
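    A simplified sketch of the adjust-and-merge loop follows; the halve-or-double exposure rule and the brightness thresholds are assumptions standing in for the per-pixel categorisation the abstract describes.
    ```python
    import numpy as np

    def next_exposure(exposure_ms, ir_image, valid_depth, low=10, high=245):
        """Decide the exposure for the second capture from the invalid-depth pixels."""
        invalid = ~valid_depth
        over = np.count_nonzero(ir_image[invalid] >= high)
        under = np.count_nonzero(ir_image[invalid] <= low)
        return exposure_ms * (0.5 if over > under else 2.0)   # halve if mostly overexposed

    def merge_depth_maps(depth1, valid1, depth2, valid2):
        """Replace invalid pixels of the first map with valid pixels of the second."""
        merged = depth1.copy()
        fill = (~valid1) & valid2
        merged[fill] = depth2[fill]
        return merged
    ```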
  • Patent number: 11244432
    Abstract: Image processing methods and systems apply filtering operations to images, wherein the filtering operations use filter costs which are based on image gradients in the images. In this way, image data is filtered for image regions in dependence upon the image gradients for the image regions. This may be useful for different scenarios such as when combining images to form a High Dynamic Range (HDR) image. The filtering operations may be used as part of a connectivity unit which determines connected image regions, and/or the filtering operations may be used as part of a blending unit which blends two or more images together to form a blended image.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: February 8, 2022
    Assignee: Imagination Technologies Limited
    Inventor: Ruan Lakemond
  • Patent number: 11238092
    Abstract: There is provided an image processing device including: a first memory; a first processor that is connected to the first memory; and a storage section that stores image data related to a position information-appended image that is appended with position information relating to an imaging location. The first processor searches for one or more items of the image data including the position information within a predetermined range of a current position of a vehicle, and selects a position information-appended image related to image data found by the searching as an image to be displayed in the vehicle.
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: February 1, 2022
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Yoshinori Yamada, Masaki Ito, Shotaro Inoue, Akihiro Muguruma, Michio Ikeda
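    A minimal sketch of the search step: image records carrying position metadata are filtered to those within a radius of the vehicle's current position. The haversine distance and the radius value are implementation assumptions.
    ```python
    import math

    def images_near(records, cur_lat, cur_lon, radius_m=500.0):
        """records: list of dicts with 'lat' and 'lon' keys for each stored image."""
        def haversine(lat1, lon1, lat2, lon2):
            r = 6_371_000.0
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            return 2 * r * math.asin(math.sqrt(a))
        return [rec for rec in records
                if haversine(rec["lat"], rec["lon"], cur_lat, cur_lon) <= radius_m]
    ```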
  • Patent number: 11236991
    Abstract: A method for determining a current distance and/or a current speed of a target object relative to a motor vehicle based on an image of the target object provided by a camera of the motor vehicle. Characteristic features of the target object are extracted from the image, and a reference point associated with the target object is determined from the characteristic features; the distance and/or the speed are then determined based on the reference point. A baseline is determined in the image from the characteristic features, lying in the transition area from the depicted target object to a ground surface depicted in the image, and a point located on the baseline is taken as the reference point.
    Type: Grant
    Filed: August 1, 2014
    Date of Patent: February 1, 2022
    Assignee: Connaught Electronics Ltd.
    Inventor: Perikles Rammos
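    Once the baseline point is known, a flat-ground camera model gives the distance directly; the focal length, mounting height, and horizon row below are assumed calibration values rather than claim language.
    ```python
    def distance_from_baseline_row(v_row, v_horizon, focal_px, camera_height_m):
        """Ground distance to the point imaged at pixel row v_row (rows grow downward)."""
        dv = v_row - v_horizon
        if dv <= 0:
            raise ValueError("the baseline point must lie below the horizon")
        return focal_px * camera_height_m / dv
    ```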
  • Patent number: 11226198
    Abstract: A three-dimensional scanning system includes a projection light source, an image capturing apparatus, and a signal processing apparatus. The projection light source is configured to project a two-dimensional light to a target, where the two-dimensional light has a spatial frequency. The image capturing apparatus captures an image of the target illuminated with the two-dimensional light. The signal processing apparatus is coupled to the projection light source and the image capturing apparatus, to analyze a definition of the image of the two-dimensional light, where if the definition of the image of the two-dimensional light is lower than a requirement standard, the spatial frequency of the two-dimensional light is reduced.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: January 18, 2022
    Assignee: BENANO INC.
    Inventors: Liang-Pin Yu, Yeong-Feng Wang, Chun-Di Chen, Yun-Ping Kuan
  • Patent number: 11204236
    Abstract: Disclosed is a digital measurement apparatus including a holding unit on which a measurement object to be measured is placed; and a measurement unit for measuring the measurement object by comparing an image of the measurement object placed on the holding unit with a preset reference value, and providing a reference for adjusting the length of a new measurement object based on a measured value of the measurement object. With this configuration, regardless of photographing conditions, by performing calibration using a set reference value, the accuracy of a measured value may be improved.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: December 21, 2021
    Assignee: B&L BIOTECH, INC.
    Inventors: In Whan Lee, Gi Soo Choi
  • Patent number: 11206384
    Abstract: Depth information can be used to assist with image processing functionality, such as image stabilization and blur reduction. In at least some embodiments, depth information obtained from stereo imaging or distance sensing, for example, can be used to determine a foreground object and background object(s) for an image or frame of video. The foreground object then can be located in later frames of video or subsequent images. Small offsets of the foreground object can be determined, and the offset accounted for by adjusting the subsequent frames or images. Such an approach provides image stabilization for at least a foreground object, while providing simplified processing and reduced power consumption. Similar processes can be used to reduce blur for an identified foreground object in a series of images, where the blur of the identified object is analyzed.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: December 21, 2021
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventor: Dong Zhou
  • Patent number: 11205278
    Abstract: The present disclosure provides a depth image processing method and apparatus, and an electronic device. The method includes: acquiring a first image acquired by a depth sensor and a second image acquired by an image sensor; determining a scene type according to the first image and the second image; and performing a filtering process on the first image according to the scene type.
    Type: Grant
    Filed: May 13, 2020
    Date of Patent: December 21, 2021
    Assignee: SHENZHEN HEYTAP TECHNOLOGY CORP., LTD.
    Inventor: Jian Kang
  • Patent number: 11191430
    Abstract: A Bowman's Refractive Index (BRI) for quantification of microdistortions in Bowman's Layer (BL) after Small Incision Lenticule Extraction (SMILE) is defined for a patient. BRI is the summation of one or more areas of the OCT image of the anterior edge of Bowman's layer and quantifies the smoothness of the Bowman's layer. The anterior edge of Bowman's layer is segmented into pixels. After segmentation, a third-order polynomial is fitted to the segmented pixels of the edge of Bowman's layer. BRI is calculated by segmentation of the three-dimensional (3-D) OCT image. BRI acts as a marker for mechanical stability and is useful for diagnosis of disease and prognosis of treatments in humans.
    Type: Grant
    Filed: December 8, 2016
    Date of Patent: December 7, 2021
    Assignee: Narayana Nethralaya Foundation
    Inventors: Abhijit Sinha Roy, Rohit Shetty, Bhujang Shetty
  • Patent number: 11184517
    Abstract: A system of cameras can be used to generate a complete field of view within an edge network. For example, one camera can have a field of view where part of the view is obstructed by an object and/or by the camera's orientation. Yet another camera that is a part of the edge network can supplement the view of the first camera by providing an alternate view. The two views can be stitched together by a coordinate system such that a complete field of view can be utilized by both cameras. Additionally, where a field of view is not available by any stationary camera, the system can dispatch a mobile camera to help supplement the views.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: November 23, 2021
    Assignees: AT&T Intellectual Property I, L.P., AT&T MOBILITY II LLC
    Inventors: Zhi Cui, Sangar Dowlatkhah, Nigel Bradley, Ari Craine, Robert Koch
  • Patent number: 11178434
    Abstract: Reception-side processing performed in a case where transmission of standard dynamic range video data and transmission of high dynamic range video data coexist in a time sequence is simplified. SDR transmission video data is converted into HDR transmission video data through dynamic range conversion. The SDR transmission video data is the one obtained by performing, on SDR video data, photoelectric conversion in accordance with an SDR photoelectric conversion characteristic. In this case, the conversion is performed on the basis of conversion information for converting a value of conversion data in accordance with the SDR photoelectric conversion characteristic into a value of conversion data in accordance with an HDR photoelectric conversion characteristic. A video stream is obtained by performing encoding processing on HDR transmission video data. A container having a predetermined format and including this video stream is transmitted.
    Type: Grant
    Filed: July 8, 2019
    Date of Patent: November 16, 2021
    Assignee: SONY CORPORATION
    Inventor: Ikuo Tsukagoshi
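    Per sample, the conversion amounts to undoing the SDR photoelectric characteristic and re-encoding with the HDR one. The gamma-2.2 SDR curve, the HLG-style HDR curve, and the luminance scaling below are stand-ins for illustration, not the characteristics the patent specifies.
    ```python
    import math

    def sdr_code_to_hdr_code(sdr_code, sdr_peak_fraction=0.25):
        """sdr_code: normalised SDR value in [0, 1]; returns a normalised HLG-style value."""
        linear = (sdr_code ** 2.2) * sdr_peak_fraction   # back to linear light, scaled into HDR range
        if linear <= 1.0 / 12.0:
            return math.sqrt(3.0 * linear)               # HLG OETF, lower segment
        a, b, c = 0.17883277, 0.28466892, 0.55991073
        return a * math.log(12.0 * linear - b) + c       # HLG OETF, upper segment
    ```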
  • Patent number: 11176703
    Abstract: A system uses a fleet of AVs to assess visibility of target objects. Each AV has a camera for capturing images of target objects. AVs provide the captured images, or visibility data derived from the captured images, to a remote system, which aggregates visibility data describing images captured across the fleet of AVs. The AVs also provide condition data describing conditions under which the images were captured, and the remote system aggregates the condition data. The remote system processes the aggregated visibility data and condition data to determine conditions under which a target object does not meet a visibility threshold.
    Type: Grant
    Filed: May 12, 2020
    Date of Patent: November 16, 2021
    Assignee: GM Cruise Holdings LLC
    Inventors: Katherine Mary Stumpf, Andrew David Acosta
  • Patent number: 11176374
    Abstract: The described implementations relate to images and depth information and generating useful information from the images and depth information. One example can identify planes in a semantically-labeled 3D voxel representation of a scene. The example can infer missing information by extending planes associated with structural elements of the scene. The example can also generate a watertight manifold representation of the scene at least in part from the inferred missing information.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: November 16, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michelle Brook, William Guyman, Szymon P. Stachniak, Hendrik M. Langerak, Silvano Galliani, Marc Pollefeys
  • Patent number: 11176645
    Abstract: An apparatus is configured to convert a resolution of each of a plurality of images acquired by imaging an object under a plurality of geometric conditions based on an imaging position and a position of a light source that irradiates the object with light. The apparatus includes a determination unit configured to determine a resolution at which a number of peaks is one regarding a peak of a pixel value that emerges in a corresponding relationship between the pixel value and a geometric condition at each of pixel positions in the plurality of images, and a conversion unit configured to convert the resolution of each of the plurality of images into the determined resolution.
    Type: Grant
    Filed: October 23, 2019
    Date of Patent: November 16, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Sho Ikemoto
  • Patent number: 11170517
    Abstract: A method for ascertaining a distance between a vehicle and a projection surface, onto which a characteristic light pattern is projected using a headlight of the vehicle, includes detecting, in an image of the characteristic light pattern captured by an image capturing unit, a characteristic structure produced by a first light-producing unit by evaluating a geometric location relationship in the captured image between the trajectory and characteristic structures of a characteristic light pattern that are located in an environment along the trajectory; calculating a point on the ray path that is correlated with a position of the detected characteristic structure on the trajectory in accordance with the transformation rule; and calculating the distance between the vehicle and the projection surface from the calculated point on the ray path.
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: November 9, 2021
    Assignee: Dr. Ing. h.c. F. Porsche Aktiengesellschaft
    Inventors: Christian Schneider, Constantin Haas, Tim Kunz
  • Patent number: 11170191
    Abstract: A code reader for the reading of optical codes is provided that has an image sensor for the detection of image data with the code and that has a control and evaluation unit that is configured to read the code with at least one decoding method, wherein the control and evaluation unit is connected to a distance sensor that determines a distance value for the distance of the code. The control and evaluation unit is further configured to set at least one parameter of the decoding method and/or to include at least one additional algorithm in the decoding method in dependence on the distance value.
    Type: Grant
    Filed: October 20, 2020
    Date of Patent: November 9, 2021
    Assignee: SICK AG
    Inventors: Romain Müller, Dirk Strohmeier, Pascal Schüler, Marcel Hampf
  • Patent number: 11151741
    Abstract: A method for assisting obstacle avoidance of a mobile platform includes determining to use a detection mode from a plurality of detection modes, detecting a characteristic condition of the mobile platform with respect to an obstacle using the detection mode, and directing the mobile platform to avoid the obstacle based on the detected characteristic condition.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: October 19, 2021
    Assignee: SZ DJI TECHNOLOGY CO., LTD.
    Inventors: Honghui Zhang, Lei Han, Xiao Hu
  • Patent number: 11143879
    Abstract: A method for semi-dense depth estimation includes receiving, at an electronic device, a control signal of a speckle pattern projector (SPP) and receiving from each sensor of a dynamic vision sensor (DVS) stereo pair, an event stream of pixel intensity change data, wherein the event stream is time-synchronized with the control signal of the SPP. The method further includes performing projected light filtering on the event stream of pixel intensity change data for each sensor of the DVS stereo pair, to generate synthesized event image data, the synthesized event image data having one or more channels, each channel based on an isolated portion of the event stream of pixel intensity change data, and performing stereo matching on at least one channel of the synthesized event image data for each sensor of the DVS stereo pair to generate a depth map for at least a portion of the field of view.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: October 12, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Michael Sapienza, Ankur Gupta, Abhijit Bendale
  • Patent number: 11138751
    Abstract: System, methods, and other embodiments described herein relate to training a depth model for monocular depth estimation. In one embodiment, a method includes generating, as part of training the depth model according to a supervised training stage, a depth map from a first image of a pair of training images using the depth model. The pair of training images are separate frames depicting a scene from a monocular video. The method includes generating a transformation from the first image and a second image of the pair using a pose model. The method includes computing a supervised loss based, at least in part, on reprojecting the depth map and training depth data onto an image space of the second image according to at least the transformation. The method includes updating the depth model and the pose model according to at least the supervised loss.
    Type: Grant
    Filed: November 20, 2019
    Date of Patent: October 5, 2021
    Assignee: Toyota Research Institute, Inc.
    Inventors: Vitor Guizilini, Sudeep Pillai, Rares A. Ambrus, Jie Li, Adrien David Gaidon
  • Patent number: 11120118
    Abstract: Examples of techniques for location validation for authentication are disclosed. In one example implementation according to aspects of the present disclosure, a computer-implemented method includes presenting, by a processing device, a location-based security challenge to a user. The method further includes responsive to presenting the location-based security challenge to the user, receiving, by the processing device, media from the user. The method further includes validating, by the processing device, the media received from the user against the location-based security challenge to determine whether the user is located at an authorized location. The method further includes responsive to determining that the user is located at an authorized location, authenticating, by the processing device, the user to grant access for the user to a resource.
    Type: Grant
    Filed: November 22, 2017
    Date of Patent: September 14, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Mark E. Maresh, Colm Nolan, Juan F. Vargas, Michael J. Whitney
  • Patent number: 11117535
    Abstract: Aspects of the present disclosure involve projecting an interactive scene onto a surface from a projecting object. In one particular embodiment, the interactive scene is projected from a vehicle and may be utilized by the vehicle to provide a scene or image that a user may interact with through various gestures detected by the system. In addition, the interactive scene may be customized to one or more preferences determined by the system, such as user preferences, system preferences, or preferences obtained through feedback from similar systems. Based on one or more user inputs (such as user gestures received at the system), the projected scene may be altered or new scenes may be projected. In addition, control over some aspects of the vehicle (such as unlocking of doors, starting of the motor, etc.) may be controlled through the interactive scene and the detected gestures of the users.
    Type: Grant
    Filed: August 18, 2017
    Date of Patent: September 14, 2021
    Inventors: Daniel E. Potter, Bivin J. Varghese, Christopher P. Child, Mira S. Misra, Clarisse Mazuir, Malcolm J. Northcott, Albert J. Golko, Daniel J. Reetz, Matthew E. Last, Thaddeus Stefanov-Wagner, Christopher J. Sataline, Michael A. Cretella, Collin J. Palmer
  • Patent number: 11112531
    Abstract: Provided is a method of creating a longitudinal section along an arbitrary line from three-dimensional point group data of terrain or a structure, and a survey data processing device and a survey system for the same. The method includes (a): setting an arbitrary longitudinal section creation line by sequentially designating a plurality of interval designation points on an X-Y plane of three-dimensional point group data (X, Y, Z), (b): projecting a Z point surveyed between a start point and an end point of a certain interval among a plurality of intervals defined by the interval designation points, onto a vertical virtual plane including the longitudinal section creation line, corresponding to (X, Y) coordinates of the longitudinal section creation line, and (c): performing the step (b) for all of the intervals.
    Type: Grant
    Filed: July 12, 2018
    Date of Patent: September 7, 2021
    Assignee: TOPCON CORPORATION
    Inventor: Ryosuke Miyoshi
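    A minimal sketch of step (b): points whose (X, Y) falls within one interval of the section creation line are projected onto the vertical plane through that interval, and the along-line coordinate together with the Z value gives the longitudinal profile. The array layout and interval handling are assumptions.
    ```python
    import numpy as np

    def project_interval(points_xyz, p_start, p_end):
        """points_xyz: (N, 3) array; p_start, p_end: (x, y) interval designation points."""
        d = np.asarray(p_end, float) - np.asarray(p_start, float)
        length = np.linalg.norm(d)
        u = d / length                                    # unit vector along the section line
        rel = points_xyz[:, :2] - np.asarray(p_start, float)
        s = rel @ u                                       # distance along the line
        inside = (s >= 0) & (s <= length)
        return np.column_stack((s[inside], points_xyz[inside, 2]))   # (along-line, Z) pairs
    ```
    Repeating this for every interval, as in step (c), yields the full longitudinal section.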
  • Patent number: 11113548
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating object detection predictions from a neural network. In some implementations, an input characterizing a first region of an environment is obtained. The input includes a projected laser image generated from a three-dimensional laser sensor reading of the first region, a camera image patch generated from a camera image of the first region, and a feature vector of features characterizing the first region. The input is processed using a high precision object detection neural network to generate a respective object score for each object category in a first set of one or more object categories. Each object score represents a respective likelihood that an object belonging to the object category is located in the first region of the environment.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: September 7, 2021
    Assignee: Waymo LLC
    Inventors: Zhaoyin Jia, Ury Zhilinsky, Yun Jiang, Yimeng Zhang
  • Patent number: 11111785
    Abstract: A method and a device for acquiring three-dimensional coordinates of ore based on a mining process are disclosed. The method includes: obtaining a two-dimensional coordinate of the ore by using a YOLACT algorithm and an NMS algorithm to obtain a prediction mask map, obtaining depth information of the ore based on the color map and the infrared depth map, and combining the two-dimensional coordinate with the depth information to obtain a three-dimensional coordinate of the ore.
    Type: Grant
    Filed: March 2, 2020
    Date of Patent: September 7, 2021
    Assignee: Wuyi University
    Inventors: Junying Zeng, Xuhua Li, Chuanbo Qin, Kaitian Wei, Fan Wang, Xiaowei Jiang, Weizhao He, Junhua He
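    The final combining step is a standard pinhole back-projection: the 2-D pixel coordinate from the instance mask and the depth value give a 3-D camera-frame coordinate. The intrinsic parameters below are assumed known from calibration.
    ```python
    def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
        """Back-project pixel (u, v) with depth into camera-frame coordinates (metres)."""
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        return (x, y, depth_m)
    ```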
  • Patent number: 11107235
    Abstract: Systems and methods for identifying data suitable for mapping are provided. In some aspects, the method includes receiving one or more images acquired in an area of interest, and selecting at least two ground control points within a field of view of the one or more images. The method also includes determining perceived locations for the at least two ground control points using the one or more images, and computing pairwise distances between the perceived locations and predetermined locations of the at least two ground control points. The method further includes comparing corresponding pairwise distances to identify differences therebetween, and determining a suitability of the one or more images for mapping based on the comparison.
    Type: Grant
    Filed: February 27, 2020
    Date of Patent: August 31, 2021
    Assignee: HERE Global B.V.
    Inventors: Jeff Connell, Anish Mittal, David Johnston Lawlor
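    A hedged sketch of the suitability check: pairwise distances between the perceived ground-control-point locations are compared with the pairwise distances between their predetermined (surveyed) locations, and the image is accepted if the discrepancy stays under a threshold. The threshold is an assumption.
    ```python
    import itertools
    import math

    def suitable_for_mapping(perceived, surveyed, max_error_m=0.5):
        """perceived / surveyed: lists of (x, y) for the same ground control points."""
        for i, j in itertools.combinations(range(len(perceived)), 2):
            d_perceived = math.dist(perceived[i], perceived[j])
            d_surveyed = math.dist(surveyed[i], surveyed[j])
            if abs(d_perceived - d_surveyed) > max_error_m:
                return False
        return True
    ```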
  • Patent number: 11100344
    Abstract: Systems and methods to perform image-based three-dimensional (3D) lane detection involve obtaining known 3D points of one or more lane markings in an image including the one or more lane markings. The method includes overlaying a grid of anchor points on the image. Each of the anchor points is a center of i concentric circles. The method also includes generating an i-length vector and setting an indicator value for each of the anchor points based on the known 3D points as part of a training process of a neural network, and using the neural network to obtain 3D points of one or more lane markings in a second image.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: August 24, 2021
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Dan Levi, Noa Garnett
  • Patent number: 11100662
    Abstract: According to one embodiment, an image processing apparatus includes a memory and one or more hardware processors electrically coupled to the memory. The one or more hardware processors acquire a first image of an object including a first shaped blur and a second image of the object including a second shaped blur. The first image and the second image are acquired by capturing at a time through a single image-forming optical system. The one or more hardware processors acquire distance information to the object based on the first image and the second image, with a statistical model that has been trained in advance.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: August 24, 2021
    Assignee: KABUSHIKI KAISHA TOSHIBA
    Inventors: Nao Mishima, Takayuki Sasaki
  • Patent number: 11087036
    Abstract: A method, device and system for automatically deriving stationing zones for an electronic measuring or marking device in a worksite environment. The method includes querying a database (DB) for a construction plan information for the worksite environment and acquiring a worksite-task-information of a worksite-task to be executed. The worksite-task-information includes spatial points in the construction plan which have to be measured and/or marked to accomplish the worksite-task. It also comprises an acquiring of at least coarse 3D-data of the actual real world worksite environment, and a merging of the at least coarse 3D-data and the construction plan information to form an actual state model of the worksite environment. An automatic calculating of at least one stationing zone within the actual state model is established, the stationing zone including at least one stationing location from which the measuring or marking of the spatial points are accessible by the device without obstructions.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: August 10, 2021
    Assignee: LEICA GEOSYSTEMS AG
    Inventors: Bernd Möller, Thomas Ammer
  • Patent number: 11086015
    Abstract: A system of generating a three-dimensional (3D) scan of an environment includes multiple 3D scanners including a first 3D scanner at respective first and second positions. The system further includes a controller coupled to the 3D scanners. The first 3D scanner acquires a first set of 3D coordinates, the first set of 3D coordinates having a first portion. The second 3D scanner acquires a second set of 3D coordinates, the second set of 3D coordinates having a second portion. The first portion and the second portion are simultaneously transmitted to the controller by the first 3D scanner and the second 3D scanner respectively, while the first set of 3D coordinates and the second set of 3D coordinates are being acquired. The controller registers the first portion and the second portion to each other while the first set of 3D coordinates and the second set of 3D coordinates are being acquired.
    Type: Grant
    Filed: June 23, 2020
    Date of Patent: August 10, 2021
    Assignee: FARO TECHNOLOGIES, INC.
    Inventors: Daniel Pompe, Manuel Caputo, José Gerardo Gómez Méndez, Zia ul Azam, Louis Bergmann, Daniel Flohr, Oliver Zweigle
  • Patent number: 11080536
    Abstract: An image processing device is provided with a communication device and a processor. The processor is configured to: acquire a first video obtained by imaging outside scenery of a first vehicle; when the processor detects that a second vehicle appears on the first video, implement image processing that degrades visibility of a video with respect to a first image area corresponding to at least a part of the second vehicle on the first video; and, when the processor detects that the second vehicle appears on the first video and then a specific part of the second vehicle appears on the first video, end the image processing with respect to the first image area and implement image processing that degrades visibility of a video with respect to a second image area corresponding to the specific part of the second vehicle on the first video.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: August 3, 2021
    Assignee: Toyota Jidosha Kabushiki Kaisha
    Inventor: Kazuya Nishimura
  • Patent number: 11069049
    Abstract: A division line detection device includes: a processor configured to detect a division line candidate pixel from an image acquired by a camera mounted on a vehicle, set first reliability for a division line candidate pixel whose value representing likelihood that a lane division line is represented is equal to or more than a predetermined threshold value; set second reliability lower than the first reliability for a division line candidate pixel whose value is less than the predetermined threshold value; correct to the first reliability, when a first predetermined number or more of the division line candidate pixels are located on a first scan line having one end at a vanishing point of the image, the second reliability of each division line candidate pixel on the first scan line; and detect a lane division line based on the division line candidate pixels having the first reliability.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: July 20, 2021
    Assignees: TOYOTA JIDOSHA KABUSHIKI KAISHA, DENSO CORPORATION
    Inventors: Takahiro Sota, Jia Sun, Masataka Yokota