Range Or Distance Measuring Patents (Class 382/106)
-
Patent number: 11016176
Abstract: A method, device and system for generating a transformation function, mapping position detections of objects in a scene, detected with a positioning device, to a graphical representation of the scene. The teachings enable the positions detected by the positioning device to be mapped to the graphical representation of the monitored scene without needing prior references to geographical coordinate systems for the positioning device and the graphical representation of the scene. Virtually any type of image may hence be used as a graphical representation of the scene, even hand-drawn sketches.
Type: Grant
Filed: April 4, 2018
Date of Patent: May 25, 2021
Assignee: Axis AB
Inventors: Aras Papadelis, Peter Henricsson, Mikael Göransson
-
Patent number: 11002538
Abstract: A distance measurement device includes a first acquisition unit configured to acquire distance information on the basis of a plurality of images captured at different viewpoints, a second acquisition unit configured to acquire correction information of the distance information on the basis of a plurality of images captured at a timing different from the plurality of images used by the first acquisition unit, and a correction unit configured to correct the distance information on the basis of the correction information.
Type: Grant
Filed: October 18, 2018
Date of Patent: May 11, 2021
Assignee: CANON KABUSHIKI KAISHA
Inventor: Kazuya Nobayashi
-
Patent number: 10991106
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for on-shelf merchandise detection are provided. One of the methods includes: obtaining a plurality of depth images associated with a shelf from a first camera; obtaining a plurality of images from one or more second cameras associated with each of a plurality of tiers of the shelf; detecting motions of a user's hand comprising reaching into and moving away from the shelf; determining one of the tiers of the shelf associated with the detected motions, a first point in time associated with reaching into the shelf, and a second point in time associated with moving away from the shelf; identifying a first image captured before the first point in time and a second image captured after the second point in time; and comparing the first image and the second image to determine one or more changes to merchandise.
Type: Grant
Filed: May 15, 2020
Date of Patent: April 27, 2021
Assignee: ADVANCED NEW TECHNOLOGIES CO., LTD.
Inventors: Xiaobo Zhang, Zhangjun Hou, Xudong Yang, Xiaodong Zeng
-
Patent number: 10983530
Abstract: The present disclosure discloses a method and an Electronic Control Unit (ECU) (101) of an autonomous vehicle for determining an accurate position. The ECU (101) determines a centroid coordinate from Global Positioning System (GPS) points relative to the autonomous vehicle and identifies an approximate location and orientation of the vehicle on a pre-generated map based on the centroid coordinate and Inertial Measurement Unit (IMU) data. The distance and direction of surrounding static infrastructure are identified from the location and orientation of the autonomous vehicle based on road-boundary analysis and data associated with objects adjacent to the autonomous vehicle. A plurality of lidar reflection reference points is identified within the distance and direction of the static infrastructure based on the heading direction of the autonomous vehicle. Positions of the lidar reflection reference points are detected from iteratively selected shift positions from the centroid coordinate.
Type: Grant
Filed: December 17, 2018
Date of Patent: April 20, 2021
Assignee: Wipro Limited
Inventors: Manas Sarkar, Balaji Sunil Kumar
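The first step the abstract describes — reducing a cluster of GPS fixes to a single centroid and pairing it with the IMU heading as a rough pose — can be illustrated with a small sketch. The function name and the local planar projection are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def approximate_pose(gps_points_xy, imu_yaw_rad):
    """Rough vehicle pose estimate in map coordinates (illustrative sketch).

    gps_points_xy : (N, 2) array of recent GPS fixes projected to a local
                    planar frame (e.g. UTM easting/northing), in metres.
    imu_yaw_rad   : heading reported by the IMU, in radians.

    Returns (centroid_xy, yaw): the centroid of the GPS fixes used as the
    approximate position, paired with the IMU heading as the orientation.
    """
    centroid_xy = np.asarray(gps_points_xy, dtype=float).mean(axis=0)
    return centroid_xy, imu_yaw_rad

# Example: five noisy fixes around (500.0, 1200.0) m
fixes = np.array([[499.2, 1200.5], [500.6, 1199.4], [500.1, 1200.9],
                  [499.7, 1199.8], [500.4, 1200.2]])
pos, yaw = approximate_pose(fixes, np.deg2rad(87.0))
print(pos, np.rad2deg(yaw))
```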
-
Patent number: 10984235
Abstract: Sensing of scene-based occurrences is disclosed. In one example, a vision sensor system comprises (1) dedicated computer vision (CV) computation hardware configured to receive sensor data from at least one sensor array and capable of computing CV features using readings from multiple neighboring sensor pixels and (2) a first processing unit communicatively coupled with the dedicated CV computation hardware. The vision sensor system is configured to, in response to processing of the one or more computed CV features indicating a presence of one or more irises in a scene captured by the at least one sensor array, generate data in support of iris-related operations to be performed by a second processing unit and send the generated data to the second processing unit.
Type: Grant
Filed: September 22, 2017
Date of Patent: April 20, 2021
Assignee: QUALCOMM Incorporated
Inventors: Evgeni Gousev, Liang Shen, Victor Chan, Edwin Chongwoo Park, Xiaopeng Zhang
-
Patent number: 10977821
Abstract: In one or more embodiments, a system for calibration between a camera and a ranging sensor comprises a ranging sensor to obtain ranging measurements for a target located at N number of locations with an emitter at M number of rotation positions. The system further comprises a camera to image the target to generate imaging measurements corresponding to the ranging measurements.
Type: Grant
Filed: June 12, 2019
Date of Patent: April 13, 2021
Assignee: The Boeing Company
Inventors: Michael B. Schwiesow, Anthony W. Baker
-
Patent number: 10977824
Abstract: A positioning method and a positioning device are provided. The positioning method includes providing a map database, and obtaining a target image and querying the map database with the target image so as to determine a target coordinate corresponding to the target image.
Type: Grant
Filed: May 29, 2019
Date of Patent: April 13, 2021
Assignee: ACER INCORPORATED
Inventor: Chia-Cheng Teng
-
Patent number: 10979648
Abstract: A method and a stereoscopic apparatus configured to determine an exposure time period for capturing images. The apparatus comprises at least one image capturing device for capturing pairs of images and a processor configured to: calculate a texture-signal-to-noise ratio (TSNR) metric based on information derived from a pair of captured images; calculate an image saturation metric based on that pair of captured images; calculate a value for an exposure duration that will be implemented by the at least one image capturing device when another pair of images is captured; and provide the calculated exposure time period to each of the at least one image capturing device, wherein the at least one image capturing device is configured to capture at least one image of the target while implementing the respective calculated exposure time period provided to it.
Type: Grant
Filed: November 5, 2019
Date of Patent: April 13, 2021
Assignee: INUITIVE LTD.
Inventors: Gilad Adler, Yaron Rashi
-
Patent number: 10967707
Abstract: An automatic ventilation system and method for a vehicle are disclosed. The system includes: a ventilator to control a ventilation operation of an indoor space of a second vehicle; an input unit to receive information of automatic ventilation for a first vehicle, the automatic ventilation being automatically performed based on a change in CO2 concentration inside the first vehicle; and a controller to determine an initiation time point of an automatic ventilation operation of the second vehicle based on the received automatic ventilation information and to control the ventilator to ventilate the indoor space of the second vehicle at the determined initiation time point, thereby pleasantly ventilating the indoor space of the second vehicle with reference to data related to an automatic ventilation operation of the first vehicle, which includes a CO2 sensor.
Type: Grant
Filed: September 14, 2017
Date of Patent: April 6, 2021
Assignees: HYUNDAI MOTOR COMPANY, KIA MOTORS CORPORATION
Inventors: Jang Yong Lee, Mi Seon Kim, Wan Lee, Kang Ju Cha
-
Patent number: 10967506
Abstract: Example methods and systems for determining 3D scene geometry by projecting patterns of light onto a scene are provided. In an example method, a first projector may project a first random texture pattern having a first wavelength and a second projector may project a second random texture pattern having a second wavelength. A computing device may receive sensor data that is indicative of an environment as perceived from a first viewpoint of a first optical sensor and a second viewpoint of a second optical sensor. Based on the received sensor data, the computing device may determine corresponding features between sensor data associated with the first viewpoint and sensor data associated with the second viewpoint. And based on the determined corresponding features, the computing device may determine an output including a virtual representation of the environment that includes depth measurements indicative of distances to at least one object.
Type: Grant
Filed: November 30, 2017
Date of Patent: April 6, 2021
Assignee: X Development LLC
Inventors: Gary Bradski, Kurt Konolige, Ethan Rublee
-
Patent number: 10964040
Abstract: An operating method includes generating a first depth map by at least a first depth capture device, generating a second depth map by at least a second depth capture device, performing image registration on the first depth map with the second depth map to obtain transformed coordinates in the second depth map corresponding to pixels in the first depth map, and aligning depth data of the first depth map and depth data of the second depth map to generate an optimized depth map according to the transformed coordinates in the second depth map corresponding to the pixels in the first depth map.
Type: Grant
Filed: December 19, 2018
Date of Patent: March 30, 2021
Assignee: ArcSoft Corporation Limited
Inventors: Hao Sun, Jian-Hua Lin, Chung-Yang Lin
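A minimal numpy sketch of the final fusion step — taking per-pixel coordinates already transformed into the second depth map and averaging the two depth readings where both are valid — under the assumption that the registration transform has already been estimated. The 3x3 homography `H` and the simple averaging rule are illustrative choices, not the patent's method.

```python
import numpy as np

def fuse_depth_maps(depth1, depth2, H):
    """Fuse depth1 with depth2 using a 3x3 homography H that maps pixel
    coordinates of depth1 into depth2 (illustrative sketch)."""
    h, w = depth1.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x N
    warped = H @ pts
    u = np.rint(warped[0] / warped[2]).astype(int).reshape(h, w)
    v = np.rint(warped[1] / warped[2]).astype(int).reshape(h, w)

    inside = (u >= 0) & (u < depth2.shape[1]) & (v >= 0) & (v < depth2.shape[0])
    d2 = np.where(inside,
                  depth2[np.clip(v, 0, depth2.shape[0] - 1),
                         np.clip(u, 0, depth2.shape[1] - 1)],
                  0.0)
    fused = depth1.copy()
    both = inside & (depth1 > 0) & (d2 > 0)
    fused[both] = 0.5 * (depth1[both] + d2[both])   # average where both maps have data
    return fused
```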
-
Patent number: 10964042
Abstract: A detection device including: a detector that detects an object from one viewpoint; a reliability calculator that calculates reliability information on the object at the one viewpoint by using a detection result of the detector; and an information calculator that calculates shape information on the object at the one viewpoint by using the detection result of the detector and the reliability information, and calculates texture information on the object at the one viewpoint by using the detection result, wherein the information calculator generates model information on the object at the one viewpoint based on the shape information and the texture information.
Type: Grant
Filed: February 26, 2016
Date of Patent: March 30, 2021
Assignee: NIKON CORPORATION
Inventors: Takeaki Sugimura, Yoshihiro Nakagawa
-
Patent number: 10939043
Abstract: An image pickup apparatus capable of performing a zooming function without increasing a thickness of the image pickup apparatus, and obtaining depth information at the same time by using lens elements having different diameters, is provided. The image pickup apparatus includes lens elements, and image pickup regions respectively disposed in correspondence to the lens elements. At least two of the lens elements have different diameters, and at least two of the image pickup regions have different sizes. The image pickup region having the smallest size is disposed with respect to the lens element having the largest diameter, and the image pickup region having the largest size is disposed with respect to the lens element having the smallest diameter.
Type: Grant
Filed: September 26, 2018
Date of Patent: March 2, 2021
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Kyong-tae Park
-
Patent number: 10930000
Abstract: A method includes obtaining a disparity map based on stereoscopic image frames captured by stereoscopic cameras borne on a movable platform, determining a plurality of continuous regions in the disparity map that each includes a plurality of elements having disparity values within a predefined range, identifying a continuous sub-region including one or more elements having a highest disparity value among the elements within each continuous region as an object, and determining a distance between the object and the movable platform using at least the highest disparity value.
Type: Grant
Filed: May 30, 2019
Date of Patent: February 23, 2021
Assignee: SZ DJI TECHNOLOGY CO., LTD.
Inventors: Ang Liu, Pu Xu
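The last step of this abstract — converting the highest disparity found in a continuous region into a metric distance — follows the standard stereo relation Z = f·B / d. A short sketch under assumed focal length and baseline; the connected-component labelling stands in for whatever region grouping the patent actually uses.

```python
import numpy as np
from scipy import ndimage   # used only for connected-component labelling

def object_distances(disparity, focal_px, baseline_m, d_min=1.0):
    """Return the distance (metres) to the nearest part of each continuous
    disparity region (illustrative sketch of the abstract's last step)."""
    valid = disparity >= d_min
    labels, n = ndimage.label(valid)                 # continuous regions
    distances = {}
    for region in range(1, n + 1):
        d_max = disparity[labels == region].max()    # highest disparity = nearest part
        distances[region] = focal_px * baseline_m / d_max
    return distances
```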
-
Patent number: 10927515
Abstract: A self-propelled milling machine includes a controller which continuously locates an alterable position of a loading surface and of a slewable transport conveyor relative to a machine frame, or the position of the loading surface relative to the transport conveyor, and automatically controls one or more of the slewing angle, the elevation angle and the conveying speed of the transport conveyor, wherein discharged milling material impinges on pre-calculated points of impingement within the loading surface. The controller determines correction factors for the control parameter(s) as a function of a transverse inclination about the longitudinal central axis of the loading surface, a position angle between the longitudinal central axis of the loading surface and the longitudinal central axis of the transport conveyor or that of the machine frame, and/or the position of the pre-calculated point of impingement relative to an end of the loading surface lying on the longitudinal central axis.
Type: Grant
Filed: November 15, 2018
Date of Patent: February 23, 2021
Inventors: Cyrus Barimani, Christian Berning, Tobias Krista, Bernd Walterscheid
-
Patent number: 10930059
Abstract: A method and an apparatus for processing a 3D scene are disclosed. A reference image representative of an image of the scene captured under ambient lighting is determined. A texture-free map is determined from said reference image and an input image of the scene. The 3D scene is then processed using the determined texture-free map.
Type: Grant
Filed: April 22, 2019
Date of Patent: February 23, 2021
Assignee: INTERDIGITAL CE PATENT HOLDINGS, SAS
Inventors: Salma Jiddi, Gregoire Nieto, Philippe Robert
-
Patent number: 10917627
Abstract: A method and system for capturing and generating a 3-dimensional image of a target using a single camera. The system has at least one light source whose intensity can be adjusted, at least one image recorder that captures images of a target that has a plurality of obstacles, and at least one control unit (not shown in the figures) that controls the light source by increasing or decreasing its intensity within a time period, controls the image recorder so as to capture a plurality of images of said target within said time period, and determines the depth of the obstacles so as to capture and generate a 3-dimensional image by comparing the illumination level change of the obstacles between captured images within the time period.
Type: Grant
Filed: June 3, 2016
Date of Patent: February 9, 2021
Inventor: Utku Buyuksahin
-
Patent number: 10915783
Abstract: An imaging device may capture images of a scene, where the scene includes retroreflective materials. Where visual images and depth images are captured from a scene, and the depth images have ratios of supersaturated pixels that are less than a predetermined threshold, a location map of the scene is generated or updated based on the depth images. Where the ratios are greater than the predetermined threshold, the location map of the scene is generated or updated based on the visual images. Additionally, where each of a plurality of imaging devices detects concentrations of supersaturated pixels beyond a predetermined threshold or limit within their respective fields of view, an actor present on the scene may be determined to be wearing retroreflective material, or otherwise designated as a source of the supersaturation, and tracked within the scene based on coverage areas that are determined to have excessive ratios of supersaturated pixels.
Type: Grant
Filed: December 14, 2018
Date of Patent: February 9, 2021
Assignee: Amazon Technologies, Inc.
Inventors: Samuel Nathan Hallman, Petko Tsonev, Michael Francis O'Malley, Jayakrishnan Eledath, Jue Wang, Tian Lan
-
Patent number: 10909395
Abstract: An object detection apparatus is provided with: an imager configured to image surroundings of a subject vehicle and to obtain a surrounding image; an object detector configured to detect an interested-object from the surrounding image and to output first image coordinates, which indicate a position of the detected interested-object on the surrounding image; and a calculator configured to associate the interested-object with one or more coordinate points out of a plurality of coordinate points, each of which indicates three-dimensional coordinates of a respective one of a plurality of points on a road, on the basis of the first image coordinates and a position of the subject vehicle, and configured to calculate at least one of a position of the interested-object in real space and a distance to the interested-object from the subject vehicle on the basis of the position of the subject vehicle and the one or more associated coordinate points.
Type: Grant
Filed: April 5, 2018
Date of Patent: February 2, 2021
Assignee: Toyota Jidosha Kabushiki Kaisha
Inventor: Mineki Soga
-
Patent number: 10891481
Abstract: Automated detection of features and/or parameters within an ocean environment using image data. In an embodiment, captured image data is received from ocean-facing camera(s) that are positioned to capture a region of an ocean environment. Feature(s) are identified within the captured image data, and parameter(s) are measured based on the identified feature(s). Then, when a request for data is received from a user system, the requested data is generated based on the parameter(s) and sent to the user system.
Type: Grant
Filed: August 26, 2019
Date of Patent: January 12, 2021
Assignee: SURFLINEWAVETRAK, INC.
Inventor: Benjamin Freeston
-
Patent number: 10885761
Abstract: In an example, an apparatus includes a first sensor, a second sensor, and an integrated management system. The first sensor is for capturing a first set of images of a calibration target that is placed in a monitored site, wherein the first sensor has a first position in the monitored site, and wherein a physical appearance of the calibration target varies when viewed from different positions within the monitored site. The second sensor is for capturing a second set of images of the calibration target, wherein the second sensor has a second position in the monitored site that is different from the first position. The integrated management system is for determining a positional relationship of the first sensor and the second sensor based on the first set of images, the second set of images, and knowledge of the physical appearance of the calibration target.
Type: Grant
Filed: October 3, 2018
Date of Patent: January 5, 2021
Assignee: Magik Eye Inc.
Inventor: Akiteru Kimura
-
Patent number: 10885662
Abstract: This application provides a depth-map-based ground detection method and apparatus. The method includes: screening first sample points according to depth values of points in a current depth map; determining space coordinates of the first sample points, and determining space heights of the first sample points according to current gravity acceleration information and the space coordinates; screening second sample points in the first sample points according to the space heights; determining an optimal ground equation according to space coordinates of the second sample points; and determining a ground point in the depth map by using the optimal ground equation. Because the accuracy of the sample points that undergo secondary screening is high, the ground detection precision is high. Because this application is not excessively dependent on a depth value, the calculation complexity is low, and operation can be performed on various hardware.
Type: Grant
Filed: January 15, 2019
Date of Patent: January 5, 2021
Assignee: Beijing Hjimi Technology Co., Ltd
Inventors: Hang Wang, Zan Sheng, Shuo Li, Xiaojun Zhou, Li Li
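A compact numpy sketch of the pipeline the abstract describes — back-project depth pixels, measure height along gravity, keep the low points, and fit a plane — under assumed pinhole intrinsics, with a plain least-squares fit standing in for whatever "optimal ground equation" search the patent actually performs. All thresholds are illustrative.

```python
import numpy as np

def detect_ground(depth, fx, fy, cx, cy, gravity_dir, height_thresh=0.15):
    """Illustrative ground detection from a depth map (metres).

    gravity_dir : unit 3-vector of gravity in the camera frame (from the IMU).
    Returns (a, b, c, d) of the plane a*x + b*y + c*z + d = 0 and a ground mask.
    """
    h, w = depth.shape
    vs, us = np.mgrid[0:h, 0:w]
    valid = depth > 0                                 # first screening: usable depth values
    z = depth[valid]
    x = (us[valid] - cx) * z / fx                     # back-project to camera coordinates
    y = (vs[valid] - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)

    heights = pts @ np.asarray(gravity_dir, float)    # signed height along gravity
    low = heights > heights.max() - height_thresh     # second screening: near the lowest level
    ground_pts = pts[low]

    # Least-squares plane fit z = p*x + q*y + r  ->  plane (p, q, -1, r)
    A = np.column_stack([ground_pts[:, 0], ground_pts[:, 1], np.ones(len(ground_pts))])
    p, q, r = np.linalg.lstsq(A, ground_pts[:, 2], rcond=None)[0]

    # Relabel every valid pixel whose residual to the plane is small as ground
    resid = np.abs(pts @ np.array([p, q, -1.0]) + r)
    mask = np.zeros((h, w), dtype=bool)
    mask[valid] = resid < 0.05
    return (p, q, -1.0, r), mask
```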
-
Patent number: 10877155
Abstract: A survey data processing device includes a panoramic image data receiving unit, a point cloud data receiving unit, a similar part designation receiving unit, and a correspondence relationship determining unit. The panoramic image data receiving unit receives first and second panoramic images that are respectively obtained at a first point of view and a second point of view. The point cloud data receiving unit receives first point cloud data that is obtained by a first laser scanner and receives second point cloud data that is obtained by a second laser scanner. The similar part designation receiving unit receives designation of a part that is the same or similar between the first and second panoramic images. The correspondence relationship determining unit determines a correspondence relationship between the first and second point cloud data on the basis of the first and second point cloud data corresponding to the same or similar part.
Type: Grant
Filed: September 18, 2019
Date of Patent: December 29, 2020
Assignee: TOPCON CORPORATION
Inventors: Daisuke Sasaki, Takahiro Komeichi
-
Patent number: 10878621
Abstract: Exemplary embodiments of the present disclosure provide a method, apparatus and computer readable storage medium for creating a map and positioning a moving entity. A method for creating a map includes acquiring an image acquired while an acquisition entity is moving, and location data and point cloud data associated with the image, the location data indicating a location where the acquisition entity is located when the image is acquired, the point cloud data indicating three-dimensional information of the image. The method further includes generating a first element in a global feature layer of the map based on the image and the location data. The method further includes generating a second element in a local feature layer of the map based on the image and the point cloud data, the first element corresponding to the second element.
Type: Grant
Filed: December 27, 2018
Date of Patent: December 29, 2020
Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
Inventors: Wang Zhou, Miao Yan, Yifei Zhan, Xiong Duan, Changjie Ma, Xianpeng Lang, Yonggang Jin
-
Patent number: 10878590
Abstract: Stereo image reconstruction can be achieved by fusing a plurality of proposal cost volumes computed from a pair of stereo images, using a predictive model operating on pixelwise feature vectors that include disparity and cost values sparsely sampled from the proposal cost volumes to compute disparity estimates for the pixels within the image.
Type: Grant
Filed: May 25, 2018
Date of Patent: December 29, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sudipta Narayan Sinha, Marc André Léon Pollefeys, Johannes Lutz Schönberger
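One way to picture the fused input described here is a per-pixel feature vector holding, for each proposal cost volume, a few sparsely sampled (disparity, cost) pairs around that volume's best disparity. The sketch below builds such vectors with numpy; the sampling offsets and sample count are assumptions, and the predictive model itself is left out.

```python
import numpy as np

def pixelwise_features(cost_volumes, offsets=(-2, 0, 2)):
    """Build per-pixel feature vectors from several proposal cost volumes.

    cost_volumes : list of arrays shaped (D, H, W), lower cost = better match.
    Returns an (H, W, len(cost_volumes) * len(offsets) * 2) feature array
    containing sampled disparities and their costs (illustrative sketch).
    """
    feats = []
    for cv in cost_volumes:
        D, H, W = cv.shape
        best = cv.argmin(axis=0)                       # per-pixel proposal disparity
        for off in offsets:
            d = np.clip(best + off, 0, D - 1)
            c = np.take_along_axis(cv, d[None, ...], axis=0)[0]
            feats.append(d.astype(np.float32))         # sampled disparity value
            feats.append(c.astype(np.float32))         # matching cost at that disparity
    return np.stack(feats, axis=-1)
```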
-
Patent number: 10874948
Abstract: A method of mapping a virtual environment includes: obtaining a first sequence of video images output by a videogame title; obtaining a corresponding sequence of in-game virtual camera positions at which the video images were created; obtaining a corresponding sequence of depth buffer values for a depth buffer used by the videogame whilst creating the video images; and, for each of a plurality of video images and corresponding depth buffer values of the obtained sequences, obtaining mapping points corresponding to a selected predetermined set of depth values corresponding to a predetermined set of positions within a respective video image, where for each pair of depth values and video image positions, a mapping point has a distance from the virtual camera position based upon the depth value, and a position based upon the relative positions of the virtual camera and the respective video image position, thereby obtaining a map dataset of mapping points corresponding to the first sequence of video images.
Type: Grant
Filed: July 2, 2019
Date of Patent: December 29, 2020
Assignee: Sony Interactive Entertainment Europe Limited
Inventors: Nicholas Anthony Edward Ryan, Hugh Alexander Dinsdale Spencer, Andrew Swann, Simon Andrew St John Brislin, Pritpal Singh Panesar
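The core geometric step — turning a (pixel position, depth value) pair plus the virtual camera pose into a 3-D mapping point — is ordinary back-projection. A hedged sketch assuming a pinhole virtual camera; the intrinsics and the pose format are not specified by the abstract.

```python
import numpy as np

def mapping_points(pixels, depths, cam_pos, cam_rot, fx, fy, cx, cy):
    """Back-project sampled depth-buffer values to world-space mapping points.

    pixels  : (N, 2) array of (u, v) image positions at which depth was sampled.
    depths  : (N,) linear depth values along the camera's view direction.
    cam_pos : (3,) virtual camera position; cam_rot : (3, 3) camera-to-world rotation.
    """
    u, v = pixels[:, 0], pixels[:, 1]
    x = (u - cx) / fx * depths
    y = (v - cy) / fy * depths
    cam_space = np.stack([x, y, depths], axis=1)      # points in camera coordinates
    return cam_space @ cam_rot.T + cam_pos            # rotate then translate into the world
```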
-
Patent number: 10861165
Abstract: A method to identify one or more depth-image segments that correspond to a predetermined object type is enacted in a depth-imaging controller operatively coupled to an optical time-of-flight (ToF) camera. It comprises: receiving depth-image data from the optical ToF camera, the depth-image data exhibiting an aliasing uncertainty, such that a coordinate (X, Y) of the depth-image data maps to a periodic series of depth values {Zk}; and labeling, as corresponding to the object type, one or more coordinates of the depth-image data exhibiting the aliasing uncertainty.
Type: Grant
Filed: March 11, 2019
Date of Patent: December 8, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Erroll William Wood, Michael Bleyer, Christopher Douglas Edmonds, Michael Scott Fenton, Mark James Finocchio, John Albert Judnich
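The "periodic series of depth values {Zk}" comes from phase-based ToF ambiguity: depths that differ by a whole number of unambiguous ranges produce the same measurement. A short sketch enumerating the candidates, assuming a single modulation frequency; the frequency value and candidate count are illustrative, not from the patent.

```python
C = 299_792_458.0   # speed of light, m/s

def aliased_depths(z_measured, f_mod=200e6, k_max=4):
    """Candidate true depths for a wrapped ToF measurement (illustrative).

    z_measured : depth returned by the sensor, within one unambiguous range.
    f_mod      : modulation frequency in Hz; unambiguous range = C / (2 * f_mod).
    """
    z_amb = C / (2.0 * f_mod)                    # ~0.75 m at 200 MHz
    return [z_measured + k * z_amb for k in range(k_max + 1)]

print(aliased_depths(0.4))   # [0.4, ~1.15, ~1.90, ~2.65, ~3.40]
```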
-
Patent number: 10855910
Abstract: An electronic device according to a first aspect comprises an imaging device and a processor. The imaging device has a predetermined imaging region, and is configured to capture images having different focal distances. The processor is configured to execute a first process of identifying two or more focus identification regions included in a first imaging region included in the predetermined imaging region by comparing the captured images in the first imaging region, and a second process of interpolating a focal distance of a middle region not belonging to the two or more focus identification regions in the first imaging region. The second process includes a process of interpolating the focal distance of the middle region based on a focal distance of an interpolation focus region that is located inside or outside the middle region and that is one of the two or more focus identification regions.
Type: Grant
Filed: August 26, 2019
Date of Patent: December 1, 2020
Assignee: KYOCERA Corporation
Inventors: Masaki Tano, Masayoshi Kondo, Yusuke Suzuki, Masaya Kawakita, Seiji Yamada, Tomohiro Hamaguchi, Koji Saijo
-
Patent number: 10846819
Abstract: The present invention includes an apparatus and method for determining time-varying stress experienced by a structure, comprising: obtaining images that include the structure; segmenting the second and any subsequent images to include the "static" portions that are identified from the first image; computing with a processor the affine transformations between the first and second, and optionally subsequent, images of the sequence; estimating a deformation (i.e., translation and rotation) undergone by the structure; and converting the deformation to estimate the structural stress by using one or more scaling functions to generate the time-varying stress experienced by the structure.
Type: Grant
Filed: April 12, 2018
Date of Patent: November 24, 2020
Assignee: Southern Methodist University
Inventors: Dinesh Rajan, Brett Story, Joseph Camp
-
Patent number: 10846935
Abstract: This invention relates to information processing systems and methods in a workplace environment. More particularly, the invention relates to systems and methods for displaying information for use by human users in a workplace environment. Such methods and systems may include an augmented reality mobile device application with voice interactive and other features including user-selectable buttons. Such methods and systems provide rich real-time information to the user via composited media content, overlay imagery, and acoustic speakers. Composited media content may include interactive maps, calendaring functions, navigation information, and tools to assist with management of assignment information. The augmented reality methods and systems facilitate access to various locations and resources at a given workplace campus.
Type: Grant
Filed: April 11, 2019
Date of Patent: November 24, 2020
Assignee: PRO UNLIMITED GLOBAL SOLUTIONS, INC.
Inventors: Ted Sergott, Brad Martin
-
Patent number: 10838067
Abstract: An object detection system includes a lidar-unit and a controller. The controller defines an occupancy-grid that segregates the field-of-view into columns, determines a first-occupancy-status of a column based on first-cloud-points detected by the lidar-unit in the column by a first-scan, determines a second-occupancy-status of the column based on second-cloud-points detected in the column by a second-scan, determines a first-number of the first-cloud-points and a second-number of the second-cloud-points, and determines a dynamic-status of the column only if the column is classified as occupied by either the first-occupancy-status or the second-occupancy-status.
Type: Grant
Filed: January 17, 2017
Date of Patent: November 17, 2020
Assignee: Aptiv Technologies Limited
Inventors: Izzat H. Izzat, Susan Chen
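A small sketch of the column logic as the abstract lays it out: classify a column as occupied from either scan's point count, and only then compare the two counts to judge whether the column is dynamic. The occupancy and change thresholds are assumed values, not taken from the patent.

```python
def column_status(first_points, second_points, occupied_min=3, change_ratio=0.5):
    """Classify one occupancy-grid column from two successive lidar scans.

    first_points, second_points : numbers of cloud points that fell into the
    column on the first and second scan. Returns (occupied, dynamic) flags;
    dynamic is only evaluated when the column is occupied (illustrative sketch).
    """
    occupied = first_points >= occupied_min or second_points >= occupied_min
    if not occupied:
        return False, None                      # dynamic-status not determined
    bigger = max(first_points, second_points, 1)
    dynamic = abs(first_points - second_points) / bigger > change_ratio
    return True, dynamic

print(column_status(12, 2))    # (True, True)  -- points largely vanished between scans
print(column_status(10, 9))    # (True, False) -- stable, likely a static object
```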
-
Patent number: 10839261
Abstract: An information processing apparatus includes a first obtaining unit configured to obtain a holding position and orientation of a manipulator when holding of a target object is performed and holding success or failure information of the target object in the holding position and orientation, a second obtaining unit configured to obtain an image in which the target object is imaged when the holding of the target object is performed, and a generation unit configured to generate learning data when the holding of the target object by the manipulator is learnt on a basis of the holding position and orientation and the holding success or failure information obtained by the first obtaining unit and the image obtained by the second obtaining unit.
Type: Grant
Filed: March 21, 2018
Date of Patent: November 17, 2020
Assignee: CANON KABUSHIKI KAISHA
Inventors: Daisuke Yamada, Masahiro Suzuki
-
Patent number: 10837795
Abstract: A technique for performing camera calibration on a vehicle is disclosed. A method of performing camera calibration includes emitting, by a laser emitter located on a vehicle and pointed towards a road, a first laser pulse group towards a first location on the road and a second laser pulse group towards a second location on the road, where each laser pulse group includes one or more laser spots. For each laser pulse group, a first set of distances is calculated from a location of a laser receiver to the one or more laser spots, and a second set of distances is determined from an image obtained from a camera, where the second set of distances is from a location of the camera to the one or more laser spots. The method also includes determining two camera calibration parameters of the camera by solving two equations.
Type: Grant
Filed: September 16, 2019
Date of Patent: November 17, 2020
Assignee: TUSIMPLE, INC.
Inventors: Xialing Han, Zehua Huang
-
Patent number: 10832086
Abstract: Embodiments of this application disclose a target object rendition method performed by an electronic device having a camera and a display screen. The electronic device photographs an entity card by using the camera. From the photograph, the electronic device obtains a to-be-recognized target object that is printed on the entity card and then image interpretation data of the to-be-recognized target object, the image interpretation data being used to reflect a feature of the to-be-recognized target object. After matching the image interpretation data with prestored image interpretation data, the electronic device obtains a prestored target object corresponding to the prestored image interpretation data matching the image interpretation data. Finally, the electronic device invokes an application (e.g., a computer game) associated with the prestored target object (e.g., a virtual character of the computer game) and renders, using the application, the prestored target object on the display screen.
Type: Grant
Filed: August 24, 2018
Date of Patent: November 10, 2020
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventor: Ruoxu Yang
-
Patent number: 10832487
Abstract: In one implementation, a method of generating a depth map is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes generating, based on a first image and a second image, a first depth map of the second image. The method includes generating, based on the first depth map of the second image and pixel values of the second image, a second depth map of the second image.
Type: Grant
Filed: September 24, 2019
Date of Patent: November 10, 2020
Assignee: APPLE INC.
Inventors: Daniel Ulbricht, Amit Kumar K C, Angela Blechschmidt, Chen-Yu Lee, Eshan Verma, Mohammad Haris Baig, Tanmay Batra
-
Patent number: 10825159
Abstract: A method and apparatus for segmenting an image are provided. The method may include the steps of clustering pixels from one of a plurality of images into one or more segments, determining one or more unstable segments changing by more than a predetermined threshold from a prior image of the plurality of images, determining one or more segments transitioning from an unstable to a stable segment, determining depth for one or more of the one or more segments that have changed by more than the predetermined threshold, determining depth for one or more of the one or more transitioning segments, and combining the determined depth for the one or more unstable segments and the one or more transitioning segments with a predetermined depth of all segments changing less than the predetermined threshold from the prior image of the plurality of images.
Type: Grant
Filed: May 30, 2018
Date of Patent: November 3, 2020
Assignee: Edge 3 Technologies, Inc.
Inventors: Tarek El Dokor, Jordan Cluster
-
Patent number: 10818024
Abstract: Apparatus and associated methods relate to ranging an object near an aircraft by triangulation using two simultaneously captured images of the object. The two images are simultaneously captured from two distinct vantage points on the aircraft. Because the two images are captured from distinct vantage points, the object can be imaged at different pixel-coordinate locations in the two images. The two images are correlated with one another so as to determine the pixel-coordinate locations corresponding to the object. Range to the object is calculated based on the determined pixel-coordinate locations and the two vantage points from which the two images are captured. Only a subset of each image is used for the correlation. The subset used for correlation includes pixel data from pixels upon which spatially-patterned light, projected onto the object by a light projector and reflected by the object, is incident.
Type: Grant
Filed: March 26, 2018
Date of Patent: October 27, 2020
Assignee: Simmonds Precision Products, Inc.
Inventors: Robert Rutkiewicz, Todd Anthony Ell, Joseph T. Pesik
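The range calculation this abstract leans on is standard two-view triangulation: once the same projected-light feature has been located at pixel columns u1 and u2 in the two simultaneously captured images, the disparity gives the range. A sketch for the rectified, parallel-camera case; the rectification assumption and the parameter values are mine, not the patent's.

```python
def range_from_correlation(u1, u2, focal_px, baseline_m):
    """Range to an object feature seen at pixel column u1 in camera 1 and
    u2 in camera 2, for rectified cameras separated by baseline_m metres
    along the image x-axis (illustrative sketch)."""
    disparity = abs(u1 - u2)             # pixel offset found by correlating the image subsets
    if disparity == 0:
        raise ValueError("zero disparity: object effectively at infinity")
    return focal_px * baseline_m / disparity

# Example: 1.2 m baseline between two vantage points on the airframe,
# 1400 px focal length, 35 px disparity  ->  48.0 m to the object.
print(round(range_from_correlation(820, 855, focal_px=1400, baseline_m=1.2), 1))
```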
-
Patent number: 10802148
Abstract: A device for extracting depth information according to one embodiment of the present invention comprises: a light outputting unit for outputting IR (InfraRed) light; a light inputting unit for receiving light that is output from the light outputting unit and reflected from an object; a light adjusting unit for adjusting the angle of the light so as to radiate the light into a first area including the object, and then for adjusting the angle of the light so as to radiate the light into a second area; and a controlling unit for estimating the motion of the object by using at least one of the light input to the first area and the light input to the second area.
Type: Grant
Filed: May 28, 2019
Date of Patent: October 13, 2020
Assignee: LG INNOTEK CO., LTD.
Inventors: Myung Wook Lee, Sung Ki Jung, Gi Seok Lee, Kyung Ha Han, Eun Sung Seo, Se Kyu Lee
-
Patent number: 10789685
Abstract: A privacy image generation system may use a light field camera that includes an array of cameras, or one or more RGBZ cameras, to capture images and display images according to a selected privacy mode. The privacy mode may include a blur background mode that can be automatically selected based on the meeting type, participants, location, and device type. A region of interest and/or an object(s) of interest (e.g. one or more persons in a foreground) is determined and the privacy image generation system is configured to clearly show the region/object of interest and obscure or replace the background by combining multiple images. The displayed image includes the region/object(s) of interest clearly shown (e.g. in focus) and any objects in a background of the combined image shown having a limited depth of field (e.g. blurry/not in focus) and/or blurred due to the combination of the multiple images.
Type: Grant
Filed: August 24, 2018
Date of Patent: September 29, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ross Cutler, Ramin Mehran
-
Patent number: 10791267
Abstract: A service system includes a mobile terminal and an information processing device capable of communication via a network. The mobile terminal includes a first transmission unit that transmits spherical images taken in respective imaging locations and positional information about the imaging locations to the information processing device. The information processing device includes a reception unit that receives the spherical images transmitted by the first transmission unit; a map data obtaining unit that obtains map data from a map data storage, the map data including the imaging locations of the spherical images; a path information creation unit that creates information about a path made by connecting the imaging locations in the map data obtained by the map data obtaining unit; and a content providing unit that makes content available for a request through the network, the content including the map data, the information about the path, and the spherical images.
Type: Grant
Filed: April 26, 2019
Date of Patent: September 29, 2020
Assignee: Ricoh Company, Ltd.
Inventors: Satoshi Mitsui, Kei Kushimoto, Kohichi Nishide, Tomotoshi Sato
-
Patent number: 10783796
Abstract: Embodiments include devices and methods for operating a robotic vehicle. A robotic vehicle processor may detect an object posing an imminent risk of collision with the robotic vehicle. The robotic vehicle processor may determine a classification of the detected object. The robotic vehicle processor may manage a rotation of a rotor of the robotic vehicle prior to a collision based on the classification of the object.
Type: Grant
Filed: September 1, 2017
Date of Patent: September 22, 2020
Assignee: QUALCOMM Incorporated
Inventors: Daniel Warren Mellinger, III, Michael Joshua Shomin, Travis Van Schoyck, Ross Eric Kessler, John Anthony Dougherty, Jonathan Paul Davis, Michael Franco Taveira
-
Patent number: 10775602
Abstract: Microscopy method and apparatus for determining the positions of emitter objects in a three-dimensional space that comprises focusing scattered light or fluorescent light emitted by an emitter object, separating the focused beam into a first and a second optical beam, directing the first and the second optical beam through a varifocal lens having an optical axis such that the first optical beam impinges on the lens along the optical axis and the second beam impinges decentralized with respect to the optical axis of the varifocal lens, simultaneously capturing a first image created by the first optical beam and a second image created by the second optical beam, and determining the relative displacement of the position of the object in the first and in the second image, wherein the relative displacement contains the information of the axial position of the object along a direction perpendicular to the image plane.
Type: Grant
Filed: January 16, 2018
Date of Patent: September 15, 2020
Assignee: FONDAZIONE INSTITUTO ITALIANO DI TECNOLOGIA
Inventors: Marti Duocastella, Giuseppe Sancataldo, Alberto Diaspro
-
Patent number: 10778908
Abstract: A method for correcting an image of a multi-camera system by using a multi-sphere correction device is disclosed. According to the present invention, the method for correcting an image of a multi-camera system by using a correction unit and a multi-sphere correction device having two or more spheres, which are vertically arranged on a support at certain intervals, comprises: (a) a correction variable acquisition step of determining, by the correction unit, a correction variable value for a geometric error of each camera by using the multi-sphere correction device; and (b) an image correction step of correcting an image obtained by photographing an actual subject by using the correction variable acquired in step (a), and outputting the corrected image, thereby enabling a more accurate image to be captured since a geometric error of each camera is corrected.
Type: Grant
Filed: September 5, 2016
Date of Patent: September 15, 2020
Inventor: Christopher Chinho Chang
-
Patent number: 10767976
Abstract: Reflected light from the measurement object is received by a plurality of pixel columns arranged in an X2 direction in a light receiving unit 121, and a plurality of light receiving amount distributions is output. One or a plurality of peak candidate positions of light receiving amounts in a Z2 direction is detected by a peak detection unit 1 for each pixel column based on the plurality of light receiving amount distributions. A peak position to be adopted for the profile is selected from the peak candidate positions detected for each pixel column based on a relative positional relationship with a peak position of another pixel column adjacent to the pixel column, and profile data indicating the profile is generated by the profile generation unit 3 based on the selected peak position.
Type: Grant
Filed: July 16, 2019
Date of Patent: September 8, 2020
Assignee: KEYENCE CORPORATION
Inventor: Yoshitaka Tsuchida
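A rough numpy sketch of the per-column selection the abstract describes for laser profiling: find peak candidates in each pixel column, then keep the candidate lying closest to the peak already chosen in the neighbouring column. Candidate detection by thresholded local maxima is an assumption on my part; the device's actual criterion is not given in the abstract.

```python
import numpy as np

def build_profile(intensity, min_level=50):
    """Select one peak position per pixel column of a light-receiving array.

    intensity : (Z, X) array of received-light amounts; columns run along X.
    Returns an (X,) array of chosen peak row indices (NaN where none found).
    """
    Z, X = intensity.shape
    profile = np.full(X, np.nan)
    prev = None
    for x in range(X):
        col = intensity[:, x]
        # candidate peaks: local maxima above the noise level
        cand = np.where((col[1:-1] > col[:-2]) & (col[1:-1] >= col[2:]) &
                        (col[1:-1] > min_level))[0] + 1
        if cand.size == 0:
            continue
        if prev is None:
            chosen = cand[np.argmax(col[cand])]            # first column: strongest candidate
        else:
            chosen = cand[np.argmin(np.abs(cand - prev))]  # closest to the neighbouring peak
        profile[x] = chosen
        prev = chosen
    return profile
```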
-
Patent number: 10759383
Abstract: There is provided a method and a system for authorizing a user device to send a request to a vehicle in order to prevent a physical layer relay attack. The system comprises a vehicle comprising an acoustic transducer and an RF transceiver and a user device comprising an acoustic transducer and an RF transceiver. The method relates to a signaling scheme using a combination of acoustic and RF signals for preventing a successful physical layer relay attack.
Type: Grant
Filed: May 18, 2018
Date of Patent: September 1, 2020
Assignee: Volvo Car Corporation
Inventor: Ulf Björkengren
-
Patent number: 10761194
Abstract: An apparatus for distance measurement includes: a memory; and a processor coupled to the memory and configured to execute a detection process that includes detecting a measurement target in a measurement range through two-dimensional scanning of a scan angle range with laser light, and execute a changing process that includes changing a width of the scan angle range with the laser light so that sampling density has a certain value or higher based on a distance and a bearing angle from the apparatus to the measurement target.
Type: Grant
Filed: March 20, 2017
Date of Patent: September 1, 2020
Assignee: FUJITSU LIMITED
Inventors: Koichi Iida, Koichi Tezuka, Takeshi Morikawa, Satoru Ushijima
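The trade-off the abstract relies on — the farther and more oblique the target, the narrower the scan-angle range must be (at a fixed pulse budget) to hold a given sampling density — can be made concrete with a small sketch. The linear-density criterion, the fixed pulse count, and the obliquity model are assumptions used only for illustration.

```python
import math

def scan_width_deg(distance_m, bearing_deg, pulses_per_line=1000,
                   min_points_per_m=20.0):
    """Widest scan-angle range (degrees) that still yields at least
    min_points_per_m samples per metre on a target at the given distance
    and bearing angle, with a fixed number of pulses per scan line.
    Illustrative sketch only."""
    # Arc length swept per degree at this distance, stretched by obliquity.
    metres_per_deg = (distance_m * math.radians(1.0)
                      / max(math.cos(math.radians(bearing_deg)), 1e-3))
    # Require (pulses_per_line / width) / metres_per_deg >= min_points_per_m.
    return pulses_per_line / (min_points_per_m * metres_per_deg)

print(round(scan_width_deg(10.0, 0.0), 1))    # near target, straight ahead: wide scan allowed
print(round(scan_width_deg(40.0, 60.0), 1))   # far, oblique target: range must shrink
```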
-
Patent number: 10762651
Abstract: A method for determining a distance to a target object includes transmitting light pulses to illuminate the target object and sensing, in a first region of a light-sensitive pixel array, light provided from an optical feedback device that receives a portion of the transmitted light pulses. The optical feedback device includes a preset reference depth. The method includes calibrating time-of-flight (TOF) depth measurement reference information based on the sensed light in the first region of the pixel array. The method further includes sensing, in a second region of the light-sensitive pixel array, light reflected from the target object from the transmitted light pulses. The distance of the target object is determined based on the sensed reflected light and the calibrated TOF measurement reference information.
Type: Grant
Filed: September 29, 2017
Date of Patent: September 1, 2020
Assignee: Magic Leap, Inc.
Inventors: David Cohen, Assaf Pellman, Shai Mazor, Erez Tadmor, Giora Yahav
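The calibration idea reduces to a small sketch: the feedback region, whose optical path corresponds to a known reference depth, reveals the system's timing bias, and that bias is then removed from the depths measured in the target region. The variable names and the simple constant-offset model are assumptions, not the patent's calibration scheme.

```python
import numpy as np

def calibrate_and_measure(feedback_depths, target_depths, reference_depth_m):
    """Correct ToF depths in the target region using the feedback region.

    feedback_depths : raw depths from pixels viewing the optical feedback
                      path, which is known to correspond to reference_depth_m.
    target_depths   : raw depths from pixels viewing the target object.
    Returns offset-corrected target depths (illustrative sketch)."""
    offset = np.median(feedback_depths) - reference_depth_m   # timing/phase bias in metres
    return target_depths - offset

# Example: feedback pixels read ~0.62 m for a 0.50 m reference path,
# so 0.12 m of bias is subtracted from the scene measurements.
fb = np.array([0.61, 0.62, 0.63])
scene = np.array([2.45, 3.10, 4.02])
print(calibrate_and_measure(fb, scene, 0.50))   # ~[2.33, 2.98, 3.90]
```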
-
Patent number: 10754343
Abstract: Methods, systems, and apparatus for receiving a reference to an object located in an environment of a robot, accessing mapping data that indicates, for each of a plurality of object instances, respective probabilities of the object instance being located at one or more locations in the environment, wherein the respective probabilities are based at least on an amount of time that has passed since a prior observation of the object instance was made, identifying one or more particular object instances that correspond to the referenced object, determining, based at least on the mapping data, the respective probabilities of the one or more particular object instances being located at the one or more locations in the environment, selecting, based at least on the respective probabilities, a particular location in the environment where the referenced object is most likely located, and directing the robot to navigate to the particular location.
Type: Grant
Filed: February 15, 2018
Date of Patent: August 25, 2020
Assignee: X Development LLC
Inventors: Jonas Witt, Elmar Mair
-
Patent number: 10740137
Abstract: A system having one or more devices connected to a network which redirects data from one or more physical sensors to a network interface, with one or more processors, having an endpoint service executing to receive data collected by the sensor to be sent to the control service, a first virtualized endpoint service associated with the endpoint device executing on the one or more processors including a virtual sensor associated with the first physical sensor causing the element to perform a task that results in a change in the device or device's sensor or the environment surrounding the device, and a first virtual interactive element controller associated with a first interactive element, and a first virtualized endpoint engine, a virtualized endpoint service executing to receive, over the network by the first virtual sensor, first redirected data collected by the physical sensor.
Type: Grant
Filed: November 23, 2018
Date of Patent: August 11, 2020
Assignee: Sanctum Solutions, Inc.
Inventor: Noel Shepard Stephens
-
Patent number: 10739126
Abstract: To provide a three-dimensional coordinate measuring device capable of measuring coordinates over a wide range with high accuracy and high reliability. A reference stand 10 is provided on an installation surface. A reference camera 110 is fixed to the reference stand 10. A movable camera 120 and a reference member 190 having a plurality of markers are rotatably supported on the reference stand 10. The reference camera 110, the movable camera 120, and the reference member 190 are accommodated in a casing 90. The movable camera 120 captures a probe that makes contact with a measurement target and the reference camera 110 captures the plurality of markers of the reference member 190. The coordinates of the point at which the probe makes contact with the measurement target are calculated based on the image data obtained by the capturing.
Type: Grant
Filed: June 21, 2019
Date of Patent: August 11, 2020
Assignee: KEYENCE CORPORATION
Inventor: Masayasu Ikebuchi