Point Features (e.g., Spatial Coordinate Descriptors) Patents (Class 382/201)
  • Patent number: 11950946
    Abstract: A technique for automating the identification of a measurement point in cephalometric image analysis is provided. An automatic measurement point recognition method includes a step of detecting, from a cephalometric image 14 acquired from a subject, a plurality of peripheral partial regions 31, 32, 33, 34 for recognizing a target feature point, a step of estimating a candidate position of the feature point in each of the peripheral partial regions 31, 32, 33, 34 by the application of a regression CNN model 10, and a step of determining the position of the feature point in the cephalometric image 14 based on the distribution of the estimated candidate positions. In the step of detecting, for example, the peripheral partial region 32, a classification CNN model 13 trained with a control image 52 is applied.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: April 9, 2024
    Assignee: OSAKA UNIVERSITY
    Inventors: Chihiro Tanikawa, Chonho Lee
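
The final step of patent 11950946 above determines the landmark from the distribution of per-region candidate positions. A minimal sketch of one plausible aggregation (not the patented method): take a robust central estimate of hypothetical candidate coordinates so a single poorly matched region does not skew the result.

```python
import numpy as np

# Hypothetical candidate landmark positions (x, y) predicted by a regression
# model applied to four peripheral partial regions of a cephalometric image.
candidates = np.array([
    [412.3, 288.1],
    [414.0, 290.5],
    [411.7, 289.2],
    [430.9, 305.4],   # outlier from a poorly matched region
])

# Robust aggregation: the per-axis median is insensitive to a single outlier,
# so the final estimate reflects the bulk of the candidate distribution.
landmark = np.median(candidates, axis=0)
print("estimated landmark position:", landmark)
```
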
  • Patent number: 11916899
    Abstract: Disclosed are systems and methods for managing online identity authentication risk in a nuanced identity system. For example, a method may include receiving a request by a user for a transaction on an electronic platform; determining a risk associated with the requested transaction; determining a current level of assurance associated with the user on the electronic platform; determining that the risk exceeds the current level of assurance; adjusting the current level of assurance such that the adjusted level of assurance exceeds the risk; and executing the requested transaction on the electronic platform after adjusting the current level of assurance.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: February 27, 2024
    Assignee: Yahoo Assets LLC
    Inventors: George Fletcher, Jonathan Hryn, Lovlesh Chhabra, Deepak Nayak
  • Patent number: 11893707
    Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view. The stitched image may be stored on a storage device.
    Type: Grant
    Filed: February 2, 2023
    Date of Patent: February 6, 2024
    Assignee: Fyusion, Inc.
    Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
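
Patent 11893707 above stitches an undercarriage image by projecting a point cloud onto a virtual camera view. A minimal pinhole-projection sketch under assumed intrinsics (the matrix K, point coordinates, and colors below are made up, and a real system would also blend and fill holes):

```python
import numpy as np

def project_points(points, colors, K, image_shape):
    """Project colored 3D points (camera coordinates, Z > 0) into a virtual
    pinhole camera, keeping the nearest point per pixel."""
    h, w = image_shape
    image = np.zeros((h, w, 3), dtype=np.uint8)
    depth = np.full((h, w), np.inf)
    uvw = (K @ points.T).T                     # homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]              # perspective divide
    for (u, v), z, c in zip(uv, points[:, 2], colors):
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < h and 0 <= ui < w and z < depth[vi, ui]:
            depth[vi, ui] = z                  # z-buffer: keep closest point
            image[vi, ui] = c
    return image

# Hypothetical intrinsics and a tiny point cloud (meters) with RGB colors.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts = np.array([[0.1, 0.0, 2.0], [-0.2, 0.1, 2.5], [0.0, -0.1, 1.8]])
cols = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=np.uint8)
stitched = project_points(pts, cols, K, (480, 640))
```
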
  • Patent number: 11841902
    Abstract: An information processing apparatus includes a first search unit configured to search for a feature of an object extracted from a video image in a registration list in which a feature indicating a predetermined object to be detected and identification information for identifying the predetermined object are registered, a generation unit configured to generate a first list in which at least the ID information about the predetermined object corresponding to the extracted object is registered in a case where the feature of the extracted object is detected in the registration list and generate a second list in which the feature of the extracted object is registered in a case where the feature of the extracted object is not detected in the registration list, and a second search unit configured to search for a target object designated by a user in the first list or the second list.
    Type: Grant
    Filed: August 24, 2021
    Date of Patent: December 12, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventor: Masahiro Matsushita
  • Patent number: 11842519
    Abstract: An image processing apparatus includes an acquisition unit configured to acquire color information about an object and color information about a background in a first image captured by an image capturing apparatus, a storage unit configured to store the acquired color information about the object and the acquired color information about the background, an expansion unit configured to expand, in a three-dimensional color space, the color information about the object and the color information about the background stored in the storage unit, and an extraction unit configured to extract an area of the object from a second image captured by the image capturing apparatus, based on the respective pieces of the color information expanded by the expansion unit.
    Type: Grant
    Filed: January 13, 2022
    Date of Patent: December 12, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventor: Kazuki Takemoto
  • Patent number: 11816174
    Abstract: A search system performs item searches using morphed images. Given two or more input images of objects, the search system uses a generative model to generate one or more morphed images. Each morphed image shows an object of the object type of the input images. Additionally, the object in each morphed image combines visual characteristics of the objects in the input images. The morphed images may be presented to a user, who may select a particular morphed image for searching. A search is performed using a morphed image to identify items that are visually similar to the object in the morphed image. Search results for the items identified from the search are returned for presentation.
    Type: Grant
    Filed: March 29, 2022
    Date of Patent: November 14, 2023
    Assignee: eBay Inc.
    Inventors: Yi Yu, Jingying Wang, Kaikai Tong, Jie Ren, Yuqi Zhang
  • Patent number: 11763468
    Abstract: The present disclosure describes a computer-implemented method for image landmark detection. The method includes receiving an input image for the image landmark detection, generating a feature map for the input image via a convolutional neural network, initializing an initial graph based on the generated feature map, the initial graph representing initial landmarks of the input image, performing a global graph convolution of the initial graph to generate a global graph, where landmarks in the global graph move closer to target locations associated with the input image, and iteratively performing a local graph convolution of the global graph to generate a series of local graphs, where landmarks in the series of local graphs iteratively move further towards the target locations associated with the input image.
    Type: Grant
    Filed: December 9, 2020
    Date of Patent: September 19, 2023
    Assignee: PING AN TECHNOLOGY (SHENZHEN) CO., LTD.
    Inventors: Shun P Miao, Weijian Li, Yuhang Lu, Kang Zheng, Le Lu
  • Patent number: 11734888
    Abstract: A method for providing real-time three-dimensional facial animation from video is provided. The method includes collecting images of a subject, and forming a three-dimensional mesh for the subject based on a facial expression factor and a head pose of the subject extracted from the images of the subject. The method also includes forming a texture transformation based on an illumination parameter associated with an illumination configuration for the images from the subject, forming a three-dimensional model for the subject based on the three-dimensional mesh and the texture transformation, determining a loss factor based on selected points in a test image from the subject and a rendition of the test image by the three-dimensional model, and updating the three-dimensional model according to the loss factor. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: August 22, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Chen Cao, Vasu Agrawal, Fernando De la Torre, Lele Chen, Jason Saragih, Tomas Simon Kreuz, Yaser Sheikh
  • Patent number: 11720992
    Abstract: A method, apparatus and computer program product are provided for warping a perspective image into the ground plane using a homography transformation to estimate a bird's eye view in real time. Methods may include: receiving first sensor data from a first vehicle traveling along a road segment in an environment, where the first sensor data includes perspective image data of the environment, and where the first sensor data includes a location and a heading; retrieving a satellite image associated with the location and heading; applying a deep neural network to regress a bird's eye view image from the perspective image data; applying a Generative Adversarial Network (GAN) to the regressed bird's eye view image using the satellite image as a target of the GAN to obtain a stabilized bird's eye view image; and deriving values of a homography matrix between the sensor data and the stabilized bird's eye view image.
    Type: Grant
    Filed: April 23, 2021
    Date of Patent: August 8, 2023
    Assignee: HERE GLOBAL B.V.
    Inventor: Anirudh Viswanathan
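
The warping step in patent 11720992 above maps perspective-image pixels into the ground plane through a homography. Below is a minimal sketch of applying a 3x3 homography with the perspective divide; the matrix H and pixel coordinates are hypothetical stand-ins for the values the patent derives from the stabilized bird's eye view.

```python
import numpy as np

def apply_homography(H, points):
    """Map 2D pixel coordinates through a 3x3 homography (perspective divide included)."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])   # to homogeneous coordinates
    warped = (H @ pts_h.T).T
    return warped[:, :2] / warped[:, 2:3]

# Hypothetical homography relating perspective-image pixels to ground-plane coordinates.
H = np.array([[1.2, 0.1, -30.0],
              [0.0, 2.0, -80.0],
              [0.0, 0.002, 1.0]])
perspective_pixels = np.array([[320.0, 400.0], [100.0, 380.0]])
print(apply_homography(H, perspective_pixels))
```
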
  • Patent number: 11676288
    Abstract: A data processing device for detecting motion in a sequence of frames each comprising one or more blocks of pixels, includes a sampling unit configured to determine image characteristics at a set of sample points of a block, a feature generation unit configured to form a current feature for the block, the current feature having a plurality of values derived from the sample points, and motion detection logic configured to generate a motion output for a block by comparing the current feature for the block to a learned feature representing historical feature values for the block.
    Type: Grant
    Filed: June 14, 2021
    Date of Patent: June 13, 2023
    Assignee: Imagination Technologies Limited
    Inventor: Timothy Smith
  • Patent number: 11615550
    Abstract: A mapping method is disclosed that includes (i) memorizing at least one reference image of an environment to be mapped containing a specific arrangement of a plurality of markers organized in cells; wherein each cell is identified by a marker or by a combination of markers; the plurality of markers comprising at least a number of different types of markers equal to a type-number; (ii) detecting, by a moving video camera, a video sequence wherein the environment to be mapped is, at least in part, framed; (iii) identifying in at least one frame of the video sequence one or more cells being part of the specific arrangement of markers of the reference image; and (iv) calculating, on the basis of the data regarding the identified cells, at least one homography for the perspective transformation of the coordinates acquired in the video sequence into coordinates in the reference image and vice versa.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: March 28, 2023
    Assignee: SR LABS S.R.L.
    Inventors: Gianluca Dal Lago, Roberto Delfiore
  • Patent number: 11599578
    Abstract: The present disclosure relates to generating a search graph or search index to aid in receiving a search query and identifying results of a dataset based on the search query. For example, systems disclosed herein may generate a navigable search graph including vertices representative of objects or points within a dataset that enables a computing device having access to the search graph to navigate vertices of the graph along an identified path until arriving at a point within the search graph that corresponds to a value associated with the search query. Upon identifying a location within the graph corresponding to the search query, systems disclosed herein may identify a neighborhood of points (e.g., vertices) corresponding to items from the dataset and output a set of results for the search query representative of determined results for the search query.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: March 7, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Harsha Vardhan Simhadri, Ravishankar Krishnaswamy, Suhas Jayaram Subramanya, Devvrit
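
Patent 11599578 above navigates a search graph along a path of vertices toward the query. A minimal greedy walk over a proximity graph, shown on assumed toy data (this sketches the navigation idea only, not the patented index construction or neighborhood selection):

```python
import numpy as np

def greedy_graph_search(vectors, neighbors, query, start, k=3):
    """Walk the graph from `start`, always moving to the unvisited neighbor
    closest to the query; return the k closest vertices seen along the path."""
    dist = lambda i: np.linalg.norm(vectors[i] - query)
    current, visited = start, {start}
    best = [(dist(start), start)]
    while True:
        candidates = [n for n in neighbors[current] if n not in visited]
        if not candidates:
            break
        nxt = min(candidates, key=dist)
        if dist(nxt) >= dist(current):   # no closer neighbor: stop at local optimum
            break
        visited.add(nxt)
        best.append((dist(nxt), nxt))
        current = nxt
    return sorted(best)[:k]

# Hypothetical 2-D dataset, adjacency lists (the search graph), and a query point.
vecs = np.array([[0, 0], [1, 0], [2, 1], [3, 3], [1, 2]], dtype=float)
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3, 4], 3: [2], 4: [0, 2]}
print(greedy_graph_search(vecs, adj, query=np.array([2.2, 1.1]), start=0))
```
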
  • Patent number: 11586324
    Abstract: The present disclosure relates to a technology for assigning group labels during touch sensing, and more particularly to a technology for preventing two adjacent objects from receiving the same group label by searching for a valley when group labels are assigned, which allows labeling without performing any object separation process.
    Type: Grant
    Filed: August 17, 2021
    Date of Patent: February 21, 2023
    Assignee: Silicon Works Co., Ltd.
    Inventors: Mohamed Gamal Ahmed Mohamed, Young Ju Park, Sun Young Park
  • Patent number: 11574223
    Abstract: A method for rapid discovery of satellite behavior, applied to a pursuit-evasion system including at least one satellite and a plurality of space sensing assets. The method includes performing transfer learning and zero-shot learning to obtain a semantic layer using space data information. The space data information includes simulated space data based on a physical model. The method further includes obtaining measured space-activity data of the satellite from the space sensing assets; performing manifold learning on the measured space-activity data to obtain measured state-related parameters of the satellite; modeling the state uncertainty and the uncertainty propagation of the satellite based on the measured state-related parameters; and performing game reasoning based on a Markov game model to predict satellite behavior and management of the plurality of space sensing assets according to the semantic layer and the modeled state uncertainty and uncertainty propagation.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: February 7, 2023
    Assignee: INTELLIGENT FUSION TECHNOLOGY, INC.
    Inventors: Dan Shen, Carolyn Sheaff, Jingyang Lu, Genshe Chen, Erik Blasch, Khanh Pham
  • Patent number: 11551338
    Abstract: The present disclosure is directed toward intelligently mixing and matching faces and/or people to generate an enhanced image that reduces or minimizes artifacts and other defects. For example, the disclosed systems can selectively apply different alignment models to determine a relative alignment between a reference image and a target image having an improved instance of the person. Upon aligning the digital images, the disclosed systems can intelligently identify a replacement region based on a boundary that includes the target instance and the reference instance of the person without intersecting other objects or people in the image. Using the size and shape of the replacement region around the target instance and the reference instance, the systems replace the instance of the person in the reference image with the target instance. The alignment of the images and the intelligent selection of the replacement region minimize inconsistencies and/or artifacts in the final image.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: January 10, 2023
    Assignee: Adobe Inc.
    Inventors: Seyed Morteza Safdarnejad, Chih-Yao Hsieh
  • Patent number: 11543532
    Abstract: According to some embodiments, there is provided a method comprising labeling each of a plurality of subjects in an image with a label, the label for each subject indicating a kind of the subject, wherein the labeling comprises analyzing the image and/or distance measurement points for an area depicted in the image, and determining additional distance information not included in the distance measurement points, wherein determining the additional distance information comprises interpolating and/or generating a distance to a point based at least in part on at least some of the distance measurement points, wherein the at least some of the distance measurement points are selected based at least in part on labels assigned to one or more of the plurality of subjects.
    Type: Grant
    Filed: July 30, 2018
    Date of Patent: January 3, 2023
    Assignee: SONY CORPORATION
    Inventors: Takuto Motoyama, Shingo Tsurumi
  • Patent number: 11468673
    Abstract: An augmented reality system having a light source and a camera. The light source projects a pattern of light onto a scene, the pattern being periodic. The camera captures an image of the scene including the projected pattern. A projector pixel of the projected pattern corresponding to an image pixel of the captured image is determined. A disparity of each correspondence is determined, the disparity being an amount that corresponding pixels are displaced between the projected pattern and the captured image. A three-dimensional computer model of the scene is generated based on the disparity. A virtual object in the scene is rendered based on the three-dimensional computer model.
    Type: Grant
    Filed: January 6, 2021
    Date of Patent: October 11, 2022
    Assignee: Snap Inc.
    Inventors: Mohit Gupta, Shree K. Nayar, Vishwanath Saragadam Raja Venkata
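
For a rectified projector-camera pair like the one in patent 11468673 above, each correspondence's disparity converts to depth by the standard triangulation relation Z = f·b/d. A minimal sketch with hypothetical calibration values (the patent itself covers the correspondence, modeling, and rendering pipeline, not this particular formula):

```python
# Minimal sketch, not the patented pipeline: for a rectified projector-camera pair
# with focal length f (pixels) and baseline b (meters), a correspondence with
# disparity d (pixels) triangulates to depth Z = f * b / d.
def depth_from_disparity(disparity_px, focal_px=600.0, baseline_m=0.075):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid correspondence")
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(30.0))   # 1.5 m with the hypothetical calibration above
```
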
  • Patent number: 11423086
    Abstract: A data processing system performs data processing of raw or preprocessed data. The data includes log files, bitstream data, and other network traffic containing either cookie or device identifiers. In some embodiments, the data processing system includes a connectivity overlay engine comprising a data ingester, a connectivity generator, an event access control system, and a feature vector generation framework.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: August 23, 2022
    Assignee: The Trade Desk, Inc.
    Inventors: Jason Atlas, Fady Kalo, Jiefei Ma
  • Patent number: 11416989
    Abstract: A portable anomaly drug detection device is disclosed. The device includes at least one light source, a detector (camera) to scan or process the subject drug, and a control circuit having a controller. The at least one light source, the camera, and the control circuit are disposed within an enclosure. The controller is configured to process and analyze drug images captured by the camera when the light source emits light, and determines whether a drug is counterfeit upon detection of an anomaly within the captured images relative to a trained counterfeit detecting machine-learning model.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: August 16, 2022
    Assignee: PRECISE SOFTWARE SOLUTIONS, INC.
    Inventors: Xin Liu, Ruomin Ba, James Wang, Bin Duan, Xu Yang, Hang Wang
  • Patent number: 11363202
    Abstract: A method for video stabilization may include obtaining a target frame of a video; dividing a plurality of pixels of the target frame into a plurality of pixel groups; determining a plurality of first feature points in the target frame; determining first location information of the plurality of first feature points in the target frame; determining second location information of the plurality of first feature points in a frame prior to the target frame in the video; obtaining a global homography matrix; determining an offset of each of the plurality of first feature points; determining a fitting result based on the first location information and the offsets; for each of the plurality of pixel groups, determining a correction matrix; and for each of the plurality of pixel groups, processing the pixels in the pixel group based on the global homography matrix and the correction matrix.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: June 14, 2022
    Assignee: ZHEJIANG DAHUA TECHNOLOGY CO., LTD.
    Inventor: Tingniao Wang
  • Patent number: 11341728
    Abstract: Aspects of the present disclosure involve a system and a method for performing operations comprising: capturing an image that depicts currency, receiving an identification of an augmented reality experience associated with the image, the server identifying the augmented reality experience by assigning visual words to features of the image and searching a visual search database to identify a plurality of marker images associated with one or more charities, and the server retrieving the augmented reality experience associated with a given one of the plurality of marker images; automatically, in response to capturing the image that depicts currency, displaying one or more graphical elements of the identified augmented reality experience that represent a charity of the one or more charities; and receiving a donation to the charity from a user of the client device in response to an interaction with the one or more graphical elements that represent the charity.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: May 24, 2022
    Assignee: Snap Inc.
    Inventors: Hao Hu, Kevin Sarabia Dela Rosa, Bogdan Maksymchuk, Volodymyr Piven, Ekaterina Simbereva
  • Patent number: 11341736
    Abstract: Methods and apparatus to match images using semantic features are disclosed. An example apparatus includes a semantic labeler to determine a semantic label for each of a first set of points of a first image and each of a second set of points of a second image; a binary robust independent elementary features (BRIEF) determiner to determine semantic BRIEF descriptors for a first subset of the first set of points and a second subset of the second set of points based on the semantic labels; and a point matcher to match first points of the first subset of points to second points of the second subset of points based on the semantic BRIEF descriptors.
    Type: Grant
    Filed: March 1, 2018
    Date of Patent: May 24, 2022
    Assignee: INTEL CORPORATION
    Inventors: Yimin Zhang, Haibing Ren, Wei Hu, Ping Guo
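
A BRIEF descriptor is a bit string of pairwise intensity comparisons around a keypoint. The sketch below shows that classic construction on hypothetical data; in patent 11341736 above the descriptors are additionally driven by semantic labels, which is only hinted at here in the matching comment.

```python
import numpy as np

def brief_descriptor(image, keypoint, pairs):
    """BRIEF-style binary descriptor: one bit per pixel-pair intensity comparison,
    with the pairs given as offsets relative to the keypoint."""
    y, x = keypoint
    bits = [1 if image[y + dy1, x + dx1] < image[y + dy2, x + dx2] else 0
            for (dy1, dx1), (dy2, dx2) in pairs]
    return np.array(bits, dtype=np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
# Hypothetical fixed sampling pattern: 8 offset pairs inside a 9x9 patch.
pattern = [((rng.integers(-4, 5), rng.integers(-4, 5)),
            (rng.integers(-4, 5), rng.integers(-4, 5))) for _ in range(8)]
d1 = brief_descriptor(img, (20, 20), pattern)
d2 = brief_descriptor(img, (40, 35), pattern)
# Matching by Hamming distance; per the abstract, candidate pairs would first be
# restricted to points whose semantic labels agree.
print("hamming distance:", int(np.count_nonzero(d1 != d2)))
```
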
  • Patent number: 11340696
    Abstract: An event driven sensor (EDS) is used for simultaneous localization and mapping (SLAM) and in particular is used in conjunction with a constellation of light emitting diodes (LED) to simultaneously localize all LEDs and track EDS pose in space. The EDS may be stationary or moveable and can track moveable LED constellations as rigid bodies. Each individual LED is distinguished at a high rate using minimal computational resources (no image processing). Thus, instead of a camera and image processing, rapidly pulsing LEDs detected by the EDS are used for feature points such that EDS events are related to only one LED at a time.
    Type: Grant
    Filed: January 13, 2020
    Date of Patent: May 24, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Sergey Bashkirov
  • Patent number: 11315266
    Abstract: Depth perception has attracted increasing interest in the imaging community due to the growing use of deep neural networks for the generation of dense depth maps. The applications of depth estimation, however, may still be limited by the need for a large amount of dense ground-truth depth for training. It is contemplated that a self-supervised control strategy may be developed for estimating depth maps using color images and data provided by a sensor system (e.g., sparse LiDAR data). Such a self-supervised control strategy may leverage superpixels (i.e., groups of pixels that share common characteristics, for instance, pixel intensity) as local planar regions to regularize surface normal derivatives from estimated depth together with the photometric loss. The control strategy may be operable to produce a dense depth map that does not require dense ground-truth supervision.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: April 26, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Zhixin Yan, Liang Mi, Liu Ren
  • Patent number: 11288784
    Abstract: An automated method and apparatus are provided for identifying when a first video is a content-identical variant of a second video. The first and second video each include a plurality of image frames, and the image frames of either the first video or the second video include at least one black border. A plurality of variants are generated of selected image frames of the first video and the second video. The variants are then compared to each other, and the first video is identified as being a variant of the second video when at least one match is detected among the variants.
    Type: Grant
    Filed: September 16, 2021
    Date of Patent: March 29, 2022
    Assignee: ALPHONSO INC.
    Inventors: Aseem Saxena, Pulak Kuli, Tejas Digambar Deshpande, Manish Gupta
  • Patent number: 11282431
    Abstract: A system and method for updating a display of a display device. The display device may include a plurality of subpixels, and a display driver coupled to the plurality of subpixels. The display driver may be configured to compare a first data signal of a first statistically selected subpixel of the plurality of subpixels to a first statistically selected threshold, increase a value of a first counter corresponding to the first statistically selected subpixel in response to the first subpixel data signal exceeding the first statistically selected threshold, adjust the first subpixel data signal in response to the first counter value satisfying a second threshold, and drive the first statistically selected subpixel based at least in part on the adjusted first subpixel data signal.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: March 22, 2022
    Assignee: Synaptics Incorporated
    Inventors: Damien Berget, Joseph Kurth Reynolds
  • Patent number: 11256909
    Abstract: A method for pushing information based on a user emotion, which includes recording behavior habits of the user for a number of predefined emotions within a predefined time period, can be implemented in the disclosed electronic device. For each predefined emotion, a proportion of each behavior habit of the user is determined at predetermined time intervals. The device determines information to be pushed according to a current user emotion and the proportions of the behavior habits of the user corresponding to the current user emotion, and the electronic device is controlled to push the determined information.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: February 22, 2022
    Assignees: HONGFUJIN PRECISION ELECTRONICS (ZHENGZHOU) CO., LTD., HON HAI PRECISION INDUSTRY CO., LTD.
    Inventors: Lin-Hao Wang, Jun-Wei Zhang, Jun Zhang, Yi-Tao Kao
  • Patent number: 11183056
    Abstract: An electronic device and a method are provided. The electronic device includes: at least one sensing unit configured to acquire position data and image data for each of a plurality of nodes at predetermined intervals while the electronic device is moving; and a processor configured to extract an object from the image data, generate first object data for identifying the extracted object, and store position data of a first node corresponding to the image data from which the object has been extracted and the first object data corresponding to the first node.
    Type: Grant
    Filed: February 5, 2018
    Date of Patent: November 23, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Mid-eum Choi, A-ron Baik, Chang-soo Park, Jeong-eun Lee
  • Patent number: 11120294
    Abstract: A first set of pixels forming a first set of rows of a first image acquired by an image acquisition device is received. A first partially corrected set of pixels forming a first partially corrected set of rows is generated from the first set of pixels where successive pixels of a row from the first partially corrected set of rows correspond to successive parallel lines in a plane defined by the sheet of light that are equally spaced along a first axis of a world coordinate system. Based on the first partially corrected set of pixels and based on a peak extraction mechanism, a first partially corrected set of points of the sheet of light is extracted. The first partially corrected set of points is transformed to obtain a first corrected set of points of the sheet of light that are corrected in a first and a second direction.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: September 14, 2021
    Assignee: Matrox Electronic Systems Ltd.
    Inventors: Vincent Zalzal, Christopher Hirst, Steve Massicotte, Jean-Sébastien Lemieux
  • Patent number: 11120306
    Abstract: Examples of techniques for adaptive object recognition for a target visual domain given a generic machine learning model are provided. According to one or more embodiments of the present invention, a computer-implemented method for adaptive object recognition for a target visual domain given a generic machine learning model includes creating, by a processing device, an adapted model and identifying classes of the target visual domain using the generic machine learning model. The method further includes creating, by the processing device, a domain-constrained machine learning model based at least in part on the generic machine learning model such that the domain-constrained machine learning model is restricted to recognize only the identified classes of the target visual domain. The method further includes computing, by the processing device, a recognition result based at least in part on combining predictions of the domain-constrained machine learning model and the adapted model.
    Type: Grant
    Filed: November 2, 2017
    Date of Patent: September 14, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Nirmit V. Desai, Dawei Li, Theodoros Salonidis
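
One way to picture the domain constraint in patent 11120306 above is to zero out the generic model's scores for classes outside the target domain, renormalize, and blend with the adapted model's prediction. A minimal sketch with hypothetical class counts and probabilities (the patent's actual combination rule may differ):

```python
import numpy as np

def domain_constrained_combine(generic_probs, adapted_probs, domain_class_ids, alpha=0.5):
    """Restrict the generic model to the identified target-domain classes
    (zero out the rest and renormalize), then blend with the adapted model."""
    constrained = np.zeros_like(generic_probs)
    constrained[domain_class_ids] = generic_probs[domain_class_ids]
    constrained /= constrained.sum() + 1e-12        # renormalize over allowed classes
    return alpha * constrained + (1.0 - alpha) * adapted_probs

# Hypothetical 6-class generic model; classes {1, 3, 4} occur in the target domain.
generic = np.array([0.05, 0.40, 0.20, 0.10, 0.15, 0.10])
adapted = np.array([0.00, 0.55, 0.00, 0.25, 0.20, 0.00])
combined = domain_constrained_combine(generic, adapted, domain_class_ids=[1, 3, 4])
print("predicted class:", int(np.argmax(combined)))
```
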
  • Patent number: 11115654
    Abstract: An encoder is disclosed which includes circuitry and memory. Using the memory, the circuitry, in a first operating mode, derives first motion vectors for a first block obtained by splitting a picture, and generates a prediction image corresponding to the first block, with a bi-directional optical flow flag settable to true, and by referring to spatial gradients of luminance generated based on the first motion vectors. Using the memory, the circuitry, in a second operating mode, derives second motion vectors for a sub-block obtained by splitting a second block, the second block being obtained by splitting the picture, and generates a prediction image corresponding to the sub-block, with the bi-directional optical flow flag set to false.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: September 7, 2021
    Assignee: Panasonic Intellectual Property Corporation of America
    Inventors: Kiyofumi Abe, Takahiro Nishi, Tadamasa Toma, Ryuichi Kanoh
  • Patent number: 11106898
    Abstract: Systems and methods allow a data labeler to identify an expression in an image of a labelee's face without being provided with the image. In one aspect, the image of the labelee's face is analyzed to identify facial landmarks. A labeler is selected from a database who has similar facial characteristics to the labelee. A geometric mesh is built of the labeler's face and the geometric mesh is deformed based on the facial landmarks identified from the image of the labelee. The labeler may identify the facial expression or emotion of the geometric mesh.
    Type: Grant
    Filed: March 19, 2019
    Date of Patent: August 31, 2021
    Assignee: Buglife, Inc.
    Inventors: Daniel DeCovnick, David Schukin
  • Patent number: 11106936
    Abstract: Point clouds of objects are compared and matched using logical arrays based on the point clouds. The point clouds are azimuth aligned and translation aligned. The point clouds are converted into logical arrays for ease of processing. Then the logical arrays are compared (e.g. using the AND function and counting matches between the two logical arrays). The comparison is done at various quantization levels to determine which quantization level is likely to give the best object comparison result. Then the object comparison is made. More than two objects may be compared and the best match found.
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: August 31, 2021
    Assignee: General Atomics
    Inventor: Zachary Bergen
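
The logical-array comparison in patent 11106936 above can be sketched as voxelizing two aligned clouds into boolean occupancy grids, ANDing them, and counting matches at several quantization levels. The clouds, cell sizes, and scoring below are hypothetical illustrations of that idea:

```python
import numpy as np

def occupancy_grid(points, cell_size, bounds_min, grid_shape):
    """Quantize 3D points into a boolean occupancy array (True = occupied cell)."""
    idx = np.floor((points - bounds_min) / cell_size).astype(int)
    idx = np.clip(idx, 0, np.array(grid_shape) - 1)
    grid = np.zeros(grid_shape, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

def match_score(points_a, points_b, cell_size):
    """AND the two occupancy grids and count cells occupied in both clouds."""
    bounds_min = np.minimum(points_a.min(0), points_b.min(0))
    bounds_max = np.maximum(points_a.max(0), points_b.max(0))
    shape = tuple(np.ceil((bounds_max - bounds_min) / cell_size).astype(int) + 1)
    a = occupancy_grid(points_a, cell_size, bounds_min, shape)
    b = occupancy_grid(points_b, cell_size, bounds_min, shape)
    return int(np.count_nonzero(a & b))

# Hypothetical aligned clouds: B is A plus small measurement noise.
rng = np.random.default_rng(1)
A = rng.uniform(0, 1, size=(500, 3))
B = A + rng.normal(0, 0.005, size=A.shape)
for cell in (0.2, 0.1, 0.05):                     # several quantization levels
    print(f"cell={cell}: {match_score(A, B, cell)} matching cells")
```
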
  • Patent number: 11090810
    Abstract: A robot system includes a robot including a robot arm and a first camera, a second camera installed separately from the robot, and a control device which controls the robot and the second camera. The first camera has already been calibrated in advance, and a first calibration data which is the calibration data between the coordinate system of the robot and the coordinate system of the first camera is known. The control device (i) images a calibration pattern with the first camera to acquire a first pattern image and images a calibration pattern with the second camera to acquire a second pattern image, and (ii) executes calibration process for obtaining a second calibration data which is the calibration data between the coordinate system of the robot and the coordinate system of the second camera using the first pattern image, the second pattern image, and the first calibration data.
    Type: Grant
    Filed: October 10, 2018
    Date of Patent: August 17, 2021
    Assignee: Seiko Epson Corporation
    Inventor: Tomoki Harada
  • Patent number: 11004191
    Abstract: An image recognition processor for an industrial device, the image recognition processor implementing, on an integrated circuit thereof, the functions of storing an image data processing algorithm, which has been determined based on prior learning; acquiring image data of an image including a predetermined pattern; and performing recognition processing on the image data based on the image data processing algorithm to output identification information for identifying a recognized pattern.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: May 11, 2021
    Assignee: KABUSHIKI KAISHA YASKAWA DENKI
    Inventor: Masaru Adachi
  • Patent number: 10997233
    Abstract: In some examples, a computing device refines feature information of query text. The device repeatedly determines attention information based at least in part on feature information of the image and the feature information of the query text, and modifies the feature information of the query text based at least in part on the attention information. The device selects at least one of a predetermined plurality of outputs based at least in part on the refined feature information of the query text. In some examples, the device operates a convolutional computational model to determine feature information of the image. The device operates network computational models (NCMs) to determine feature information of the query and to determine attention information based at least in part on the feature information of the image and the feature information of the query. Examples include a microphone to detect audio corresponding to the query text.
    Type: Grant
    Filed: April 12, 2016
    Date of Patent: May 4, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Xiaodong He, Li Deng, Jianfeng Gao, Alex Smola, Zichao Yang
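
The refinement loop in patent 10997233 above repeatedly computes attention over image features from the current query-text feature and then updates that feature. Below is a minimal dot-product attention sketch with hypothetical dimensions and a made-up update rule; the patent's NCMs would be learned networks, not this fixed arithmetic.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def refine_query_feature(image_region_feats, text_feat, steps=2):
    """Attend over image regions with the current text feature, then nudge the
    text feature toward the attended context; repeat for a few steps."""
    q = text_feat.copy()
    for _ in range(steps):
        scores = image_region_feats @ q            # relevance of each image region
        weights = softmax(scores)                  # attention distribution
        context = weights @ image_region_feats     # attended image feature
        q = q + 0.5 * context                      # modify the query feature
    return q

# Hypothetical features: 4 image regions and one text query, all 8-dimensional.
rng = np.random.default_rng(2)
regions = rng.normal(size=(4, 8))
text = rng.normal(size=8)
print(refine_query_feature(regions, text))
```
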
  • Patent number: 10984224
    Abstract: A face detection method includes scaling an input image to images of various sizes according to certain proportions by means of an image pyramid; passing the resultant images through a first-level network in a sliding window manner to predict face coordinates, face confidences, and face orientations; filtering out the most negative samples by confidence rankings and sending the remaining image patches to a second-level network. Through a second-level network, filtering out non-face samples; applying a regression to obtain more precise position coordinates and providing prediction results of the face orientations. Through an angle arbitration mechanism, combining the prediction results of the preceding two networks to make a final arbitration for a rotation angle of each sample, rotating each of the image patches upright according to the arbitration result made by the angle arbitration mechanism and sending to a third-level network for fine-tuning to predict positions of keypoints.
    Type: Grant
    Filed: December 26, 2019
    Date of Patent: April 20, 2021
    Assignee: ZHUHAI EEASY TECHNOLOGY CO., LTD.
    Inventors: Xucheng Yin, Bowen Yang, Chun Yang
  • Patent number: 10909769
    Abstract: The mixed reality based 3D sketching device includes a processor; and a memory connected to the processor. The memory stores program commands that are executable by the processor to periodically track a marker pen photographed through a camera, to determine whether to remove a third point using a distance between a first point corresponding to a reference point, among points that are sequentially tracked, and a second point at a current time, a preset constant, and an angle between the first point, the second point, and the previously identified third point, to search for an object model corresponding to a 3D sketch that has been corrected, after correction is completed depending on the removal of the third point, and to display the searched object model on a screen.
    Type: Grant
    Filed: March 6, 2020
    Date of Patent: February 2, 2021
    Assignee: INDUSTRY ACADEMY COOPERATION FOUNDATION OF SEJONG UNIVERSITY
    Inventors: Soo Mi Choi, Jun Han Kim, Je Wan Han
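
The point-removal test in patent 10909769 above depends on the distance between the reference and current points, a preset constant, and the angle formed with the previously identified point. One plausible reading of that rule, sketched with hypothetical thresholds and coordinates (the patent's actual decision rule may differ):

```python
import numpy as np

def should_remove_third_point(p_ref, p_cur, p_third, dist_const=0.05, angle_thresh_deg=10.0):
    """Drop the previously identified point when the stroke has barely advanced
    (reference-to-current distance below the preset constant) or when it is
    nearly collinear with the reference and current points (small bend angle)."""
    advance = np.linalg.norm(p_cur - p_ref)
    v1, v2 = p_third - p_ref, p_cur - p_third
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    if denom == 0:
        return True
    bend = np.degrees(np.arccos(np.clip(v1 @ v2 / denom, -1.0, 1.0)))
    return advance < dist_const or bend < angle_thresh_deg

# Hypothetical tracked pen positions (meters): the third point lies almost on the
# straight line from the reference point to the current point.
p_ref = np.array([0.0, 0.0, 0.0])
p_third = np.array([0.05, 0.001, 0.0])
p_cur = np.array([0.10, 0.0, 0.0])
print(should_remove_third_point(p_ref, p_cur, p_third))   # True: nearly collinear
```
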
  • Patent number: 10896493
    Abstract: The present disclosure is directed toward intelligently mixing and matching faces and/or people to generate an enhanced image that reduces or minimizes artifacts and other defects. For example, the disclosed systems can selectively apply different alignment models to determine a relative alignment between a reference image and a target image having an improved instance of the person. Upon aligning the digital images, the disclosed systems can intelligently identify a replacement region based on a boundary that includes the target instance and the reference instance of the person without intersecting other objects or people in the image. Using the size and shape of the replacement region around the target instance and the reference instance, the systems replace the instance of the person in the reference image with the target instance. The alignment of the images and the intelligent selection of the replacement region minimize inconsistencies and/or artifacts in the final image.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: January 19, 2021
    Assignee: ADOBE INC.
    Inventors: Seyed Morteza Safdarnejad, Chih-Yao Hsieh
  • Patent number: 10783658
    Abstract: A method of processing an image including pixels distributed in cells and in blocks is disclosed, the method including the steps of: a) for each cell, generating n first intensity values of gradients having different orientations, each first value being a weighted sum of the values of the pixels of the cell; b) for each cell, determining a main gradient orientation of the cell and a second value representative of the intensity of the gradient in the main orientation; c) for each block, generating a descriptor of n values respectively corresponding, for each of the n gradient orientations, to the sum of the second values of the cells of the block having the gradient orientation considered as the main gradient orientation.
    Type: Grant
    Filed: July 10, 2018
    Date of Patent: September 22, 2020
    Assignee: COMMISSARIAT À L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
    Inventors: Camille Dupoiron, William Guicquero, Gilles Sicard, Arnaud Verdant
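
Patent 10783658 above builds, per cell, n oriented gradient intensities, keeps the dominant orientation and its intensity, and sums those intensities per orientation over the cells of a block. The sketch below approximates the oriented responses with finite differences; the cell size, orientation count, and test block are hypothetical.

```python
import numpy as np

def cell_gradients(cell, n_orientations=4):
    """Approximate gradient intensity along n orientations by projecting the
    finite-difference gradient field onto each direction and summing magnitudes."""
    gy, gx = np.gradient(cell.astype(float))
    angles = np.linspace(0, np.pi, n_orientations, endpoint=False)
    return np.array([np.abs(gx * np.cos(a) + gy * np.sin(a)).sum() for a in angles])

def block_descriptor(block, cell_size=8, n_orientations=4):
    """Per cell, keep only the dominant orientation's intensity; the block
    descriptor sums those intensities per orientation over the block's cells."""
    h, w = block.shape
    desc = np.zeros(n_orientations)
    for y in range(0, h, cell_size):
        for x in range(0, w, cell_size):
            g = cell_gradients(block[y:y + cell_size, x:x + cell_size], n_orientations)
            main = int(np.argmax(g))     # main gradient orientation of the cell
            desc[main] += g[main]        # intensity of the gradient in that orientation
    return desc

# Hypothetical 16x16 block (2x2 cells of 8x8 pixels) containing a vertical edge.
block = np.zeros((16, 16)); block[:, 6:] = 255
print(block_descriptor(block))
```
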
  • Patent number: 10765864
    Abstract: The present disclosure provides a method and device for filtering sensor data. Signals from an array of sensor pixels are received and checked for changes in pixel values. Motion is detected based on the changes in pixel values, and motion output signals are transmitted to a processing station. If the sum of correlated changes in pixel values across a predetermined field of view exceeds a predetermined value, indicating sensor jitter, the motion output signals are suppressed. If a sum of motion values within a defined subsection of the field of view exceeds a predetermined threshold, indicating the presence of a large object of no interest, the motion output signals are suppressed for that subsection.
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: September 8, 2020
    Assignee: National Technology & Engineering Solutions of Sandia, LLC
    Inventors: Frances S. Chance, Christina E. Warrender
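
The two suppression rules in patent 10765864 above (global suppression when correlated change spans the field of view, and per-subsection suppression when a large object of no interest dominates) can be sketched roughly as below; the thresholds, grid size, and frames are hypothetical.

```python
import numpy as np

def filter_motion(prev_frame, frame, pixel_thresh=10, jitter_frac=0.5, region_frac=0.8, grid=4):
    """Boolean motion map, suppressed globally when too much of the field of view
    changed at once (jitter) and per subsection when that subsection is dominated
    by a large changed region of no interest."""
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > pixel_thresh
    if motion.mean() > jitter_frac:            # correlated change across the whole FOV
        return np.zeros_like(motion)           # suppress all motion outputs
    h, w = motion.shape
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * h // grid, (i + 1) * h // grid)
            xs = slice(j * w // grid, (j + 1) * w // grid)
            if motion[ys, xs].mean() > region_frac:   # large object fills this subsection
                motion[ys, xs] = False
    return motion

# Hypothetical 64x64 frames where only a small blob moves between frames.
rng = np.random.default_rng(3)
f0 = rng.integers(0, 50, (64, 64))
f1 = f0.copy(); f1[10:14, 20:24] += 100
print("moving pixels kept:", int(filter_motion(f0, f1).sum()))
```
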
  • Patent number: 10753881
    Abstract: Systems and methods capable of autonomous crack detection in surfaces by analyzing video of the surface. The systems and methods include the capability to produce a video of the surfaces, the capability to analyze individual frames of the video to obtain surface texture feature data for areas of the surfaces depicted in each of the individual frames, the capability to analyze the surface texture feature data to detect surface texture features in the areas of the surfaces depicted in each of the individual frames, the capability of tracking the motion of the detected surface texture features in the individual frames to produce tracking data, and the capability of using the tracking data to filter non-crack surface texture features from the detected surface texture features in the individual frames.
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: August 25, 2020
    Assignee: Purdue Research Foundation
    Inventors: Mohammad Reza Jahanshahi, Fu-Chen Chen
  • Patent number: 10728543
    Abstract: An encoder includes memory and circuitry. Using the memory, the circuitry: in a first operating mode, derives a first motion vector in a unit of a prediction block obtained by splitting an image included in a video, and performs, in the unit of the prediction block, a first motion compensation process that generates a prediction image by referring to a spatial gradient of luminance in an image generated by performing motion compensation using the first motion vector derived; and in a second operating mode, derives a second motion vector in a unit of a sub-block obtained by splitting the prediction block, and performs, in the unit of the sub-block, a second motion compensation process that generates a prediction image without referring to a spatial gradient of luminance in an image generated by performing motion compensation using the second motion vector derived.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: July 28, 2020
    Assignee: Panasonic Intellectual Property Corporation of America
    Inventors: Kiyofumi Abe, Takahiro Nishi, Tadamasa Toma, Ryuichi Kanoh
  • Patent number: 10628702
    Abstract: A template image of a form document is processed to identify alignment regions used to match a query image to the template image and processing regions to identify areas from which information is to be extracted. Processing of the template image includes the identification of a set of meaningful template vectors. A query image is processed to determine meaningful query vectors. The meaningful template vectors are compared with the meaningful query vectors to determine whether the format of the template and query images match. Upon achievement of a match, a homography between the images is determined. In the event a homography threshold has been met, information is extracted from the query image.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: April 21, 2020
    Inventors: Candice R. Gerstner, Katelyn J. Meixner
  • Patent number: 10510145
    Abstract: A medical image comparison method firstly obtains plural images of the same body at different time points. Then, a first feature point group is obtained by detecting feature points in the first image captured at a first time point, and a second feature point group by identifying feature points in the second image captured at a second time point. Overlapping image information is generated by aligning the second image with the first image according to the first and second feature point groups. Then, window areas corresponding to a first matching image and a second matching image of the overlapping image information are extracted one by one by sliding a window mask, and an image difference ratio for each of the window areas is calculated. In addition, a medical image comparison system is also provided.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: December 17, 2019
    Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Jian-Ren Chen, Guan-An Chen, Su-Chen Huang, Yue-Min Jiang
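
The sliding-window comparison in patent 10510145 above computes an image difference ratio per window over the aligned (overlapping) images. A minimal sketch with hypothetical window size, stride, threshold, and images:

```python
import numpy as np

def window_difference_ratios(img_a, img_b, win=32, stride=32, diff_thresh=25):
    """Slide a window mask over two aligned images and report, per window area,
    the fraction of pixels whose absolute difference exceeds a threshold."""
    h, w = img_a.shape
    ratios = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            a = img_a[y:y + win, x:x + win].astype(int)
            b = img_b[y:y + win, x:x + win].astype(int)
            ratios.append(((y, x), float((np.abs(a - b) > diff_thresh).mean())))
    return ratios

# Hypothetical pair of aligned 64x64 images where one quadrant changed between time points.
rng = np.random.default_rng(4)
first = rng.integers(0, 200, (64, 64))
second = first.copy(); second[:32, :32] = rng.integers(0, 200, (32, 32))
for pos, r in window_difference_ratios(first, second):
    print(pos, round(r, 2))
```
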
  • Patent number: 10496694
    Abstract: For an augmented reality (AR) content creation system having a marker database, when a user requests this system to use a first sub-image of an image to update the marker database, this system computes a suitability score of the first sub-image for rating feature richness of the first sub-image and uniqueness thereof against existing markers in the marker database. When the suitability score is less than a threshold value, a second sub-image of the image having a suitability score not less than the threshold value and completely containing the first sub-image is searched for. Then the second sub-image, the suitability score thereof and the suitability score of the first sub-image are substantially-immediately presented to the user, suggesting in real time that the user use the second sub-image instead of the first sub-image as a new marker in updating the marker database to increase feature richness or uniqueness of the new marker.
    Type: Grant
    Filed: March 21, 2016
    Date of Patent: December 3, 2019
    Assignee: Hong Kong Applied Science and Technology Research Institute Company Limited
    Inventors: Kar-Wing Edward Lor, King Wai Chow, Laifa Fang
  • Patent number: 10460432
    Abstract: A system for strain testing employs an ink aspiration system adapted to apply a stochastic ink pattern. The stochastic pattern is applied to a test article and a test fixture receives the test article. A digital image correlation (DIC) imaging and calculation system is positioned relative to the test article to image the stochastic ink pattern.
    Type: Grant
    Filed: October 5, 2016
    Date of Patent: October 29, 2019
    Assignee: The Boeing Company
    Inventor: Antonio L. Boscia
  • Patent number: 10460156
    Abstract: Various aspects of an image-processing apparatus and method to track and retain an articulated object in a sequence of image frames are disclosed. The image-processing apparatus is configured to segment each image frame in the sequence of image frames into different segmented regions that corresponds to different super-pixels. An articulated object in a first motion state is detected by non-zero temporal derivatives between a first image frame and a second image frame. A first connectivity graph of a first set of super-pixels of the first image frame, is constructed. A second connectivity graph of a second set of super-pixels of the second image frame, is further constructed. A complete object mask of the articulated object in a second motion state is generated based on the first connectivity graph and the second connectivity graph, where at least a portion of the articulated object is stationary in the second motion state.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: October 29, 2019
    Assignee: SONY CORPORATION
    Inventor: Daniel Usikov
  • Patent number: 10215858
    Abstract: Examples relating to the detection of rigid shaped objects are described herein. An example method may involve a computing system determining a first point cloud representation of an environment at a first time using a depth sensor positioned within the environment. The computing system may also determine a second point cloud representation of the environment at a second time using the depth sensor. This way, the computing system may detect a change in position of a rigid shape between a first position in the first point cloud representation and a second position in the second point cloud representation. Based on the detected change in position of the rigid shape, the computing system may determine that the rigid shape is representative of an object in the environment and store information corresponding to the object.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: February 26, 2019
    Assignee: Google LLC
    Inventors: Greg Joseph Klein, Arshan Poursohi, Sumit Jain, Daniel Aden
  • Patent number: 10192287
    Abstract: An image processing method is adapted to process images captured by at least two cameras in an image system. In an embodiment, the image processing method comprises: matching two corresponding feature points for two images, respectively, to become a feature point set; selecting at least five most suitable feature point sets, by using an iterative algorithm; calculating a most suitable radial distortion homography between the two images, according to the at least five most suitable feature point sets; and fusing the images captured by the at least two cameras at each of the timing sequences, by using the most suitable radial distortion homography.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: January 29, 2019
    Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Sheng-Wei Chan, Che-Tsung Lin