Point Features (e.g., Spatial Coordinate Descriptors) Patents (Class 382/201)
  • Patent number: 11551338
    Abstract: The present disclosure is directed toward intelligently mixing and matching faces and/or people to generate an enhanced image that reduces or minimizes artifacts and other defects. For example, the disclosed systems can selectively apply different alignment models to determine a relative alignment between a reference image and a target image having an improved instance of the person. Upon aligning the digital images, the disclosed systems can intelligently identify a replacement region based on a boundary that includes the target instance and the reference instance of the person without intersecting other objects or people in the image. Using the size and shape of the replacement region around the target instance and the reference instance, the systems replace the instance of the person in the reference image with the target instance. The alignment of the images and the intelligent selection of the replacement region minimizes inconsistencies and/or artifacts in the final image.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: January 10, 2023
    Assignee: Adobe Inc.
    Inventors: Seyed Morteza Safdarnejad, Chih-Yao Hsieh
  • Patent number: 11543532
    Abstract: According to some embodiments, there is provided a method comprising labeling each of a plurality of subjects in an image with a label, the label for each subject indicating a kind of the subject, wherein the labeling comprises analyzing the image and/or distance measurement points for an area depicted in the image, and determining additional distance information not included in the distance measurement points, wherein determining the additional distance information comprises interpolating and/or generating a distance to a point based at least in part on at least some of the distance measurement points, wherein the at least some of the distance measurement points are selected based at least in part on labels assigned to one or more of the plurality of subjects.
    Type: Grant
    Filed: July 30, 2018
    Date of Patent: January 3, 2023
    Assignee: SONY CORPORATION
    Inventors: Takuto Motoyama, Shingo Tsurumi
  • Patent number: 11468673
    Abstract: An augmented reality system having a light source and a camera. The light source projects a pattern of light onto a scene, the pattern being periodic. The camera captures an image of the scene including the projected pattern. A projector pixel of the projected pattern corresponding to an image pixel of the captured image is determined. A disparity of each correspondence is determined, the disparity being an amount that corresponding pixels are displaced between the projected pattern and the captured image. A three-dimensional computer model of the scene is generated based on the disparity. A virtual object in the scene is rendered based on the three-dimensional computer model.
    Type: Grant
    Filed: January 6, 2021
    Date of Patent: October 11, 2022
    Assignee: Snap Inc.
    Inventors: Mohit Gupta, Shree K. Nayar, Vishwanath Saragadam Raja Venkata
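The core geometric step underlying abstracts like the one above, converting projector-camera disparity into depth, is standard triangulation. A minimal Python sketch, assuming a rectified projector-camera pair; the focal length and baseline values are illustrative, not taken from the patent:

```python
# Minimal sketch of disparity-to-depth triangulation for a rectified
# projector-camera pair; parameter names and values are illustrative only.
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray,
                         focal_px: float = 800.0,
                         baseline_m: float = 0.05) -> np.ndarray:
    """Convert per-pixel projector/camera disparity to metric depth.

    Standard triangulation: Z = f * B / d, with invalid (zero or negative)
    disparities mapped to NaN.
    """
    disparity_px = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full_like(disparity_px, np.nan)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

if __name__ == "__main__":
    d = np.array([[8.0, 16.0], [0.0, 32.0]])
    print(depth_from_disparity(d))  # larger disparity -> closer surface
```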
  • Patent number: 11423086
    Abstract: A data processing system performs data processing of raw or preprocessed data. The data includes log files, bitstream data, and other network traffic containing either cookie or device identifiers. In some embodiments, the data processing system includes a connectivity overlay engine comprising a data ingester, a connectivity generator, an event access control system, and a feature vector generation framework.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: August 23, 2022
    Assignee: The Trade Desk, Inc.
    Inventors: Jason Atlas, Fady Kalo, Jiefei Ma
  • Patent number: 11416989
    Abstract: A portable anomaly drug detection device is disclosed. The device includes at least one light source, a camera to scan or process the subject drug, and a control circuit having a controller. The at least one light source, the camera, and the control circuit are disposed within an enclosure. The controller is configured to process and analyze drug images captured by the camera when the light source emits light, and to determine whether a drug is counterfeit upon detection of an anomaly within the captured images relative to a trained counterfeit-detecting machine-learning model.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: August 16, 2022
    Assignee: PRECISE SOFTWARE SOLUTIONS, INC.
    Inventors: Xin Liu, Ruomin Ba, James Wang, Bin Duan, Xu Yang, Hang Wang
  • Patent number: 11363202
    Abstract: A method for video stabilization may include obtaining a target frame of a video; dividing a plurality of pixels of the target frame into a plurality of pixel groups; determining a plurality of first feature points in the target frame; determining first location information of the plurality of first feature points in the target frame; determining second location information of the plurality of first feature points in a frame prior to the target frame in the video; obtaining a global homography matrix; determining an offset of each of the plurality of first feature points; determining a fitting result based on the first location information and the offsets; for each of the plurality of pixel groups, determining a correction matrix; and for each of the plurality of pixel groups, processing the pixels in the pixel group based on the global homography matrix and the correction matrix.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: June 14, 2022
    Assignee: ZHEJIANG DAHUA TECHNOLOGY CO., LTD.
    Inventor: Tingniao Wang
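As a rough illustration of the global-homography portion of a pipeline like the one described above, the sketch below tracks feature points from a prior frame into the target frame and warps the target frame with the estimated homography. The patent's per-pixel-group correction matrices and fitting step are not reproduced, and all parameter values are arbitrary:

```python
# Sketch of the "global homography" step only: feature tracking between a
# prior frame and the target frame, then a full-frame warp.
import cv2
import numpy as np

def stabilize_frame(prev_gray: np.ndarray, target_gray: np.ndarray) -> np.ndarray:
    # Feature points in the prior frame and their locations in the target frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                       qualityLevel=0.01, minDistance=8)
    pts_tgt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, target_gray,
                                                  pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_tgt = pts_tgt[status.ravel() == 1]

    # Global homography describing motion from the target frame back to the prior frame.
    H, _ = cv2.findHomography(good_tgt, good_prev, cv2.RANSAC, 3.0)

    h, w = target_gray.shape[:2]
    return cv2.warpPerspective(target_gray, H, (w, h))
```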
  • Patent number: 11341728
    Abstract: Aspects of the present disclosure involve a system and a method for performing operations comprising: capturing an image that depicts currency, receiving an identification of an augmented reality experience associated with the image, the server identifying the augmented reality experience by assigning visual words to features of the image and searching a visual search database to identify a plurality of marker images associated with one or more charities, and the server retrieving the augmented reality experience associated with a given one of the plurality of marker images; automatically, in response to capturing the image that depicts currency, displaying one or more graphical elements of the identified augmented reality experience that represent a charity of the one or more charities; and receiving a donation to the charity from a user of the client device in response to an interaction with the one or more graphical elements that represent the charity.
    Type: Grant
    Filed: October 13, 2020
    Date of Patent: May 24, 2022
    Assignee: Snap Inc.
    Inventors: Hao Hu, Kevin Sarabia Dela Rosa, Bogdan Maksymchuk, Volodymyr Piven, Ekaterina Simbereva
  • Patent number: 11340696
    Abstract: An event driven sensor (EDS) is used for simultaneous localization and mapping (SLAM) and in particular is used in conjunction with a constellation of light emitting diodes (LED) to simultaneously localize all LEDs and track EDS pose in space. The EDS may be stationary or moveable and can track moveable LED constellations as rigid bodies. Each individual LED is distinguished at a high rate using minimal computational resources (no image processing). Thus, instead of a camera and image processing, rapidly pulsing LEDs detected by the EDS are used for feature points such that EDS events are related to only one LED at a time.
    Type: Grant
    Filed: January 13, 2020
    Date of Patent: May 24, 2022
    Assignee: Sony Interactive Entertainment Inc.
    Inventor: Sergey Bashkirov
  • Patent number: 11341736
    Abstract: Methods and apparatus to match images using semantic features are disclosed. An example apparatus includes a semantic labeler to determine a semantic label for each of a first set of points of a first image and each of a second set of points of a second image; a binary robust independent element features (BRIEF) determiner to determine semantic BRIEF descriptors for a first subset of the first set of points and a second subset of the second set of points based on the semantic labels; and a point matcher to match first points of the first subset of points to second points of the second subset of points based on the semantic BRIEF descriptors.
    Type: Grant
    Filed: March 1, 2018
    Date of Patent: May 24, 2022
    Assignee: INTEL CORPORATION
    Inventors: Yimin Zhang, Haibing Ren, Wei Hu, Ping Guo
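A hedged sketch of the general idea of combining binary descriptors with semantic labels: ORB (a BRIEF-based binary descriptor) stands in for the patent's semantic BRIEF descriptors, and the per-pixel label maps are assumed inputs from any segmentation model:

```python
# Illustrative only: keep keypoints whose semantic label matches, describe them
# with ORB (BRIEF-style binary descriptors), and match by Hamming distance.
import cv2
import numpy as np

def semantic_matches(img1, labels1, img2, labels2, wanted_label=1):
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    def keep(kps, des, labels):
        # Retain only keypoints lying on pixels with the wanted semantic label.
        idx = [i for i, k in enumerate(kps)
               if labels[int(k.pt[1]), int(k.pt[0])] == wanted_label]
        return [kps[i] for i in idx], des[idx]

    kp1, des1 = keep(kp1, des1, labels1)
    kp2, des2 = keep(kp2, des2, labels2)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return sorted(matcher.match(des1, des2), key=lambda m: m.distance)
```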
  • Patent number: 11315266
    Abstract: Depth perception has attracted increasing interest in the imaging community due to the increasing use of deep neural networks for the generation of dense depth maps. The applications of depth perception estimation, however, may still be limited by the need for a large amount of dense ground-truth depth data for training. It is contemplated that a self-supervised control strategy may be developed for estimating depth maps using color images and data provided by a sensor system (e.g., sparse LiDAR data). Such a self-supervised control strategy may leverage superpixels (i.e., groups of pixels that share common characteristics, for instance, pixel intensity) as local planar regions to regularize surface normal derivatives from the estimated depth together with the photometric loss. The control strategy may be operable to produce a dense depth map that does not require dense ground-truth supervision.
    Type: Grant
    Filed: December 16, 2019
    Date of Patent: April 26, 2022
    Assignee: Robert Bosch GmbH
    Inventors: Zhixin Yan, Liang Mi, Liu Ren
  • Patent number: 11288784
    Abstract: An automated method and apparatus are provided for identifying when a first video is a content-identical variant of a second video. The first and second video each include a plurality of image frames, and the image frames of either the first video or the second video include at least one black border. A plurality of variants are generated of selected image frames of the first video and the second video. The variants are then compared to each other, and the first video is identified as being a variant of the second video when at least one match is detected among the variants.
    Type: Grant
    Filed: September 16, 2021
    Date of Patent: March 29, 2022
    Assignee: ALPHONSO INC.
    Inventors: Aseem Saxena, Pulak Kuli, Tejas Digambar Deshpande, Manish Gupta
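To illustrate the variant-comparison idea in general terms, the sketch below builds a border-cropped variant of each frame and compares variants with a simple average hash; hash size and thresholds are arbitrary choices, not values from the patent:

```python
# Illustrative sketch: generate a black-border-cropped variant of each frame and
# declare a match if any pair of variants agrees under an average hash.
import numpy as np

def crop_black_borders(gray: np.ndarray, thresh: int = 10) -> np.ndarray:
    rows = np.where(gray.max(axis=1) > thresh)[0]
    cols = np.where(gray.max(axis=0) > thresh)[0]
    if rows.size == 0 or cols.size == 0:
        return gray
    return gray[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

def average_hash(gray: np.ndarray, size: int = 8) -> np.ndarray:
    # Downsample by block averaging, then threshold at the mean.
    h, w = gray.shape
    ys = np.arange(size + 1) * h // size
    xs = np.arange(size + 1) * w // size
    small = np.array([[gray[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                       for j in range(size)] for i in range(size)])
    return (small > small.mean()).ravel()

def frames_match(frame_a: np.ndarray, frame_b: np.ndarray, max_bits: int = 6) -> bool:
    # Compare every variant of A (original, cropped) against every variant of B.
    variants_a = [frame_a, crop_black_borders(frame_a)]
    variants_b = [frame_b, crop_black_borders(frame_b)]
    return any(np.count_nonzero(average_hash(a) != average_hash(b)) <= max_bits
               for a in variants_a for b in variants_b)
```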
  • Patent number: 11282431
    Abstract: A system and method for updating a display of a display device. The display device may include a plurality of subpixels, and a display driver coupled to the plurality of subpixels. The display driver may be configured to compare a first data signal of a first statistically selected subpixel of the plurality of subpixels to a first statistically selected threshold, increase a value of a first counter corresponding to the first statistically selected subpixel in response to the first subpixel data signal exceeding the first statistically selected threshold, adjust the first subpixel data signal in response to the first counter value satisfying a second threshold, and drive the first statistically selected subpixel based at least in part on the adjusted first subpixel data signal.
    Type: Grant
    Filed: October 21, 2019
    Date of Patent: March 22, 2022
    Assignee: Synaptics Incorporated
    Inventors: Damien Berget, Joseph Kurth Reynolds
  • Patent number: 11256909
    Abstract: A method for pushing information based on a user's emotion, which records the user's behavior habits for a number of predefined emotions within a predefined time period, can be implemented in the disclosed electronic device. For each predefined emotion, a proportion of each behavior habit of the user is determined at predetermined time intervals. The device determines information to be pushed according to a current user emotion and the proportions of the behavior habits of the user corresponding to the current user emotion, and the electronic device is controlled to push the determined information.
    Type: Grant
    Filed: January 22, 2020
    Date of Patent: February 22, 2022
    Assignees: HONGFUJIN PRECISION ELECTRONICS (ZHENGZHOU) CO., LTD., HON HAI PRECISION INDUSTRY CO., LTD.
    Inventors: Lin-Hao Wang, Jun-Wei Zhang, Jun Zhang, Yi-Tao Kao
  • Patent number: 11183056
    Abstract: An electronic device and a method are provided. The electronic device includes: at least one sensing unit configured to acquire position data and image data for each of a plurality of nodes at predetermined intervals while the electronic device is moving; and a processor configured to extract an object from the image data, generate first object data for identifying the extracted object, and store position data of a first node corresponding to the image data from which the object has been extracted and the first object data corresponding to the first node.
    Type: Grant
    Filed: February 5, 2018
    Date of Patent: November 23, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Mid-eum Choi, A-ron Baik, Chang-soo Park, Jeong-eun Lee
  • Patent number: 11120306
    Abstract: Examples of techniques for adaptive object recognition for a target visual domain given a generic machine learning model are provided. According to one or more embodiments of the present invention, a computer-implemented method for adaptive object recognition for a target visual domain given a generic machine learning model includes creating, by a processing device, an adapted model and identifying classes of the target visual domain using the generic machine learning model. The method further includes creating, by the processing device, a domain-constrained machine learning model based at least in part on the generic machine learning model such that the domain-constrained machine learning model is restricted to recognize only the identified classes of the target visual domain. The method further includes computing, by the processing device, a recognition result based at least in part on combining predictions of the domain-constrained machine learning model and the adapted model.
    Type: Grant
    Filed: November 2, 2017
    Date of Patent: September 14, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Nirmit V. Desai, Dawei Li, Theodoros Salonidis
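A minimal numpy sketch of the described combination: the generic model's class probabilities are restricted to the target domain's classes, renormalized, and blended with an adapted model's prediction. The class list and blend weight are illustrative assumptions:

```python
# Illustrative only: restrict a generic classifier's output to the identified
# domain classes, then combine with an adapted model's prediction.
import numpy as np

def domain_constrained(generic_probs: np.ndarray, domain_classes) -> np.ndarray:
    constrained = np.zeros_like(generic_probs)
    constrained[domain_classes] = generic_probs[domain_classes]
    total = constrained.sum()
    return constrained / total if total > 0 else constrained

def combined_prediction(generic_probs, adapted_probs, domain_classes, alpha=0.5):
    constrained = domain_constrained(np.asarray(generic_probs, dtype=float),
                                     domain_classes)
    return alpha * constrained + (1.0 - alpha) * np.asarray(adapted_probs, dtype=float)

if __name__ == "__main__":
    generic = np.array([0.05, 0.40, 0.30, 0.25])   # generic model over 4 classes
    adapted = np.array([0.00, 0.70, 0.20, 0.10])   # adapted model output
    print(combined_prediction(generic, adapted, domain_classes=[1, 2]))
```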
  • Patent number: 11120294
    Abstract: A first set of pixels forming a first set of rows of a first image acquired by an image acquisition device is received. A first partially corrected set of pixels forming a first partially corrected set of rows is generated from the first set of pixels where successive pixels of a row from the first partially corrected set of rows correspond to successive parallel lines in a plane defined by the sheet of light that are equally spaced along a first axis of a world coordinate system. Based on the first partially corrected set of pixels and based on a peak extraction mechanism, a first partially corrected set of points of the sheet of light is extracted. The first partially corrected set of points is transformed to obtain a first corrected set of points of the sheet of light that are corrected in a first and a second direction.
    Type: Grant
    Filed: August 8, 2019
    Date of Patent: September 14, 2021
    Assignee: Matrox Electronic Systems Ltd.
    Inventors: Vincent Zalzal, Christopher Hirst, Steve Massicotte, Jean-Sébastien Lemieux
  • Patent number: 11115654
    Abstract: An encoder is disclosed which includes circuitry and memory. Using the memory, the circuitry, in a first operating mode, derives first motion vectors for a first block obtained by splitting a picture, and generates a prediction image corresponding to the first block, with a bi-directional optical flow flag settable to true, and by referring to spatial gradients of luminance generated based on the first motion vectors. Using the memory, the circuitry, in a second operating mode, derives second motion vectors for a sub-block obtained by splitting a second block, the second block being obtained by splitting the picture, and generates a prediction image corresponding to the sub-block, with the bi-directional optical flow flag set to false.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: September 7, 2021
    Assignee: Panasonic Intellectual Property Corporation of America
    Inventors: Kiyofumi Abe, Takahiro Nishi, Tadamasa Toma, Ryuichi Kanoh
  • Patent number: 11106936
    Abstract: Point clouds of objects are compared and matched using logical arrays based on the point clouds. The point clouds are azimuth aligned and translation aligned. The point clouds are converted into logical arrays for ease of processing. Then the logical arrays are compared (e.g. using the AND function and counting matches between the two logical arrays). The comparison is done at various quantization levels to determine which quantization level is likely to give the best object comparison result. Then the object comparison is made. More than two objects may be compared and the best match found.
    Type: Grant
    Filed: September 5, 2019
    Date of Patent: August 31, 2021
    Assignee: General Atomics
    Inventor: Zachary Bergen
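A short sketch of the logical-array comparison idea, assuming the two point clouds have already been azimuth- and translation-aligned: each cloud is quantized into a boolean occupancy grid at several levels, the grids are ANDed, and coincident occupied cells are counted. Grid sizes are arbitrary:

```python
# Illustrative only: voxelize aligned point clouds into boolean grids at several
# quantization levels and keep the best AND-overlap count.
import numpy as np

def to_logical_array(points: np.ndarray, lo: np.ndarray, hi: np.ndarray,
                     cells: int) -> np.ndarray:
    """Quantize Nx3 points into a cells^3 boolean occupancy grid over [lo, hi]."""
    span = (hi - lo).astype(float)
    span[span == 0] = 1.0
    idx = np.clip(((points - lo) / span * (cells - 1)).astype(int), 0, cells - 1)
    grid = np.zeros((cells, cells, cells), dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

def match_score(cloud_a: np.ndarray, cloud_b: np.ndarray, levels=(8, 16, 32)) -> int:
    # Shared bounds so both clouds are quantized into the same grid.
    both = np.vstack([cloud_a, cloud_b])
    lo, hi = both.min(axis=0), both.max(axis=0)
    # AND the occupancy grids and count coincident cells; keep the best level.
    return max(int(np.count_nonzero(to_logical_array(cloud_a, lo, hi, n) &
                                    to_logical_array(cloud_b, lo, hi, n)))
               for n in levels)
```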
  • Patent number: 11106898
    Abstract: Systems and methods allow a data labeler to identify an expression in an image of a labelee's face without being provided with the image. In one aspect, the image of the labelee's face is analyzed to identify facial landmarks. A labeler is selected from a database who has similar facial characteristics to the labelee. A geometric mesh is built of the labeler's face and the geometric mesh is deformed based on the facial landmarks identified from the image of the labelee. The labeler may identify the facial expression or emotion of the geometric mesh.
    Type: Grant
    Filed: March 19, 2019
    Date of Patent: August 31, 2021
    Assignee: Buglife, Inc.
    Inventors: Daniel DeCovnick, David Schukin
  • Patent number: 11090810
    Abstract: A robot system includes a robot including a robot arm and a first camera, a second camera installed separately from the robot, and a control device which controls the robot and the second camera. The first camera has already been calibrated in advance, and a first calibration data which is the calibration data between the coordinate system of the robot and the coordinate system of the first camera is known. The control device (i) images a calibration pattern with the first camera to acquire a first pattern image and images a calibration pattern with the second camera to acquire a second pattern image, and (ii) executes calibration process for obtaining a second calibration data which is the calibration data between the coordinate system of the robot and the coordinate system of the second camera using the first pattern image, the second pattern image, and the first calibration data.
    Type: Grant
    Filed: October 10, 2018
    Date of Patent: August 17, 2021
    Assignee: Seiko Epson Corporation
    Inventor: Tomoki Harada
  • Patent number: 11004191
    Abstract: An image recognition processor for an industrial device, the image recognition processor implementing, on an integrated circuit thereof, the functions of storing an image data processing algorithm, which has been determined based on prior learning; acquiring image data of an image including a predetermined pattern; and performing recognition processing on the image data based on the image data processing algorithm to output identification information for identifying a recognized pattern.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: May 11, 2021
    Assignee: KABUSHIKI KAISHA YASKAWA DENKI
    Inventor: Masaru Adachi
  • Patent number: 10997233
    Abstract: In some examples, a computing device refines feature information of query text. The device repeatedly determines attention information based at least in part on feature information of the image and the feature information of the query text, and modifies the feature information of the query text based at least in part on the attention information. The device selects at least one of a predetermined plurality of outputs based at least in part on the refined feature information of the query text. In some examples, the device operates a convolutional computational model to determine feature information of the image. The device operates network computational models (NCMs) to determine feature information of the query and to determine attention information based at least in part on the feature information of the image and the feature information of the query. Examples include a microphone to detect audio corresponding to the query text.
    Type: Grant
    Filed: April 12, 2016
    Date of Patent: May 4, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Xiaodong He, Li Deng, Jianfeng Gao, Alex Smola, Zichao Yang
  • Patent number: 10984224
    Abstract: A face detection method includes scaling an input image to images of various sizes according to certain proportions by means of an image pyramid; passing the resultant images through a first-level network in a sliding window manner to predict face coordinates, face confidences, and face orientations; and filtering out most of the negative samples by confidence ranking and sending the remaining image patches to a second-level network. Through the second-level network, non-face samples are filtered out, a regression is applied to obtain more precise position coordinates, and prediction results of the face orientations are provided. Through an angle arbitration mechanism, the prediction results of the preceding two networks are combined to make a final arbitration for a rotation angle of each sample; each of the image patches is rotated upright according to the arbitration result and sent to a third-level network for fine-tuning to predict positions of keypoints.
    Type: Grant
    Filed: December 26, 2019
    Date of Patent: April 20, 2021
    Assignee: ZHUHAI EEASY TECHNOLOGY CO., LTD.
    Inventors: Xucheng Yin, Bowen Yang, Chun Yang
  • Patent number: 10909769
    Abstract: The mixed reality based 3D sketching device includes a processor; and a memory connected to the processor. The memory stores program commands that are executable by the processor to periodically track a marker pen photographed through a camera, to determine whether to remove a third point using a distance between a first point corresponding to a reference point, among points that are sequentially tracked, and a second point at a current time, a preset constant, and an angle between the first point, the second point, and the previously identified third point, to search for an object model corresponding to a 3D sketch that has been corrected, after correction is completed depending on the removal of the third point, and to display the retrieved object model on a screen.
    Type: Grant
    Filed: March 6, 2020
    Date of Patent: February 2, 2021
    Assignee: INDUSTRY ACADEMY COOPERATION FOUNDATION OF SEJONG UNIVERSITY
    Inventors: Soo Mi Choi, Jun Han Kim, Je Wan Han
  • Patent number: 10896493
    Abstract: The present disclosure is directed toward intelligently mixing and matching faces and/or people to generate an enhanced image that reduces or minimizes artifacts and other defects. For example, the disclosed systems can selectively apply different alignment models to determine a relative alignment between a reference image and a target image having an improved instance of the person. Upon aligning the digital images, the disclosed systems can intelligently identify a replacement region based on a boundary that includes the target instance and the reference instance of the person without intersecting other objects or people in the image. Using the size and shape of the replacement region around the target instance and the reference instance, the systems replace the instance of the person in the reference image with the target instance. The alignment of the images and the intelligent selection of the replacement region minimizes inconsistencies and/or artifacts in the final image.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: January 19, 2021
    Assignee: ADOBE INC.
    Inventors: Seyed Morteza Safdarnejad, Chih-Yao Hsieh
  • Patent number: 10783658
    Abstract: A method of processing an image including pixels distributed in cells and in blocks is disclosed, the method including the steps of: a) for each cell, generating n first intensity values of gradients having different orientations, each first value being a weighted sum of the values of the pixels of the cell; b) for each cell, determining a main gradient orientation of the cell and a second value representative of the intensity of the gradient in the main orientation; c) for each block, generating a descriptor of n values respectively corresponding, for each of the n gradient orientations, to the sum of the second values of the cells of the block having the gradient orientation considered as the main gradient orientation.
    Type: Grant
    Filed: July 10, 2018
    Date of Patent: September 22, 2020
    Assignee: COMMISSARIAT À L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
    Inventors: Camille Dupoiron, William Guicquero, Gilles Sicard, Arnaud Verdant
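A hedged sketch of the cell/block descriptor idea: each cell gets n oriented gradient intensities (a linear function of its pixel values), its dominant orientation and that orientation's intensity are kept, and a block descriptor accumulates those intensities per dominant orientation. For brevity the whole image is treated as a single block; the cell size and n are illustrative, not the patent's values:

```python
# Illustrative only: per-cell oriented gradient intensities, dominant orientation
# per cell, and an n-valued block descriptor accumulated by dominant orientation.
import numpy as np

def block_descriptor(img: np.ndarray, cell: int = 8, n_orient: int = 8) -> np.ndarray:
    img = img.astype(float)
    gy, gx = np.gradient(img)                      # per-pixel gradients
    angles = np.arange(n_orient) * np.pi / n_orient

    h, w = img.shape
    descriptor = np.zeros(n_orient)
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            cx = gx[y:y + cell, x:x + cell].sum()
            cy = gy[y:y + cell, x:x + cell].sum()
            # n "first values": gradient intensity along each orientation.
            intensities = np.abs(cx * np.cos(angles) + cy * np.sin(angles))
            main = int(np.argmax(intensities))      # main orientation of the cell
            descriptor[main] += intensities[main]   # "second value" of the cell
    return descriptor
```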
  • Patent number: 10765864
    Abstract: The present disclosure provides a method and device for filtering sensor data. Signals from an array of sensor pixels are received and checked for changes in pixel values. Motion is detected based on the changes in pixel values, and motion output signals are transmitted to a processing station. If the sum of correlated changes in pixel values across a predetermined field of view exceeds a predetermined value, indicating sensor jitter, the motion output signals are suppressed. If a sum of motion values within a defined subsection of the field of view exceeds a predetermined threshold, indicating the presence of a large object of no interest, the motion output signals are suppressed for that subsection.
    Type: Grant
    Filed: August 9, 2018
    Date of Patent: September 8, 2020
    Assignee: National Technology & Engineering Solutions of Sandia, LLC
    Inventors: Frances S. Chance, Christina E. Warrender
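The two suppression rules described above can be illustrated with a small numpy sketch: whole-field correlated change is treated as sensor jitter and suppresses all motion output, and any grid subsection whose motion sum exceeds a threshold is suppressed individually. Thresholds and the subsection grid are assumptions, not the patent's values:

```python
# Illustrative only: suppress motion output for whole-field jitter and for
# over-threshold subsections of the field of view.
import numpy as np

def motion_outputs(prev: np.ndarray, curr: np.ndarray,
                   jitter_thresh: float = 5e4,
                   region_thresh: float = 2e3,
                   grid: int = 4) -> np.ndarray:
    change = np.abs(curr.astype(float) - prev.astype(float))

    # Whole-field correlated change -> treat as sensor jitter, suppress everything.
    if change.sum() > jitter_thresh:
        return np.zeros_like(change)

    # Per-subsection suppression for large movers of no interest.
    out = change.copy()
    h, w = change.shape
    for gy in range(grid):
        for gx in range(grid):
            ys = slice(gy * h // grid, (gy + 1) * h // grid)
            xs = slice(gx * w // grid, (gx + 1) * w // grid)
            if change[ys, xs].sum() > region_thresh:
                out[ys, xs] = 0.0
    return out
```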
  • Patent number: 10753881
    Abstract: Systems and methods capable of autonomous crack detection in surfaces by analyzing video of the surfaces. The systems and methods include the capability to produce a video of the surfaces, the capability to analyze individual frames of the video to obtain surface texture feature data for areas of the surfaces depicted in each of the individual frames, the capability to analyze the surface texture feature data to detect surface texture features in the areas of the surfaces depicted in each of the individual frames, the capability of tracking the motion of the detected surface texture features in the individual frames to produce tracking data, and the capability of using the tracking data to filter non-crack surface texture features from the detected surface texture features in the individual frames.
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: August 25, 2020
    Assignee: Purdue Research Foundation
    Inventors: Mohammad Reza Jahanshahi, Fu-Chen Chen
  • Patent number: 10728543
    Abstract: An encoder includes memory and circuitry. Using the memory, the circuitry: in a first operating mode, derives a first motion vector in a unit of a prediction block obtained by splitting an image included in a video, and performs, in the unit of the prediction block, a first motion compensation process that generates a prediction image by referring to a spatial gradient of luminance in an image generated by performing motion compensation using the first motion vector derived; and in a second operating mode, derives a second motion vector in a unit of a sub-block obtained by splitting the prediction block, and performs, in the unit of the sub-block, a second motion compensation process that generates a prediction image without referring to a spatial gradient of luminance in an image generated by performing motion compensation using the second motion vector derived.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: July 28, 2020
    Assignee: Panasonic Intellectual Property Corporation of America
    Inventors: Kiyofumi Abe, Takahiro Nishi, Tadamasa Toma, Ryuichi Kanoh
  • Patent number: 10628702
    Abstract: A template image of a form document is processed to identify alignment regions used to match a query image to the template image and processing regions to identify areas from which information is to be extracted. Processing of the template image includes the identification of a set of meaningful template vectors. A query image is processed to determine meaningful query vectors. The meaningful template vectors are compared with the meaningful query vectors to determine whether the format of the template and query images match. Upon achievement of a match, a homography between the images is determined. In the event a homography threshold has been met, information is extracted from the query image.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: April 21, 2020
    Inventors: Candice R. Gerstner, Katelyn J. Meixner
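As an illustration of the match-then-homography flow, the sketch below uses ORB features in place of the patent's "meaningful vectors" and an inlier-ratio test in place of its homography threshold; the choice of detector and all threshold values are arbitrary:

```python
# Illustrative only: match a query form image to a template, estimate a
# homography, and accept the match when enough correspondences are inliers.
import cv2
import numpy as np

def query_matches_template(template_gray, query_gray, min_inlier_ratio=0.5):
    orb = cv2.ORB_create(nfeatures=2000)
    kp_t, des_t = orb.detectAndCompute(template_gray, None)
    kp_q, des_q = orb.detectAndCompute(query_gray, None)

    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_t, des_q)
    if len(matches) < 4:
        return False, None

    src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_q[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    inlier_ratio = float(mask.sum()) / len(matches) if mask is not None else 0.0
    return inlier_ratio >= min_inlier_ratio, H
```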
  • Patent number: 10510145
    Abstract: A medical image comparison method first obtains plural images of the same body at different time points. Then, a first feature point group is obtained by detecting feature points in the first image captured at a first time point, and a second feature point group by identifying feature points in the second image captured at a second time point. Overlapping image information is generated by aligning the second image with the first image according to the first and second feature point groups. Then, window areas corresponding to a first matching image and a second matching image of the overlapping image information are extracted one by one by sliding a window mask, and an image difference ratio for each of the window areas is calculated. In addition, a medical image comparison system is also provided.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: December 17, 2019
    Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Jian-Ren Chen, Guan-An Chen, Su-Chen Huang, Yue-Min Jiang
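A minimal sketch of the sliding-window difference-ratio step only, assuming the two images have already been aligned into an overlapping pair; the window size, stride, and per-pixel difference threshold are illustrative:

```python
# Illustrative only: slide a window mask over the aligned pair and report the
# fraction of significantly differing pixels in each window.
import numpy as np

def window_difference_ratios(img_a: np.ndarray, img_b: np.ndarray,
                             win: int = 32, stride: int = 16,
                             pixel_thresh: float = 20.0):
    diff = np.abs(img_a.astype(float) - img_b.astype(float))
    ratios = []
    h, w = diff.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = diff[y:y + win, x:x + win]
            ratio = np.count_nonzero(patch > pixel_thresh) / patch.size
            ratios.append(((y, x), ratio))  # window position and difference ratio
    return ratios
```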
  • Patent number: 10496694
    Abstract: For an augmented reality (AR) content creation system having a marker database, when a user requests this system to use a first sub-image of an image to update the marker database, this system computes a suitability score of the first sub-image for rating feature richness of the first sub-image and uniqueness thereof against existing markers in the marker database. When the suitability score is less than a threshold value, a second sub-image of the image having a suitability score not less than the threshold value and completely containing the first sub-image is searched. Then the second sub-image, the suitability score thereof and the suitability score of the first sub-image are substantially-immediately presented to the user for real-time suggesting the user to use the second sub-image instead of the first sub-image as a new marker in updating the marker database to increase feature richness or uniqueness of the new marker.
    Type: Grant
    Filed: March 21, 2016
    Date of Patent: December 3, 2019
    Assignee: Hong Kong Applied Science and Technology Research Institute Company Limited
    Inventors: Kar-Wing Edward Lor, King Wai Chow, Laifa Fang
  • Patent number: 10460156
    Abstract: Various aspects of an image-processing apparatus and method to track and retain an articulated object in a sequence of image frames are disclosed. The image-processing apparatus is configured to segment each image frame in the sequence of image frames into different segmented regions that corresponds to different super-pixels. An articulated object in a first motion state is detected by non-zero temporal derivatives between a first image frame and a second image frame. A first connectivity graph of a first set of super-pixels of the first image frame, is constructed. A second connectivity graph of a second set of super-pixels of the second image frame, is further constructed. A complete object mask of the articulated object in a second motion state is generated based on the first connectivity graph and the second connectivity graph, where at least a portion of the articulated object is stationary in the second motion state.
    Type: Grant
    Filed: March 6, 2018
    Date of Patent: October 29, 2019
    Assignee: SONY CORPORATION
    Inventor: Daniel Usikov
  • Patent number: 10460432
    Abstract: A system for strain testing employs an ink aspiration system adapted to apply a stochastic ink pattern. The stochastic pattern is applied to a test article and a test fixture receives the test article. A digital image correlation (DIC) imaging and calculation system is positioned relative to the test article to image the stochastic ink pattern.
    Type: Grant
    Filed: October 5, 2016
    Date of Patent: October 29, 2019
    Assignee: The Boeing Company
    Inventor: Antonio L. Boscia
  • Patent number: 10215858
    Abstract: Examples relating to the detection of rigid shaped objects are described herein. An example method may involve a computing system determining a first point cloud representation of an environment at a first time using a depth sensor positioned within the environment. The computing system may also determine a second point cloud representation of the environment at a second time using the depth sensor. This way, the computing system may detect a change in position of a rigid shape between a first position in the first point cloud representation and a second position in the second point cloud representation. Based on the detected change in position of the rigid shape, the computing system may determine that the rigid shape is representative of an object in the environment and store information corresponding to the object.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: February 26, 2019
    Assignee: Google LLC
    Inventors: Greg Joseph Klein, Arshan Poursohi, Sumit Jain, Daniel Aden
  • Patent number: 10192287
    Abstract: An image processing method is adapted to process images captured by at least two cameras in an image system. In an embodiment, the image processing method comprises: matching two corresponding feature points for two images, respectively, to become a feature point set; selecting at least five most suitable feature point sets, by using an iterative algorithm; calculating a most suitable radial distortion homography between the two images, according to the at least five most suitable feature point sets; and fusing the images captured by the at least two cameras at each timing sequence, by using the most suitable radial distortion homography.
    Type: Grant
    Filed: December 22, 2016
    Date of Patent: January 29, 2019
    Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Sheng-Wei Chan, Che-Tsung Lin
  • Patent number: 10181088
    Abstract: A system and method for performing foreground/background separation on an input image. The method identifies a corresponding model visual element in a scene model associated with the input image, the model visual element being associated with a set of element models, each element model including a plurality of visual data sets. The method selects an element model from the set of element models, dependent upon a visual distance between the input visual element and a visual data set of the selected element model satisfying a predetermined criterion. The method classifies the input visual element as one of foreground and background, dependent upon the selected element model, and then updates each visual data set in the selected element model, dependent upon the input visual element and at least first and second different methods of updating a visual data set.
    Type: Grant
    Filed: October 20, 2011
    Date of Patent: January 15, 2019
    Assignee: Canon Kabushiki Kaisha
    Inventors: Amit Kumar Gupta, Ashley Partis, Xin Yu Liu
  • Patent number: 10157372
    Abstract: Devices, systems and methods are disclosed that detect and interpret the meaning of visual indicators (such as LEDs) of devices, and provide information and solutions to potential malfunctions to users. For example, image data of the visual indicators can be captured by an image capture device (such as a camera). The image data may then be analyzed to identify an object captured in the image data. Luminescence of pixel values in the image data corresponding to a visual indicator of the object may be used to determine a sequence of the visual indicator. Then, information corresponding to the meaning of the sequence may be sent to a device of a user or owner of the object.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: December 18, 2018
    Assignee: Amazon Technologies, Inc.
    Inventors: Ilya Vladimirovich Brailovskiy, Paul Berenberg
  • Patent number: 10140549
    Abstract: Various embodiments may increase scalability of image representations stored in a database for use in image matching and retrieval. For example, a system providing image matching can obtain images of a number of inventory items, extract features from each image using a feature extraction algorithm, and transform the same into their feature descriptor representations. These feature descriptor representations can be subsequently stored and used to compare against query images submitted by users. Though the size of each feature descriptor representation isn't particularly large, the total number of these descriptors requires a substantial amount of storage space. Accordingly, feature descriptor representations are compressed to minimize storage and, in one example, machine learning can be used to compensate for information lost as a result of the compression.
    Type: Grant
    Filed: February 27, 2017
    Date of Patent: November 27, 2018
    Assignee: A9.COM, INC.
    Inventors: Simant Dube, Sunil Ramesh, Xiaofan Lin, Arnab Sanat Kumar Dhua, Colin Jon Taylor, Jaishanker K. Pillai
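As a rough illustration of compressing feature descriptors before storage, the sketch below uses PCA via SVD, which stands in for whatever compression the patent actually claims; the target dimensionality is an arbitrary choice:

```python
# Illustrative only: fit a PCA basis on a catalog of descriptors and store the
# low-dimensional projections instead of the full descriptors.
import numpy as np

def fit_pca(descriptors: np.ndarray, out_dim: int = 32):
    mean = descriptors.mean(axis=0)
    _, _, vt = np.linalg.svd(descriptors - mean, full_matrices=False)
    return mean, vt[:out_dim]               # mean and projection basis

def compress(descriptors: np.ndarray, mean: np.ndarray, basis: np.ndarray):
    return (descriptors - mean) @ basis.T   # low-dimensional representations

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    catalog = rng.normal(size=(10_000, 128)).astype(np.float32)  # e.g. SIFT-sized
    mean, basis = fit_pca(catalog)
    print(compress(catalog[:5], mean, basis).shape)  # (5, 32)
```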
  • Patent number: 10102421
    Abstract: A method for face recognition in the video comprises: performing feature extraction on a target face in multiple image frames in the video to generate multiple face feature vectors respectively corresponding to the target face in the multiple image frames; performing time sequence feature extraction on the plurality of face feature vectors to convert the plurality of face feature vectors into a feature vector of a predetermined dimension; and judging the feature vector of the predetermined dimension by using a classifier so as to recognize the target face.
    Type: Grant
    Filed: November 1, 2016
    Date of Patent: October 16, 2018
    Assignee: PINHOLE (BEIJING) TECHNOLOGY CO., LTD.
    Inventors: Erjin Zhou, Qi Yin
  • Patent number: 10095952
    Abstract: The invention provides a method and device for acquiring clothing image attribute points, which can increase an efficiency of acquiring the clothing image attribute points. The method comprises: saving five attribute points manually calibrated and determining first to eleventh points in sequence by taking a horizontal direction as a horizontal coordinate and a vertical direction as a vertical coordinate, for a first side of two bilaterally symmetric sides of the clothing image. By means of a technical solution of an embodiment of the present invention, for the clothing image, five points are manually calibrated first; and then the other eleven attribute points can be determined by a computer, so that a manual workload is reduced, and the efficiency of acquiring the clothing image attribute points is increased.
    Type: Grant
    Filed: June 11, 2015
    Date of Patent: October 9, 2018
    Assignees: BEIJING JINGDONG SHANGKE INFORMATION TECHNOLOGY CO., LTD., BEIJING JINGDONG CENTURY TRADING CO., LTD.
    Inventor: Gang Zhao
  • Patent number: 10089601
    Abstract: A solution for inventory identification and quantification using a portable computing device (“PCD”) comprising a camera subsystem is described. An exemplary embodiment of the solution comprises a method that begins with capturing a video stream of a physical inventory comprised of a plurality of individual inventory items. Using a set of tracking points appearing in sequential frames, and optical flow calculations, coordinates for global centers of the frames may be calculated. From there, coordinates for identified inventory items may be determined relative to the global centers of the frames within which they are captured. Comparing the calculated coordinates for inventory items identified in each frame, as well as fingerprint data, embodiments of the method may identify and filter duplicate image captures of the same inventory item within some statistical certainty. Symbology data, such as QR codes, are decoded and quantified as part of the inventory count.
    Type: Grant
    Filed: May 12, 2017
    Date of Patent: October 2, 2018
    Inventors: Jed Wasilewsky, Chase Campbell, Kelly Storm
  • Patent number: 10089417
    Abstract: Structure boundaries may be determined by receiving a plurality of three dimensional (3D) data points representing a geographic area. The 3D data points may be projected into a two dimensional (2D) grid comprised of area elements. A structure boundary may be determined based on an analysis of the area elements.
    Type: Grant
    Filed: September 26, 2013
    Date of Patent: October 2, 2018
    Assignee: HERE Global B.V.
    Inventors: Xi Zhang, Xin Chen
  • Patent number: 10068128
    Abstract: The present disclosure pertains to the field of image processing technologies and discloses a face key point positioning method and a terminal. The method includes: obtaining a face image; recognizing a face frame in the face image; determining positions of n key points of a target face in the face frame according to the face frame and a first positioning algorithm; performing screening to select, from candidate faces, a similar face whose positions of corresponding key points match the positions of the n key points of the target face; and determining positions of m key points of the similar face selected through screening according to a second positioning algorithm, m being a positive integer. In this way, the problem that positions of key points obtained by a terminal have relatively great deviations in the related technologies is resolved, thereby achieving an effect of improving accuracy of positioned positions of the key points.
    Type: Grant
    Filed: February 21, 2017
    Date of Patent: September 4, 2018
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventors: Chengjie Wang, Jilin Li, Feiyue Huang, Yongjian Wu
  • Patent number: 10013619
    Abstract: The invention relates to method of detecting elliptical structures (10) in an image (9), comprising: detecting circular arc-shaped structures (11) in the image (9) using a circle Hough transform (CHT) of the image (9), wherein a radius and a center point (12) are determined for each circular arc-shaped structure (11), identifying pairs of circular arc-shaped structures (11) consisting of two of the detected circular arc-shaped structures (11) with substantially equal radii, defining, for each one of these pairs, a search area (14) within the image (9) depending on the center points (12) of the respective pair of circular arc-shaped structures (11), searching in the search area (14) defined for any given pair of circular arc-shaped structures (11), for a pair of edges (16) connecting these two circular arc-shaped structures (11). The invention further relates to a device for detecting elliptical structures in an image.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: July 3, 2018
    Assignee: MANDO CORPORATION
    Inventor: Jochen Abhau
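A hedged sketch of the first two steps only: circular-arc detection with OpenCV's circle Hough transform and pairing of detections with similar radii, with an axis-aligned box between the centre points as a stand-in search area. The edge-pair search is left out and all parameter values are arbitrary:

```python
# Illustrative only: detect circles with a circle Hough transform, pair circles
# of similar radius, and define a search box between their centre points.
import cv2
import numpy as np

def arc_pairs(gray: np.ndarray, radius_tol: float = 3.0):
    # gray: 8-bit single-channel image.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=10,
                               param1=100, param2=30, minRadius=5, maxRadius=80)
    if circles is None:
        return []
    circles = circles[0]  # rows of (cx, cy, r)

    pairs = []
    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            if abs(circles[i][2] - circles[j][2]) <= radius_tol:
                (x1, y1, _), (x2, y2, _) = circles[i], circles[j]
                # Stand-in search area spanning the two centre points.
                box = (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
                pairs.append((circles[i], circles[j], box))
    return pairs
```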
  • Patent number: 9959454
    Abstract: A face recognition device includes a processor configured to: extract a plurality of feature points of a face included in an input image; detect a first and a second feature point that are paired from among the plurality of feature points, a third feature point that is away from a straight line that connects the first and the second feature points, and two inter-feature vectors starting from the third feature point to the respective first and second feature points; calculate a feature angle formed by the two detected inter-feature vectors; and perform face recognition based on the feature angle formed by the two inter-feature vectors included in face information that is previously set as the face targeted for recognition and based on the calculated feature angle.
    Type: Grant
    Filed: January 16, 2017
    Date of Patent: May 1, 2018
    Assignee: FUJITSU LIMITED
    Inventor: Toshiro Ohbitsu
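The angle feature described above is easy to illustrate: two vectors from a third landmark to a paired first and second landmark, the angle between them, and a tolerance comparison against a stored reference angle. The tolerance and the example landmark choice are illustrative assumptions:

```python
# Illustrative only: compute the inter-feature angle at a third landmark and
# compare it against an enrolled reference angle.
import numpy as np

def feature_angle(p1, p2, p3) -> float:
    v1 = np.asarray(p1, float) - np.asarray(p3, float)   # third point -> first point
    v2 = np.asarray(p2, float) - np.asarray(p3, float)   # third point -> second point
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def matches_enrolled(p1, p2, p3, enrolled_angle_deg: float, tol_deg: float = 3.0) -> bool:
    return abs(feature_angle(p1, p2, p3) - enrolled_angle_deg) <= tol_deg

if __name__ == "__main__":
    # e.g. paired eye corners (p1, p2) and the nose tip (p3)
    print(feature_angle((100, 120), (160, 120), (130, 170)))
```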
  • Patent number: 9953238
    Abstract: The invention relates to an image processing method for automatically extracting distorted circular image elements (109; 601) from an image (101), comprising the steps of determining, by means of a circle detection (102), at least one circular image section (103; 301, 301′) as a first approximation of at least one distorted circular image element (109; 601), then determining, via a statistical color analysis of the at least one determined circular image section (301, 301′), whether the circular image section (301, 301′) belongs to one of one or more predefined color classes and, in case thereof, assigning the circular image section to the respective color class, and, finally, determining an exact shape (109; 601) of the at least one distorted circular element by starting with at least one of the one or more determined circular image sections assigned to one of the one or more color classes and varying the image section to maximize a value of a color-class specific function of the image section.
    Type: Grant
    Filed: March 25, 2016
    Date of Patent: April 24, 2018
    Assignee: MANDO CORPORATION
    Inventor: Jochen Abhau
  • Patent number: 9934304
    Abstract: Systems and methods for optimizing memory in an interest-driven business intelligence system in accordance with embodiments of the invention are illustrated. A dictionary for storing values of a dataset may be partitioned in accordance with some embodiments. The partitions of the dictionary may be generated by mapping and reducer processes. The mapping processes receive a value, determine the dimension of the data to which the value belongs, and provide the value to a reducer process that handles the determined dimension. Each reducer process generates partitions of the dictionary for each dimension. The number of values in each partition is determined and compared to a threshold value. Partitions that have a number of values greater than the threshold are stored in a common memory. Partitions smaller than the threshold value can be combined with other partitions such that the cardinality of the combined partition exceeds the threshold value.
    Type: Grant
    Filed: August 18, 2015
    Date of Patent: April 3, 2018
    Assignee: Workday, Inc.
    Inventors: Kevin Beyer, Mayank Pradhan, Vignesh Sukumar
  • Patent number: 9858501
    Abstract: A reliability acquiring apparatus includes a section that stores information on a classifier that outputs part likelihood concerning a predetermined part of a detection target when applied to an image, a section that calculates, concerning an image region included in an input image, the part likelihood on the basis of the information on the classifier, a section that determines, on the basis of the calculated part likelihood, a position of the predetermined part in the input image, a section that stores information on a reference position of the predetermined part, a section that calculates, on the basis of the information on the reference position, difference information between the reference position of the predetermined part and the determined position of the predetermined part, and a section that calculates, on the basis of the difference information, reliability indicating possibility that the input image is an image of the detection target.
    Type: Grant
    Filed: February 8, 2013
    Date of Patent: January 2, 2018
    Assignee: NEC CORPORATION
    Inventor: Yusuke Morishita
  • Patent number: 9838572
    Abstract: The method includes, for each current pair of first and second successive video images, determining the movement between the two images. The determining includes a phase of testing homography model hypotheses on the movement by a RANSAC-type algorithm operating on a set of points in the first image and first assumed corresponding points in the second image so as to deliver the one of the homography model hypotheses that defines the movement. The test phase includes a test of first homography model hypotheses of the movement obtained from a set of second points in the first image and second assumed corresponding points in the second image. At least one second homography model hypothesis is obtained from auxiliary information supplied by an inertial sensor and representative of a movement of the image sensor between the captures of the two successive images of the pair.
    Type: Grant
    Filed: September 9, 2015
    Date of Patent: December 5, 2017
    Assignee: STMICROELECTRONICS SA
    Inventors: Manu Alibay, Stéphane Auberger
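A sketch of mixing an inertial-sensor-derived homography hypothesis into a small RANSAC loop over point correspondences; the gyro-to-homography conversion is assumed to exist elsewhere and is just an input matrix here, and the iteration count and inlier threshold are arbitrary:

```python
# Illustrative only: RANSAC over 4-point homography hypotheses, seeded with an
# additional hypothesis derived from inertial-sensor data.
import cv2
import numpy as np

def reprojection_inliers(H, pts_a, pts_b, thresh=3.0):
    """Boolean mask of correspondences whose reprojection error under H is small."""
    proj = cv2.perspectiveTransform(pts_a.reshape(-1, 1, 2).astype(np.float64), H)
    return np.linalg.norm(proj.reshape(-1, 2) - pts_b, axis=1) < thresh

def ransac_with_inertial(pts_a, pts_b, inertial_H, iters=200, thresh=3.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    # Seed the hypothesis set with the inertial-sensor-derived homography.
    best_H = inertial_H
    best_count = int(reprojection_inliers(inertial_H, pts_a, pts_b, thresh).sum())

    for _ in range(iters):
        idx = rng.choice(len(pts_a), size=4, replace=False)
        try:
            H = cv2.getPerspectiveTransform(pts_a[idx].astype(np.float32),
                                            pts_b[idx].astype(np.float32))
        except cv2.error:
            continue  # degenerate (e.g. collinear) sample
        count = int(reprojection_inliers(H, pts_a, pts_b, thresh).sum())
        if count > best_count:
            best_H, best_count = H, count
    return best_H, best_count
```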