Point Features (e.g., Spatial Coordinate Descriptors) Patents (Class 382/201)
-
Patent number: 12198299
Abstract: A method is provided for enhancing video images in a medical device. The method includes receiving a first image frame and a second image frame from an image sensor. First image sub-blocks are generated by dividing the first image frame. Second image sub-blocks are generated by dividing the second image frame based on the first image sub-blocks. Histogram data of the first image sub-blocks is generated. Histogram data of the second image sub-blocks is generated based on the histogram data of the first image sub-blocks. A histogram enhanced image frame is generated based on the histogram data of the second image sub-blocks. A video image stream is generated based on the histogram enhanced image frame.
Type: Grant
Filed: June 20, 2023
Date of Patent: January 14, 2025
Assignee: Boston Scientific Scimed, Inc.
Inventors: George Wilfred Duval, Kirsten Viering, Louis J. Barbato, Gang Hu
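The abstract describes per-sub-block histogram enhancement of video frames. Below is a minimal sketch of block-wise histogram equalization in that spirit, using NumPy; the block size and equalization rule are not given in the abstract, and the reuse of first-frame statistics for the second frame is omitted, so each block of a single frame is equalized independently and all names are illustrative.

```python
import numpy as np

def block_histogram_equalize(frame, block_size=64):
    """Divide a grayscale frame into sub-blocks and equalize each block's histogram.

    Simplified illustration of per-block histogram enhancement; the patent also
    reuses histogram data between consecutive frames, which is not modeled here.
    """
    out = frame.copy()
    h, w = frame.shape
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            block = frame[y:y + block_size, x:x + block_size]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            cdf = hist.cumsum()
            cdf = cdf / cdf[-1]                         # normalized cumulative histogram
            lut = np.round(cdf * 255).astype(np.uint8)  # equalization lookup table
            out[y:y + block_size, x:x + block_size] = lut[block]
    return out

# Example: enhance a synthetic low-contrast frame.
frame = (np.random.rand(480, 640) * 64 + 96).astype(np.uint8)
enhanced = block_histogram_equalize(frame)
```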
-
Patent number: 12165293
Abstract: An apparatus for processing image data may include: a memory storing instructions; and a processor configured to execute the instructions to: extract a target image patch including a target object, from a captured image; obtain a plurality of landmark features from the target image patch; align the plurality of landmark features of the target image patch with a plurality of reference landmark features in a template image patch including the same target object; and when the plurality of landmark features are aligned with the plurality of reference landmark features, transfer texture details of the target object in the template image patch to the target object in the target image patch.
Type: Grant
Filed: September 28, 2021
Date of Patent: December 10, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Kaimo Lin, Hamid Sheikh
-
Patent number: 12142079
Abstract: A feature conversion learning device is configured to acquire a first image, reduce the first image to a second image having lower resolution than the first image, enlarge the second image to a third image having the same resolution as the first image, extract a first feature that is a feature of the first image and a second feature, convert the second feature into a third feature, and learn a feature conversion method based on a result of comparing the first feature with the third feature.
Type: Grant
Filed: March 18, 2021
Date of Patent: November 12, 2024
Assignees: NEC CORPORATION, THE UNIVERSITY OF ELECTRO-COMMUNICATIONS
Inventors: Masatsugu Ichino, Daisuke Uenoyama, Tsubasa Boura, Takahiro Toizumi, Masato Tsukada, Yuka Ogino
-
Patent number: 12108123
Abstract: An electronic device includes a memory, at least one camera module, and a processor operatively connected to the memory and the at least one camera module. The processor is configured to, while acquiring first recording data by using the at least one camera module, detect at least one designated gesture input on the basis of at least a part of the acquired first recording data. The processor executes a function corresponding to the detected gesture input. The processor creates and stores a second recording data in the memory. The second recording data includes data remaining after excluding, from the acquired first recording data, recording data corresponding to a time interval from the detection initiation time point of the detected gesture input to the detection termination time point of same.
Type: Grant
Filed: August 26, 2022
Date of Patent: October 1, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Bona Lee, Seonhwa Kim, Heekyung Moon, Youngil Oh, Hyemi Yu, Jonghyun Han
-
Patent number: 12061252
Abstract: Some embodiments include a method of generating an environment reference model for positioning comprising: receiving multiple data sets representing a scanned environment including information about a type of sensor used and data for determining an absolute position of objects or feature points represented by the data sets; extracting one or more objects or feature points from each data set; determining a position of each object or feature point in a reference coordinate system; generating a three-dimensional vector representation of the scanned environment aligned with the reference coordinate system including representation of the objects or feature points at corresponding locations, creating links between the objects or feature points in the three dimensional vector model with an identified type of sensor by which they can be detected in the environment; and storing the three-dimensional vector model representation and the links in a retrievable manner.
Type: Grant
Filed: November 2, 2022
Date of Patent: August 13, 2024
Assignee: Continental Automotive GmbH
Inventors: Christian Thiel, Paul Barnard, Bingtao Gao
-
Patent number: 12050994
Abstract: Systems and methods are provided for automatically detecting a change in a feature. For example, a system includes a memory and a processor configured to analyze a change associated with a feature over a period of time using a plurality of remotely sensed time series images. Upon execution, the system would receive a plurality of remotely sensed time series images, extract a feature from the plurality of remotely sensed time series images, generate at least two time series feature vectors based on the feature, where the at least two time series feature vectors correspond to the feature at two different times, create a neural network model configured to predict a change in the feature at a specified time, and determine, using the neural network model, the change in the feature at a specified time based on a change between the at least two time series feature vectors.
Type: Grant
Filed: November 15, 2021
Date of Patent: July 30, 2024
Assignee: Cape Analytics, Inc.
Inventors: Ingo Kossyk, Suat Gedikli
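A very small sketch of the comparison step: given descriptors of the same feature extracted at two times, score how much it changed. The patent trains a neural network for this prediction; the cosine-distance rule and the threshold below are stand-ins for illustration, not the patented model.

```python
import numpy as np

def change_score(feat_t0, feat_t1):
    """Cosine distance between feature vectors of the same feature at two times.

    A large distance suggests the feature (e.g., a roof footprint) changed
    between the two remotely sensed images.
    """
    a = feat_t0 / (np.linalg.norm(feat_t0) + 1e-9)
    b = feat_t1 / (np.linalg.norm(feat_t1) + 1e-9)
    return 1.0 - float(a @ b)

feat_2020 = np.random.rand(128)   # hypothetical descriptor from an earlier image
feat_2023 = np.random.rand(128)   # descriptor of the same location in a later image
if change_score(feat_2020, feat_2023) > 0.3:  # illustrative threshold
    print("feature likely changed")
```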
-
Patent number: 11980491
Abstract: A technique for automating the identifying of a measurement point in cephalometric image analysis is provided. An automatic measurement point recognition method includes a step of detecting, from a cephalometric image 14 acquired from a subject, a plurality of peripheral partial regions 31, 32, 33, 34 for recognizing a target feature point, a step of estimating a candidate position of the feature point in each of the peripheral partial regions 31, 32, 33, 34 by the application of a regression CNN model 10, and a step of determining the position of the feature point in the cephalometric image 14 based on the distribution of the candidate positions estimated. In the step of detecting, for example, the peripheral partial region 32, a classification CNN model 13 trained with a control image 52 is applied.
Type: Grant
Filed: September 24, 2019
Date of Patent: May 14, 2024
Assignee: OSAKA UNIVERSITY
Inventors: Chihiro Tanikawa, Chonho Lee
-
Patent number: 11966452
Abstract: Systems and methods for image-based perception. The methods comprise: capturing images by a plurality of cameras with overlapping fields of view; generating, by a computing device, spatial feature maps indicating locations of features in the images; identifying, by the computing device, overlapping portions of the spatial feature maps; generating, by the computing device, at least one combined spatial feature map by combining the overlapping portions of the spatial feature maps together; and/or using, by the computing device, the at least one combined spatial feature map to define a predicted cuboid for at least one object in the images.
Type: Grant
Filed: August 5, 2021
Date of Patent: April 23, 2024
Assignee: Ford Global Technologies, LLC
Inventors: Guy Hotson, Francesco Ferroni, Harpreet Banvait, Kiwoo Shin, Nicolas Cebron
-
Patent number: 11950946
Abstract: A technique for automating the identifying of a measurement point in cephalometric image analysis is provided. An automatic measurement point recognition method includes a step of detecting, from a cephalometric image 14 acquired from a subject, a plurality of peripheral partial regions 31, 32, 33, 34 for recognizing a target feature point, a step of estimating a candidate position of the feature point in each of the peripheral partial regions 31, 32, 33, 34 by the application of a regression CNN model 10, and a step of determining the position of the feature point in the cephalometric image 14 based on the distribution of the candidate positions estimated. In the step of detecting, for example, the peripheral partial region 32, a classification CNN model 13 trained with a control image 52 is applied.
Type: Grant
Filed: September 24, 2019
Date of Patent: April 9, 2024
Assignee: OSAKA UNIVERSITY
Inventors: Chihiro Tanikawa, Chonho Lee
-
Patent number: 11916899
Abstract: Disclosed are systems and methods for managing online identity authentication risk in a nuanced identity system. For example, a method may include receiving a request by a user for a transaction on an electronic platform; determining a risk associated with the requested transaction; determining a current level of assurance associated with the user on the electronic platform; determining that the risk exceeds the current level of assurance; adjusting the current level of assurance such that the adjusted level of assurance exceeds the risk; and executing the requested transaction on the electronic platform after adjusting the current level of assurance.
Type: Grant
Filed: September 27, 2019
Date of Patent: February 27, 2024
Assignee: Yahoo Assets LLC
Inventors: George Fletcher, Jonathan Hryn, Lovlesh Chhabra, Deepak Nayak
-
Patent number: 11893707
Abstract: Images of an undercarriage of a vehicle may be captured via one or more cameras. A point cloud may be determined based on the images. The point cloud may include points positioned in a virtual three-dimensional space. A stitched image may be determined based on the point cloud by projecting the point cloud onto a virtual camera view. The stitched image may be stored on a storage device.
Type: Grant
Filed: February 2, 2023
Date of Patent: February 6, 2024
Assignee: Fyusion, Inc.
Inventors: Stefan Johannes Josef Holzer, Krunal Ketan Chande, Matteo Munaro, Pavel Hanchar, Aidas Liaudanskas, Wook Yeon Hwang, Johan Nordin, Milos Vlaski, Martin Markus Hubert Wawro, Nick Stetco, Martin Saelzle
-
Patent number: 11841902
Abstract: An information processing apparatus includes a first search unit configured to search for a feature of an object extracted from a video image in a registration list in which a feature indicating a predetermined object to be detected and identification information for identifying the predetermined object are registered, a generation unit configured to generate a first list in which at least the ID information about the predetermined object corresponding to the extracted object is registered in a case where the feature of the extracted object is detected in the registration list and generate a second list in which the feature of the extracted object is registered in a case where the feature of the extracted object is not detected in the registration list, and a second search unit configured to search for a target object designated by a user in the first list or the second list.
Type: Grant
Filed: August 24, 2021
Date of Patent: December 12, 2023
Assignee: Canon Kabushiki Kaisha
Inventor: Masahiro Matsushita
-
Patent number: 11842519
Abstract: An image processing apparatus includes an acquisition unit configured to acquire color information about an object and color information about a background in a first image captured by an image capturing apparatus, a storage unit configured to store the acquired color information about the object and the acquired color information about the background, an expansion unit configured to expand, in a three-dimensional color space, the color information about the object and the color information about the background stored in the storage unit, and an extraction unit configured to extract an area of the object from a second image captured by the image capturing apparatus, based on the respective pieces of the color information expanded by the expansion unit.
Type: Grant
Filed: January 13, 2022
Date of Patent: December 12, 2023
Assignee: Canon Kabushiki Kaisha
Inventor: Kazuki Takemoto
-
Patent number: 11816174
Abstract: A search system performs item searches using morphed images. Given two or more input images of objects, the search system uses a generative model to generate one or more morphed images. Each morphed image shows an object of the object type of the input images. Additionally, the object in each morphed image combines visual characteristics of the objects in the input images. The morphed images may be presented to a user, who may select a particular morphed image for searching. A search is performed using a morphed image to identify items that are visually similar to the object in the morphed image. Search results for the items identified from the search are returned for presentation.
Type: Grant
Filed: March 29, 2022
Date of Patent: November 14, 2023
Assignee: eBay Inc.
Inventors: Yi Yu, Jingying Wang, Kaikai Tong, Jie Ren, Yuqi Zhang
-
Patent number: 11763468
Abstract: The present disclosure describes a computer-implemented method for image landmark detection. The method includes receiving an input image for the image landmark detection, generating a feature map for the input image via a convolutional neural network, initializing an initial graph based on the generated feature map, the initial graph representing initial landmarks of the input image, performing a global graph convolution of the initial graph to generate a global graph, where landmarks in the global graph move closer to target locations associated with the input image, and iteratively performing a local graph convolution of the global graph to generate a series of local graphs, where landmarks in the series of local graphs iteratively move further towards the target locations associated with the input image.
Type: Grant
Filed: December 9, 2020
Date of Patent: September 19, 2023
Assignee: PING AN TECHNOLOGY (SHENZHEN) CO., LTD.
Inventors: Shun P Miao, Weijian Li, Yuhang Lu, Kang Zheng, Le Lu
-
Patent number: 11734888
Abstract: A method for providing real-time three-dimensional facial animation from video is provided. The method includes collecting images of a subject, and forming a three-dimensional mesh for the subject based on a facial expression factor and a head pose of the subject extracted from the images of the subject. The method also includes forming a texture transformation based on an illumination parameter associated with an illumination configuration for the images from the subject, forming a three-dimensional model for the subject based on the three-dimensional mesh and the texture transformation, determining a loss factor based on selected points in a test image from the subject and a rendition of the test image by the three-dimensional model, and updating the three-dimensional model according to the loss factor. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
Type: Grant
Filed: August 6, 2021
Date of Patent: August 22, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Chen Cao, Vasu Agrawal, Fernando De la Torre, Lele Chen, Jason Saragih, Tomas Simon Kreuz, Yaser Sheikh
-
Patent number: 11720992
Abstract: A method, apparatus and computer program product are provided for warping a perspective image into the ground plane using a homography transformation to estimate a bird's eye view in real time. Methods may include: receiving first sensor data from a first vehicle traveling along a road segment in an environment, where the first sensor data includes perspective image data of the environment, and where the first sensor data includes a location and a heading; retrieving a satellite image associated with the location and heading; applying a deep neural network to regress a bird's eye view image from the perspective image data; applying a Generative Adversarial Network (GAN) to the regressed bird's eye view image using the satellite image as a target of the GAN to obtain a stabilized bird's eye view image; and deriving values of a homography matrix between the sensor data and the established bird's eye view image.
Type: Grant
Filed: April 23, 2021
Date of Patent: August 8, 2023
Assignee: HERE GLOBAL B.V.
Inventor: Anirudh Viswanathan
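The warping step the abstract builds on is a standard planar homography. Below is a minimal OpenCV sketch that warps a perspective frame into a top-down view from four hand-picked ground-plane correspondences; the patent instead regresses the bird's eye view with a deep network and stabilizes it with a GAN, and the file name and point coordinates here are purely illustrative.

```python
import numpy as np
import cv2

# Four ground-plane points in the perspective image (pixels) and their desired
# positions in the bird's-eye view; the values are illustrative only.
src = np.float32([[420, 600], [860, 600], [1100, 950], [180, 950]])
dst = np.float32([[300, 100], [500, 100], [500, 700], [300, 700]])

H = cv2.getPerspectiveTransform(src, dst)      # 3x3 homography matrix
perspective = cv2.imread("front_camera.jpg")   # hypothetical input frame
if perspective is not None:
    birds_eye = cv2.warpPerspective(perspective, H, (800, 800))
    cv2.imwrite("birds_eye.jpg", birds_eye)
```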
-
Patent number: 11676288
Abstract: A data processing device for detecting motion in a sequence of frames each comprising one or more blocks of pixels, includes a sampling unit configured to determine image characteristics at a set of sample points of a block, a feature generation unit configured to form a current feature for the block, the current feature having a plurality of values derived from the sample points, and motion detection logic configured to generate a motion output for a block by comparing the current feature for the block to a learned feature representing historical feature values for the block.
Type: Grant
Filed: June 14, 2021
Date of Patent: June 13, 2023
Assignee: Imagination Technologies Limited
Inventor: Timothy Smith
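A small sketch of that idea: sample fixed points in a block, form a feature from them, and flag motion when the feature departs from a slowly learned historical feature. The sampling pattern, the exponential update, and the threshold below are assumptions, not the patented design.

```python
import numpy as np

class BlockMotionDetector:
    """Per-block motion detection against a slowly learned historical feature."""

    def __init__(self, num_samples=16, alpha=0.05, threshold=20.0):
        self.alpha = alpha          # learning rate for the historical feature
        self.threshold = threshold  # mean absolute difference that counts as motion
        self.num_samples = num_samples
        self.learned = None
        self.sample_idx = None
        self.rng = np.random.default_rng(0)

    def update(self, block):
        flat = block.reshape(-1).astype(np.float32)
        if self.sample_idx is None:
            self.sample_idx = self.rng.choice(flat.size, self.num_samples, replace=False)
        current = flat[self.sample_idx]       # image characteristics at the sample points
        if self.learned is None:
            self.learned = current.copy()
            return False
        motion = np.mean(np.abs(current - self.learned)) > self.threshold
        self.learned = (1 - self.alpha) * self.learned + self.alpha * current
        return bool(motion)

detector = BlockMotionDetector()
for _ in range(10):
    frame_block = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
    print(detector.update(frame_block))
```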
-
Patent number: 11615550
Abstract: A mapping method is disclosed that includes (i) memorizing at least one reference image of an environment to be mapped containing a specific arrangement of a plurality of markers organized in cells; wherein each cell is identified by a marker or by a combination of markers; the plurality of markers comprising at least a number of different types of markers equal to a type-number; (ii) detecting, by a moving video camera, a video sequence wherein the environment to be mapped is, at least in part, framed; (iii) identifying in at least one frame of the video sequence one or more cells being part of the specific arrangement of markers of the reference image; and (iv) calculating, on the basis of the data regarding the identified cells, at least one homography for the perspective transformation of the coordinates acquired in the video sequence into coordinates in the reference image and vice versa.
Type: Grant
Filed: December 21, 2018
Date of Patent: March 28, 2023
Assignee: SR LABS S.R.L.
Inventors: Gianluca Dal Lago, Roberto Delfiore
-
Patent number: 11599578
Abstract: The present disclosure relates to generating a search graph or search index to aid in receiving a search query and identifying results of a dataset based on the search query. For example, systems disclosed herein may generate a navigable search graph including vertices representative of objects or points within a dataset that enables a computing device having access to the search graph to navigate vertices of the graph along an identified path until arriving at a point within the search graph that corresponds to a value associated with the search query. Upon identifying a location within the graph corresponding to the search query, systems disclosed herein may identify a neighborhood of points (e.g., vertices) corresponding to items from the dataset and output a set of results for the search query representative of determined results for the search query.
Type: Grant
Filed: September 25, 2019
Date of Patent: March 7, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Harsha Vardhan Simhadri, Ravishankar Krishnaswamy, Suhas Jayaram Subramanya, Devvrit
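A toy illustration of navigating such a graph toward a query: from a start vertex, repeatedly hop to whichever neighbor is closest to the query until no neighbor improves. Index construction and the neighborhood-of-results step described in the abstract are omitted, and all names and sizes are assumptions.

```python
import numpy as np

def greedy_graph_search(vectors, neighbors, query, start, max_hops=100):
    """Walk the graph from `start`, always moving to the neighbor closest to `query`.

    `vectors` is an (n, d) array of indexed points and `neighbors[i]` the
    adjacency list of vertex i. Returns the vertex at which no neighbor
    improves the distance, along with that distance.
    """
    current = start
    best_dist = np.linalg.norm(vectors[current] - query)
    for _ in range(max_hops):
        improved = False
        for nxt in neighbors[current]:
            d = np.linalg.norm(vectors[nxt] - query)
            if d < best_dist:
                best_dist, current, improved = d, nxt, True
        if not improved:
            break
    return current, best_dist

vectors = np.random.rand(100, 8)
neighbors = {i: list(np.random.choice(100, 5, replace=False)) for i in range(100)}
query = np.random.rand(8)
print(greedy_graph_search(vectors, neighbors, query, start=0))
```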
-
Patent number: 11586324
Abstract: The present disclosure relates to a technology for assigning group labels during touch sensing, and more particularly to a technology for preventing two adjacent objects from receiving a same group label by searching for a valley when group labels are assigned, which allows labeling without performing any object separation process.
Type: Grant
Filed: August 17, 2021
Date of Patent: February 21, 2023
Assignee: Silicon Works Co., Ltd.
Inventors: Mohamed Gamal Ahmed Mohamed, Young Ju Park, Sun Young Park
-
Patent number: 11574223
Abstract: A method for rapid discovery of satellite behavior, applied to a pursuit-evasion system including at least one satellite and a plurality of space sensing assets. The method includes performing transfer learning and zero-shot learning to obtain a semantic layer using space data information. The space data information includes simulated space data based on a physical model. The method further includes obtaining measured space-activity data of the satellite from the space sensing assets; performing manifold learning on the measured space-activity data to obtain measured state-related parameters of the satellite; modeling the state uncertainty and the uncertainty propagation of the satellite based on the measured state-related parameters; and performing game reasoning based on a Markov game model to predict satellite behavior and management of the plurality of space sensing assets according to the semantic layer and the modeled state uncertainty and uncertainty propagation.
Type: Grant
Filed: October 7, 2019
Date of Patent: February 7, 2023
Assignee: INTELLIGENT FUSION TECHNOLOGY, INC.
Inventors: Dan Shen, Carolyn Sheaff, Jingyang Lu, Genshe Chen, Erik Blasch, Khanh Pham
-
Patent number: 11551338
Abstract: The present disclosure is directed toward intelligently mixing and matching faces and/or people to generate an enhanced image that reduces or minimizes artifacts and other defects. For example, the disclosed systems can selectively apply different alignment models to determine a relative alignment between a reference image and a target image having an improved instance of the person. Upon aligning the digital images, the disclosed systems can intelligently identify a replacement region based on a boundary that includes the target instance and the reference instance of the person without intersecting other objects or people in the image. Using the size and shape of the replacement region around the target instance and the reference instance, the systems replace the instance of the person in the reference image with the target instance. The alignment of the images and the intelligent selection of the replacement region minimize inconsistencies and/or artifacts in the final image.
Type: Grant
Filed: November 23, 2020
Date of Patent: January 10, 2023
Assignee: Adobe Inc.
Inventors: Seyed Morteza Safdarnejad, Chih-Yao Hsieh
-
Patent number: 11543532
Abstract: According to some embodiments, there is provided a method comprising labeling each of a plurality of subjects in an image with a label, the label for each subject indicating a kind of the subject, wherein the labeling comprises analyzing the image and/or distance measurement points for an area depicted in the image, and determining additional distance information not included in the distance measurement points, wherein determining the additional distance information comprises interpolating and/or generating a distance to a point based at least in part on at least some of the distance measurement points, wherein the at least some of the distance measurement points are selected based at least in part on labels assigned to one or more of the plurality of subjects.
Type: Grant
Filed: July 30, 2018
Date of Patent: January 3, 2023
Assignee: SONY CORPORATION
Inventors: Takuto Motoyama, Shingo Tsurumi
-
Patent number: 11468673
Abstract: An augmented reality system having a light source and a camera. The light source projects a pattern of light onto a scene, the pattern being periodic. The camera captures an image of the scene including the projected pattern. A projector pixel of the projected pattern corresponding to an image pixel of the captured image is determined. A disparity of each correspondence is determined, the disparity being an amount that corresponding pixels are displaced between the projected pattern and the captured image. A three-dimensional computer model of the scene is generated based on the disparity. A virtual object in the scene is rendered based on the three-dimensional computer model.
Type: Grant
Filed: January 6, 2021
Date of Patent: October 11, 2022
Assignee: Snap Inc.
Inventors: Mohit Gupta, Shree K. Nayar, Vishwanath Saragadam Raja Venkata
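Once projector-camera correspondences and their disparities are known, depth follows from standard triangulation. The sketch below assumes the disparity map is already given (the abstract's correspondence search over the periodic pattern is not shown) and uses illustrative baseline and focal-length values.

```python
import numpy as np

def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Classic triangulation: depth = focal * baseline / disparity.

    `disparity_px` is how far each pixel is displaced between the projected
    pattern and the captured image; zero-disparity pixels are marked invalid.
    """
    disparity = np.asarray(disparity_px, dtype=np.float32)
    depth = np.full(disparity.shape, np.nan, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

disparity = np.random.uniform(1.0, 40.0, (480, 640))   # hypothetical correspondences
depth_map = depth_from_disparity(disparity, baseline_m=0.06, focal_px=600.0)
```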
-
Patent number: 11423086
Abstract: A data processing system performs data processing of raw or preprocessed data. The data includes log files, bitstream data, and other network traffic containing either cookie or device identifiers. In some embodiments, the data processing system includes a connectivity overlay engine comprising a data ingester, a connectivity generator, an event access control system, and a feature vector generation framework.
Type: Grant
Filed: June 22, 2020
Date of Patent: August 23, 2022
Assignee: The Trade Desk, Inc.
Inventors: Jason Atlas, Fady Kalo, Jiefei Ma
-
Patent number: 11416989
Abstract: A portable anomaly drug detection device is disclosed. The device includes at least one light source, a detector to scan or process the subject drug, and a control circuit having a controller. The at least one light source, the camera, and the control circuit are disposed within an enclosure. The controller is configured to process and analyze drug images captured by the camera when the light source emits light, and determines whether a drug is counterfeit upon detection of an anomaly within the captured images relative to a trained counterfeit detecting machine-learning model.
Type: Grant
Filed: July 30, 2020
Date of Patent: August 16, 2022
Assignee: PRECISE SOFTWARE SOLUTIONS, INC.
Inventors: Xin Liu, Ruomin Ba, James Wang, Bin Duan, Xu Yang, Hang Wang
-
Patent number: 11363202
Abstract: A method for video stabilization may include obtaining a target frame of a video; dividing a plurality of pixels of the target frame into a plurality of pixel groups; determining a plurality of first feature points in the target frame; determining first location information of the plurality of first feature points in the target frame; determining second location information of the plurality of first feature points in a frame prior to the target frame in the video; obtaining a global homography matrix; determining an offset of each of the plurality of first feature points; determining a fitting result based on the first location information and the offsets; for each of the plurality of pixel groups, determining a correction matrix; and for each of the plurality of pixel groups, processing the pixels in the pixel group based on the global homography matrix and the correction matrix.
Type: Grant
Filed: December 21, 2021
Date of Patent: June 14, 2022
Assignee: ZHEJIANG DAHUA TECHNOLOGY CO., LTD.
Inventor: Tingniao Wang
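A reduced sketch of the global part of such a pipeline with OpenCV: track feature points from the previous frame to the target frame, fit a global homography, and warp the target frame. The per-pixel-group correction matrices and the offset-fitting step described in the abstract are not modeled, and the parameter values are assumptions.

```python
import cv2

def stabilize_frame(prev_gray, curr_gray, curr_color):
    """Estimate a global homography from tracked feature points and warp the frame."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=10)
    if pts_prev is None:
        return curr_color
    # Track the feature points from the previous frame into the target frame.
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    good = status.reshape(-1) == 1
    if good.sum() < 4:
        return curr_color
    # Global homography mapping the target frame back toward the previous frame.
    H, _ = cv2.findHomography(pts_curr[good], pts_prev[good], cv2.RANSAC, 3.0)
    if H is None:
        return curr_color
    h, w = curr_gray.shape
    return cv2.warpPerspective(curr_color, H, (w, h))
```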
-
Patent number: 11341728
Abstract: Aspects of the present disclosure involve a system and a method for performing operations comprising: capturing an image that depicts currency, receiving an identification of an augmented reality experience associated with the image, the server identifying the augmented reality experience by assigning visual words to features of the image and searching a visual search database to identify a plurality of marker images associated with one or more charities, and the server retrieving the augmented reality experience associated with a given one of the plurality of marker images; automatically, in response to capturing the image that depicts currency, displaying one or more graphical elements of the identified augmented reality experience that represent a charity of the one or more charities; and receiving a donation to the charity from a user of the client device in response to an interaction with the one or more graphical elements that represent the charity.
Type: Grant
Filed: October 13, 2020
Date of Patent: May 24, 2022
Assignee: Snap Inc.
Inventors: Hao Hu, Kevin Sarabia Dela Rosa, Bogdan Maksymchuk, Volodymyr Piven, Ekaterina Simbereva
-
Patent number: 11340696
Abstract: An event driven sensor (EDS) is used for simultaneous localization and mapping (SLAM) and in particular is used in conjunction with a constellation of light emitting diodes (LED) to simultaneously localize all LEDs and track EDS pose in space. The EDS may be stationary or moveable and can track moveable LED constellations as rigid bodies. Each individual LED is distinguished at a high rate using minimal computational resources (no image processing). Thus, instead of a camera and image processing, rapidly pulsing LEDs detected by the EDS are used for feature points such that EDS events are related to only one LED at a time.
Type: Grant
Filed: January 13, 2020
Date of Patent: May 24, 2022
Assignee: Sony Interactive Entertainment Inc.
Inventor: Sergey Bashkirov
-
Patent number: 11341736
Abstract: Methods and apparatus to match images using semantic features are disclosed. An example apparatus includes a semantic labeler to determine a semantic label for each of a first set of points of a first image and each of a second set of points of a second image; a binary robust independent element features (BRIEF) determiner to determine semantic BRIEF descriptors for a first subset of the first set of points and a second subset of the second set of points based on the semantic labels; and a point matcher to match first points of the first subset of points to second points of the second subset of points based on the semantic BRIEF descriptors.
Type: Grant
Filed: March 1, 2018
Date of Patent: May 24, 2022
Assignee: INTEL CORPORATION
Inventors: Yimin Zhang, Haibing Ren, Wei Hu, Ping Guo
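A toy version of a BRIEF-style binary descriptor that also consults per-pixel semantic labels, matched by Hamming distance. Exactly how the patent folds the semantic labels into the descriptor is not stated in the abstract, so the label test below is an assumption, as are the sampling pattern and sizes.

```python
import numpy as np

RNG = np.random.default_rng(42)
PAIRS = RNG.integers(-8, 9, size=(256, 2, 2))   # fixed BRIEF sampling pattern (pixel offsets)

def semantic_brief(center, image, labels):
    """Binary descriptor: intensity comparisons gated by matching semantic labels."""
    y, x = center
    bits = np.zeros(256, dtype=np.uint8)
    for i, ((dy1, dx1), (dy2, dx2)) in enumerate(PAIRS):
        p1 = (y + dy1, x + dx1)
        p2 = (y + dy2, x + dx2)
        same_label = labels[p1] == labels[p2]
        bits[i] = 1 if (image[p1] < image[p2] and same_label) else 0
    return bits

def hamming(d1, d2):
    return int(np.count_nonzero(d1 != d2))

image = RNG.integers(0, 256, (100, 100))
labels = RNG.integers(0, 5, (100, 100))          # hypothetical semantic label map
d_a = semantic_brief((50, 50), image, labels)
d_b = semantic_brief((52, 50), image, labels)
print("hamming distance:", hamming(d_a, d_b))
```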
-
Patent number: 11315266
Abstract: Depth perception has become of increased interest in the image community due to the increasing usage of deep neural networks for the generation of dense depth maps. The applications of depth perception estimation, however, may still be limited due to the need for a large amount of dense ground-truth depth for training. It is contemplated that a self-supervised control strategy may be developed for estimating depth maps using color images and data provided by a sensor system (e.g., sparse LiDAR data). Such a self-supervised control strategy may leverage superpixels (i.e., groups of pixels that share common characteristics, for instance, pixel intensity) as local planar regions to regularize surface normal derivatives from estimated depth together with the photometric loss. The control strategy may be operable to produce a dense depth map that does not require a dense ground-truth supervision.
Type: Grant
Filed: December 16, 2019
Date of Patent: April 26, 2022
Assignee: Robert Bosch GmbH
Inventors: Zhixin Yan, Liang Mi, Liu Ren
-
Patent number: 11288784
Abstract: An automated method and apparatus are provided for identifying when a first video is a content-identical variant of a second video. The first and second video each include a plurality of image frames, and the image frames of either the first video or the second video include at least one black border. A plurality of variants are generated of selected image frames of the first video and the second video. The variants are then compared to each other, and the first video is identified as being a variant of the second video when at least one match is detected among the variants.
Type: Grant
Filed: September 16, 2021
Date of Patent: March 29, 2022
Assignee: ALPHONSO INC.
Inventors: Aseem Saxena, Pulak Kuli, Tejas Digambar Deshpande, Manish Gupta
-
Patent number: 11282431
Abstract: A system and method for updating a display of a display device. The display device may include a plurality of subpixels, and a display driver coupled to the plurality of subpixels. The display driver may be configured to compare a first data signal of a first statistically selected subpixel of the plurality of subpixels to a first statistically selected threshold, increase a value of a first counter corresponding to the first statistically selected subpixel in response to the first subpixel data signal exceeding the first statistically selected threshold, adjust the first subpixel data signal in response to the first counter value satisfying a second threshold, and drive the first statistically selected subpixel based at least in part on the adjusted first subpixel data signal.
Type: Grant
Filed: October 21, 2019
Date of Patent: March 22, 2022
Assignee: Synaptics Incorporated
Inventors: Damien Berget, Joseph Kurth Reynolds
-
Patent number: 11256909
Abstract: A method for pushing information based on a user emotion, including recording behavior habits of the user for a number of predefined emotions within a predefined time period, can be implemented in the disclosed electronic device. Based on each predefined emotion, a proportion of each behavior habit of the user is determined at the predetermined time intervals. The device determines information to be pushed according to a current user emotion and the proportions of the behavior habits of the user corresponding to the current user emotion, and the electronic device is controlled to push the determined information.
Type: Grant
Filed: January 22, 2020
Date of Patent: February 22, 2022
Assignees: HONGFUJIN PRECISION ELECTRONICS (ZHENGZHOU) CO., LTD., HON HAI PRECISION INDUSTRY CO., LTD.
Inventors: Lin-Hao Wang, Jun-Wei Zhang, Jun Zhang, Yi-Tao Kao
-
Patent number: 11183056
Abstract: An electronic device and a method are provided. The electronic device includes: at least one sensing unit configured to acquire position data and image data for each of a plurality of nodes at predetermined intervals while the electronic device is moving; and a processor configured to extract an object from the image data, generate first object data for identifying the extracted object, and store position data of a first node corresponding to the image data from which the object has been extracted and the first object data corresponding to the first node.
Type: Grant
Filed: February 5, 2018
Date of Patent: November 23, 2021
Assignee: Samsung Electronics Co., Ltd.
Inventors: Mid-eum Choi, A-ron Baik, Chang-soo Park, Jeong-eun Lee
-
Patent number: 11120294
Abstract: A first set of pixels forming a first set of rows of a first image acquired by an image acquisition device is received. A first partially corrected set of pixels forming a first partially corrected set of rows is generated from the first set of pixels where successive pixels of a row from the first partially corrected set of rows correspond to successive parallel lines in a plane defined by the sheet of light that are equally spaced along a first axis of a world coordinate system. Based on the first partially corrected set of pixels and based on a peak extraction mechanism, a first partially corrected set of points of the sheet of light is extracted. The first partially corrected set of points is transformed to obtain a first corrected set of points of the sheet of light that are corrected in a first and a second direction.
Type: Grant
Filed: August 8, 2019
Date of Patent: September 14, 2021
Assignee: Matrox Electronic Systems Ltd.
Inventors: Vincent Zalzal, Christopher Hirst, Steve Massicotte, Jean-Sébastien Lemieux
-
Patent number: 11120306
Abstract: Examples of techniques for adaptive object recognition for a target visual domain given a generic machine learning model are provided. According to one or more embodiments of the present invention, a computer-implemented method for adaptive object recognition for a target visual domain given a generic machine learning model includes creating, by a processing device, an adapted model and identifying classes of the target visual domain using the generic machine learning model. The method further includes creating, by the processing device, a domain-constrained machine learning model based at least in part on the generic machine learning model such that the domain-constrained machine learning model is restricted to recognize only the identified classes of the target visual domain. The method further includes computing, by the processing device, a recognition result based at least in part on combining predictions of the domain-constrained machine learning model and the adapted model.
Type: Grant
Filed: November 2, 2017
Date of Patent: September 14, 2021
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Nirmit V. Desai, Dawei Li, Theodoros Salonidis
-
Patent number: 11115654
Abstract: An encoder is disclosed which includes circuitry and memory. Using the memory, the circuitry, in a first operating mode, derives first motion vectors for a first block obtained by splitting a picture, and generates a prediction image corresponding to the first block, with a bi-directional optical flow flag settable to true, and by referring to spatial gradients of luminance generated based on the first motion vectors. Using the memory, the circuitry, in a second operating mode, derives second motion vectors for a sub-block obtained by splitting a second block, the second block being obtained by splitting the picture, and generates a prediction image corresponding to the sub-block, with the bi-directional optical flow flag set to false.
Type: Grant
Filed: June 15, 2020
Date of Patent: September 7, 2021
Assignee: Panasonic Intellectual Property Corporation of America
Inventors: Kiyofumi Abe, Takahiro Nishi, Tadamasa Toma, Ryuichi Kanoh
-
Patent number: 11106936
Abstract: Point clouds of objects are compared and matched using logical arrays based on the point clouds. The point clouds are azimuth aligned and translation aligned. The point clouds are converted into logical arrays for ease of processing. Then the logical arrays are compared (e.g. using the AND function and counting matches between the two logical arrays). The comparison is done at various quantization levels to determine which quantization level is likely to give the best object comparison result. Then the object comparison is made. More than two objects may be compared and the best match found.
Type: Grant
Filed: September 5, 2019
Date of Patent: August 31, 2021
Assignee: General Atomics
Inventor: Zachary Bergen
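A compact sketch of the comparison idea: voxelize two already-aligned point clouds into boolean occupancy arrays, AND them, and count matches at several quantization levels. The azimuth/translation alignment and the rule for picking the best quantization level are left out; the bounds and voxel sizes below are assumptions.

```python
import numpy as np

def occupancy_grid(points, voxel_size, bounds=(-10.0, 10.0)):
    """Convert an aligned point cloud (N, 3) into a boolean voxel-occupancy array."""
    lo, hi = bounds
    n = int(np.ceil((hi - lo) / voxel_size))
    idx = np.floor((points - lo) / voxel_size).astype(int)
    idx = np.clip(idx, 0, n - 1)
    grid = np.zeros((n, n, n), dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

def match_score(cloud_a, cloud_b, voxel_sizes=(1.0, 0.5, 0.25)):
    """AND the occupancy grids at several quantization levels and count overlaps."""
    scores = {}
    for v in voxel_sizes:
        a, b = occupancy_grid(cloud_a, v), occupancy_grid(cloud_b, v)
        scores[v] = int(np.count_nonzero(a & b))
    return scores

cloud_a = np.random.uniform(-5, 5, (1000, 3))
cloud_b = cloud_a + np.random.normal(0, 0.05, cloud_a.shape)  # noisy copy of the same object
print(match_score(cloud_a, cloud_b))
```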
-
Patent number: 11106898
Abstract: Systems and methods allow a data labeler to identify an expression in an image of a labelee's face without being provided with the image. In one aspect, the image of the labelee's face is analyzed to identify facial landmarks. A labeler is selected from a database who has similar facial characteristics to the labelee. A geometric mesh is built of the labeler's face and the geometric mesh is deformed based on the facial landmarks identified from the image of the labelee. The labeler may identify the facial expression or emotion of the geometric mesh.
Type: Grant
Filed: March 19, 2019
Date of Patent: August 31, 2021
Assignee: Buglife, Inc.
Inventors: Daniel DeCovnick, David Schukin
-
Patent number: 11090810
Abstract: A robot system includes a robot including a robot arm and a first camera, a second camera installed separately from the robot, and a control device which controls the robot and the second camera. The first camera has already been calibrated in advance, and a first calibration data which is the calibration data between the coordinate system of the robot and the coordinate system of the first camera is known. The control device (i) images a calibration pattern with the first camera to acquire a first pattern image and images a calibration pattern with the second camera to acquire a second pattern image, and (ii) executes calibration process for obtaining a second calibration data which is the calibration data between the coordinate system of the robot and the coordinate system of the second camera using the first pattern image, the second pattern image, and the first calibration data.
Type: Grant
Filed: October 10, 2018
Date of Patent: August 17, 2021
Assignee: Seiko Epson Corporation
Inventor: Tomoki Harada
-
Patent number: 11004191
Abstract: An image recognition processor for an industrial device, the image recognition processor implementing, on an integrated circuit thereof, the functions of storing an image data processing algorithm, which has been determined based on prior learning; acquiring image data of an image including a predetermined pattern; and performing recognition processing on the image data based on the image data processing algorithm to output identification information for identifying a recognized pattern.
Type: Grant
Filed: June 14, 2019
Date of Patent: May 11, 2021
Assignee: KABUSHIKI KAISHA YASKAWA DENKI
Inventor: Masaru Adachi
-
Patent number: 10997233
Abstract: In some examples, a computing device refines feature information of query text. The device repeatedly determines attention information based at least in part on feature information of the image and the feature information of the query text, and modifies the feature information of the query text based at least in part on the attention information. The device selects at least one of a predetermined plurality of outputs based at least in part on the refined feature information of the query text. In some examples, the device operates a convolutional computational model to determine feature information of the image. The device operates network computational models (NCMs) to determine feature information of the query and to determine attention information based at least in part on the feature information of the image and the feature information of the query. Examples include a microphone to detect audio corresponding to the query text.
Type: Grant
Filed: April 12, 2016
Date of Patent: May 4, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Xiaodong He, Li Deng, Jianfeng Gao, Alex Smola, Zichao Yang
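A bare-bones numeric sketch of the refinement loop: score image-region features against the current query feature, pool them with the resulting attention weights, and fold the pooled context back into the query feature, repeating a few times. The dot-product scoring and additive update stand in for the network computational models the abstract describes; dimensions and step count are assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def refine_query_feature(image_feats, query_feat, steps=3):
    """Repeatedly attend over image-region features and fold the result into the query.

    `image_feats` is (regions, d); `query_feat` is (d,).
    """
    q = query_feat.copy()
    for _ in range(steps):
        attention = softmax(image_feats @ q)   # attention weight per image region
        context = attention @ image_feats      # attention-weighted image feature
        q = q + context                        # modify the query feature
    return q

image_feats = np.random.rand(49, 64)   # e.g., a 7x7 grid of region descriptors
query_feat = np.random.rand(64)
refined = refine_query_feature(image_feats, query_feat)
```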
-
Patent number: 10984224
Abstract: A face detection method includes scaling an input image to images of various sizes according to certain proportions by means of an image pyramid; passing the resultant images through a first-level network in a sliding window manner to predict face coordinates, face confidences, and face orientations; filtering out the most negative samples by confidence rankings and sending the remaining image patches to a second-level network. Through a second-level network, filtering out non-face samples; applying a regression to obtain more precise position coordinates and providing prediction results of the face orientations. Through an angle arbitration mechanism, combining the prediction results of the preceding two networks to make a final arbitration for a rotation angle of each sample, rotating each of the image patches upright according to the arbitration result made by the angle arbitration mechanism and sending to a third-level network for fine-tuning to predict positions of keypoints.
Type: Grant
Filed: December 26, 2019
Date of Patent: April 20, 2021
Assignee: ZHUHAI EEASY TECHNOLOGY CO., LTD.
Inventors: Xucheng Yin, Bowen Yang, Chun Yang
-
Patent number: 10909769
Abstract: The mixed reality based 3D sketching device includes a processor; and a memory connected to the processor. The memory stores program commands that are executable by the processor to periodically track a marker pen photographed through a camera, to determine whether to remove a third point using a distance between a first point corresponding to a reference point, among points that are sequentially tracked, and a second point at a current time, a preset constant, and an angle between the first point, the second point, and the previously identified third point, to search an object model corresponding to a 3D sketch that has been corrected, after correction is completed depending on the removal of the third point, and to display the searched object model on a screen.
Type: Grant
Filed: March 6, 2020
Date of Patent: February 2, 2021
Assignee: INDUSTRY ACADEMY COOPERATION FOUNDATION OF SEJONG UNIVERSITY
Inventors: Soo Mi Choi, Jun Han Kim, Je Wan Han
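A small sketch of a point-removal test in the spirit of the abstract: drop the previously kept point when the reference and current points are very close, or when the angle formed at the candidate point is nearly straight. The constants and the exact combination of distance, constant, and angle are assumptions, not the patented rule.

```python
import numpy as np

def should_remove_third_point(first, second, third, dist_const=0.02, angle_deg=160.0):
    """Decide whether the previously kept `third` point is redundant.

    `first` is the reference point, `second` the point at the current time,
    and `third` the previously identified point; thresholds are illustrative.
    """
    first, second, third = map(np.asarray, (first, second, third))
    if np.linalg.norm(second - first) < dist_const:
        return True                                   # pen barely moved; keep the sketch sparse
    v1 = first - third
    v2 = second - third
    cos_angle = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle > angle_deg                          # nearly collinear -> redundant

print(should_remove_third_point([0, 0, 0], [0.5, 0.01, 0], [0.25, 0.005, 0]))
```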
-
Patent number: 10896493
Abstract: The present disclosure is directed toward intelligently mixing and matching faces and/or people to generate an enhanced image that reduces or minimizes artifacts and other defects. For example, the disclosed systems can selectively apply different alignment models to determine a relative alignment between a reference image and a target image having an improved instance of the person. Upon aligning the digital images, the disclosed systems can intelligently identify a replacement region based on a boundary that includes the target instance and the reference instance of the person without intersecting other objects or people in the image. Using the size and shape of the replacement region around the target instance and the reference instance, the systems replace the instance of the person in the reference image with the target instance. The alignment of the images and the intelligent selection of the replacement region minimize inconsistencies and/or artifacts in the final image.
Type: Grant
Filed: November 13, 2018
Date of Patent: January 19, 2021
Assignee: ADOBE INC.
Inventors: Seyed Morteza Safdarnejad, Chih-Yao Hsieh
-
Patent number: 10783658
Abstract: A method of processing an image including pixels distributed in cells and in blocks is disclosed, the method including the steps of: a) for each cell, generating n first intensity values of gradients having different orientations, each first value being a weighted sum of the values of the pixels of the cell; b) for each cell, determining a main gradient orientation of the cell and a second value representative of the intensity of the gradient in the main orientation; c) for each block, generating a descriptor of n values respectively corresponding, for each of the n gradient orientations, to the sum of the second values of the cells of the block having the gradient orientation considered as the main gradient orientation.
Type: Grant
Filed: July 10, 2018
Date of Patent: September 22, 2020
Assignee: COMMISSARIAT À L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
Inventors: Camille Dupoiron, William Guicquero, Gilles Sicard, Arnaud Verdant
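A loose NumPy reading of steps a) through c): compute per-cell gradient strengths over n orientations, take each cell's main orientation and its strength, then build an n-value block descriptor by summing, per orientation, the strengths of the cells whose main orientation matches. The weighting the patent uses when generating the first intensity values is not reproduced; the bin count and cell size are assumptions.

```python
import numpy as np

def cell_orientation_histogram(cell, n_bins=8):
    """Return, for one cell, the summed gradient magnitude per orientation bin."""
    gy, gx = np.gradient(cell.astype(np.float32))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)                  # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    return np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)

def block_descriptor(block, cell_size=8, n_bins=8):
    """n-value block descriptor built from each cell's main gradient orientation."""
    desc = np.zeros(n_bins)
    h, w = block.shape
    for y in range(0, h, cell_size):
        for x in range(0, w, cell_size):
            hist = cell_orientation_histogram(block[y:y + cell_size, x:x + cell_size], n_bins)
            main = int(np.argmax(hist))      # main gradient orientation of the cell
            desc[main] += hist[main]         # second value: gradient intensity in that orientation
    return desc

block = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
print(block_descriptor(block))
```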
-
Patent number: 10765864
Abstract: The present disclosure provides a method and device for filtering sensor data. Signals from an array of sensor pixels are received and checked for changes in pixel values. Motion is detected based on the changes in pixel values, and motion output signals are transmitted to a processing station. If the sum of correlated changes in pixel values across a predetermined field of view exceeds a predetermined value, indicating sensor jitter, the motion output signals are suppressed. If a sum of motion values within a defined subsection of the field of view exceeds a predetermined threshold, indicating the presence of a large object of no interest, the motion output signals are suppressed for that subsection.
Type: Grant
Filed: August 9, 2018
Date of Patent: September 8, 2020
Assignee: National Technology & Engineering Solutions of Sandia, LLC
Inventors: Frances S. Chance, Christina E. Warrender
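A minimal sketch of the two suppression rules: if motion summed (here, averaged) over the whole field of view is too large, treat it as sensor jitter and suppress everything; if motion within a defined subsection exceeds its own threshold, suppress only that subsection. The thresholds and the use of means rather than sums are assumptions.

```python
import numpy as np

def filter_motion(motion_map, jitter_limit=0.5, subsection=None, subsection_limit=0.8):
    """Suppress per-pixel motion outputs according to two rules.

    `motion_map` holds per-pixel motion values in [0, 1]. If the mean motion over
    the whole field of view exceeds `jitter_limit`, everything is suppressed
    (likely sensor jitter). If the mean inside `subsection` (a pair of slices)
    exceeds `subsection_limit`, only that subsection is suppressed.
    """
    out = motion_map.copy()
    if motion_map.mean() > jitter_limit:
        return np.zeros_like(motion_map)
    if subsection is not None:
        ys, xs = subsection
        if motion_map[ys, xs].mean() > subsection_limit:
            out[ys, xs] = 0.0
    return out

motion = np.zeros((64, 64))
motion[20:25, 20:25] = 1.0   # a small moving object of interest
filtered = filter_motion(motion, subsection=(slice(0, 32), slice(0, 32)))
```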
-
Patent number: 10753881
Abstract: Systems and methods capable of autonomous crack detection in surfaces by analyzing video of the surface. The systems and methods include the capability to produce a video of the surfaces, the capability to analyze individual frames of the video to obtain surface texture feature data for areas of the surfaces depicted in each of the individual frames, the capability to analyze the surface texture feature data to detect surface texture features in the areas of the surfaces depicted in each of the individual frames, the capability of tracking the motion of the detected surface texture features in the individual frames to produce tracking data, and the capability of using the tracking data to filter non-crack surface texture features from the detected surface texture features in the individual frames.
Type: Grant
Filed: May 26, 2017
Date of Patent: August 25, 2020
Assignee: Purdue Research Foundation
Inventors: Mohammad Reza Jahanshahi, Fu-Chen Chen