Feature Extraction Patents (Class 382/190)
- Slice codes (Class 382/196)
- Directional codes and vectors (e.g., Freeman chains, compasslike codes) (Class 382/197)
- Pattern boundary and edge measurements (Class 382/199)
- Point features (e.g., spatial coordinate descriptors) (Class 382/201)
- Linear stroke analysis (e.g., limited to straight lines) (Class 382/202)
- Shape and form analysis (Class 382/203)
- Local neighborhood operations (e.g., 3x3 kernel, window, or matrix operator) (Class 382/205)
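The subclasses above name several classic boundary encodings. As a toy illustration of the directional codes in Class 382/197 (not drawn from any patent below), a Freeman 8-direction chain code encodes each step along a boundary as a digit 0-7:

```python
# Freeman 8-direction chain code: each step between successive
# boundary points is encoded as a digit 0-7 (0 = east, counter-clockwise).
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode a closed boundary, given as unit-step (x, y) points."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRS[(x1 - x0, y1 - y0)])
    return codes

# Unit square traversed counter-clockwise:
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(chain_code(square))  # [0, 2, 4, 6]
```

Because the code depends only on step directions, it is translation-invariant, which is one reason chain codes appear throughout this subclass.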
- Patent number: 11551059
  Abstract: A modulated segmentation system can use a modulator network to emphasize spatial prior data of an object to track the object across multiple images. The modulated segmentation system can use a segmentation network that receives spatial prior data as intermediate data that improves segmentation accuracy. The segmentation network can further receive visual guide information from a visual guide network to increase tracking accuracy via segmentation.
  Type: Grant
  Filed: November 15, 2018
  Date of Patent: January 10, 2023
  Assignee: Snap Inc.
  Inventors: Linjie Yang, Jianchao Yang, Xuehan Xiong, Yanran Wang
- Patent number: 11532095
  Abstract: There is provided with an information processing apparatus. An acquisition unit acquires a plurality of pattern discrimination results each indicating a location of a pattern that is present in an image. A selection unit selects a predetermined number of pattern discrimination results from the plurality of pattern discrimination results. A determination unit determines whether or not the selected predetermined number of pattern discrimination results are to be merged, based on a similarity of the locations indicated by the predetermined number of pattern discrimination results. A merging unit merges the predetermined number of pattern discrimination results for which it was determined by the determination unit that merging is to be performed. A control unit controls the selection unit, the determination unit, and the merging unit to repeatedly perform respective processes.
  Type: Grant
  Filed: May 26, 2020
  Date of Patent: December 20, 2022
  Assignee: Canon Kabushiki Kaisha
  Inventor: Tsewei Chen
- Patent number: 11527242
  Abstract: A lip-language identification method and an apparatus thereof, an augmented reality device and a storage medium. The lip-language identification method includes: acquiring a sequence of face images for an object to be identified; performing lip-language identification based on the sequence of face images so as to determine semantic information of speech content of the object to be identified corresponding to lip actions in a face image; and outputting the semantic information.
  Type: Grant
  Filed: April 24, 2019
  Date of Patent: December 13, 2022
  Assignee: Beijing BOE Technology Development Co., Ltd.
  Inventors: Naifu Wu, Xitong Ma, Lixin Kou, Sha Feng
- Patent number: 11499773
  Abstract: A refrigerator according to the present invention comprises: a storage chamber for storing articles; a camera for photographing the inner space of the storage chamber; a control part which visually recognizes a first article image captured by the camera so as to acquire article information corresponding to the first article image; a memory for storing the acquired article information so as to generate an article image history; and a display electrically connected to the control part. Further, the control part may: acquire, through the camera, a second article image in which an article is partially hidden by any other article; detect, from the second article image, a partial article image of the article partially hidden by the other article; and identify an article matching the partial article image on the basis of the article image history.
  Type: Grant
  Filed: March 29, 2019
  Date of Patent: November 15, 2022
  Assignee: LG ELECTRONICS INC.
  Inventor: Jichan Maeng
- Patent number: 11501413
  Abstract: Embodiments are disclosed for generating lens blur effects. The disclosed systems and methods comprise receiving a request to apply a lens blur effect to an image, the request identifying an input image and a first disparity map, generating a plurality of disparity maps and a plurality of distance maps based on the first disparity map, splatting influences of pixels of the input image using a plurality of reshaped kernel gradients, gathering aggregations of the splatted influences, and determining a lens blur for a first pixel of the input image in an output image based on the gathered aggregations of the splatted influences.
  Type: Grant
  Filed: November 17, 2020
  Date of Patent: November 15, 2022
  Assignee: Adobe Inc.
  Inventors: Haiting Lin, Yumin Jia, Jen-Chan Chien
- Patent number: 11495231
  Abstract: A lip language recognition method, applied to a mobile terminal having a sound mode and a silent mode, includes: training a deep neural network in the sound mode; collecting a user's lip images in the silent mode; and identifying content corresponding to the user's lip images with the deep neural network trained in the sound mode. The method further includes: switching from the sound mode to the silent mode when a privacy need of the user arises.
  Type: Grant
  Filed: November 26, 2018
  Date of Patent: November 8, 2022
  Assignee: BEIJING BOE TECHNOLOGY DEVELOPMENT CO., LTD.
  Inventors: Lihua Geng, Xitong Ma, Zhiguo Zhang
- Patent number: 11494590
  Abstract: An apparatus comprising memory configured to store data to be machine-recognized (710), and at least one processing core configured to run an adaptive boosting machine learning algorithm with the data, wherein a plurality of learning algorithms are applied, wherein a feature space is partitioned into bins, wherein a distortion function is applied to features of the feature space (720), and wherein a first derivative of the distortion function is not constant (730).
  Type: Grant
  Filed: February 2, 2016
  Date of Patent: November 8, 2022
  Assignee: Nokia Technologies OY
  Inventor: Chubo Shang
- Patent number: 11496333
  Abstract: Presented herein is an audio reaction system and method for virtual/online meeting platforms where a participant provides a reaction (applause, laughter, wow, etc.) to something the presenter said or did. The trigger is a reaction feature in which participants press an emoticon button in a user interface or activate some other user interface function to initiate a message indicating a reaction.
  Type: Grant
  Filed: September 24, 2021
  Date of Patent: November 8, 2022
  Assignee: CISCO TECHNOLOGY, INC.
  Inventor: Tore Bjølseth
- Patent number: 11495125
  Abstract: A system comprises a computer including a processor, and a memory. The memory stores instructions such that the processor is programmed to determine two or more clusters of vehicle operating parameter values from each of a plurality of vehicles at a location within a time. Determining the two or more clusters includes clustering data from the plurality of vehicles based on proximity to two or more respective means. The processor is further programmed to determine a reportable condition when a mean for a cluster representing a greatest number of vehicles varies from a baseline by more than a threshold.
  Type: Grant
  Filed: March 1, 2019
  Date of Patent: November 8, 2022
  Assignee: Ford Global Technologies, LLC
  Inventors: Linjun Zhang, Juan Enrique Castorena Martinez, Codrin Cionca, Mostafa Parchami
- Patent number: 11488352
  Abstract: Various implementations disclosed herein include devices, systems, and methods for modeling a geographical space for a computer-generated reality (CGR) experience. In some implementations, a method is performed by a device including a non-transitory memory and one or more processors coupled with the non-transitory memory. In some implementations, the method includes obtaining a set of images. In some implementations, the method includes providing the set of images to an image classifier that determines whether the set of images correspond to a geographical space. In some implementations, the method includes establishing correspondences between at least a subset of the set of images in response to the image classifier determining that the subset of images correspond to the geographical space. In some implementations, the method includes synthesizing a model of the geographical space based on the correspondences between the subset of images.
  Type: Grant
  Filed: January 20, 2020
  Date of Patent: November 1, 2022
  Assignee: APPLE INC.
  Inventor: Daniel Kurz
- Patent number: 11481975
  Abstract: An image processing method and apparatus, and a computer-readable storage medium are provided. The method includes: determining a first region matching a target object in a first image; determining a deformation parameter based on a preset deformation effect, the deformation parameter being used for determining a position deviation, generated based on the preset deformation effect, of each pixel point of the target object; and performing deformation processing on the target object in the first image based on the deformation parameter to obtain a second image.
  Type: Grant
  Filed: October 19, 2020
  Date of Patent: October 25, 2022
  Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
  Inventors: Yuanzhen Hao, Mingyang Huang, Jianping Shi
- Patent number: 11475537
  Abstract: A processor-implemented image normalization method includes extracting a first object patch from a first input image and extracting a second object patch from a second input image based on an object area that includes an object detected from any one or any combination of the first input image and the second input image, determining, based on a first landmark detected from the first object patch, a second landmark of the second object patch; and normalizing the first object patch and the second object patch based on the first landmark and the second landmark.
  Type: Grant
  Filed: March 15, 2021
  Date of Patent: October 18, 2022
  Assignee: Samsung Electronics Co., Ltd.
  Inventors: Seungju Han, Minsu Ko, Changyong Son, Jaejoon Han
- Patent number: 11475684
  Abstract: An image may be evaluated by a computer vision system to determine whether it is fit for analysis. The computer vision system may generate an embedding of the image. An embedding quality score (EQS) of the image may be determined based on the image's embedding and a reference embedding associated with a cluster of reference noisy images. The quality of the image may be evaluated based on the EQS of the image to determine whether the quality meets filter criteria. The image may be further processed when the quality is sufficient, or otherwise the image may be removed.
  Type: Grant
  Filed: March 25, 2020
  Date of Patent: October 18, 2022
  Assignee: Amazon Technologies, Inc.
  Inventors: Siqi Deng, Yuanjun Xiong, Wei Li, Shuo Yang, Wei Xia, Meng Wang
- Patent number: 11475240
  Abstract: Embodiments relate to generating keypoint descriptors of the keypoints. An apparatus includes a pyramid image generator circuit and a keypoint descriptor generator circuit. The pyramid image generator circuit generates an image pyramid from an input image. The keypoint descriptor generator circuit determines intensity values of sample points in the pyramid images for a keypoint and determines comparison results of comparisons between the intensity values of pairs of the sample points. The keypoint descriptor generator circuit generates bit values defining the comparison results for the keypoint, each bit value corresponding with one of the comparison results, and generates a sequence of the bit values defining an ordering of the comparison results based on importance levels of the comparisons, where the importance level of each comparison defines how much the comparison is representative of features. Bit values for comparisons having the lowest importance levels may be excluded from the sequence.
  Type: Grant
  Filed: March 19, 2021
  Date of Patent: October 18, 2022
  Assignee: Apple Inc.
  Inventors: Liran Fishel, Assaf Metuki, Chuhan Min, Wai Yu Trevor Tsang
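Comparison-based binary descriptors of this kind resemble the well-known BRIEF family. The sketch below is an illustrative approximation, not the patented circuit: the sample-point pairs and the importance ranking are invented for the example.

```python
import numpy as np

def binary_descriptor(patch, pairs, order):
    """BRIEF-like descriptor: one bit per intensity comparison,
    re-emitted in an 'importance' order (least important bits could
    simply be truncated from the end of the sequence)."""
    bits = [1 if patch[pa] < patch[pb] else 0 for pa, pb in pairs]
    return [bits[i] for i in order]

patch = np.array([[10, 20],
                  [30, 40]])
# Hypothetical sample-point pairs and importance ranking:
pairs = [((0, 0), (0, 1)), ((1, 0), (1, 1)), ((1, 1), (0, 0))]
order = [2, 0, 1]
print(binary_descriptor(patch, pairs, order))  # [0, 1, 1]
```

Because the descriptor is a bit string, matching reduces to Hamming distance, which is the usual motivation for this construction.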
- Patent number: 11475711
  Abstract: A non-transitory computer-readable recording medium stores therein a judgment program that causes a computer to execute a process including acquiring a captured image including a face to which a plurality of markers are attached at a plurality of positions that are associated with a plurality of action units, specifying each of the positions of the plurality of markers included in the captured image, judging an occurrence intensity of a first action unit associated with a first marker from among the plurality of action units based on a judgment criterion of an action unit and a position of the first marker from among the plurality of markers, and outputting the occurrence intensity of the first action unit by associating the occurrence intensity with the captured image.
  Type: Grant
  Filed: December 14, 2020
  Date of Patent: October 18, 2022
  Assignee: Fujitsu Limited
  Inventors: Akiyoshi Uchida, Junya Saito, Akihito Yoshii
- Patent number: 11462002
  Abstract: Disclosed are a wallpaper management method and apparatus, a mobile terminal, and a storage medium. The method includes: determining a wallpaper to be switched; obtaining feature information of the wallpaper to be switched, and comparing the feature information of the wallpaper to be switched with the feature information of wallpapers in a feature database, to determine, in the feature database, a wallpaper matching the wallpaper to be switched; and performing wallpaper switching according to feature information corresponding to the matching wallpaper.
  Type: Grant
  Filed: July 25, 2019
  Date of Patent: October 4, 2022
  Assignee: ZTE Corporation
  Inventor: Lan Luan
- Patent number: 11462112
  Abstract: A method is provided in an Advanced Driver-Assistance System (ADAS). The method extracts, from an input video stream including a plurality of images using a multi-task Convolutional Neural Network (CNN), shared features across different perception tasks. The perception tasks include object detection and other perception tasks. The method concurrently solves, using the multi-task CNN, the different perception tasks in a single pass by concurrently processing corresponding ones of the shared features by respective different branches of the multi-task CNN to provide a plurality of different perception task outputs. Each respective different branch corresponds to a respective one of the different perception tasks. The method forms a parametric representation of a driving scene as at least one top-view map responsive to the plurality of different perception task outputs.
  Type: Grant
  Filed: February 11, 2020
  Date of Patent: October 4, 2022
  Inventors: Quoc-Huy Tran, Samuel Schulter, Paul Vernaza, Buyu Liu, Pan Ji, Yi-Hsuan Tsai, Manmohan Chandraker
- Patent number: 11457200
  Abstract: Aspects of the subject disclosure may include, for example, a device that includes a processing system including a processor and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations including receiving a manifest for a point cloud, wherein the point cloud is partitioned into a plurality of cells; determining an occlusion level for a cell of the plurality of cells with respect to a predicted viewport; reducing a point density for the cell provided in the manifest based on the occlusion level, thereby determining a reduced point density; and requesting delivery of points in the cell, based on the reduced point density. Other embodiments are disclosed.
  Type: Grant
  Filed: March 20, 2020
  Date of Patent: September 27, 2022
  Assignee: AT&T Intellectual Property I, L.P.
  Inventors: Bo Han, Cheuk Yiu Ip, Jackson Jarrell Pair
- Patent number: 11451718
  Abstract: Alternating Current (AC) light sources can cause images captured using a rolling shutter to include alternating darker and brighter regions, known as flicker bands, due to some sensor rows being exposed to different intensities of light than others. Flicker bands may be compensated for by extracting them from images that are captured using exposures that at least partially overlap in time. Due to the overlap, the images may be subtracted from each other so that scene content substantially cancels out, leaving behind flicker bands. The images may be for a same frame captured by at least one sensor, such as different exposures for a frame. For example, the images used to extract flicker bands may be captured using different exposure times that share a common start time, such as using a multi-exposure sensor where light values are read out at different times during light integration.
  Type: Grant
  Filed: March 12, 2021
  Date of Patent: September 20, 2022
  Assignee: NVIDIA Corporation
  Inventor: Hugh Phu Nguyen
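The subtraction idea in this abstract can be sketched numerically. This is a toy model with made-up per-row values and band phases, not the patented pipeline: two captures of the same frame share scene content but carry differently phased flicker bands, so their difference cancels the scene and isolates the banding.

```python
import numpy as np

rows = np.arange(8)
scene = np.linspace(50.0, 120.0, 8)       # static scene content per sensor row

# Two captures of the same frame whose exposures overlap in time:
# same scene, but the AC-driven banding lands at different phases.
band_a = 10.0 * np.sin(2 * np.pi * rows / 4)
band_b = 10.0 * np.sin(2 * np.pi * rows / 4 + np.pi)
img_a = scene + band_a
img_b = scene + band_b

# Subtracting the overlapping captures removes the shared scene term,
# leaving only the flicker-band difference for later compensation.
residual = img_a - img_b
print(np.allclose(residual, band_a - band_b))  # True
```

In a real sensor the exposures also differ in duration, so the images would first be normalized by exposure time before subtraction; the additive model here is a deliberate simplification.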
- Patent number: 11439218
  Abstract: A system and method for dermal spraying includes a portable, hand-held dermal application device with disposable formulation capsules that spray a formulation onto the skin and a data transmission unit operatively connecting the dermal spray device to a mobile device. The mobile device communicates with a remote server and transmits anonymized data about the user's skin conditions and treatment history. The anonymized data may be labelled and classified by a dermatologist, and stored on a secure cloud server. The anonymized data is used to train models for serum and treatment plan recommenders.
  Type: Grant
  Filed: December 10, 2021
  Date of Patent: September 13, 2022
  Assignee: KOZHYA LLC SP. Z O.O.
  Inventors: Yoanna A. Gouchtchina, Enrique Gallar
- Patent number: 11443535
  Abstract: A license plate identification method is provided, including steps of: obtaining a to-be-processed image including all characters on a license plate; extracting several feature maps corresponding to character features of the to-be-processed image through a feature map extraction module; for each of the characters, extracting a block and a coordinate according to the feature maps through a character identification model based on a neural network; and obtaining a license plate identification result according to the respective blocks and the respective coordinates of the characters.
  Type: Grant
  Filed: January 21, 2019
  Date of Patent: September 13, 2022
  Assignee: DELTA ELECTRONICS, INC.
  Inventors: Yu-Ta Chen, Feng-Ming Liang, Jing-Hong Jheng
- Patent number: 11443385
  Abstract: A method includes receiving a digital file from a customer, extracting metadata from the file, and verifying the metadata prior to accepting the digital file. The method may include verifying that a representation of a required physical token appears in the digital file.
  Type: Grant
  Filed: April 6, 2020
  Date of Patent: September 13, 2022
  Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
  Inventors: Thomas A. McCall, William D. Bryant, John W. S. Riney, Christopher E. Gay
- Patent number: 11443536
  Abstract: Systems, apparatuses, and methods for efficiently and accurately processing an image in order to detect and identify one or more objects contained in the image, and methods that may be implemented on mobile or other resource constrained devices. Embodiments of the invention introduce simple, efficient, and accurate approximations to the functions performed by a convolutional neural network (CNN); this is achieved by binarization (i.e., converting one form of data to binary values) of the weights and of the intermediate representations of data in a convolutional neural network. The inventive binarization methods include optimization processes that determine the best approximations of the convolution operations that are part of implementing a CNN using binary operations.
  Type: Grant
  Filed: June 3, 2019
  Date of Patent: September 13, 2022
  Assignee: The Allen Institute for Artificial Intelligence
  Inventors: Ali Farhadi, Mohammad Rastegari, Vicente Ignacio Ordonez Roman
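Weight binarization of this kind is widely known from the XNOR-Net line of work by some of the same inventors. The sketch below shows one common approximation, W ≈ α·B with B = sign(W) and α = mean(|W|); the exact optimization in the patent may differ, so treat this as an assumption-laden illustration.

```python
import numpy as np

def binarize(weights):
    """Approximate a real-valued filter W by alpha * B with B in {-1, +1}.
    For fixed B = sign(W), alpha = mean(|W|) is the least-squares
    optimal scaling factor."""
    B = np.sign(weights).astype(float)
    B[B == 0] = 1.0                    # map exact zeros to +1 arbitrarily
    alpha = float(np.mean(np.abs(weights)))
    return alpha, B

W = np.array([0.5, -1.5, 1.0, -2.0])
alpha, B = binarize(W)
print(alpha)        # 1.25
print(B)            # signs of W: +1, -1, +1, -1
approx = alpha * B  # binary-weight approximation of W
```

The payoff is that dot products against B reduce to sign flips and additions (or XNOR/popcount on packed bits), which is what makes such networks attractive on mobile hardware.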
- Patent number: 11436290
  Abstract: Systems and methods are provided to process a digital photo and other media. An apparatus to process digital photos can include a tangibly embodied computer processor (CP) and a tangibly embodied database. The CP can perform processing including: (a) inputting a photo from a user device, the photo including geographic data that represents a photo location at which the photo was generated; (b) comparing at least one area with the photo location and associating an area identifier to the photo as part of photo data; and (c) performing processing based on the area identifier and the photo data. Processing can provide for (a) processing media with geographical segmentation; (b) processing media in a geographical area, based on media density; (c) crowd based censorship of media; and (d) filtering media content based on user perspective, which can be used for comparison, validation, and voting, for example.
  Type: Grant
  Filed: March 12, 2021
  Date of Patent: September 6, 2022
  Assignee: ShotSpotz LLC
  Inventors: Harley Bernstein, John Morgan, Jeff Frederick
- Patent number: 11436760
  Abstract: An electronic apparatus obtains a panoramic image by overlapping partial areas of image frames and identifies an object from the panoramic image or an area of a predetermined shape of maximum size within the panoramic image.
  Type: Grant
  Filed: September 20, 2019
  Date of Patent: September 6, 2022
  Assignee: SAMSUNG ELECTRONICS CO., LTD.
  Inventors: Wonwoo Lee, Taehyuk Kwon, Deokho Kim, Byeongwook Yoo, Gunill Lee, Jaewoong Lee, Sunghoon Yim, Jiwon Jeong
- Patent number: 11436754
  Abstract: A position posture identification device includes: an actual measurement data acquisition unit which acquires actual measurement data on the shape of a workpiece; a virtual measurement data generation unit which generates, from shape data defined for the workpiece, virtual measurement data; a filter processing unit which performs, based on the measurement property of the three-dimensional measuring machine, affine transformation on the virtual measurement data; a feature point extraction unit which extracts feature point data from the actual measurement data and the virtual measurement data; a storage unit which stores, as model data of the workpiece, the feature point data extracted from the virtual measurement data; and a position posture calculation unit which checks the feature point data of the actual measurement data against data obtained by performing coordinate transformation on the feature point data included in the model data so as to calculate the position posture of the workpiece.
  Type: Grant
  Filed: July 15, 2020
  Date of Patent: September 6, 2022
  Assignee: FANUC CORPORATION
  Inventor: Taiga Satou
- Patent number: 11430169
  Abstract: Systems and methods for generating an animation rig corresponding to a pose of a subject include accessing image data corresponding to the pose of the subject. The image data can include the face of the subject. The systems and methods process the image data by successively analyzing subregions of the image according to a solver order. The solver order can be biologically or anatomically ordered to proceed from subregions that cause larger scale movements to subregions that cause smaller scale movements. In each subregion, the systems and methods can perform an optimization technique to fit parameters of the animation rig to the input image data. After all subregions have been processed, the animation rig can be used to animate an avatar to appear to be performing the pose of the subject.
  Type: Grant
  Filed: March 7, 2019
  Date of Patent: August 30, 2022
  Assignee: Magic Leap, Inc.
  Inventors: Sean Michael Comer, Geoffrey Wedig
- Patent number: 11431893
  Abstract: An imaging apparatus includes: an image sensor that captures an image of an object to generate a captured image; a display that displays the captured image; a first detector that detects, when the object is a human, at least a portion of the human; a second detector that detects, when the object is an animal, at least a portion of the animal; and a controller that controls the display to display a first detection frame and a second detection frame on the captured image, the first detection frame corresponding to the human and the second detection frame corresponding to the animal, wherein the controller controls the display to display the first and second detection frames in a common displaying style when neither the first detection frame nor the second detection frame is a third detection frame corresponding to a main object of the objects.
  Type: Grant
  Filed: September 20, 2019
  Date of Patent: August 30, 2022
  Assignee: Panasonic Intellectual Property Management Co., Ltd.
  Inventor: Mitsuyoshi Okamoto
- Patent number: 11423909
  Abstract: An augmented reality (AR) device can be configured to monitor ambient audio data. The AR device can detect speech in the ambient audio data, convert the detected speech into text, or detect keywords such as rare words in the speech. When a rare word is detected, the AR device can retrieve auxiliary information (e.g., a definition) related to the rare word from a public or private source. The AR device can display the auxiliary information for a user to help the user better understand the speech. The AR device may perform translation of foreign speech, may display text (or the translation) of a speaker's speech to the user, or display statistical or other information associated with the speech.
  Type: Grant
  Filed: February 14, 2020
  Date of Patent: August 23, 2022
  Assignee: Magic Leap, Inc.
  Inventors: Jeffrey Scott Sommers, Jennifer M. R. Devine, Joseph Wayne Seuck, Adrian Kaehler
- Patent number: 11410460
  Abstract: A facial recognition system may monitor a light source that causes a unique pattern of light to be projected on the subject during image capture. The lighting pattern may include intensity, color, source location, pattern, modulation, or combinations of these. The lighting pattern may also encode a signal used to further identify a location, time, or identity associated with the facial recognition process. In some embodiments the light source may be infrared or another frequency outside the visible spectrum but within the detection range of a sensor capturing the image.
  Type: Grant
  Filed: March 2, 2018
  Date of Patent: August 9, 2022
  Assignee: VISA INTERNATIONAL SERVICE ASSOCIATION
  Inventor: Thomas Purves
- Patent number: 11397765
  Abstract: An image retrieval system receives an image for which to identify relevant images from an image repository. Relevant images may be of the same environment or object and features and other characteristics. Images in the repository are represented in an image retrieval graph by a set of image nodes connected by edges to other related image nodes with edge weights representing the similarity of the nodes to each other. Based on the received image, the image retrieval system identifies an image in the image retrieval graph and alternately explores and traverses (also termed "exploits") the image nodes with the edge weights. In the exploration step, image nodes in an exploration set are evaluated to identify connected nodes that are added to a traversal set of image nodes. In the traversal step, the relevant nodes in the traversal set are added to the exploration set and a query result set.
  Type: Grant
  Filed: October 3, 2019
  Date of Patent: July 26, 2022
  Assignee: The Toronto-Dominion Bank
  Inventors: Maksims Volkovs, Cheng Chang, Guangwei Yu, Chundi Liu
- Patent number: 11397503
  Abstract: An addressable media system for performing operations that include: accessing image data that depicts an object in an environment at a client device; causing display of a presentation of the image data within a graphical user interface at the client device; detecting the display of the object within the presentation of the image data based on at least a portion of the plurality of image features of the display of the object; identifying an object class based on at least the portion of the image features of the display of the object; receiving an input that selects the display of the object from the client device; and associating the object class that corresponds with the object with the user profile in response to the input that selects the display of the object.
  Type: Grant
  Filed: June 28, 2019
  Date of Patent: July 26, 2022
  Assignee: Snap Inc.
  Inventors: Piers Cowburn, David Li, Isac Andreas Müller Sandvik, Qi Pan
- Patent number: 11393218
  Abstract: Embodiments of the disclosure provide an object detection method and device. The object detection method includes: extracting features of an image; classifying the image by each level of classifiers of a cascade classifier according to the features of the image, and calculating a classification score of the image in each level of the classifiers of the cascade classifier according to a classification result; and calculating, according to the classification score, a cascade score of the image in a corresponding level of the cascade classifier, comparing the cascade score in the corresponding level with a cascade threshold of the corresponding level, and judging the presence of an object in the image according to a comparison result.
  Type: Grant
  Filed: March 14, 2018
  Date of Patent: July 19, 2022
  Assignee: BOE TECHNOLOGY GROUP CO., LTD.
  Inventors: Xiaojun Tang, Haijun Su
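The cascade scoring the abstract describes can be sketched in a few lines. The stage functions and thresholds below are invented for illustration; the point is only the control flow: a running cascade score is compared against each level's threshold, and a window that fails any level is rejected without evaluating later stages.

```python
def cascade_detect(features, stages):
    """Each stage is (classifier, cascade_threshold). The accumulated
    cascade score must clear every level's threshold for the image
    (or window) to be judged as containing the object."""
    score = 0.0
    for clf, threshold in stages:
        score += clf(features)      # per-level classification score
        if score < threshold:       # compare cascade score with the
            return False, score     # level's cascade threshold: reject early
    return True, score

# Two toy stages reading one feature each (hypothetical values):
stages = [(lambda f: f[0], 0.5), (lambda f: f[1], 1.5)]
print(cascade_detect([1.0, 1.0], stages))  # (True, 2.0)
print(cascade_detect([0.2, 1.0], stages))  # (False, 0.2)
```

Early rejection is the entire appeal of cascades: most windows in an image contain nothing, so most of the compute is spent only on the cheap first stages.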
- Patent number: 11393248
  Abstract: Disclosed are a data detection method and device, a computer equipment, and a storage medium. The method includes: obtaining a designated identification picture including a human face; correcting the designated identification picture to be placed in a preset standard posture to obtain an intermediate picture; inputting the intermediate picture into a preset face feature point detection model to obtain multiple face feature points; calculating a cluster center position of the face feature points, and generating a minimum bounding rectangle of the face feature points; retrieving a standard identification picture from a preset database; scaling the standard identification picture in proportion to obtain a scaled picture; overlapping a reference center position in the scaled picture and a cluster center position in the intermediate picture, so as to obtain an overlapping part in the intermediate picture; and marking the overlapping part as an identification body of the designated identification picture.
  Type: Grant
  Filed: June 29, 2020
  Date of Patent: July 19, 2022
  Assignee: PING AN TECHNOLOGY (SHENZHEN) CO., LTD.
  Inventor: Jinlun Huang
- Patent number: 11392347
  Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for an audio messaging interface for a messaging platform. One of the methods includes receiving, by a first client on a first user device, a request to record an audio message, wherein the first client is configured to provide a user interface for the platform for a user using the first user device who is logged in to a user account on the platform; recording audio through a microphone of the first user device; generating a platform message by (i) generating a video file that includes the recorded audio as an audio portion of the video file and programmatically generated minimal video content as a video portion of the video file, and (ii) including the video file in the platform message; and posting, by the first client, the platform message to the platform, in response to a post request.
  Type: Grant
  Filed: June 17, 2020
  Date of Patent: July 19, 2022
  Assignee: Twitter, Inc.
  Inventors: Richard Plom, Reed Martin, Max Rose
- Patent number: 11393186
  Abstract: The present disclosure provides a detection apparatus and method, and an image processing apparatus and system. The detection apparatus extracts features from an image, detects objects in the image based on the extracted features, and detects key points of the detected objects based on the extracted features, the detected objects, and a pre-obtained key point set. According to the present disclosure, the overall detection speed is not influenced by the number of objects in the image to be detected while the objects and their key points are detected, so as to better meet the timeliness and practicality requirements of actual computer vision tasks.
  Type: Grant
  Filed: February 24, 2020
  Date of Patent: July 19, 2022
  Assignee: CANON KABUSHIKI KAISHA
  Inventors: Yaohai Huang, Zhiyuan Zhang
-
Patent number: 11386699
Abstract: An image processing method, an apparatus, a storage medium, and an electronic device are provided. The image processing method comprises: identifying a human face area in a target image (101); determining a local area to be processed from the human face area on the basis of a trained convolutional neural network model (102); obtaining posture information of a human face in the target image (103); selecting a target sample human face image from a human face image database according to the posture information (104); and correcting the local area according to the target sample human face image (105).
Type: Grant
Filed: June 12, 2020
Date of Patent: July 12, 2022
Assignee: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., LTD.
Inventors: Yan Chen, Yaoyong Liu
-
Patent number: 11386710
Abstract: The present invention is directed to: positioning a plurality of eye feature points in a target image to determine position coordinates of the plurality of eye feature points; normalizing the position coordinates of the plurality of eye feature points to obtain normalized position feature data; and determining an eye state in the target image based on the position feature data.
Type: Grant
Filed: November 30, 2018
Date of Patent: July 12, 2022
Assignee: BOE TECHNOLOGY GROUP CO., LTD.
Inventor: Chu Xu
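The normalization step can be sketched as translating the eye landmarks to their centroid and dividing by their largest extent, which makes the position features invariant to face location and scale (the exact normalization is an assumption here; the abstract does not specify it):

```python
def normalize_eye_points(points):
    """Translate eye landmarks to their centroid and divide by the
    largest extent, yielding normalized position feature data."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / scale, (y - cy) / scale) for x, y in points]

# Four landmarks around an eye: two corners, upper and lower lid.
eye = [(100, 50), (140, 50), (120, 45), (120, 55)]
print(normalize_eye_points(eye))
```

An open/closed eye state could then be read from, for example, the normalized lid separation; the abstract leaves the classifier unspecified.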
-
Patent number: 11388187
Abstract: A method of digital signal feature extraction comprises steps of: (a) segmenting samples of the digital signal to form a set of groupings each comprising a subset of the samples, with each grouping having endpoints spaced apart by a current grouping size; (b) applying an operator, which is associated with the desired feature to be extracted, to the subset of the samples of each grouping to derive a representative value therefor corresponding to the grouping size; and (c) repeating step (a), but based on a different grouping size, and repeating step (b) on the set of groupings formed based on the different grouping size, with the operator being adapted to correspond to the different grouping size. The set of groupings formed in step (a) collectively includes all of the samples of the signal. One endpoint of at least one grouping is intermediate the endpoints of another one of the groupings.
Type: Grant
Filed: May 31, 2019
Date of Patent: July 12, 2022
Assignee: University of Manitoba
Inventors: Jesus David Terrazas Gonzalez, Witold Kinsner
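Steps (a) through (c) can be sketched as a loop over grouping sizes, with a stride smaller than the grouping size so that one grouping's endpoint falls between another's, as the last sentence requires (the half-size stride and operator choice are assumptions for illustration):

```python
def multiscale_features(samples, grouping_sizes, operator):
    """For each grouping size, apply the operator to overlapping
    groupings of that size; the half-size stride makes a grouping's
    endpoint fall between the endpoints of its neighbor, while the
    groupings still collectively cover all samples."""
    features = {}
    for size in grouping_sizes:
        step = max(1, size // 2)
        groups = [samples[i:i + size]
                  for i in range(0, len(samples) - size + 1, step)]
        features[size] = [operator(g) for g in groups]
    return features

signal = [1, 2, 3, 4, 5, 6, 7, 8]
print(multiscale_features(signal, [2, 4], max))
```

Repeating the loop with a different operator per size corresponds to "adapting" the operator to each grouping size.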
-
Patent number: 11380080
Abstract: An object recognition ingestion system is presented. The object ingestion system captures image data of objects, possibly in an uncontrolled setting. The image data is analyzed to determine if one or more a priori known canonical shape objects match the object represented in the image data. The canonical shape object also includes one or more reference PoVs indicating perspectives from which to analyze objects having the corresponding shape. An object ingestion engine combines the canonical shape object along with the image data to create a model of the object. The engine generates a desirable set of model PoVs from the reference PoVs, and then generates recognition descriptors from each of the model PoVs. The descriptors, image data, model PoVs, or other contextually relevant information are combined into key frame bundles having sufficient information to allow other computing devices to recognize the object at a later time.
Type: Grant
Filed: September 30, 2020
Date of Patent: July 5, 2022
Assignee: NANT HOLDINGS IP, LLC
Inventors: Kamil Wnuk, David McKinnon, Jeremi Sudol, Bing Song, Matheen Siddiqui
-
Patent number: 11379650
Abstract: The present disclosure provides systems and methods for displaying and formatting text on an electronic display. A gesture input may be received via a gesture input device associated with the electronic display. For instance, a touchscreen may receive a touch gesture input. Each of a plurality of gesture inputs may be associated with a formatting rule and/or a text-component for selecting a portion of displayed text. Selected text may be formatted according to the formatting rule associated with the received gesture input. The formatted text may be displayed on the electronic display. A data store may associate each of the plurality of gesture inputs with a formatting rule that can be applied to selected text. Alternatively, a data store may associate each of the plurality of gesture inputs with a formatting rule and a text-component that defines to which component of text the formatting rule should be applied.
Type: Grant
Filed: November 22, 2019
Date of Patent: July 5, 2022
Assignee: WETRANSFER B.V.
Inventors: Julian Walker, Ian Curry
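The data store described here maps each gesture to a text-component and a formatting rule. A minimal sketch (gesture names and rules are illustrative assumptions, not the patented set):

```python
# Hypothetical data store: gesture -> (text-component, formatting rule).
GESTURE_RULES = {
    "double_tap":  ("word",     lambda t: t.upper()),
    "swipe_right": ("sentence", lambda t: "*" + t + "*"),
}

def apply_gesture(gesture, selected_text):
    """Look up the rule for the received gesture and format the
    selected text accordingly."""
    component, rule = GESTURE_RULES[gesture]
    return component, rule(selected_text)

print(apply_gesture("double_tap", "hello"))   # ('word', 'HELLO')
```

The "alternatively" in the abstract corresponds to whether the text-component is part of the stored pair or selected separately.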
-
Patent number: 11379979
Abstract: A computer system includes an input configured to receive a first image of medication located in a receptacle, memory, and a processor configured to execute instructions including creating a second image based on the first image, dividing pixels of the second image into first and second subsets, and scanning the second image along a first axis to count, for each point along the first axis, a number of pixels in the first subset along a line perpendicular to the first axis that intersects the first axis at the point. The instructions also include estimating positions of first and second edges of the receptacle along the first axis based on the counts of the pixels, defining an opening of the receptacle based on the estimated positions of the first and second edges, and outputting a processed image that indicates areas of the image that are outside of the defined opening.
Type: Grant
Filed: August 20, 2020
Date of Patent: July 5, 2022
Assignee: Express Scripts Strategic Development, Inc.
Inventors: Christopher R. Markson, Pritesh J. Shah, Christopher G. Lehmuth
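The axis scan is a per-column pixel count over a binarized image, with the receptacle edges read off from where the counts rise. A sketch under the assumption that the "first subset" is pixels of value 1 and the edge test is a simple threshold:

```python
def column_counts(binary_image):
    """For each point x along the first axis, count the pixels in the
    first subset (value 1) along the perpendicular line (the column)."""
    height, width = len(binary_image), len(binary_image[0])
    return [sum(binary_image[y][x] for y in range(height))
            for x in range(width)]

def estimate_edges(counts, threshold):
    """Take the first and last positions whose count reaches the
    threshold as the receptacle's two edges along the axis."""
    hits = [x for x, c in enumerate(counts) if c >= threshold]
    return (hits[0], hits[-1]) if hits else None

img = [
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
]
print(column_counts(img))                      # [0, 3, 2, 3, 0]
print(estimate_edges(column_counts(img), 2))   # (1, 3)
```

Everything outside the interval between the two estimated edges would then be marked as outside the receptacle opening.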
-
Patent number: 11367198
Abstract: Systems, methods and apparatuses for tracking at least a portion of a body by fitting data points received from a depth sensor and/or other sensors and/or "markers" as described herein to a body model. For example, in some embodiments, certain of such data points are identified as "super points," and apportioned greater weight as compared to other points. Such super points can be obtained from objects attached to the body, including, but not limited to, active markers that provide a detectable signal, or a passive object, including, without limitation, headgear or a mask (for example for VR (virtual reality)), or a smart watch. Such super points may also be obtained from specific data points that are matched to the model, such as data points that are matched to vertices that correspond to joints in the model.
Type: Grant
Filed: July 28, 2019
Date of Patent: June 21, 2022
Assignee: MindMaze Holding SA
Inventors: Tej Tadi, Nicolas Fremaux, Jose Rubio, Jonas Ostlund, Max Jeanneret
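The "super point" idea, giving certain data points greater weight in the fit, can be illustrated with a weighted centroid (a deliberately simplified stand-in for fitting a full body model):

```python
def weighted_centroid(points, weights):
    """Weighted mean of tracked data points; super points (e.g. an
    active marker or a VR headset) carry larger weights than
    ordinary depth-sensor samples, so they pull the fit harder."""
    total = sum(weights)
    x = sum(w * p[0] for p, w in zip(points, weights)) / total
    y = sum(w * p[1] for p, w in zip(points, weights)) / total
    return (x, y)

depth_points = [(0.0, 0.0), (10.0, 0.0)]
weights = [1.0, 3.0]   # second point is a super point
print(weighted_centroid(depth_points, weights))   # (7.5, 0.0)
```

In a real body-model fit the same weights would scale each point's residual in the optimization rather than a simple mean.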
-
Patent number: 11367311
Abstract: A face recognition method includes generating a target face recognition model, performing face detection on an image to obtain a first face image, and performing face recognition on the first face image in the image by using the target face recognition model to obtain a first face feature. Generating the target face recognition model includes determining a training sample, the training sample comprising a training face image calibrated with identity information, and training a general face recognition model by using the training sample, and updating a parameter of the general face recognition model based on a training target to obtain the target face recognition model, the training target being a prediction result of the general face recognition model predicting identity information of a new training face image in the training sample to be the calibrated identity information of the training face image in the training sample.
Type: Grant
Filed: June 12, 2020
Date of Patent: June 21, 2022
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Lianzhi Tan, Shichao Liu, Zhaoyong Zhang, Yiwei Pan, Wu Xia
-
Patent number: 11367263
Abstract: Devices and techniques are generally described for image-guided three dimensional (3D) modeling. In various examples, a first two-dimensional (2D) image representing an object may be received. A first three-dimensional (3D) model corresponding to the first 2D image may be determined from among a plurality of 3D models. A first selection of a first portion of the first 2D image may be received. A second selection of a second portion of the first 3D model corresponding to the portion of the first 2D image may be received. At least one transformation of the first 3D model may be determined based at least in part on differences between a geometric feature of the first portion of the first 2D image and a geometric feature of the second portion of the first 3D model. A modified 3D model may be generated by applying the at least one transformation to the first 3D model.
Type: Grant
Filed: June 24, 2020
Date of Patent: June 21, 2022
Assignee: AMAZON TECHNOLOGIES, INC.
Inventors: Frederic Laurent Pascal Devernay, Thomas Lund Dideriksen
-
Patent number: 11367265
Abstract: In variants, the method for automatic debris detection includes: determining a region image; optionally determining a parcel representation for the region image; generating a debris representation using the region image; generating a debris score based on the debris representation; and optionally monitoring the debris score over time.
Type: Grant
Filed: October 15, 2021
Date of Patent: June 21, 2022
Assignee: Cape Analytics, Inc.
Inventors: Giacomo Vianello, Robert Davis, John K. Clark, Jonathan M. Fisher
-
Patent number: 11367187
Abstract: A method is provided for detecting respective potential presence of respective different antinuclear antibody fluorescence pattern types on a biological cell substrate including human epithelioma cells, including: acquiring a first image which represents staining of the cell substrate by a first fluorescent dye, and acquiring a second image which represents staining of the cell substrate by a second fluorescent dye; detecting, on the basis of the first image, respective image segments which in each case represent at least one mitotic cell; selecting, on the basis of the detected image segments, subimages of the first image and subimages corresponding thereto of the second image; and detecting, on the basis of the selected subimages of the first image and of the selected subimages of the second image, respective actual presence of respective cellular fluorescence pattern types by means of a convolutional neural network.
Type: Grant
Filed: July 2, 2020
Date of Patent: June 21, 2022
Assignee: EUROIMMUN Medizinische Labordiagnostika AG
Inventors: Jens Krauth, Christopher Krause, Joern Voigt, Melanie Hahn, Christian Marzahl, Stefan Gerlach
-
Patent number: 11361532
Abstract: Some implementations of the present disclosure are directed to a computer-implemented method that includes: detecting a plurality of characters from a photo of an object; generating a set of character-based features based on the plurality of characters; matching the set of character-based features with a template of feature sets obtained from known objects; based on a matching set of character-based features, establishing a matching transformation between the object in the photo and the template of feature sets; and projecting the matching transformation to the photo such that the object is segmented from the photo.
Type: Grant
Filed: April 30, 2020
Date of Patent: June 14, 2022
Assignee: Idemia Identity & Security USA LLC
Inventors: Brian K. Martin, Joseph Mayer, Rein-Lien Hsu
-
Patent number: 11361589
Abstract: An image recognition method includes: performing image detection on an image to be recognized to obtain at least one face detection result, at least one operational part detection result, and at least one trunk detection result, each face detection result including one face bounding box, each operational part detection result including one operational part bounding box, and each trunk detection result including one trunk bounding box; respectively combining each of the at least one trunk detection result with each face detection result, to obtain at least one first result combination; respectively combining each trunk detection result with each operational part detection result, to obtain at least one second result combination; and associating the at least one first result combination with the at least one second result combination, to obtain an association result.
Type: Grant
Filed: October 14, 2020
Date of Patent: June 14, 2022
Assignee: SENSETIME INTERNATIONAL PTE. LTD.
Inventors: Mingyuan Zhang, Jinyi Wu, Haiyu Zhao
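The combination and association steps can be sketched as Cartesian products of the detection results, linked through the shared trunk box (the box format and the shared-trunk association criterion are assumptions for illustration):

```python
from itertools import product

def combine_and_associate(trunks, faces, parts):
    """Form every (trunk, face) first combination and every
    (trunk, operational part) second combination, then associate
    combinations that share the same trunk bounding box."""
    first = list(product(trunks, faces))    # first result combinations
    second = list(product(trunks, parts))   # second result combinations
    return [(f, s) for f in first for s in second if f[0] == s[0]]

# Boxes as (x, y, w, h): one trunk, one face, two operational parts.
trunks = [(0, 0, 50, 100)]
faces = [(10, 0, 30, 20)]
parts = [(0, 60, 15, 15), (40, 60, 15, 15)]
print(combine_and_associate(trunks, faces, parts))
```

A production system would additionally score each combination (e.g. by box overlap or a learned affinity) rather than pairing exhaustively.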
-
Patent number: 11361540
Abstract: A method and apparatus for predicting an object of interest of a user receives an input image of a visible region of the user and gaze information including a gaze sequence of the user, generates weight filters for a per-frame segmentation image by analyzing a frame of the input image for input characteristics of the per-frame segmentation image and the gaze information, and predicts an object of interest of the user by integrating the weight filters and applying the integrated weight filter to the per-frame segmentation image.
Type: Grant
Filed: July 17, 2020
Date of Patent: June 14, 2022
Assignees: Samsung Electronics Co., Ltd., Korea Advanced Institute of Science and Technology
Inventors: Seungin Park, Hyong Euk Lee, Sung Geun Ahn, Gee Hyuk Lee, Dae Hwa Kim, Keun Woo Park