Local Or Regional Features Patents (Class 382/195)
  • Patent number: 11430205
    Abstract: A method and an apparatus for detecting a salient object in an image include separately performing convolution processing corresponding to at least two convolutional layers on a to-be-processed image to obtain at least two first feature maps of the image; performing superposition processing on the first feature maps included in a superposition set, among at least two sets, to obtain at least two second feature maps, where the at least two sets are in one-to-one correspondence with the second feature maps and the resolution of a first feature map in a superposition set is lower than or equal to the resolution of the corresponding second feature map; and splicing the at least two second feature maps to obtain a saliency map.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 30, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Qibin Hou, Mingming Cheng, Wei Bai, Xunyi Zhou
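    Illustrative sketch (Python/PyTorch), not the patented implementation: a toy three-layer network that builds "first feature maps" at several resolutions, forms "second feature maps" by superposing upsampled lower-resolution maps onto higher-resolution ones, and splices (concatenates) them into a saliency map. The layer sizes, bilinear upsampling, and 1x1 fusion head are assumptions.

      # Illustrative sketch only; layer sizes and the upsampling/fusion choices are assumptions.
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class ToySaliencyNet(nn.Module):
          def __init__(self):
              super().__init__()
              # Three convolutional layers producing "first feature maps" at decreasing resolutions.
              self.conv1 = nn.Conv2d(3, 16, 3, stride=1, padding=1)
              self.conv2 = nn.Conv2d(16, 16, 3, stride=2, padding=1)
              self.conv3 = nn.Conv2d(16, 16, 3, stride=2, padding=1)
              self.head = nn.Conv2d(32, 1, 1)  # fuses the spliced "second feature maps"

          def forward(self, x):
              f1 = F.relu(self.conv1(x))    # full resolution
              f2 = F.relu(self.conv2(f1))   # 1/2 resolution
              f3 = F.relu(self.conv3(f2))   # 1/4 resolution
              # "Superposition": upsample lower-resolution maps and add them to higher-resolution ones.
              s1 = f1 + F.interpolate(f3, size=f1.shape[2:], mode="bilinear", align_corners=False)
              s2 = f2 + F.interpolate(f3, size=f2.shape[2:], mode="bilinear", align_corners=False)
              # "Splicing": bring the second feature maps to a common size and concatenate them.
              s2_up = F.interpolate(s2, size=s1.shape[2:], mode="bilinear", align_corners=False)
              spliced = torch.cat([s1, s2_up], dim=1)
              return torch.sigmoid(self.head(spliced))  # saliency map in [0, 1]

      saliency = ToySaliencyNet()(torch.rand(1, 3, 64, 64))  # shape: (1, 1, 64, 64)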
  • Patent number: 11417098
    Abstract: An apparatus including a location device and a processor. The location device may be configured to determine location coordinates of the apparatus. The processor may be configured to receive video frames captured by a capture device, perform video analysis on the video frames to detect objects in the video frames and extract metadata corresponding to the objects detected in the video frames, correlate the metadata with the location coordinates, determine a distance from the apparatus to the objects in the video frames and calculate an absolute location of the objects in response to the distance and the location coordinates. The distance may be determined by comparing a size of the objects detected in the video frames with a known size of the objects. The absolute location for the objects in the video frames may be added to the metadata.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: August 16, 2022
    Assignee: WAYLENS, INC.
    Inventor: Jeffery R. Campbell
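    Illustrative sketch (Python), not the patented implementation: the two geometric steps the abstract relies on, estimating distance from the ratio of an object's known physical size to its apparent pixel size (a pinhole-camera assumption) and offsetting the device's location coordinates by that distance along a bearing (a local flat-earth assumption). The focal length, bearing, and example values are hypothetical.

      # Illustrative sketch; pinhole-camera and local flat-earth approximations are assumptions.
      import math

      def estimate_distance_m(known_height_m, pixel_height, focal_length_px):
          """Pinhole model: distance = (real size * focal length in pixels) / size in pixels."""
          return known_height_m * focal_length_px / pixel_height

      def absolute_location(lat_deg, lon_deg, bearing_deg, distance_m):
          """Offset (lat, lon) by distance_m along bearing_deg using a small-distance approximation."""
          dlat = (distance_m * math.cos(math.radians(bearing_deg))) / 111_320.0
          dlon = (distance_m * math.sin(math.radians(bearing_deg))) / (
              111_320.0 * math.cos(math.radians(lat_deg)))
          return lat_deg + dlat, lon_deg + dlon

      # Example: a vehicle of known height ~1.5 m appears 60 px tall with a 1000 px focal length.
      d = estimate_distance_m(1.5, 60, 1000)                 # 25.0 m
      print(absolute_location(42.3601, -71.0589, 90.0, d))   # object ~25 m due east of the device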
  • Patent number: 11417173
    Abstract: The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. The method includes: obtaining video streams of a game tabletop; detecting target objects in a plurality of image frames included in the video streams; determining a current game stage based on the target objects; and determining game detecting results according to the target objects and the determined game stage.
    Type: Grant
    Filed: December 31, 2020
    Date of Patent: August 16, 2022
    Assignee: SENSETIME INTERNATIONAL PTE. LTD.
    Inventors: Yao Zhang, Wenbin Zhang, Shuai Zhang
  • Patent number: 11392625
    Abstract: A process for locating real estate parcels for a user comprises accessing a library of parceled real estate image data to identify objects and features in a plurality of parcels identified by the user as having a feature of interest. A predictive model is constructed and applied to a geographic region selected by the user to generate a customized output of real estate parcels predicted to have the feature of interest.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: July 19, 2022
    Assignee: OmniEarth, Inc.
    Inventors: Shadrian Strong, Lars Dyrud, David Murr
  • Patent number: 11393088
    Abstract: A method of recognizing animals includes recognizing a plurality of body parts of a plurality of animals based on at least one image of the animals, in which the plurality of body parts include a plurality of types of body parts, including determining first estimated positions of the recognized body parts in the at least one image. The method includes estimating a plurality of first associations of body parts based on the at least one image of the animals, each first association of body parts associating a body part of an animal with at least one other body part of the same animal, including determining relative positions of the body parts in each estimated first association of body parts in the at least one image.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: July 19, 2022
    Assignee: NUtech Ventures
    Inventors: Eric T. Psota, Lance C. Perez, Ty Schmidt, Benny Mote
  • Patent number: 11381730
    Abstract: Methods, systems, and devices for feature-based image autofocus are described. A device may perform an autofocus procedure that includes determining a set of features based on determining a feature region associated with an image. The device may generate a feature weight map based on the set of features and estimate a direction of a target feature in the feature region. The device may generate a direction weight map that corresponds to the feature region. The device may determine a focus position of the image based on the generated feature weight map and the estimated direction of the target feature and perform an autofocus operation on the determined focus position of the image. The device may calculate a focus value based on the feature weight map, and the focus position of the image may be based on the focus value.
    Type: Grant
    Filed: June 25, 2020
    Date of Patent: July 5, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Wen-Chun Feng, Hui Shan Kao
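    Illustrative sketch (Python), not the patented implementation: a focus value computed as local contrast weighted by a feature weight map, so that sharpness inside the feature region dominates the autofocus decision. The gradient-based sharpness measure and the binary weight map are assumptions.

      # Illustrative sketch; the sharpness measure and weighting are assumptions.
      import numpy as np

      def focus_value(gray, feature_weight_map):
          """Weighted sharpness: emphasize gradients inside the weighted feature region."""
          gy, gx = np.gradient(gray.astype(float))
          sharpness = gx**2 + gy**2    # simple local-contrast proxy
          return float(np.sum(sharpness * feature_weight_map) / (np.sum(feature_weight_map) + 1e-9))

      # A hypothetical sweep over lens positions would keep the position with the largest focus value.
      rng = np.random.default_rng(0)
      frame = rng.random((120, 160))
      weights = np.zeros_like(frame)
      weights[40:80, 60:100] = 1.0    # feature region weight map
      print(focus_value(frame, weights))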
  • Patent number: 11368629
    Abstract: An image capturing apparatus includes: an image capturing unit that includes an imaging device and outputs an image signal obtained by image capturing of a subject by the imaging device through an image capturing optical system; a control unit that, in a case of controlling an exposure for each of three or more division regions obtained by dividing an image represented by the image signal and in a case of dividing the image into a plurality of segment regions that are different from the division regions and among which a segment region extends across a boundary between some of the division regions, controls an exposure of the segment region in accordance with information about at least one division region among the some of the division regions over which the segment region extends; and a display unit that displays the image for which the exposure is controlled by the control unit.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: June 21, 2022
    Assignee: FUJIFILM Corporation
    Inventors: Tomonori Masuda, Masahiko Sugimoto, Kosuke Irie
  • Patent number: 11367272
    Abstract: A target detection method and apparatus, in which the method includes: obtaining a target candidate region in a to-be-detected image; determining at least two part candidate regions from the target candidate region by using an image segmentation network, where each part candidate region corresponds to one part of a to-be-detected target; extracting, from the to-be-detected image, local image features corresponding to the part candidate regions; learning the local image features of the part candidate regions by using a bidirectional long short-term memory (LSTM) network, to obtain a part relationship feature that describes a relationship between the part candidate regions; and detecting the to-be-detected target in the image based on the part relationship feature. As a result, image data processing precision in target detection can be improved, application scenarios of target detection can be diversified, and target detection accuracy can be improved.
    Type: Grant
    Filed: April 21, 2020
    Date of Patent: June 21, 2022
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Yi Yang, Yuhao Jiang, Maolin Chen, Shuang Yang
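    Illustrative sketch (Python/PyTorch), not the patented implementation: feeding per-part local features to a bidirectional LSTM to obtain a part relationship feature. The part count, feature sizes, and the averaging of the LSTM outputs are assumptions.

      # Illustrative sketch; feature sizes and the pooling step are assumptions.
      import torch
      import torch.nn as nn

      num_parts, feat_dim, hidden = 4, 128, 64
      part_features = torch.rand(1, num_parts, feat_dim)   # local features for 4 part candidate regions

      bilstm = nn.LSTM(input_size=feat_dim, hidden_size=hidden,
                       batch_first=True, bidirectional=True)
      outputs, _ = bilstm(part_features)                    # (1, num_parts, 2 * hidden)

      # One simple way to obtain a single "part relationship feature": average over parts.
      part_relationship_feature = outputs.mean(dim=1)       # (1, 128)
      print(part_relationship_feature.shape)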
  • Patent number: 11361192
    Abstract: Embodiments of the present disclosure provide an image classification method for a computer device. The method includes obtaining an original image and a category of an object included in the original image; adjusting a display parameter of the original image to satisfy a value condition to obtain an adjusted original image; and transforming the display parameter of the original image according to a distribution condition that distribution of the display parameter needs to satisfy, to obtain a transformed image. The method also includes training a neural network model based on the category of the object and a training set constructed by the adjusted original image and the transformed image; and determining a category of an object included in a to-be-predicted image based on the trained neural network model.
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: June 14, 2022
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Pai Peng, Xiaowei Guo, Kailin Wu
  • Patent number: 11363190
    Abstract: An image capturing method can be applied to an application, and include: determining a dimension adjustment approach set for a preset object in an acquired image to be processed after a dimension adjustment function of the application is turned on; and obtaining and displaying an adjusted image by adjusting a dimension of the preset object in the image to be processed according to the dimension adjustment approach, so as to support real-time preview of the adjusted image. As the dimension of the preset object in the image to be processed is adjusted in the image capturing process, the image capturing experience of the user is improved, and satisfying photos or videos can be obtained quickly.
    Type: Grant
    Filed: April 9, 2020
    Date of Patent: June 14, 2022
    Assignee: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
    Inventors: Pan Yu, Yuelin Wu
  • Patent number: 11361506
    Abstract: Methods and systems for mapping images from a first dynamic range to a second dynamic range using a set of reference color-graded images and neural networks are described. Given a first and a second image representing the same scene but at a different dynamic range, a neural network (NN) model is selected from a variety of NN models to determine an output image which approximates the second image based on the first image and the second image. The parameters of the selected NN model are derived according to an optimizing criterion, the first image and the second image, wherein the parameters include node weights and/or node biases for nodes in the layers of the selected NN model. Example HDR to SDR mappings using global-mapping and local-mapping representations are provided.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: June 14, 2022
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Guan-Ming Su, Qing Song
  • Patent number: 11328178
    Abstract: A computer-implemented method of associating an annotation with an object in an image, comprising generating a dictionary including first vectors that associate terms of the annotation with concepts, classifying the image to generate a second vector based on classified objects and associated confidence scores for the classified objects, selecting a term of the terms associated with one of the first vectors having a shortest determined distance to the second vector, identifying a non-salient region of the image, and rendering the annotation associated with the selected term at the non-salient region.
    Type: Grant
    Filed: March 12, 2020
    Date of Patent: May 10, 2022
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: David Ayman Shamma, Lyndon Kennedy, Anthony Dunnigan
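    Illustrative sketch (Python), not the patented implementation: the term-selection step, where each annotation term has a concept vector (the "first vectors"), the classifier output forms the "second vector", and the term with the shortest distance is chosen. The example dictionary and the Euclidean distance are assumptions.

      # Illustrative sketch; the example dictionary and Euclidean distance are assumptions.
      import numpy as np

      term_vectors = {                      # "first vectors" associating terms with concepts
          "beach":  np.array([0.9, 0.1, 0.0]),
          "forest": np.array([0.1, 0.8, 0.1]),
          "city":   np.array([0.0, 0.2, 0.9]),
      }
      classifier_vector = np.array([0.7, 0.2, 0.1])   # "second vector" from classified objects/confidences

      selected_term = min(term_vectors,
                          key=lambda t: np.linalg.norm(term_vectors[t] - classifier_vector))
      print(selected_term)   # -> "beach"; this term would then be rendered in a non-salient region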
  • Patent number: 11321832
    Abstract: An image analysis device may obtain target image data representing a target image which is an analysis target, specify (m×n) partial images sequentially by scanning the target image data, wherein the (m×n) partial images are constituted of m partial images aligned along a first direction and n partial images aligned along a second direction, generate first probability data by using the (m×n) partial images and the first object data in the memory, and reduce the target image data so as to generate reduced image data. The image analysis device may execute image analysis according to a convolutional neural network by using the reduced image data as K pieces of channel data corresponding to K channels and using the first probability data as one piece of channel data corresponding to one channel, and output a result of the image analysis.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: May 3, 2022
    Inventor: Toru Nagasaka
  • Patent number: 11308657
    Abstract: Systems and methods are disclosed for training an autoencoder. A training data set comprising images of different faces is generated. A first autoencoder configuration is generated, comprising a first encoder and a first decoder. The first autoencoder configuration is trained using dataset images, wherein weights associated with the first encoder and weights associated with the first decoder are modified. A second autoencoder configuration is generated comprising the first encoder and a second decoder. The second decoder is trained using a plurality of images of a first target face; the first encoder weights are substantially maintained, and weights associated with the second decoder are modified. An autoencoder comprising the trained first encoder and the trained second decoder is used to generate an output from a source image of a first face having a facial expression, where the facial expression of the first face from the source image is applied to the first target face.
    Type: Grant
    Filed: August 11, 2021
    Date of Patent: April 19, 2022
    Assignee: Neon Evolution Inc.
    Inventors: Cody Gustave Berlin, Carl Davis Bogan, III, Kenneth Michael Lande, Anders Øland, Davide Toniolo, Alessia Bertugli, Dario Bertazioli, Brian Sung Lee
  • Patent number: 11304777
    Abstract: The present invention relates to a tool system comprising a handheld implement having a passive vectorized tracking marker permanently integrated with the implement at a predetermined location on the implement in a predetermined orientation with respect to the implement; and a database comprising geometric information describing at least one of a rotationally asymmetric shape of the tracking marker and a rotationally asymmetric pattern disposed on the tracking marker. The system may further comprise: a tracker configured for obtaining image information about the tracking marker when the tracking marker is in a field of view of the tracker; and a controller having a processor and memory, the controller in communication with the database and the tracker, the processor programmable for receiving and processing image information from the tracker; accessing the database to retrieve the geometric information; and comparing the image information with the geometric information.
    Type: Grant
    Filed: May 12, 2019
    Date of Patent: April 19, 2022
    Assignee: Navigate Surgical Technologies, Inc.
    Inventors: Ehud (Udi) Daon, Martin Beckett
  • Patent number: 11302031
    Abstract: The present disclosure relates to an indoor positioning method for operating an indoor positioning system and an indoor positioning apparatus by executing an artificial intelligence (AI) algorithm and/or a machine learning algorithm in a 5G environment connected for the Internet of Things. The indoor positioning method according to an embodiment of the present disclosure includes receiving map data and map information data of an indoor map in response to a presence of the indoor map of an indoor space, acquiring an image of the indoor space at a device camera, comparing image information of the indoor map with the acquired image information of the indoor space based on the map data and the map information data of the indoor space, and performing indoor localizing of the indoor space based on a result of the comparing.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: April 12, 2022
    Assignee: LG Electronics Inc.
    Inventors: Dong Heon Shin, Geong Hwan Yu, Hyun Soo Kim, Hyun Sang Park, Tae Kwon Kang
  • Patent number: 11301687
    Abstract: A pedestrian re-identification method includes: obtaining a target video containing a target pedestrian and at least one candidate video; encoding each target video segment in the target video and each candidate video segment in the at least one candidate segment separately; determining a score of similarity between the each target video segment and the each candidate video segment according to encoding results, the score of similarity being used for representing a degree of similarity between pedestrian features in the target video segment and the candidate video segment; and performing pedestrian re-identification on the at least one candidate video according to the score of similarity.
    Type: Grant
    Filed: December 25, 2019
    Date of Patent: April 12, 2022
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Dapeng Chen, Hongsheng Li, Tong Xiao, Shuai Yi, Xiaogang Wang
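    Illustrative sketch (Python), not the patented implementation: scoring similarity between encoded target and candidate video segments and aggregating the scores per candidate video. Fixed-length embeddings, cosine similarity, and the max-then-mean aggregation are assumptions.

      # Illustrative sketch; embeddings and cosine similarity are assumptions.
      import numpy as np

      def cosine(a, b):
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

      rng = np.random.default_rng(1)
      target_segments = rng.random((3, 256))       # encoded segments of the target video
      candidate_segments = rng.random((5, 256))    # encoded segments of one candidate video

      # Score each candidate segment against each target segment, then aggregate (here: max, then mean).
      scores = np.array([[cosine(t, c) for c in candidate_segments] for t in target_segments])
      video_score = scores.max(axis=1).mean()      # one similarity score for the candidate video
      print(round(video_score, 3))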
  • Patent number: 11294951
    Abstract: An information processing device includes a first obtaining unit that obtains, from an external information processing device, image information added to a captured image published for external access by the external information processing device under a preset obtainment condition, a first storage that stores the image information obtained by the obtaining unit, and a first controller that categorizes the image information stored in the first storage under a preset attribute condition. The image information includes at least positional information and time information each added to the captured image. The first controller categorizes the image information at least based on the positional information and the time information.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: April 5, 2022
    Assignees: MICWARE CO., LTD., KI PARTNERS INC.
    Inventors: Takashi Iwamoto, Makoto Ito, Sumito Yoshikawa
  • Patent number: 11281717
    Abstract: A framework for identifying prominent influencers/celebrities on social media webpages and customizing selection of thumbnails for presentation on social media webpages is provided. A most prominent influencer/celebrity associated with a social media webpage is determined. A presented thumbnail associated with each of a plurality of videos on a social media webpage is identified, each thumbnail including a picture/image of the most prominent influencer/celebrity. A feature vector is created for each identified presented thumbnail. Also identified is a plurality of potential thumbnails, each representative of a common video to be posted on the webpage and including an image of the identified most prominent influencer/celebrity. The feature vectors for the identified presented thumbnails and potential thumbnails are grouped into vector clusters. A centroid vector is identified for each vector cluster and a most prominent cluster is identified.
    Type: Grant
    Filed: September 23, 2019
    Date of Patent: March 22, 2022
    Assignee: ADOBE INC.
    Inventors: Sanjeev Tagra, Sachin Soni
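    Illustrative sketch (Python), not the patented implementation: grouping thumbnail feature vectors into clusters, taking the most populated cluster as most prominent, and choosing the potential thumbnail nearest its centroid. The use of scikit-learn KMeans, k=3, and the "largest cluster" rule are assumptions.

      # Illustrative sketch; KMeans, k=3, and the "largest cluster" rule are assumptions.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      presented = rng.random((20, 64))    # feature vectors of thumbnails already on the page
      potential = rng.random((6, 64))     # feature vectors of candidate thumbnails for a new video

      features = np.vstack([presented, potential])
      kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

      most_prominent = np.bincount(kmeans.labels_).argmax()   # most populated cluster
      centroid = kmeans.cluster_centers_[most_prominent]
      # Pick the potential thumbnail closest to that centroid.
      best = np.argmin(np.linalg.norm(potential - centroid, axis=1))
      print("choose potential thumbnail", int(best))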
  • Patent number: 11281899
    Abstract: Embodiments of the invention relate to a method for determining that an object in a sequence of images is a human. The method may include the steps of detecting an object in a first image from the sequence of images and assigning a first score to the object. The object is tracked to a second image from the sequence of images and a second score is assigned to the object in the second image. The second score is compared to a threshold that is inversely related to the first score and a determination that the object in the second image is a human is made based on the comparison.
    Type: Grant
    Filed: April 26, 2018
    Date of Patent: March 22, 2022
    Assignee: POINTGRAB LTD.
    Inventors: Ram Nathaniel, Moshe Nakash, Uri Zackhem
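    Illustrative sketch (Python), not the patented implementation: the scoring logic in which the threshold applied to the second score is inversely related to the first score, so a confident first detection lowers the bar for confirming a human in the second image. The specific relation threshold = k / first_score is an assumption.

      # Illustrative sketch; threshold = k / first_score is just one possible inverse relation.
      def is_human(first_score, second_score, k=0.25):
          threshold = k / max(first_score, 1e-6)   # higher first score -> lower threshold
          return second_score > threshold

      print(is_human(first_score=0.9, second_score=0.4))   # True: strong initial detection
      print(is_human(first_score=0.3, second_score=0.4))   # False: weak initial detection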
  • Patent number: 11270414
    Abstract: A method for generating a reduced-blur digital image representing a scene, the method being computer-implemented and comprising the following successive steps: i) providing at least two digital source images, a same element of the scene being represented in at least two source images, ii) selecting a reference image among the source images, iii) for at least one source image different from the reference image, and for at least one pixel of the reference image, a) defining a pattern in the reference image comprising pixels of the reference image, the element being represented in said pattern, b) constructing a map of coordinates that associates coordinates of the pattern in the reference image with the coordinates of the most similar pattern in the source image, c) optionally, filtering of the map of coordinates, and d) generating a corrected image by assigning to a pixel of the corrected image, the position of the pixel of the reference image and a color extracted from the source image point which position i
    Type: Grant
    Filed: August 29, 2019
    Date of Patent: March 8, 2022
    Assignee: INSTITUT MINES TELECOM
    Inventors: Cristian Felipe Ocampo, Yann Gousseau, Said Ladjal
  • Patent number: 11256956
    Abstract: Embodiments include systems and methods for keypoint detection in an image. In embodiments, a processor of a computing device may apply to an image a first neural network that has been trained to define and output a plurality of regions. The processor may apply to each of the plurality of regions a respective second neural network that has been trained to output a plurality of keypoints in each of the plurality of regions. The processor may apply to the plurality of keypoints a third neural network that has been trained to determine a correction for each of the plurality of keypoints to provide corrected keypoints suitable for the execution of an image processing function.
    Type: Grant
    Filed: December 2, 2019
    Date of Patent: February 22, 2022
    Assignee: Qualcomm Incorporated
    Inventors: Upal Mahbub, Rakesh Nattoji Rajaram, Vasudev Bhaskaran
  • Patent number: 11252304
    Abstract: A skin color image gamut weight detecting method and a device thereof are provided. The method includes: receiving an image including first color components and second color components; obtaining a skin color region, a skin color category, and a first gamut; obtaining first color component values and first cardinal numbers according to the first color components; obtaining second color component values and a plurality of second cardinal numbers according to the second color components; obtaining a second gamut and a weight center according to the skin color category, the first cardinal numbers, the second cardinal numbers, the first color component values, and the second color component values; obtaining a first weight area and a second weight area according to the first gamut and the second gamut; and obtaining a skin color gamut weight map according to the weight center, the first weight area, and the second weight area.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: February 15, 2022
    Assignee: REALTEK SEMICONDUCTOR CORP.
    Inventors: Teng-Hsiang Yu, Hiroaki Endo
  • Patent number: 11250246
    Abstract: An expression recognition device includes processing circuitry to acquire an image; extract a face area of a person from the acquired image and obtain a face image to which information of the face area is added; extract one or more face feature points on the basis of the face image; determine a face condition representing a state of a face in the face image depending on reliability of each of the extracted face feature points; determine a reference point for extraction of a feature amount used for expression recognition from among the extracted face feature points depending on the determined face condition; extract the feature amount on the basis of the determined reference point; recognize a facial expression of the person in the face image using the extracted feature amount; and output information related to a recognition result of the facial expression of the person in the face image.
    Type: Grant
    Filed: November 27, 2017
    Date of Patent: February 15, 2022
    Assignee: MITSUBISHI ELECTRIC CORPORATION
    Inventors: Shunya Osawa, Takahiro Otsuka
  • Patent number: 11250242
    Abstract: A user terminal according to an embodiment of the present invention includes a capturing device for capturing a face image of a user, and an eye tracking unit for, on the basis of a configured rule, acquiring, from the face image, a vector representing the direction that the face of the user is facing, and a pupil image of the user, and performing eye tracking of the user by inputting, in a configured deep learning model, the face image, the vector and the pupil image.
    Type: Grant
    Filed: April 19, 2018
    Date of Patent: February 15, 2022
    Assignee: VisualCamp Co., Ltd.
    Inventors: Yoon Chan Seok, Tae Hee Lee
  • Patent number: 11250292
    Abstract: Disclosed by the present disclosure are a method and apparatus for generating information. A specific embodiment of the method comprises: obtaining a first image and a second image; inputting the first image and the second image respectively into a pre-trained detection and recognition model, to obtain an annotated first image and an annotated second image, where an annotation comprises an image box surrounding a target object in the image, and the detection and recognition model is configured to represent the correspondence relationship between an image and an annotated image; and inputting the annotated first image and the annotated second image to a pre-trained matching model to obtain a matching degree between the annotated first image and the annotated second image, where the matching model is used to characterize a corresponding relationship between a pair of images and the matching degree between the images.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: February 15, 2022
    Assignees: Beijing Jingdong Shangke Information Technology Co., Ltd., Beijing Jingdong Century Trading Co., Ltd.
    Inventor: Lei Wang
  • Patent number: 11238296
    Abstract: The present disclosure discloses a sample acquisition method, a target detection model generation method, a target detection method, a computing device, and a computer readable medium. The sample acquisition method includes: adding a perturbation to a pre-marked sample original box in an original image to obtain a sample selection box, wherein an image framed by the sample original box contains a target; and extracting an image framed by the sample selection box as a sample. The technical solutions of the present disclosure can effectively increase the number of the samples that can be acquired in the original image, and adding a background to the samples can effectively improve the recognition accuracy of the trained target detection model.
    Type: Grant
    Filed: February 3, 2019
    Date of Patent: February 1, 2022
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventor: Xiaojun Tang
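    Illustrative sketch (Python), not the patented implementation: perturbing a pre-marked sample original box to obtain sample selection boxes that still frame the target plus some background. The +/-10% jitter and the clipping to image bounds are assumptions.

      # Illustrative sketch; the +/-10% jitter and clipping rules are assumptions.
      import numpy as np

      def perturb_box(box, img_w, img_h, rng, scale=0.10):
          """box = (x1, y1, x2, y2); jitter each edge by up to `scale` of the box size."""
          x1, y1, x2, y2 = box
          w, h = x2 - x1, y2 - y1
          jitter = rng.uniform(-scale, scale, size=4) * np.array([w, h, w, h])
          nx1, ny1, nx2, ny2 = np.array(box, dtype=float) + jitter
          return (max(0, nx1), max(0, ny1), min(img_w, nx2), min(img_h, ny2))

      rng = np.random.default_rng(0)
      original_box = (50, 40, 150, 120)
      samples = [perturb_box(original_box, 640, 480, rng) for _ in range(5)]
      for s in samples:
          print([round(v, 1) for v in s])   # each box could be cropped from the image as one sample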
  • Patent number: 11225028
    Abstract: A design with multiple instances of a three-dimensional article is printed by first defining a unit cell that includes a single instance of a three-dimensional article that repeats in the design along with its nearest neighbor elements in both a plane of a build plate of a target printer on which the design is to be printed and a plane orthogonal thereto. The design is represented in an output file of a design application and a slicer application then generates instructions to manufacture the design by the target printer. The instructions (g-code) include directions to print, for each of a specified number of layers, a number of instances of the unit cell that can be accommodated within a build envelope of the target printer per layer as determined by said slicer application. The design is printed by the printer according to the instructions from the slicer application.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: January 18, 2022
    Assignee: NEXA3D INC.
    Inventors: Izhar Medalsy, Itay Barel
  • Patent number: 11214196
    Abstract: A system, apparatus and method for enhancing a driver's field of view in a vehicle. Sensor data may be generated, via one or more first sensors, relating to a pitch angle of the vehicle and received in a processing apparatus, which further determines if the pitch angle of the vehicle exceeds a configured pitch angle threshold for a configured period of time. One or more control signals may be transmitted to activate one or more cameras configured to capture image data at least in a front area of the vehicle if the processing apparatus determines that the pitch angle exceeds the configured pitch angle threshold for the configured period of time. The captured image data may then be displayed on a display unit.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: January 4, 2022
    Assignee: Volkswagen Aktiengesellschaft
    Inventors: Najib Hadir, Jordan Pringle, Subramanian Swaminathan, Andre Guilherme Linarth
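    Illustrative sketch (Python), not the patented implementation: checking whether the pitch angle stays above a configured threshold for a configured period before activating the front camera. The threshold, duration, and sampling period values are assumptions.

      # Illustrative sketch; threshold and duration values are assumptions.
      def should_activate_camera(pitch_samples, threshold_deg=8.0, required_s=2.0, sample_period_s=0.1):
          """Activate the front camera once the pitch angle has exceeded the threshold continuously."""
          consecutive = 0
          for pitch in pitch_samples:
              consecutive = consecutive + 1 if pitch > threshold_deg else 0
              if consecutive * sample_period_s >= required_s:
                  return True
          return False

      print(should_activate_camera([9.0] * 25))          # True: above threshold for 2.5 s
      print(should_activate_camera([9.0, 3.0] * 15))     # False: never continuously above threshold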
  • Patent number: 11205305
    Abstract: In one embodiment, a method includes presenting to a user, on a display of a head-worn client computing device, a three-dimensional video including images of a real-life scene that is remote from the user's physical environment. The method also includes presenting to the user, on the display of the head-worn client computing device, a graphical object including an image of the user's physical environment or a virtual graphical object.
    Type: Grant
    Filed: September 16, 2015
    Date of Patent: December 21, 2021
    Assignee: SAMSUNG ELECTRONICS COMPANY, LTD.
    Inventors: Sajid Sadi, Sergio Perdices-Gonzalez, Rahul Budhiraja, Brian Dongwoo Lee, Ayesha Mudassir Khwaja, Pranav Mistry, Link Huang, Cathy Kim, Michael Noh, Ranhee Chung, Sangwoo Han, Jason Yeh, Junyeon Cho, Soichan Nget, Brian Harms, Yedan Qian, Ruokan He
  • Patent number: 11200405
    Abstract: A three-dimensional (3D) image-based facial verification method and apparatus is provided. The facial verification method may include capturing a facial image of a 3D face of a user, determining an occluded region in the captured facial image by comparing the captured facial image and an average facial image, generating a synthetic image by synthesizing the captured facial image and the average facial image based on the occluded region, and verifying the user based on the synthetic image.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: December 14, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seungju Han, Minsu Ko, Jaejoon Han, Chang Kyu Choi
  • Patent number: 11188227
    Abstract: An electronic device and a key input method using an external input device are provided. The electronic device includes a camera; a display; a communication interface; a memory; and a processor. The processor is configured to establish an electrical connection to an external keyboard; obtain an image of the external keyboard; set a key arrangement for the external keyboard based on the obtained image; and generate information corresponding to a key input signal received from the external keyboard based on the set key arrangement.
    Type: Grant
    Filed: November 20, 2019
    Date of Patent: November 30, 2021
    Inventors: Jiyoon Heo, Younghak Oh, Minjeong Moon, Minjung Moon, Myojin Bang, Seoyoung Yoon, Jaegi Han
  • Patent number: 11183012
    Abstract: A system including an image sensor that captures image data of a gaming table and a player area, and a tracking controller communicatively coupled to the image sensor. The tracking controller detects a player and a token set from the captured image data by applying an image neural network model to the image data to generate at least one key player data element for the player and at least one key token data element for the token set, generates a player data object representing physical characteristics of the player based on the key player data elements, links the player data object to a player identifier of the player, generates a token identifier based on the key token data elements, and links the token identifier to the player data object based on a physical relationship between the player and the token set indicated by the key data elements.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: November 23, 2021
    Assignee: SG Gaming, Inc.
    Inventors: Terrin Eager, Bryan Kelly, Martin S. Lyons
  • Patent number: 11176670
    Abstract: Provided is an apparatus for identifying pharmaceuticals, including an identifier configured to perform at least one of first identification of a medicine based on an image of the medicine in a pharmaceutical package or second identification of the medicine based on a spectrum of the medicine; a verifier configured to verify an error in dispensing the pharmaceutical package based on comparison of a result of the at least one of the first identification and the second identification with a prescription upon which the pharmaceutical package is prepared; and a controller configured to, based on a result of the first identification being unsuccessful, control to acquire the spectrum of the medicine and perform the second identification of the medicine.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: November 16, 2021
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: So Young Lee, Hyun Seok Moon
  • Patent number: 11170250
    Abstract: A nail contour detecting device including a processor, wherein the processor obtains first feature point data of a first nail contour which is a nail contour detected from a first nail image obtained by imaging a nail of a finger or a toe, and second feature point data of a second nail contour which is a nail contour detected from a second nail image obtained by imaging a nail of the same finger or toe as the first nail image; and the processor obtains one nail contour based on the first feature point data and the second feature point data.
    Type: Grant
    Filed: September 19, 2018
    Date of Patent: November 9, 2021
    Assignee: CASIO COMPUTER CO., LTD.
    Inventor: Masaaki Sasaki
  • Patent number: 11170216
    Abstract: To make it possible to set a parameter, used for detection of a mark attached to a ground marker, according to the feature of the mark, an information processing apparatus is provided, including: an acquisition unit that acquires a captured image; a detection unit that detects a feature of a target object in the captured image; and a determination unit that determines, on the basis of the feature, a parameter used for an assessment of whether or not the target object is a predetermined object.
    Type: Grant
    Filed: November 13, 2017
    Date of Patent: November 9, 2021
    Assignee: SONY NETWORK COMMUNICATIONS INC.
    Inventor: Sho Murakoshi
  • Patent number: 11158055
    Abstract: The present disclosure relates to utilizing a neural network having a two-stream encoder architecture to accurately generate composite digital images that realistically portray a foreground object from one digital image against a scene from another digital image. For example, the disclosed systems can utilize a foreground encoder of the neural network to identify features from a foreground image and further utilize a background encoder to identify features from a background image. The disclosed systems can then utilize a decoder to fuse the features together and generate a composite digital image. The disclosed systems can train the neural network utilizing an easy-to-hard data augmentation scheme implemented via self-teaching. The disclosed systems can further incorporate the neural network within an end-to-end framework for automation of the image composition process.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: October 26, 2021
    Assignee: ADOBE INC.
    Inventors: Zhe Lin, Jianming Zhang, He Zhang, Federico Perazzi
  • Patent number: 11153519
    Abstract: A defective pixel is easily identified in a solid-state imaging element that detects an address event. An address event detecting unit detects, as an address event, a fact that an absolute value of a change amount of luminance exceeds a predetermined threshold value with regard to each of a plurality of pixels, and outputs a detection signal indicating a result of the detection. A detection frequency acquisition unit acquires a detection frequency of the address event with regard to each of the plurality of pixels. A defective pixel identification unit identifies, on the basis of a statistic of the detection frequency, a defective pixel where an abnormality has occurred among the plurality of pixels.
    Type: Grant
    Filed: February 7, 2019
    Date of Patent: October 19, 2021
    Assignee: Sony Semiconductor Solutions Corporation
    Inventor: Atsumi Niwa
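    Illustrative sketch (Python), not the patented implementation: flagging defective pixels whose address-event detection frequency is a statistical outlier. The synthetic counts and the mean-plus-six-sigma rule are assumptions; the abstract only states that a statistic of the detection frequency is used.

      # Illustrative sketch; the synthetic data and the 6-sigma outlier rule are assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      event_counts = rng.poisson(5, size=(480, 640)).astype(float)   # per-pixel address-event counts
      event_counts[100, 200] = 500.0                                  # a "hot" (defective) pixel

      mean, std = event_counts.mean(), event_counts.std()
      defective = np.argwhere(event_counts > mean + 6 * std)          # flag extreme outliers
      print(defective)   # the injected hot pixel at (100, 200) is among the flagged coordinates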
  • Patent number: 11139958
    Abstract: In one embodiment, an apparatus comprises a communication interface and a processor. The communication interface is to communicate with a visual computing device over a network. The processor is to: access visual data captured by a camera; detect a particular feature in the visual data, wherein the particular feature comprises a visual indication of privacy-sensitive information; sanitize the visual data to mask the privacy-sensitive information associated with the particular feature, wherein sanitizing the visual data causes sanitized visual data to be produced; and transmit, via the communication interface, the sanitized visual data to the visual computing device over the network, wherein the visual computing device is to use the sanitized visual data to process a visual query associated with the visual data.
    Type: Grant
    Filed: September 27, 2018
    Date of Patent: October 5, 2021
    Assignee: Intel Corporation
    Inventors: Ned M. Smith, Shao-Wen Yang
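    Illustrative sketch (Python), not the patented implementation: the sanitization step that masks a detected privacy-sensitive region before the frame is transmitted. The detection itself is out of scope here, and the black-fill masking and example region are assumptions (blurring or pixelation would serve the same purpose).

      # Illustrative sketch; the region and black-fill masking are assumptions.
      import numpy as np

      def sanitize(frame, region):
          """Mask a privacy-sensitive region; region = (x1, y1, x2, y2) in pixel coordinates."""
          x1, y1, x2, y2 = region
          out = frame.copy()
          out[y1:y2, x1:x2] = 0    # blank the sensitive pixels before transmission
          return out

      frame = np.random.default_rng(0).integers(0, 256, size=(480, 640, 3), dtype=np.uint8)
      license_plate_box = (300, 350, 420, 400)   # hypothetical detector output
      clean = sanitize(frame, license_plate_box)
      assert clean[360, 350].sum() == 0          # masked region is zeroed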
  • Patent number: 11120573
    Abstract: A control method, suitable for head-mounted devices located in a physical environment, includes following operations. Images of the physical environment are captured over time by the head-mounted devices. Candidate objects and object features of the candidate objects are extracted from the images. Local determinations are generated about whether each of the candidate objects is fixed or not. The object features and the local determinations are shared between the head-mounted devices. An updated determination is generated about whether each of the candidate objects is fixed or not according to the local determinations shared between the head-mounted devices.
    Type: Grant
    Filed: February 11, 2020
    Date of Patent: September 14, 2021
    Assignee: HTC Corporation
    Inventors: Hsin-Hao Lee, Chia-Chu Ho, Ching-Hao Lee
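    Illustrative sketch (Python), not the patented implementation: merging the local fixed/not-fixed determinations shared between head-mounted devices into an updated determination. Majority voting and the tie-breaking rule are assumptions.

      # Illustrative sketch; majority voting is an assumed way to merge shared local determinations.
      from collections import Counter

      def updated_determination(local_votes):
          """local_votes: per-device booleans for 'this candidate object is fixed'."""
          counts = Counter(local_votes)
          return counts[True] >= counts[False]      # ties resolved toward 'fixed'

      shared = {"chair_03": [True, True, False], "door_01": [False, False, True]}
      print({obj: updated_determination(v) for obj, v in shared.items()})
      # -> {'chair_03': True, 'door_01': False}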
  • Patent number: 11108947
    Abstract: A focus control apparatus includes a focus detection unit configured to detect a focus state, and a control unit configured to perform a focusing operation in accordance with the focus state and a setting value relating to the focusing operation. The control unit is configured to select between a first state that allows user setting of the setting value and a second state that automatically sets the setting value according to the focus state. The control unit sets the setting value in a first setting range that is a range of the setting value that can be set by the user in the first state, and sets the setting value in a second setting range that is wider than the first setting range in the second state.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: August 31, 2021
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Yasuyuki Suzuki
  • Patent number: 11109005
    Abstract: A device, system and method for enhancing high-contrast and text regions in projected images are provided. A device applies, to an input image: a high-contrast sharpening filter that sharpens high-contrast regions to produce a high-contrast sharpened image; and a background sharpening filter that sharpens the other regions of the input image to produce a background sharpened image, the background sharpening filter applying less sharpening than the high-contrast sharpening filter.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: August 31, 2021
    Assignee: CHRISTIE DIGITAL SYSTEMS USA, INC.
    Inventors: Xiaodan Hu, Mohamed A. Naiel, Zohreh Azimifar, Ibrahim Ben Daya, Mark Lamm, Paul Fieguth
  • Patent number: 11084692
    Abstract: A guide information display device accurately presents information about the shape and location of a hoisting load and objects located near the hoisting load, regardless of the orientation and operational state of the crane. A camera images part of a work region of the crane, a laser scanner obtains a data point group of the work region captured by the camera, and a data processing unit: removes, from the obtained data point group, the data points located between a hoisting load hanging from the crane and the tip end section of the crane's telescopic boom; estimates, on the basis of the remaining data point group, the top surface of the hoisting load, the ground surface of the work region, and the top surfaces of objects located in the work region; generates guide frames that surround the top surfaces of the hoisting load and the objects; and overlays the generated guide frames on the captured image and displays the result on a data display unit.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: August 10, 2021
    Assignees: TADANO LTD., THE SCHOOL CORPORATION KANSAI UNIVERSITY
    Inventors: Takayuki Kosaka, Iwao Ishikawa, Satoshi Kubota, Shigenori Tanaka, Kenji Nakamura, Yuhei Yamamoto, Masaya Nakahara
  • Patent number: 11086001
    Abstract: The present application provides a position detecting method, device and storage medium for a vehicle ladar, where the method includes: detecting, through a ladar disposed on an autonomous vehicle, detection data of at least one wall of an interior room in which the autonomous vehicle is located, obtaining a point cloud image according to the detection data of the at least one wall, and judging, according to the point cloud image, whether an installation position of the ladar is accurate. According to the technical solution, it is possible to accurately detect whether the installation position of the ladar is accurate, provide a prerequisite for calibration of the installation position of the ladar, and improve detection accuracy of the ladar for obstacles around the autonomous vehicle.
    Type: Grant
    Filed: December 24, 2019
    Date of Patent: August 10, 2021
    Inventor: Nan Wu
  • Patent number: 11080514
    Abstract: A smart device having a photo processing system, and a related program product and method for processing photos. The photo processing system includes: a detector that detects when a photo is displayed on the smart device; an auto capture system that captures a viewer image from a front facing camera on the smart device in response to detecting that the photo is being displayed; a facial matching system that determines whether the viewer image matches any face images in the photo; and an auto zoom system that enlarges and displays a matched face image from the photo.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: August 3, 2021
    Assignee: Citrix Systems, Inc.
    Inventors: Nandikotkur Achyuth, Divyansh Deora, Arnav Akhoury
  • Patent number: 11080971
    Abstract: Methods, systems, apparatus, and articles of manufacture to generate corrected projection data for stores of a retailer are disclosed. An example apparatus to reduce projection errors associated with retail register devices includes a receipt data analyzer to retrieve transaction code values associated with receipts generated by the retail register devices, identify a direction change in a first subset of the retrieved transaction code values, and verify a register reset occurrence based on values associated with a second subset of the retrieved transaction code values, and a projection calculator to reduce retail sales projection error by calculating a transaction count based on the retrieved receipts and transaction code values.
    Type: Grant
    Filed: June 5, 2018
    Date of Patent: August 3, 2021
    Assignee: The Nielsen Company (US), LLC
    Inventors: Konstantin Korolev, Gáspár Tamás Péter
  • Patent number: 11062422
    Abstract: An image processing apparatus is configured to acquire an image that is a partial predetermined area of an image related to image data. The image processing apparatus includes processing circuitry configured to acquire a narrow-angle image that is a predetermined area of a wide-angle image, based on a structure of a building represented in the wide-angle image that is an entire region or a partial region of the image related to the image data.
    Type: Grant
    Filed: August 21, 2020
    Date of Patent: July 13, 2021
    Assignee: RICOH COMPANY, LTD.
    Inventor: Hirochika Fujiki
  • Patent number: 11062453
    Abstract: A method for scene parsing includes: performing a convolution operation on a to-be-parsed image by using a deep neural network to obtain a first feature map, the first feature map including features of at least one pixel in the image; performing a pooling operation on the first feature map to obtain at least one second feature map, a size of the second feature map being less than that of the first feature map; and performing scene parsing on the image according to the first feature map and the at least one second feature map to obtain a scene parsing result of the image, the scene parsing result including a category of the at least one pixel in the image. A system for scene parsing and a non-transitory computer-readable storage medium can facilitate realizing the method.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: July 13, 2021
    Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventors: Jianping Shi, Hengshuang Zhao
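    Illustrative sketch (Python/PyTorch), not the patented implementation: a pooling pyramid that produces smaller second feature maps from the first feature map, upsamples them back, concatenates everything, and classifies each pixel. The pyramid scales, channel counts, and 1x1 classifier head are assumptions.

      # Illustrative sketch; pyramid scales, channels, and the classifier head are assumptions.
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      first_feature_map = torch.rand(1, 32, 60, 60)    # features for each pixel of the image
      pyramid_scales = (1, 2, 3, 6)                    # sizes of the pooled "second feature maps"

      pooled = [F.adaptive_avg_pool2d(first_feature_map, s) for s in pyramid_scales]
      upsampled = [F.interpolate(p, size=first_feature_map.shape[2:], mode="bilinear",
                                 align_corners=False) for p in pooled]

      fused = torch.cat([first_feature_map, *upsampled], dim=1)    # (1, 32 * 5, 60, 60)
      classifier = nn.Conv2d(fused.shape[1], 21, kernel_size=1)    # e.g., 21 scene/semantic categories
      per_pixel_logits = classifier(fused)
      category_map = per_pixel_logits.argmax(dim=1)                # category of each pixel
      print(category_map.shape)                                    # torch.Size([1, 60, 60])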
  • Patent number: 11062520
    Abstract: A wearable device is disclosed that may comprise: a display that permits a user to view a real-world (RW) environment; and a computer in communication with the display, the computer comprising one or more processors and memory storing instructions, executable by the one or more processors, the instructions comprising, to: using sensor data, determine a virtual surface model (VSM) associated with a real-world (RW) object in the RW environment; and provide, via the display, a three-dimensional (3D) digital human model (DHM) located within the RW environment, wherein the DHM and the VSM are restricted from occupying a common three-dimensional space.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: July 13, 2021
    Assignee: Ford Global Technologies, LLC
    Inventor: Martin Smets
  • Patent number: 11055786
    Abstract: A system segments a set of images of a property to identify a type of damage to the property. The system receives, from an image capturing device, a digital image of a roof or other feature of the property. The system processes the image to identify a set of segments, in which each segment corresponds to a piece of the feature, such as a tab or tooth of a shingle on the roof. The system saves a result of the processing to a data file as a segmented image of the property, and it uses the segmented image to identify a type of damage to the property.
    Type: Grant
    Filed: June 3, 2016
    Date of Patent: July 6, 2021
    Assignee: Conduent Business Services, LLC
    Inventors: Matthew Adam Shreve, Edgar A. Bernal, Richard L. Howe