Feature Extraction Patents (Class 382/190)
- Slice codes (Class 382/196)
- Directional codes and vectors (e.g., Freeman chains, compasslike codes) (Class 382/197)
- Pattern boundary and edge measurements (Class 382/199)
- Point features (e.g., spatial coordinate descriptors) (Class 382/201)
- Linear stroke analysis (e.g., limited to straight lines) (Class 382/202)
- Shape and form analysis (Class 382/203)
- Local neighborhood operations (e.g., 3x3 kernel, window, or matrix operator) (Class 382/205)
-
Patent number: 11682191
Abstract: Example aspects of the present disclosure are directed to systems and methods for learning data augmentation strategies for improved object detection model performance. In particular, example aspects of the present disclosure are directed to iterative reinforcement learning approaches in which, at each of a plurality of iterations, a controller model selects a series of one or more augmentation operations to be applied to training images to generate augmented images. For example, the controller model can select the augmentation operations from a defined search space of available operations which can, for example, include operations that augment the training image without modification of the locations of a target object and corresponding bounding shape within the image and/or operations that do modify the locations of the target object and bounding shape within the training image.
Type: Grant
Filed: March 23, 2022
Date of Patent: June 20, 2023
Assignee: GOOGLE LLC
Inventors: Jon Shlens, Ekin Dogus Cubuk, Quoc Le, Tsung-Yi Lin, Barret Zoph, Golnaz Ghiasi
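For orientation only, here is a minimal Python sketch of the operation-sampling idea the abstract describes: operations drawn from a search space are applied to an image and, where geometric, to its bounding box. The two operations, the uniform "controller", and the (x0, y0, x1, y1) box format are illustrative assumptions, not the patented implementation.

```python
import random
import numpy as np

# Hypothetical search space: each operation maps (image, bbox) -> (image, bbox).
# Photometric operations leave the box untouched; geometric ones update it.
def adjust_brightness(image, bbox, delta=30):
    return np.clip(image.astype(np.int16) + delta, 0, 255).astype(np.uint8), bbox

def horizontal_flip(image, bbox):
    h, w = image.shape[:2]
    x0, y0, x1, y1 = bbox
    return image[:, ::-1], (w - x1, y0, w - x0, y1)  # bbox moves with the object

SEARCH_SPACE = [adjust_brightness, horizontal_flip]

def sample_policy(num_ops=2):
    """Stand-in for the controller model: here it simply samples uniformly."""
    return [random.choice(SEARCH_SPACE) for _ in range(num_ops)]

def apply_policy(policy, image, bbox):
    for op in policy:
        image, bbox = op(image, bbox)
    return image, bbox

if __name__ == "__main__":
    img = np.zeros((100, 200, 3), dtype=np.uint8)
    img, box = apply_policy(sample_policy(), img, (20, 30, 60, 80))
    print(box)
```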
-
Patent number: 11669750
Abstract: Techniques relating to managing “bad” or “imperfect” data being imported into a database system are described herein. A lifecycle technology solution helps receive data from a variety of different data sources of a variety of known and/or unknown formats, standardize it, fit it to a known taxonomy through model-assisted classification, store it to a database in a manner that is consistent with the taxonomy, and allow it to be queried for a variety of different usages. Auto-classification, enrichment, clustering model and model stacks, and/or other disclosed techniques, may be used in these and/or other regards.
Type: Grant
Filed: August 23, 2021
Date of Patent: June 6, 2023
Assignee: Xeeva, Inc.
Inventors: Dilip Dubey, Dineshchandra Harikisan Rathi, Koushik Kumaraswamy
-
Patent number: 11670084
Abstract: Aspects of the subject disclosure may include, for example, a device, that includes a processing system including a processor and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations including receiving a manifest for a point cloud, wherein the point cloud is partitioned into a plurality of cells; determining an occlusion level for a cell of the plurality of cells with respect to a predicted viewport; reducing a point density for the cell provided in the manifest based on the occlusion level, thereby determining a reduced point density; and requesting delivery of points in the cell, based on the reduced point density. Other embodiments are disclosed.
Type: Grant
Filed: August 25, 2022
Date of Patent: June 6, 2023
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Bo Han, Cheuk Yiu Ip, Jackson Jarrell Pair
-
Patent number: 11670114
Abstract: The present disclosure describes systems, non-transitory computer-readable media, and methods for utilizing a machine learning model trained to determine subtle pose differentiations to analyze a repository of captured digital images of a particular user to automatically capture digital images portraying the user. For example, the disclosed systems can utilize a convolutional neural network to determine a pose/facial expression similarity metric between a sample digital image from a camera viewfinder stream of a client device and one or more previously captured digital images portraying the user. The disclosed systems can determine that the similarity metric satisfies a similarity threshold, and automatically capture a digital image utilizing a camera device of the client device. Thus, the disclosed systems can automatically and efficiently capture digital images, such as selfies, that accurately match previous digital images portraying a variety of unique facial expressions specific to individual users.
Type: Grant
Filed: October 20, 2020
Date of Patent: June 6, 2023
Assignee: Adobe Inc.
Inventors: Jinoh Oh, Xin Lu, Gahye Park, Jen-Chan Jeff Chien, Yumin Jia
-
Patent number: 11663246
Abstract: Systems, methods, and non-transitory computer readable media are configured to apply a spectral clustering technique to at least a portion of a similarity graph to generate clusters of geographic sub-regions constituting geographic regions. A tf-idf technique is performed to determine pages of a social networking system associated with a geographic region as potential local suggestions for a user associated with a geographic sub-region in the geographic region. References to at least a portion of the pages are presented as local suggestions to the user.
Type: Grant
Filed: December 12, 2016
Date of Patent: May 30, 2023
Assignee: Meta Platforms, Inc.
Inventors: Apaorn Tanglertsampan, Jason Eric Brewer, Bradley Ray Green
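As a rough illustration of the tf-idf step (treating regions as documents and pages as terms), here is a minimal Python sketch; the region/page data, the plain tf * log(N/df) weighting, and the dictionary layout are assumptions for illustration, not the patented pipeline.

```python
import math
from collections import Counter

# Hypothetical data: for each geographic region, the pages users there engage with.
region_pages = {
    "region_a": ["cafe_x", "cafe_x", "gym_y", "museum_z"],
    "region_b": ["cafe_x", "stadium_w", "stadium_w"],
}

def tf_idf(region_pages):
    """Score how characteristic each page is of each region (tf * idf)."""
    n_regions = len(region_pages)
    df = Counter()                       # in how many regions does each page appear
    for pages in region_pages.values():
        df.update(set(pages))
    scores = {}
    for region, pages in region_pages.items():
        tf = Counter(pages)
        total = len(pages)
        scores[region] = {
            page: (count / total) * math.log(n_regions / df[page])
            for page, count in tf.items()
        }
    return scores

# Pages with high scores for a region are candidate local suggestions there.
print(tf_idf(region_pages)["region_a"])
```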
-
Patent number: 11657601
Abstract: A computer-implemented method of detecting logos in a graphical rendering may comprise detecting, using a first and a second trained object detector, logos in the graphical rendering and outputting a first and a second list of detections and filtering, using at least a first and a second prior performance-based filter, the received first and second lists of detections into a first group of kept detections, a second group of discarded detections and a third group of detections. Detections in the third group of detections may be clustered in at least one cluster comprising detections that are of a same class and that are generally co-located within the electronic image. A cluster score may then be assigned to each cluster. A set of detections of logos in the graphical rendering may then be output, the set comprising the detections in the first group and a detection from each of the clusters whose assigned cluster score is greater than a respective threshold.
Type: Grant
Filed: April 18, 2022
Date of Patent: May 23, 2023
Assignee: VADE USA, INCORPORATED
Inventors: Mehdi Regina, Maxime Marc Meyer, Sébastien Goutal
-
Patent number: 11651249
Abstract: Methods, apparatus, and processor-readable storage media for determining similarity between time series using machine learning techniques are provided herein. An example computer-implemented method includes obtaining a primary time series and a set of multiple candidate time series; calculating, using machine learning techniques, similarity measurements between the primary time series and each of the candidate time series; for each of the similarity measurements, assigning weights to the candidate time series based on similarity to the primary time series relative to the other candidate time series; generating, for each of the candidate time series, a similarity score based on the weights assigned to each of the candidate time series across the similarity measurements; and outputting, based on the similarity scores, identification of at least one candidate time series for use in one or more automated actions relating to at least one system.
Type: Grant
Filed: October 22, 2019
Date of Patent: May 16, 2023
Assignee: EMC IP Holding Company LLC
Inventors: Fatemeh Azmandian, Peter Beale, Bina K. Thakkar, Zachary W. Arnold
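A minimal Python sketch of the scoring flow the abstract outlines: several similarity measurements per candidate, rank-based weights per measurement, and a summed score. The two measures (negative Euclidean distance, Pearson correlation) and the rank weighting are assumptions for illustration, not the patented method.

```python
import numpy as np

def similarity_measures(primary, candidate):
    """Two of many possible measures; higher means more similar for both."""
    euclid = -np.linalg.norm(primary - candidate)
    pearson = np.corrcoef(primary, candidate)[0, 1]
    return {"euclidean": euclid, "pearson": pearson}

def score_candidates(primary, candidates):
    """Per measurement, weight candidates by rank; sum the weights into a score."""
    per_measure = {"euclidean": [], "pearson": []}
    for cand in candidates:
        for name, value in similarity_measures(primary, cand).items():
            per_measure[name].append(value)
    scores = np.zeros(len(candidates))
    for values in per_measure.values():
        order = np.argsort(values)                       # most similar candidate last
        weights = np.empty(len(values))
        weights[order] = np.arange(1, len(values) + 1)   # rank-based weight
        scores += weights
    return scores

primary = np.sin(np.linspace(0, 6, 50))
candidates = [np.sin(np.linspace(0, 6, 50)) + 0.1, np.random.rand(50)]
print(score_candidates(primary, candidates))   # higher score = more similar overall
```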
-
Patent number: 11650577
Abstract: A plant operation data monitoring device comprises: an input section that receives operation data on a plant; and a calculator that includes databases storing the operation data received, and a computing section executing a program. The computing section stores the operation data received in a first database of the databases in time series. The computing section determines from peak values of the operation data stored whether gradients of the operation data are positive or negative, and then stores the gradients in a second database of the databases for positive gradients or in the second database of the databases for negative gradients in time series. The computing section determines threshold values for abnormality determination about the positive and negative gradients, divides the positive gradients and the negative gradients into normal values and abnormal values, and additionally stores the divided gradients in the second database for the positive or negative gradients.
Type: Grant
Filed: December 23, 2020
Date of Patent: May 16, 2023
Assignee: MITSUBISHI HEAVY INDUSTRIES, LTD.
Inventors: Tamami Kurihara, Kengo Iwashige, Tadaaki Kakimoto, Tetsuji Morita
-
Patent number: 11644897
Abstract: Described are various embodiments of a pupil tracking system and method, and digital display device and digital image rendering system and method using same. In one embodiment, a computer-implemented method for improving a perceptive experience of light field content projected via a light field display within a light field viewing zone comprises sequentially acquiring a user feature location, and comparing a velocity computed therefrom with a designated threshold velocity. Upon the velocity corresponding with a transition from a relatively dynamic to a relatively static state, a rendering geometry of the light field content is adjusted to project the light field content within an adjusted light field viewing zone in accordance with a newly acquired user feature location.
Type: Grant
Filed: June 2, 2022
Date of Patent: May 9, 2023
Assignee: EVOLUTION OPTIKS LIMITED
Inventors: Khaled El-Monajjed, Guillaume Lussier, Faleh Mohammad Faleh Altal, Daniel Gotsch
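A minimal Python sketch of the dynamic-to-static test described here: velocities are computed from sequentially acquired feature locations and compared against a threshold. The threshold value, units, and 3-D location format are assumptions for illustration.

```python
import numpy as np

THRESHOLD = 5.0  # assumed units: mm per sample interval

def is_newly_static(locations, threshold=THRESHOLD):
    """True when the latest velocity drops to or below the threshold while the
    previous velocity was above it (a dynamic -> static transition)."""
    locations = np.asarray(locations, dtype=float)
    if len(locations) < 3:
        return False
    velocities = np.linalg.norm(np.diff(locations, axis=0), axis=1)
    return velocities[-2] > threshold and velocities[-1] <= threshold

# Pupil positions (x, y, z) acquired sequentially from the tracker.
track = [(0, 0, 0), (12, 0, 0), (24, 0, 0), (25, 0, 0)]
if is_newly_static(track):
    print("adjust rendering geometry to the newly acquired viewing zone")
```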
-
Patent number: 11647158
Abstract: A computing system, a method, and a computer-readable storage medium for adjusting eye gaze are described. The method includes capturing a video stream including images of a user, detecting the user's face region within the images, and detecting the user's facial feature regions within the images based on the detected face region. The method includes determining whether the user is completely disengaged from the computing system and, if the user is not completely disengaged, detecting the user's eye region within the images based on the detected facial feature regions. The method also includes computing the user's desired eye gaze direction based on the detected eye region, generating gaze-adjusted images based on the desired eye gaze direction, wherein the gaze-adjusted images include a saccadic eye movement, a micro-saccadic eye movement, and/or a vergence eye movement, and replacing the images within the video stream with the gaze-adjusted images.
Type: Grant
Filed: October 30, 2020
Date of Patent: May 9, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Steven N. Bathiche, Eric Sommerlade, Alexandros Neofytou, Panos C. Panay
-
Patent number: 11640718
Abstract: A capturing device has a lens block that includes a lens for focusing light from a subject during a daytime, where the subject includes a vehicle. The capturing device further includes an image sensor that captures an image based on light from the subject focused by the lens, and a processor that generates a face image of an occupant riding in the vehicle based on a first number of captured images of the subject which are captured by the image sensor at different times. The processor generates the face image of the occupant further based on a second number of captured images in which a luminance value of a region of interest is equal to or smaller than a threshold value among the first number of captured images of the subject.
Type: Grant
Filed: August 12, 2021
Date of Patent: May 2, 2023
Assignee: I-PRO CO., LTD.
Inventors: Yuma Kobayashi, Tetsuo Tanaka, Toshiaki Ito, Shinichi Murakami, Michinori Kishimoto, Hiroshi Mitani, Jyouji Wada
-
Patent number: 11640705
Abstract: The technology disclosed extends Human-in-the-loop (HITL) active learning to incorporate real-time human feedback to influence future sampling priority for choosing the best instances to annotate for accelerated convergence to model optima. The technology disclosed enables the user to communicate with the model that generates machine annotations for unannotated instances. The technology disclosed also enables the user to communicate with the sampling logic that selects instances to be annotated next. The technology disclosed enables the user to generate ground truth annotations, from scratch or by correcting erroneous model annotations, which guide future model predictions to more accurate results.
Type: Grant
Filed: December 13, 2022
Date of Patent: May 2, 2023
Assignee: LODESTAR SOFTWARE INC.
Inventor: Evan Acharya
-
Patent number: 11636385
Abstract: An example system includes a processor to receive raw and unlabeled videos. The processor is to extract speech from the raw and unlabeled videos. The processor is to extract positive frames and negative frames from the raw and unlabeled videos based on the extracted speech for each object to be detected. The processor is to extract region proposals from the positive frames and negative frames. The processor is to extract features based on the extracted region proposals. The processor is to cluster the region proposals and assign a potential score to each cluster. The processor is to train a binary object detector to detect objects based on positive samples randomly selected based on the potential score.
Type: Grant
Filed: November 4, 2019
Date of Patent: April 25, 2023
Assignee: International Business Machines Corporation
Inventors: Elad Amrani, Udi Barzelay, Rami Ben-Ari, Tal Hakim
-
Patent number: 11636085
Abstract: A method, computer program product, and computer system for detection and utilization of similarities among tables in multiple data systems that include a first data system and a second data system. A semantic dataset is generated. A first measure (X) of similarity between semantic data in columns of the first and second data systems is computed using the semantic dataset. A second, different measure (Y) of similarity between semantic data in columns of the first and second data systems is computed using the semantic dataset. A third measure (Z) of similarity between columns of the first and second data systems is computed based on data in cells within the columns. A weighted combination (U) of X, Y, and Z between the columns of tables in the first and second data systems is computed. X, Y, Z, U or combinations thereof are used to improve a computer system.
Type: Grant
Filed: September 1, 2021
Date of Patent: April 25, 2023
Assignee: International Business Machines Corporation
Inventors: Xiao Chao Yan, Bei Bei Zhan, Wu Cheng, Yujia Wang
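The weighted combination U is straightforward to express; a minimal sketch follows, with the weight values chosen purely for illustration (the patent does not specify them here).

```python
def weighted_column_similarity(x, y, z, weights=(0.4, 0.3, 0.3)):
    """Combine the two semantic measures (x, y) and the cell-level measure (z)
    into a single score U as a weighted sum."""
    w_x, w_y, w_z = weights
    return w_x * x + w_y * y + w_z * z

# Example: strong semantic agreement between two columns, moderate cell overlap.
u = weighted_column_similarity(x=0.9, y=0.8, z=0.5)
print(round(u, 2))  # 0.75
```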
-
Patent number: 11637958
Abstract: A control apparatus includes an acquisition unit configured to acquire information on a rotation of the first imaging unit in the rotation direction and position information of a designated range designated on a first image imaged by the first imaging unit, a setting unit configured to set the second imaging range so as to include the designated range based on the information on the rotation and the position information, and a control unit configured to control the second imaging unit to instruct the second imaging unit to image the second imaging range and to acquire a second image by changing at least one of the imaging direction and the angle of view of the second imaging unit.
Type: Grant
Filed: June 22, 2021
Date of Patent: April 25, 2023
Assignee: CANON KABUSHIKI KAISHA
Inventor: Yutaka Suzuki
-
Patent number: 11631163
Abstract: A method includes processing, using at least one processor of an electronic device, each of multiple images using a photometric augmentation engine, where the photometric augmentation engine performs one or more photometric augmentation operations. The method also includes applying, using the at least one processor, multiple layers of a convolutional neural network to each of the images, where each layer generates a corresponding feature map. The method further includes processing, using the at least one processor, at least one of the feature maps using at least one feature augmentation engine between consecutive layers of the multiple layers, where the at least one feature augmentation engine performs one or more feature augmentation operations.
Type: Grant
Filed: July 14, 2020
Date of Patent: April 18, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Chenchi Luo, Yingmao Li, Youngjun Yoo
-
Patent number: 11625561
Abstract: A method for determining a class to which an input image belongs in an inference process, includes: storing a frequent feature for each class in a frequent feature database; inputting an image as the input image; extracting which class the input image belongs to; extracting a plurality of features that appear in the inference process; extracting one of the features that satisfies a predetermined condition as a representative feature; reading out the frequent feature corresponding to an extracted class from the frequent feature database; extracting one or a plurality of ground features based on the frequent feature and the representative feature; storing a concept data representing each feature in an annotation database; reading out the concept data corresponding to the one or the plurality of ground features from the annotation database; generating explanation information; and outputting the ground feature and the explanatory information together with the extracted class.
Type: Grant
Filed: April 28, 2020
Date of Patent: April 11, 2023
Assignee: DENSO CORPORATION
Inventors: Hiroshi Kuwajima, Masayuki Tanaka
-
Patent number: 11620730
Abstract: A method for combining multiple images is disclosed herein. A target mapping matrix is determined based on a first image and a second image. The target mapping matrix is associated with a target correspondence between the first image and the second image. The first image and the second image are combined into a combined image based on the first target mapping matrix. The combined image is output by implementing the disclosed method.
Type: Grant
Filed: March 23, 2021
Date of Patent: April 4, 2023
Assignee: Realsee (Beijing) Technology Co., Ltd.
Inventors: Tong Rao, Cihui Pan, Mingyuan Wang, Xiaodong Gu, Chenglin Liu
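One common way to realize a "target mapping matrix" between two images is a homography fitted to feature matches; a minimal OpenCV sketch under that assumption follows. The ORB features, RANSAC fitting, naive blend, and the file names "room_a.jpg"/"room_b.jpg" are illustrative choices, not the patented method.

```python
import cv2
import numpy as np

def combine_images(img1, img2):
    """Fit a homography from img2 to img1 using ORB matches and warp img2
    into img1's frame to form the combined image."""
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(img2, H, (img1.shape[1], img1.shape[0]))
    return np.where(warped > 0, warped, img1)  # naive blend of the two images

# Hypothetical overlapping views of the same scene.
combined = combine_images(cv2.imread("room_a.jpg"), cv2.imread("room_b.jpg"))
cv2.imwrite("combined.jpg", combined)
```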
-
Patent number: 11620567
Abstract: The invention provides a method, apparatus, device and storage medium for predicting a protein binding site. The method comprises the steps of: receiving a protein sequence to be predicted, dividing the protein sequence by using a preset sliding window and sliding step to obtain a plurality of amino acid sub-sequences, building word vectors for the protein sequence according to the amino acid sub-sequences, extracting document features from word elements, building document feature vectors for the protein sequence according to the extracted document features, extracting protein chain biological features from the amino acid sub-sequences, building biological feature vectors for the protein sequence according to the extracted biological features, classifying the amino acid sub-sequences expressed with the document feature vectors and the biological feature vectors by using a preset amino acid residue classification model to obtain amino acid residue types for the protein sequence.
Type: Grant
Filed: January 24, 2019
Date of Patent: April 4, 2023
Assignees: SHENZHEN UNIVERSITY, HARBIN INSTITUTE OF TECHNOLOGY SHENZHEN GRADUATE SCHOOL
Inventors: Yong Zhang, Wei He, Yong Xu, Dongning Zhao
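The first step, dividing the sequence with a preset sliding window and step, is easy to show concretely; a minimal Python sketch follows. The window size, step, and toy sequence are assumed values for illustration only.

```python
def split_sequence(sequence, window=15, step=5):
    """Slide a fixed-size window over a protein sequence to produce the amino
    acid sub-sequences that are later turned into word/feature vectors."""
    return [
        sequence[i:i + window]
        for i in range(0, len(sequence) - window + 1, step)
    ]

# Toy protein sequence in one-letter amino acid codes.
for sub in split_sequence("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"):
    print(sub)
```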
-
Patent number: 11620327
Abstract: A system and method for generating an interface for providing recommendations based on contextual insights, the method including: generating at least one signature for at least one multimedia content element identified within an interaction between a plurality of users; generating at least one contextual insight based on the generated at least one signature and user interests of the plurality of users, wherein each contextual insight indicates a current user preference; searching for at least one content item that matches the at least one contextual insight; and generating an interface for providing the at least one content item within the interaction between the plurality of users.
Type: Grant
Filed: November 22, 2017
Date of Patent: April 4, 2023
Assignee: CORTICA LTD
Inventors: Igal Raichelgauz, Karina Odinaev, Yehoshua Y Zeevi
-
Patent number: 11615262
Abstract: Disclosed examples include image processing methods and systems to process image data, including computing a plurality of scaled images according to input image data for a current image frame, computing feature vectors for locations of the individual scaled images, classifying the feature vectors to determine sets of detection windows, and grouping detection windows to identify objects in the current frame, where the grouping includes determining first clusters of the detection windows using non-maxima suppression grouping processing, determining positions and scores of second clusters using mean shift clustering according to the first clusters, and determining final clusters representing identified objects in the current image frame using non-maxima suppression grouping of the second clusters.
Type: Grant
Filed: March 31, 2020
Date of Patent: March 28, 2023
Assignee: Texas Instruments Incorporated
Inventors: Manu Mathew, Soyeb Noormohammed Nagori, Shyam Jagannathan
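A minimal Python sketch of the non-maxima suppression grouping used in the first and final clustering stages; the IoU threshold, box format, and toy data are assumptions for illustration, and the mean-shift refinement stage is omitted.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x1 - x0) * max(0, y1 - y0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms_group(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring window in each cluster and suppress windows
    that overlap it by more than iou_thresh."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order):
        best = order[0]
        keep.append(best)
        order = np.array([i for i in order[1:] if iou(boxes[best], boxes[i]) < iou_thresh])
    return keep

boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 150)]
scores = [0.9, 0.8, 0.7]
print(nms_group(boxes, scores))  # -> [0, 2]: the two overlapping windows are grouped
```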
-
Patent number: 11615642
Abstract: There is provided a recognition system adaptable to a portable device or a wearable device. The recognition system senses a body heat using a thermal sensor, and performs functions such as the living body recognition, image denoising and body temperature prompting according to detected results.
Type: Grant
Filed: September 9, 2021
Date of Patent: March 28, 2023
Assignee: PIXART IMAGING INC.
Inventors: Nien-Tse Chen, Yi-Hsien Ko, Yen-Min Chang
-
Patent number: 11601566
Abstract: An image reading device includes: an area sensor in which color filters of three colors of R, G, and B are arranged in a Bayer array and a light receiving amount is detected by a light receiving element for each color filter; and a hardware processor that: reads a document by using the light receiving elements in a first group in the area sensor, reads the document by using the light receiving elements in a second group in the area sensor, at a region shifted by ½ pixels in a sub-scanning direction from a reading region of the light receiving elements in the first group, and interpolates R-color read data and B-color read data using G-color read data and synthesizes image data having a resolution twice the resolution of the area sensor.
Type: Grant
Filed: March 1, 2022
Date of Patent: March 7, 2023
Assignee: KONICA MINOLTA, INC.
Inventor: Takehisa Nakao
-
Patent number: 11593663
Abstract: A model generation method includes updating, by at least one processor, a weight matrix of a first neural network model at least based on a first inference result obtained by inputting, to the first neural network model which discriminates between first data and second data generated by using a second neural network model, the first data, a second inference result obtained by inputting the second data to the first neural network model, and a singular value based on the weight matrix of the first neural network model. The model generation method also includes updating, at least based on the second inference result, a parameter of the second neural network model.
Type: Grant
Filed: December 23, 2019
Date of Patent: February 28, 2023
Assignee: PREFERRED NETWORKS, INC.
Inventor: Takeru Miyato
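The singular value of a weight matrix is typically estimated by power iteration; a minimal NumPy sketch of that estimate (and of dividing the weights by it) follows. This is a generic spectral-norm sketch under stated assumptions, not the patented training procedure itself.

```python
import numpy as np

def spectral_norm(weight, n_iters=20):
    """Estimate the largest singular value of a weight matrix by power iteration."""
    u = np.random.randn(weight.shape[0])
    for _ in range(n_iters):
        v = weight.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = weight @ v
        u /= np.linalg.norm(u) + 1e-12
    return float(u @ weight @ v)

W = np.random.randn(64, 128)
sigma = spectral_norm(W)
W_normalized = W / sigma  # one common use: scale the weights by the singular value
print(sigma, np.linalg.svd(W, compute_uv=False)[0])  # the two values should be close
```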
-
Patent number: 11593412
Abstract: Various embodiments are provided for implementing an approximation nearest neighbour (ANN) search in a computing environment. An approximation nearest neighbour (ANN) of a plurality of feature vectors in hyper-planes with dynamically variable subspaces may be retrieved by searching an inverted index.
Type: Grant
Filed: July 22, 2019
Date of Patent: February 28, 2023
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Debasis Ganguly, Léa Deleris
-
Patent number: 11580326
Abstract: A method is for matching a set of first classes assigned to a first data set with a set of second classes assigned to a second data set. The method includes constructing, via a set of pre-processing functions, a plurality of alignment profiles such that at least one alignment profile is assigned to each of the first classes and each of the second classes. The method includes generating a comparison matrix for each group of the alignment profiles, such that each group includes at least one of the first classes and at least one of the second classes. The method includes training a first machine learning model, through supervised training, based on the generated comparison matrices and based on probabilistic labels generated by a second machine learning model.
Type: Grant
Filed: September 7, 2020
Date of Patent: February 14, 2023
Assignee: NEC CORPORATION
Inventors: Bin Cheng, Jonathan Fuerst, Mauricio Fadel Argerich, Masahiro Hayakawa, Atsushi Kitazawa
-
Patent number: 11580389
Abstract: A dynamic graph includes a plurality of nodes and edges at a plurality of time steps; each node corresponds to a geographic location in a first area where pest infestation information is available for a subset of locations. Each edge connects two of the nodes which are geographically proximate, has a direction based on wind direction, and has a weight based on relative wind speed. Assign node features based on weather data as well as labels corresponding to pest infestation severity. Train a graph convolutional network on the dynamic graph. Based on predicted future weather conditions for a second area different than the first area, use the trained graph convolutional network to predict, via inductive learning, pest infestation severity for future times for a new set of nodes corresponding to new geographic locations in the second area for which no pest infestation information is available.
Type: Grant
Filed: January 14, 2020
Date of Patent: February 14, 2023
Assignee: International Business Machines Corporation
Inventors: Sambaran Bandyopadhyay, Sachin Gupta
-
Patent number: 11580741
Abstract: Disclosed are a method and an apparatus for detecting abnormal objects in a video. The method for detecting abnormal objects in a video reconstructs a restored batch by applying each input batch to which an inpainting pattern is applied to a trained auto-encoder model, and fuses a time domain reconstruction error using time domain restored frames output by extracting and restoring a time domain feature point by applying a spatial domain reconstruction error and a plurality of successive frames using a restored frame output by combining the reconstructed restoring batch to a trained LSTM auto-encoder model to estimate an area where an abnormal object is positioned.
Type: Grant
Filed: December 24, 2020
Date of Patent: February 14, 2023
Assignee: INDUSTRY ACADEMY COOPERATION FOUNDATION OF SEJONG UNIVERSITY
Inventors: Yong Guk Kim, Long Thinh Nguyen
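The core idea, fusing a spatial-domain and a time-domain reconstruction error into one anomaly map, can be sketched generically in a few lines of Python; the min-max normalization, the equal weighting, and the random error maps are placeholders standing in for the trained auto-encoder and LSTM auto-encoder outputs, not the patented models.

```python
import numpy as np

def anomaly_map(spatial_error, temporal_error, alpha=0.5):
    """Fuse spatial and time-domain reconstruction error maps into one score map;
    high values mark regions where an abnormal object may be positioned."""
    def normalize(e):
        return (e - e.min()) / (e.max() - e.min() + 1e-8)
    return alpha * normalize(spatial_error) + (1 - alpha) * normalize(temporal_error)

# Per-pixel squared reconstruction errors from the two (already trained) models.
frame = np.random.rand(64, 64)
spatial_err = (frame - np.random.rand(64, 64)) ** 2
temporal_err = (frame - np.random.rand(64, 64)) ** 2
fused = anomaly_map(spatial_err, temporal_err)
print("abnormal region present:", bool((fused > 0.9).any()))
```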
-
Patent number: 11580653
Abstract: A method for ascertaining a depth information image for an input image. The input image is processed using a convolutional neural network, which includes multiple layers that sequentially process the input image, and each converts an input feature map into an output feature map. At least one of the layers is a depth map layer, the depth information image being ascertained as a function of a depth map layer. In the depth map layer, an input feature map of the depth map layer is convoluted with multiple scaling filters to obtain respective scaling maps, the multiple scaling maps are compared pixel by pixel to generate a respective output feature map in which each pixel corresponds to a corresponding pixel from a selected one of the scaling maps.
Type: Grant
Filed: April 10, 2019
Date of Patent: February 14, 2023
Assignee: Robert Bosch GmbH
Inventor: Konrad Groh
-
Patent number: 11568639
Abstract: Disclosed systems and methods relate to remote sensing, deep learning, and object detection. Some embodiments relate to machine learning for object detection, which includes, for example, identifying a class of pixel in a target image and generating a label image based on a parameter set. Other embodiments relate to machine learning for geometry extraction, which includes, for example, determining heights of one or more regions in a target image and determining a geometric object property in a target image. Yet other embodiments relate to machine learning for alignment, which includes, for example, aligning images via direct or indirect estimation of transformation parameters.
Type: Grant
Filed: September 15, 2021
Date of Patent: January 31, 2023
Assignee: Cape Analytics, Inc.
Inventors: Ryan Kottenstette, Peter Lorenzen, Suat Gedikli
-
Patent number: 11550289
Abstract: An object of the present invention is to provide a tool system allowing a tool to be controlled on a work object basis before the work is started. A tool system includes a portable tool and an identification unit. The tool includes a driving unit to operate with power supplied from a battery pack. The identification unit identifies, by a contactless method, a current work object, to which the tool is set in place, out of a plurality of work objects.
Type: Grant
Filed: November 30, 2017
Date of Patent: January 10, 2023
Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
Inventors: Kazuo Dobashi, Yukio Okada
-
Patent number: 11551059
Abstract: A modulated segmentation system can use a modulator network to emphasize spatial prior data of an object to track the object across multiple images. The modulated segmentation system can use a segmentation network that receives spatial prior data as intermediate data that improves segmentation accuracy. The segmentation network can further receive visual guide information from a visual guide network to increase tracking accuracy via segmentation.
Type: Grant
Filed: November 15, 2018
Date of Patent: January 10, 2023
Assignee: Snap Inc.
Inventors: Linjie Yang, Jianchao Yang, Xuehan Xiong, Yanran Wang
-
Patent number: 11532095
Abstract: There is provided an information processing apparatus. An acquisition unit acquires a plurality of pattern discrimination results each indicating a location of a pattern that is present in an image. A selection unit selects a predetermined number of pattern discrimination results from the plurality of pattern discrimination results. A determination unit determines whether or not the selected predetermined number of pattern discrimination results are to be merged, based on a similarity of the locations indicated by the predetermined number of pattern discrimination results. A merging unit merges the predetermined number of pattern discrimination results for which it was determined by the determination unit that merging is to be performed. A control unit controls the selection unit, the determination unit, and the merging unit to repeatedly perform respective processes.
Type: Grant
Filed: May 26, 2020
Date of Patent: December 20, 2022
Assignee: Canon Kabushiki Kaisha
Inventor: Tsewei Chen
-
Patent number: 11527242
Abstract: A lip-language identification method and an apparatus thereof, an augmented reality device and a storage medium. The lip-language identification method includes: acquiring a sequence of face images for an object to be identified; performing lip-language identification based on a sequence of face images so as to determine semantic information of speech content of the object to be identified corresponding to lip actions in a face image; and outputting the semantic information.
Type: Grant
Filed: April 24, 2019
Date of Patent: December 13, 2022
Assignee: Beijing BOE Technology Development Co., Ltd.
Inventors: Naifu Wu, Xitong Ma, Lixin Kou, Sha Feng
-
Patent number: 11499773
Abstract: A refrigerator according to the present invention comprises: a storage chamber for storing articles; a camera for photographing the inner space of the storage chamber; a control part which visually recognizes a first article image captured by the camera so as to acquire article information corresponding to the first article image; a memory for storing the acquired article information so as to generate an article image history; and a display electrically connected to the control part. Further, the control part may: acquire, through the camera, a second article image in which an article is partially hidden by any other article; detect, from the second article image, a partial article image of the article partially hidden by the other article; and identify an article matching the partial article image on the basis of the article image history.
Type: Grant
Filed: March 29, 2019
Date of Patent: November 15, 2022
Assignee: LG ELECTRONICS INC.
Inventor: Jichan Maeng
-
Patent number: 11501413
Abstract: Embodiments are disclosed for generating lens blur effects. The disclosed systems and methods comprise receiving a request to apply a lens blur effect to an image, the request identifying an input image and a first disparity map, generating a plurality of disparity maps and a plurality of distance maps based on the first disparity map, splatting influences of pixels of the input image using a plurality of reshaped kernel gradients, gathering aggregations of the splatted influences, and determining a lens blur for a first pixel of the input image in an output image based on the gathered aggregations of the splatted influences.
Type: Grant
Filed: November 17, 2020
Date of Patent: November 15, 2022
Assignee: Adobe Inc.
Inventors: Haiting Lin, Yumin Jia, Jen-Chan Chien
-
Patent number: 11495125
Abstract: A system comprises a computer including a processor, and a memory. The memory stores instructions such that the processor is programmed to determine two or more clusters of vehicle operating parameter values from each of a plurality of vehicles at a location within a time. Determining the two or more clusters includes clustering data from the plurality of vehicles based on proximity to two or more respective means. The processor is further programmed to determine a reportable condition when a mean for a cluster representing a greatest number of vehicles varies from a baseline by more than a threshold.
Type: Grant
Filed: March 1, 2019
Date of Patent: November 8, 2022
Assignee: Ford Global Technologies, LLC
Inventors: Linjun Zhang, Juan Enrique Castorena Martinez, Codrin Cionca, Mostafa Parchami
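A minimal Python sketch of the clustering-and-threshold check described here, using simple two-means clustering of one operating parameter; the parameter (ambient temperature reports), baseline, threshold, and initialization are assumptions for illustration.

```python
import numpy as np

def reportable_condition(values, baseline, threshold, n_iters=20):
    """Cluster parameter values from many vehicles around two means, then flag a
    reportable condition when the larger cluster's mean differs from the baseline
    by more than the threshold."""
    values = np.asarray(values, dtype=float)
    means = np.array([values.min(), values.max()])      # simple initialisation
    for _ in range(n_iters):                            # 1-D two-means clustering
        assign = np.abs(values[:, None] - means[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(assign == k):
                means[k] = values[assign == k].mean()
    biggest = np.bincount(assign, minlength=2).argmax()
    return abs(means[biggest] - baseline) > threshold

readings = [21.0, 21.5, 20.8, 35.2, 21.1, 20.9]  # hypothetical reports at one location
print(reportable_condition(readings, baseline=25.0, threshold=2.0))
```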
-
Patent number: 11495231
Abstract: A lip language recognition method, applied to a mobile terminal having a sound mode and a silent mode, includes: training a deep neural network in the sound mode; collecting a user's lip images in the silent mode; and identifying content corresponding to the user's lip images with the deep neural network trained in the sound mode. The method further includes: switching from the sound mode to the silent mode when a privacy need of the user arises.
Type: Grant
Filed: November 26, 2018
Date of Patent: November 8, 2022
Assignee: BEIJING BOE TECHNOLOGY DEVELOPMENT CO., LTD.
Inventors: Lihua Geng, Xitong Ma, Zhiguo Zhang
-
Patent number: 11496333
Abstract: Presented herein is an audio reaction system and method for virtual/online meeting platforms where a participant provides a reaction (applause, laughter, wow, etc.) to something the presenter said or did. The trigger is a reaction feature in which participants press an emoticon button in a user interface or activate some other user interface function to initiate a message indicating a reaction.
Type: Grant
Filed: September 24, 2021
Date of Patent: November 8, 2022
Assignee: CISCO TECHNOLOGY, INC.
Inventor: Tore Bjølseth
-
Patent number: 11494590
Abstract: An apparatus comprising memory configured to store data to be machine-recognized (710), and at least one processing core configured to run an adaptive boosting machine learning algorithm with the data, wherein a plurality of learning algorithms are applied, wherein a feature space is partitioned into bins, wherein a distortion function is applied to features of the feature space (720), and wherein a first derivative of the distortion function is not constant (730).
Type: Grant
Filed: February 2, 2016
Date of Patent: November 8, 2022
Assignee: Nokia Technologies OY
Inventor: Chubo Shang
-
Patent number: 11488352
Abstract: Various implementations disclosed herein include devices, systems, and methods for modeling a geographical space for a computer-generated reality (CGR) experience. In some implementations, a method is performed by a device including a non-transitory memory and one or more processors coupled with the non-transitory memory. In some implementations, the method includes obtaining a set of images. In some implementations, the method includes providing the set of images to an image classifier that determines whether the set of images correspond to a geographical space. In some implementations, the method includes establishing correspondences between at least a subset of the set of images in response to the image classifier determining that the subset of images correspond to the geographical space. In some implementations, the method includes synthesizing a model of the geographical space based on the correspondences between the subset of images.
Type: Grant
Filed: January 20, 2020
Date of Patent: November 1, 2022
Assignee: APPLE INC.
Inventor: Daniel Kurz
-
Patent number: 11481975
Abstract: An image processing method and apparatus, and a computer-readable storage medium are provided. The method includes: determining a first region matching a target object in a first image; determining a deformation parameter based on a preset deformation effect, the deformation parameter being used for determining a position deviation, generated based on the preset deformation effect, of each pixel point of the target object; and performing deformation processing on the target object in the first image based on the deformation parameter to obtain a second image.
Type: Grant
Filed: October 19, 2020
Date of Patent: October 25, 2022
Assignee: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD.
Inventors: Yuanzhen Hao, Mingyang Huang, Jianping Shi
-
Patent number: 11475684
Abstract: An image may be evaluated by a computer vision system to determine whether it is fit for analysis. The computer vision system may generate an embedding of the image. An embedding quality score (EQS) of the image may be determined based on the image's embedding and a reference embedding associated with a cluster of reference noisy images. The quality of the image may be evaluated based on the EQS of the image to determine whether the quality meets filter criteria. The image may be further processed when the quality is sufficient, or otherwise the image may be removed.
Type: Grant
Filed: March 25, 2020
Date of Patent: October 18, 2022
Assignee: Amazon Technologies, Inc.
Inventors: Siqi Deng, Yuanjun Xiong, Wei Li, Shuo Yang, Wei Xia, Meng Wang
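A minimal Python sketch of one plausible EQS: score an image embedding by its distance to a reference embedding representing a cluster of known-noisy images, then apply a filter threshold. The distance-to-score mapping, the threshold, and the zero-vector "noisy centroid" are illustrative assumptions, not the patented definition.

```python
import numpy as np

def embedding_quality_score(embedding, noisy_reference):
    """Score an embedding by its distance to the reference noisy-cluster embedding:
    the farther from the noisy cluster, the higher the score (squashed to (0, 1))."""
    distance = np.linalg.norm(embedding - noisy_reference)
    return 1.0 - np.exp(-distance)

EQS_THRESHOLD = 0.5  # assumed filter criterion

image_embedding = np.random.rand(128)
noisy_centroid = np.zeros(128)   # stand-in for the reference noisy-cluster embedding
eqs = embedding_quality_score(image_embedding, noisy_centroid)
if eqs >= EQS_THRESHOLD:
    print("image passes the quality filter; continue processing")
else:
    print("image removed as unfit for analysis")
```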
-
Patent number: 11475537
Abstract: A processor-implemented image normalization method includes extracting a first object patch from a first input image and extracting a second object patch from a second input image based on an object area that includes an object detected from any one or any combination of the first input image and the second input image, determining, based on a first landmark detected from the first object patch, a second landmark of the second object patch; and normalizing the first object patch and the second object patch based on the first landmark and the second landmark.
Type: Grant
Filed: March 15, 2021
Date of Patent: October 18, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Seungju Han, Minsu Ko, Changyong Son, Jaejoon Han
-
Patent number: 11475240
Abstract: Embodiments relate to generating keypoint descriptors of the keypoints. An apparatus includes a pyramid image generator circuit and a keypoint descriptor generator circuit. The pyramid image generator circuit generates an image pyramid from an input image. The keypoint descriptor generator circuit determines intensity values of sample points in the pyramid images for a keypoint and determines comparison results of comparisons between the intensity values of pairs of the sample points. The keypoint descriptor generator circuit generates bit values defining the comparison results for the keypoint, each bit value corresponding with one of the comparison results, and generates a sequence of the bit values defining an ordering of the comparison results based on importance levels of the comparisons, where the importance level of each comparison defines how much the comparison is representative of features. Bit values for comparisons having the lowest importance levels may be excluded from the sequence.
Type: Grant
Filed: March 19, 2021
Date of Patent: October 18, 2022
Assignee: Apple Inc.
Inventors: Liran Fishel, Assaf Metuki, Chuhan Min, Wai Yu Trevor Tsang
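A minimal Python sketch of a BRIEF-style binary descriptor in this spirit: intensity comparisons between sample-point pairs become bits, the bits are ordered by an importance level per comparison, and the least important bits are dropped. The random sample pairs, random importance levels, and single-level (non-pyramid) patch are placeholders, not the patented circuit design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed sample-point pairs around the keypoint (pixel offsets) and an assumed
# importance level per comparison; in practice these come from offline analysis.
PAIRS = rng.integers(-8, 9, size=(32, 2, 2))
IMPORTANCE = rng.random(32)

def keypoint_descriptor(patch, keypoint, keep=24):
    """Compare intensities of sample-point pairs, order the resulting bits by
    importance, and exclude the lowest-importance comparisons."""
    y, x = keypoint
    bits = []
    for (dy1, dx1), (dy2, dx2) in PAIRS:
        bits.append(int(patch[y + dy1, x + dx1] < patch[y + dy2, x + dx2]))
    order = np.argsort(IMPORTANCE)[::-1]        # most important comparison first
    return np.array(bits)[order][:keep]         # drop the least important bits

image = rng.integers(0, 256, size=(64, 64))
print(keypoint_descriptor(image, keypoint=(32, 32)))
```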
-
Patent number: 11475711
Abstract: A non-transitory computer-readable recording medium stores therein a judgment program that causes a computer to execute a process including acquiring a captured image including a face to which a plurality of markers are attached at a plurality of positions that are associated with a plurality of action units, specifying each of the positions of the plurality of markers included in the captured image, judging an occurrence intensity of a first action unit associated with a first marker from among the plurality of action units based on a judgment criterion of an action unit and a position of the first marker from among the plurality of markers, and outputting the occurrence intensity of the first action unit by associating the occurrence intensity with the captured image.
Type: Grant
Filed: December 14, 2020
Date of Patent: October 18, 2022
Assignee: Fujitsu Limited
Inventors: Akiyoshi Uchida, Junya Saito, Akihito Yoshii
-
Patent number: 11462002
Abstract: Disclosed are a wallpaper management method and apparatus, a mobile terminal, and a storage medium. The method includes: determining a wallpaper to be switched; obtaining feature information of the wallpaper to be switched, and comparing the feature information of the wallpaper to be switched with the feature information of wallpapers in a feature database, to determine, in the feature database, a wallpaper matching the wallpaper to be switched; and performing wallpaper switching according to feature information corresponding to the matching wallpaper.
Type: Grant
Filed: July 25, 2019
Date of Patent: October 4, 2022
Assignee: ZTE Corporation
Inventor: Lan Luan
-
Patent number: 11462112
Abstract: A method is provided in an Advanced Driver-Assistance System (ADAS). The method extracts, from an input video stream including a plurality of images using a multi-task Convolutional Neural Network (CNN), shared features across different perception tasks. The perception tasks include object detection and other perception tasks. The method concurrently solves, using the multi-task CNN, the different perception tasks in a single pass by concurrently processing corresponding ones of the shared features by respective different branches of the multi-task CNN to provide a plurality of different perception task outputs. Each respective different branch corresponds to a respective one of the different perception tasks. The method forms a parametric representation of a driving scene as at least one top-view map responsive to the plurality of different perception task outputs.
Type: Grant
Filed: February 11, 2020
Date of Patent: October 4, 2022
Inventors: Quoc-Huy Tran, Samuel Schulter, Paul Vernaza, Buyu Liu, Pan Ji, Yi-Hsuan Tsai, Manmohan Chandraker
-
Patent number: 11457200
Abstract: Aspects of the subject disclosure may include, for example, a device, that includes a processing system including a processor and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations including receiving a manifest for a point cloud, wherein the point cloud is partitioned into a plurality of cells; determining an occlusion level for a cell of the plurality of cells with respect to a predicted viewport; reducing a point density for the cell provided in the manifest based on the occlusion level, thereby determining a reduced point density; and requesting delivery of points in the cell, based on the reduced point density. Other embodiments are disclosed.
Type: Grant
Filed: March 20, 2020
Date of Patent: September 27, 2022
Assignee: AT&T Intellectual Property I, L.P.
Inventors: Bo Han, Cheuk Yiu Ip, Jackson Jarrell Pair
-
Patent number: 11451718
Abstract: Alternating Current (AC) light sources can cause images captured using a rolling shutter to include alternating darker and brighter regions—known as flicker bands—due to some sensor rows being exposed to different intensities of light than others. Flicker bands may be compensated for by extracting them from images that are captured using exposures that at least partially overlap in time. Due to the overlap, the images may be subtracted from each other so that scene content substantially cancels out, leaving behind flicker bands. The images may be for a same frame captured by at least one sensor, such as different exposures for a frame. For example, the images used to extract flicker bands may be captured using different exposure times that share a common start time, such as using a multi-exposure sensor where light values are read out at different times during light integration.
Type: Grant
Filed: March 12, 2021
Date of Patent: September 20, 2022
Assignee: NVIDIA Corporation
Inventor: Hugh Phu Nguyen
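A minimal NumPy sketch of the subtraction idea: two exposures of the same frame that share a start time are scaled to a common exposure level and subtracted, so scene content largely cancels and the row-wise residual approximates the flicker-band profile. The synthetic scene, sinusoidal banding, and simple linear exposure scaling are assumptions for illustration.

```python
import numpy as np

def extract_flicker_profile(short_img, long_img, short_time, long_time):
    """Subtract two overlapping exposures of the same frame after scaling for
    exposure duration; the residual, averaged per row, is the flicker profile."""
    scaled_short = short_img.astype(np.float32) * (long_time / short_time)
    residual = long_img.astype(np.float32) - scaled_short
    return residual.mean(axis=1)          # one flicker value per sensor row

# Toy multi-exposure frame: both exposures start together, one integrates 4x longer.
rows, cols = 480, 640
scene = np.random.rand(rows, cols) * 100
bands = 10 * np.sin(np.arange(rows) / 8.0)[:, None]    # AC-light induced banding
short_exposure = scene
long_exposure = scene * 4.0 + bands                    # bands appear in the longer exposure
profile = extract_flicker_profile(short_exposure, long_exposure, short_time=1.0, long_time=4.0)
print(profile[:5])   # per-row flicker estimate that can be compensated out
```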