Patents Examined by Ming Y Hon
-
Patent number: 11861895
Abstract: A system for detecting and tracking an object that is exhibiting an anomalous behavior from a helicopter is disclosed. The system includes a search light connected to the helicopter; a camera; and a processor including an object detection module coupled to the search light and the camera, the object detection module being configured to receive a plurality of images from the camera, compare the plurality of images against a pattern database, determine the object is exhibiting the anomalous behavior, and instruct the search light to point toward the object.
Type: Grant
Filed: December 27, 2020
Date of Patent: January 2, 2024
Assignee: GOODRICH CORPORATION
Inventors: Nitin Kumar Goyal, Srinivas Sanga
-
Patent number: 11842571
Abstract: The present disclosure provides for using multiple inertial measurement units (IMUs) to recognize particular user activity, such as particular types of exercises and repetitions of such exercises. The IMUs may be located in consumer products, such as smartwatches and earbuds. Each IMU may include an accelerometer and a gyroscope, each with three axes of measurement, for a total of 12 raw measurement streams. A training image includes a plurality of subplots or tiles, each depicting a separate data stream. The training image is then used to train a machine learning model to recognize IMU data as corresponding to a particular type of exercise.
Type: Grant
Filed: July 29, 2020
Date of Patent: December 12, 2023
Assignee: Google LLC
Inventors: Mark Fralick, Brian Chen
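A minimal sketch of the tiling idea in this abstract: rendering each raw IMU stream as one tile of a stacked "training image" that a vision model can consume. The tile sizes, the resampling step, and the function name are illustrative assumptions, not details from the patent.

```python
import numpy as np

def streams_to_training_image(streams, tile_h=32, tile_w=64):
    """Render each IMU data stream as one tile of a grid 'image'.

    `streams` is a list of 1-D arrays (e.g. 12 streams: accelerometer +
    gyroscope, three axes each, from two IMUs). Each stream is resampled
    to `tile_w` points, min-max normalized into [0, 1], and broadcast
    into a tile of pixel intensities; tiles are stacked vertically.
    """
    tiles = []
    for s in streams:
        s = np.asarray(s, dtype=float)
        # Resample to a fixed width so every tile has the same shape.
        x = np.linspace(0, len(s) - 1, tile_w)
        s = np.interp(x, np.arange(len(s)), s)
        # Min-max normalize (guard against a constant stream).
        rng = s.max() - s.min()
        s = (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
        # Broadcast the normalized trace into a tile_h x tile_w tile.
        tiles.append(np.tile(s, (tile_h, 1)))
    return np.vstack(tiles)

streams = [np.sin(np.linspace(0, 2 * np.pi, 100) * (i + 1)) for i in range(12)]
img = streams_to_training_image(streams)
print(img.shape)  # (384, 64): 12 tiles of 32x64 stacked vertically
```

The resulting array could then be fed to any off-the-shelf image classifier, which is the appeal of the approach: time-series recognition is reduced to image recognition.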
-
Patent number: 11837025
Abstract: Broadly speaking, the present techniques relate to a method and apparatus for performing action recognition, and in particular to a computer-implemented method for performing action recognition on resource-constrained or lightweight devices such as smartphones. The ML model may be adjusted to achieve required accuracy and efficiency levels, while also taking into account the computational capability of the apparatus that is being used to implement the ML model. One way is to adjust the number of channels assigned to the first set of channels, i.e. the full temporal resolution channels. Another way is to adjust the point in the ML model where the temporal pooling layer or layers are applied.
Type: Grant
Filed: February 16, 2021
Date of Patent: December 5, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Brais Martinez, Tao Xiang, Victor Augusto Escorcia, Juan Perez-Rua, Xiatian Zhu, Antoine Toisoul
-
Patent number: 11836961
Abstract: An information processing apparatus includes a map creation unit configured to create a defocus map corresponding to a captured image of a subject, an object setting unit configured to set a recognition target, and a determination unit configured to determine, based on the defocus map, whether the recognition target is recognizable in the image.
Type: Grant
Filed: March 12, 2021
Date of Patent: December 5, 2023
Assignee: Canon Kabushiki Kaisha
Inventors: Shoichi Hoshino, Atsushi Nogami, Yusuke Mitarai
-
Patent number: 11823457
Abstract: An image recognition method may include: acquiring a target image, where the target image may include a weld bead region; performing initial segmentation on the target image, to obtain a first recognition result, where the first recognition result may include first recognition information for the weld bead region in the target image; performing feature extraction on the target image, to obtain a region representation; obtaining a context representation based on the first recognition result and the region representation, where the context representation may be used for representing a correlation between each pixel and remaining pixels in the target image; and obtaining a second recognition result based on the context representation, where the second recognition result may include second recognition information for the weld bead region in the target image.
Type: Grant
Filed: April 6, 2023
Date of Patent: November 21, 2023
Assignee: CONTEMPORARY AMPEREX TECHNOLOGY CO., LIMITED
Inventors: Guannan Jiang, Qiangwei Huang, Annan Shu
-
Patent number: 11816881
Abstract: Disclosed are a multiple object detection method and apparatus. The multiple object detection apparatus includes a feature map extraction unit for extracting a plurality of multi-scale feature maps based on an input image, and a feature map fusion unit for generating a multi-scale fusion feature map including context information by fusing adjacent multi-scale feature maps among the plurality of multi-scale feature maps generated by the feature map extraction unit.
Type: Grant
Filed: July 12, 2022
Date of Patent: November 14, 2023
Assignee: CHUNG ANG UNIVERSITY INDUSTRY ACADEMIC COOPERATION
Inventors: Joon Ki Paik, Sang Woo Park, Dong Geun Kim, Dong Goo Kang
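The fusion of adjacent multi-scale feature maps described here can be sketched with plain NumPy: upsample the coarser neighbor to the finer map's resolution, then concatenate along the channel axis. Nearest-neighbor upsampling and channel concatenation are illustrative choices, not the patent's specified operators.

```python
import numpy as np

def fuse_adjacent(feature_maps):
    """Fuse each feature map with its coarser (adjacent-scale) neighbor.

    `feature_maps` is a list of (C, H, W) arrays ordered fine-to-coarse,
    with each scale exactly halving the previous one. The coarser map is
    nearest-neighbor upsampled to the finer map's spatial size, then the
    two are concatenated along channels, yielding context-enriched maps.
    """
    fused = []
    for fine, coarse in zip(feature_maps, feature_maps[1:]):
        _, h, w = fine.shape
        _, hh, ww = coarse.shape
        # Nearest-neighbor upsample: repeat rows and columns by the scale factor.
        up = coarse.repeat(h // hh, axis=1).repeat(w // ww, axis=2)
        fused.append(np.concatenate([fine, up], axis=0))
    return fused

maps = [np.ones((8, 16, 16)), np.ones((8, 8, 8)), np.ones((8, 4, 4))]
out = fuse_adjacent(maps)
print([m.shape for m in out])  # [(16, 16, 16), (16, 8, 8)]
```

Fusing only *adjacent* scales, rather than all scales at once, keeps the channel count and memory cost of each fused map bounded.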
-
Patent number: 11816710
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for converting unstructured documents to structured key-value pairs. In one aspect, a method includes: providing an image of a document to a detection model, wherein: the detection model is configured to process the image to generate an output that defines one or more bounding boxes generated for the image; and each bounding box generated for the image is predicted to enclose a key-value pair including key textual data and value textual data, wherein the key textual data defines a label that characterizes the value textual data; and for each of the one or more bounding boxes generated for the image: identifying textual data enclosed by the bounding box using an optical character recognition technique; and determining whether the textual data enclosed by the bounding box defines a key-value pair.
Type: Grant
Filed: March 1, 2022
Date of Patent: November 14, 2023
Assignee: Google LLC
Inventors: Yang Xu, Jiang Wang, Shengyang Dai
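The final step of the pipeline above — deciding whether OCR text from a bounding box defines a key-value pair — could be stubbed with a simple delimiter heuristic. This is a hypothetical stand-in for the patent's learned determination step, shown only to make the data flow concrete.

```python
def parse_key_value(box_text):
    """Decide whether OCR text from one bounding box forms a key-value pair.

    Heuristic stand-in: split on the first colon; the left side is the
    key (the label characterizing the value), the right side the value.
    Returns (key, value), or None when no pair is present.
    """
    if ":" not in box_text:
        return None
    key, _, value = box_text.partition(":")
    key, value = key.strip(), value.strip()
    if not key or not value:
        return None
    return (key, value)

print(parse_key_value("Invoice Date: 2023-11-14"))  # ('Invoice Date', '2023-11-14')
print(parse_key_value("TOTAL"))                     # None
```

In the actual system this decision is made per bounding box emitted by the detection model, so a box enclosing only a heading or a page number is filtered out rather than forced into a pair.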
-
Patent number: 11810366
Abstract: Disclosed are a joint modeling method and apparatus for enhancing local features of pedestrians. The method includes the following steps: S1: acquiring an original surveillance video image data set, and dividing the original surveillance video image data set into a training set and a test set in proportion; S2: cutting the surveillance video image training set to obtain image block vector sequences. In the present disclosure, local features of pedestrians in video images are extracted by a multi-head attention neural network, weight parameters of image channels are learned by channel convolution kernels, spatial features on the images are scanned through spatial convolution, and local features of pedestrians are enhanced to improve the recognition rate of pedestrians; a feed-forward neural network and an activation function are adopted to realize pedestrian re-recognition, thereby obtaining usable face images.
Type: Grant
Filed: November 30, 2022
Date of Patent: November 7, 2023
Assignee: ZHEJIANG LAB
Inventors: Hongsheng Wang, Guang Chen
-
Patent number: 11804039
Abstract: In accordance with some embodiments, systems, apparatus, interfaces, methods, and articles of manufacture are provided for providing information about objects, such as background information and task information, and for providing alerts related to objects. In various embodiments, data is captured about an object and about a user via a camera. Based on the data, information about the object may be provided to the user.
Type: Grant
Filed: May 28, 2021
Date of Patent: October 31, 2023
Assignee: SCIENCE HOUSE LLC
Inventors: James Jorasch, Isaac W. Hock, Geoffrey Gelman, Michael Werner, Gennaro Rendino, Christopher Capobianco
-
Patent number: 11804029
Abstract: The present disclosure relates to a hierarchical constraint (HC) based method and system for classifying fine-grained graptolite images. The method includes: constructing a graptolite fossil dataset; extracting features in graptolite images; calculating the similarity between graptolite images, and performing weighting according to a genetic relationship among species to obtain a weighted HC loss function (HC-Loss) of all graptolite images; calculating cross-entropy loss (CE-Loss); taking a weighted sum of HC-Loss and CE-Loss as a total loss function in a training stage; and performing model training. The system of the present disclosure includes a processor and a memory.
Type: Grant
Filed: December 28, 2022
Date of Patent: October 31, 2023
Assignees: Nanjing Institute of Geology and Palaeontology, CAS, Tianjin University
Inventors: Honghe Xu, Yaohua Pan, Zhibin Niu
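The total loss described here — a weighted sum of a kinship-weighted hierarchical-constraint term and cross-entropy — can be sketched as follows. The exact form of HC-Loss, the weighting scheme, and all names are assumptions for illustration; only the "HC-Loss plus weighted CE-Loss" structure comes from the abstract.

```python
import numpy as np

def total_loss(probs, label, sim_matrix, kinship_weights, lam=0.5):
    """Weighted sum of a hierarchical-constraint loss and cross-entropy.

    `probs` are softmax outputs for one image, `label` the true class
    index, `sim_matrix[i, j]` a pairwise image-similarity score, and
    `kinship_weights[i, j]` a weight from the genetic relationship
    between species i and j (closer relatives weigh more). `lam`
    balances the two terms.
    """
    ce = -np.log(probs[label] + 1e-12)         # cross-entropy (CE-Loss)
    hc = np.sum(kinship_weights * sim_matrix)  # weighted HC-Loss
    return hc + lam * ce

probs = np.array([0.7, 0.2, 0.1])
sim = np.ones((2, 2))
weights = np.full((2, 2), 0.25)
loss = total_loss(probs, 0, sim, weights)
print(round(loss, 4))  # 1.1783: hc = 1.0 plus 0.5 * -ln(0.7)
```

The hierarchical term penalizes the model for confusing distantly related species more than closely related ones, which is the point of encoding the genetic relationship into the weights.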
-
Patent number: 11797854
Abstract: An image processing device has circuitry, which is configured to obtain image data, the image data being generated on the basis of a non-linear mapping defining a mapping between an object plane and an image plane; and to process the image data by applying a kernel of an artificial network to the image data based on the non-linear mapping.
Type: Grant
Filed: June 26, 2020
Date of Patent: October 24, 2023
Assignee: Sony Semiconductor Solutions Corporation
Inventors: Lev Markhasin, Yalcin Incesu
-
System and method to use machine learning to ensure proper installation and/or repair by technicians
Patent number: 11790521
Abstract: A system for installation or repair work includes a mobile device and a central server. The mobile device includes a camera and a first processor. The first processor is configured to execute processing instructions including an algorithm to evaluate photographs recorded by the camera. The central server is configured to wirelessly communicate with the mobile device. The central server includes a second processor configured to execute control instructions stored on a second memory to cause the central server to: (i) receive at least one photograph evaluated by the first processor of the mobile device; (ii) perform machine learning using the at least one photograph to improve the algorithm used to evaluate the at least one photograph by the first processor; (iii) update the processing instructions using the improved algorithm; and (iv) transmit the updated processing instructions to the mobile device to enable evaluation of a subsequent photograph.
Type: Grant
Filed: April 10, 2020
Date of Patent: October 17, 2023
Assignee: HUGHES NETWORK SYSTEMS, LLC
Inventor: Anurag Bhatnagar
-
Patent number: 11790682
Abstract: An apparatus for performing image analysis to identify human actions represented in an image, comprising: a joint-determination module configured to analyse an image depicting one or more people using a first computational neural network to determine a set of joint candidates for the one or more people depicted in the image; a pose-estimation module configured to derive pose estimates from the set of joint candidates that estimate a body configuration for the one or more people depicted in the image; and an action-identification module configured to analyse a region of interest within the image identified from the derived pose estimates using a second computational neural network to identify an action performed by a person depicted in the image.
Type: Grant
Filed: July 29, 2021
Date of Patent: October 17, 2023
Assignee: STANDARD COGNITION, CORP.
Inventors: Razwan Ghafoor, Peter Rennert, Hichame Moriceau
-
Patent number: 11790490
Abstract: An apparatus and method for efficiently improving virtual/real interactions in augmented reality. For example, one embodiment of a method comprises: capturing a raw image including depth data; identifying one or more regions of interest based on a detected spatial proximity of one or more virtual objects and one or more real objects; generating a super-resolution map of the one or more regions of interest using machine-learning techniques or results thereof; detecting interactions between the virtual objects and the real objects using the super-resolution map; and performing one or more graphics processing or general purpose processing operations based on the detected interactions.
Type: Grant
Filed: September 28, 2021
Date of Patent: October 17, 2023
Assignee: INTEL CORPORATION
Inventors: Zhengmin Li, Atsuo Kuwahara, Deepak Vembar
-
Patent number: 11782453
Abstract: An image-based position assessment method is performed in conjunction with a mechanism onboard an agricultural machine and containing a traveling component. In embodiments, the method includes receiving, at an image processing system, camera images captured by a diagnostic camera mounted to the agricultural machine. A field of view of the diagnostic camera at least partially encompasses an intended motion path along which the traveling component is configured to travel. The image processing system analyzes the camera images to determine whether a recorded time-dependent component position of the traveling component deviates excessively from an expected time-dependent component position of the traveling component, as taken along the intended motion path.
Type: Grant
Filed: February 16, 2021
Date of Patent: October 10, 2023
Assignee: DEERE & COMPANY
Inventor: Timothy J. Kraus
-
Patent number: 11783567
Abstract: Embodiments may: select a set of training images; extract a first set of features from each training image of the set of training images to generate a first feature tensor for each training image; extract a second set of features from each training image to generate a second feature tensor for each training image; reduce a dimensionality of each first feature tensor to generate a first modified feature tensor for each training image; reduce a dimensionality of each second feature tensor to generate a second modified feature tensor for each training image; construct a first generative model representing the first set of features and a second generative model representing the second set of features of the set of training images; identify a first candidate image; and apply a regression algorithm to the first candidate image and each of the first generative model and the second generative model to determine whether the first candidate image is similar to the set of training images.
Type: Grant
Filed: April 25, 2023
Date of Patent: October 10, 2023
Assignee: Vizit Labs, Inc.
Inventors: Jehan Hamedi, Zachary Halloran, Elham Saraee
-
Patent number: 11776240
Abstract: A squeeze-enhanced axial transformer (SeaFormer) for mobile semantic segmentation is disclosed, including a shared stem, a context branch, a spatial branch, a fusion module and a light segmentation head, wherein the shared stem produces a feature map; the context branch obtains context-rich information; the spatial branch obtains spatial information; the fusion module incorporates the features in the context branch into the spatial branch; and the light segmentation head receives the features from the fusion module and outputs the results. This application also relates to the layers of the SeaFormer, as well as methods thereof.
Type: Grant
Filed: January 27, 2023
Date of Patent: October 3, 2023
Assignee: FUDAN UNIVERSITY
Inventors: Li Zhang, Qiang Wan, Jiachen Lu, Zilong Huang, Gang Yu
-
Patent number: 11768913
Abstract: A method may include executing a neural network to extract a first plurality of features from a plurality of first training images and a second plurality of features from a second training image; generating a model comprising a first image performance score for each of the plurality of first training images and a feature weight for each feature, the feature weight for each feature of the first plurality of features calculated based on an impact of a variation in the feature on first image performance scores of the plurality of first training images; training the model by adjusting the impact of a variation of each of a first set of features that correspond to the second plurality of features; executing the model using a third set of features from a candidate image to generate a candidate image performance score; and generating a record identifying the candidate image performance score.
Type: Grant
Filed: November 3, 2022
Date of Patent: September 26, 2023
Assignee: Vizit Labs, Inc.
Inventors: Elham Saraee, Jehan Hamedi, Zachary Halloran
-
Patent number: 11752637
Abstract: A method for controlling a robotic device based on observed object locations includes obtaining a group of observations of an object type in an environment based on identifying one or more first objects associated with the object type over a period of time in the environment. The method also includes localizing the object type in the environment based on a location cluster comprising a group of nodes, each node in the group of nodes being associated with one observation of the group of observations. The method further includes generating a cost map indicating a probability distribution of the object type in relation to the localized object type in the environment. The method still further includes controlling the robotic device to perform an action in the environment based on the cost map and a map of the environment.
Type: Grant
Filed: May 2, 2022
Date of Patent: September 12, 2023
Assignee: TOYOTA RESEARCH INSTITUTE, INC.
Inventor: Brandon Northcutt
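The cost-map step above — turning a cluster of observation nodes into a probability distribution over the environment — can be sketched by summing a Gaussian bump per node and normalizing. The Gaussian kernel and grid representation are illustrative assumptions; the patent does not specify the distribution's form.

```python
import numpy as np

def build_cost_map(nodes, grid_shape, sigma=1.5):
    """Build a cost map as a probability distribution over a grid.

    Each node (row, col) from the location cluster contributes a
    Gaussian bump; the sum is normalized so the map sums to 1, giving
    the likelihood of finding the object type at each grid cell.
    """
    rows, cols = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
    cost = np.zeros(grid_shape, dtype=float)
    for r, c in nodes:
        cost += np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
    return cost / cost.sum()

# Three nearby observations of the same object type.
cost_map = build_cost_map([(5, 5), (6, 5), (5, 6)], (20, 20))
print(round(cost_map.sum(), 6))  # 1.0
```

A planner can then combine this distribution with the environment map, e.g. by steering the robot toward high-probability cells when searching for the object.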
-
Patent number: 11727729
Abstract: Disclosed herein is a signature verification apparatus including a first verification circuitry verifying a user's signature by comparing dynamic signature data indicating a change of a user's writing state over time during signing of his or her name by the user and reference data for dynamic signature, a second verification circuitry verifying the user's signature by comparing static signature data indicating a writing path during signing of his or her name by the user and reference data for static signature, and a data registration circuitry registering the reference data for dynamic signature and registering the reference data for static signature. In a case where the reference data for static signature has yet to be registered by the data registration circuitry, the second verification circuitry verifies the user's signature by regarding static signature data generated from the reference data for dynamic signature already registered as the reference data for static signature.
Type: Grant
Filed: August 6, 2021
Date of Patent: August 15, 2023
Assignee: Wacom Co., Ltd.
Inventor: Nicholas Victor Mettyear
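The fallback described in this abstract — deriving static-signature reference data from the already-registered dynamic data — can be sketched as follows. The sample format, tolerance comparison, and all names are hypothetical; only the "derive static from dynamic when no static reference exists" idea comes from the patent.

```python
def static_from_dynamic(dynamic_samples):
    """Derive static-signature reference data from dynamic data.

    Dynamic samples are (x, y, pressure, t) tuples capturing the writing
    state over time; the static writing path keeps only the pen-down
    (x, y) points. This is the fallback used when no static reference
    has been registered yet.
    """
    return [(x, y) for (x, y, pressure, t) in dynamic_samples if pressure > 0]

def verify_static(path, reference_path, tol=2.0):
    """Verify a writing path against a reference path point-by-point."""
    if len(path) != len(reference_path):
        return False
    return all(abs(x - rx) <= tol and abs(y - ry) <= tol
               for (x, y), (rx, ry) in zip(path, reference_path))

dynamic_ref = [(0, 0, 0.8, 0.0), (1, 2, 0.9, 0.1), (2, 1, 0.0, 0.2), (3, 3, 0.7, 0.3)]
reference_path = static_from_dynamic(dynamic_ref)  # pen-up sample dropped
print(reference_path)                              # [(0, 0), (1, 2), (3, 3)]
print(verify_static([(0, 1), (1, 2), (3, 2)], reference_path))  # True
```

Because dynamic data is a superset of static data (it records the path *plus* timing and pressure), the projection only ever needs to go in this one direction, which is why the apparatus can bootstrap static verification without a separate enrollment step.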