Patents Examined by Andrews W. Johns
-
Patent number: 11972634
Abstract: An image processing method includes receiving an image frame, detecting a face region of a user in the image frame, aligning a plurality of preset feature points in a plurality of feature portions included in the face region, performing a first check on a result of the aligning based on a first region corresponding to a combination of the feature portions, performing a second check on the result of the aligning based on a second region corresponding to an individual feature portion of the feature portions, redetecting a face region based on a determination of a failure in passing at least one of the first check or the second check, and outputting information on the face region based on a determination of a success in passing the first check and the second check.
Type: Grant
Filed: December 22, 2022
Date of Patent: April 30, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Dongwoo Kang, Jingu Heo, Byong Min Kang
-
Patent number: 11961241
Abstract: Examples disclosed herein may involve a computing system that is operable to (i) receive a sequence of images captured by a camera associated with a vehicle, (ii) for each of at least a subset of the received images in which a given agent is detected, (a) generate a respective pixel mask that identifies a boundary of the given agent within the image, (b) identify, as a tracking point for the given agent within the image, at least one given pixel within the pixel mask that is representative of an estimated intersection point between the given agent and a ground plane, and (c) determine a position of the given agent at the capture time of the image based on the tracking point and information regarding the ground plane, and (iii) determine a trajectory for the given agent based on the determined positions of the given agent.
Type: Grant
Filed: July 7, 2020
Date of Patent: April 16, 2024
Assignee: Lyft, Inc.
Inventors: Lorenzo Peppoloni, Michal Witkowski
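As a rough illustration of step (b), one plausible way to pick a tracking point from a pixel mask is the lowest mask pixel, centered among candidates in its bottom row. This is a sketch under assumptions; the abstract does not fix a specific selection rule.

```python
def tracking_point(mask_pixels):
    """Pick a pixel representative of the agent/ground-plane contact:
    the lowest row of the mask, taking the middle column in that row.
    `mask_pixels` is a set of (row, col) pairs inside the agent's boundary."""
    bottom = max(r for r, c in mask_pixels)
    cols = sorted(c for r, c in mask_pixels if r == bottom)
    return (bottom, cols[len(cols) // 2])

# The lowest row is 2; its columns are 0, 2, 4, so the center column is 2.
point = tracking_point({(1, 1), (2, 0), (2, 2), (2, 4)})  # -> (2, 2)
```

Given a flat ground plane and camera calibration, that image point could then be unprojected to a world position at each capture time, yielding the per-frame positions that the trajectory is fit to.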
-
Patent number: 11954868
Abstract: Methods, systems, and apparatus, including computer programs encoded on a storage device, for tracking human movement in video images. A method includes obtaining a first image of a scene captured by a camera; identifying a bounding box around a human detected in the first image; determining a scale amount that corresponds to a size of the bounding box; obtaining a second image of the scene captured by the camera after the first image was captured; and detecting the human in the second image based on both the first image scaled by the scale amount and the second image scaled by the scale amount. Detecting the human in the second image can include identifying a second scaled bounding box around the human detected in the second image scaled by the scale amount.
Type: Grant
Filed: October 11, 2022
Date of Patent: April 9, 2024
Assignee: ObjectVideo Labs, LLC
Inventors: Sung Chun Lee, Gang Qian, Sima Taheri, Sravanthi Bondugula, Allison Beach
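A minimal sketch of the scale-amount idea, assuming the scale maps the larger side of the detection's bounding box to a fixed canonical size (the abstract does not disclose the exact mapping; `target_side` here is a hypothetical parameter):

```python
def scale_for_bbox(bbox, target_side=128.0):
    """Scale amount that maps the larger side of a detection box to a fixed
    target size, so the tracked human appears at a canonical scale in both
    the first and second images before re-detection."""
    x0, y0, x1, y1 = bbox
    side = max(x1 - x0, y1 - y0)
    return target_side / side

def scale_box(bbox, s):
    """Apply the same scale amount to box coordinates as to the image."""
    return tuple(round(c * s) for c in bbox)

# A 64x32 box: the larger side (64) maps to 128, so the scale amount is 2.0.
s = scale_for_bbox((10, 20, 74, 52))          # -> 2.0
scaled = scale_box((10, 20, 74, 52), s)       # -> (20, 40, 148, 104)
```

Scaling both frames by the same amount keeps the person at a roughly constant pixel size, which is the property that lets a fixed-size detector or matcher be reused across frames.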
-
Patent number: 11953910
Abstract: The described positional awareness techniques, employing sensory data gathering and analysis hardware with reference to specific example implementations, implement improvements in the use of sensors, techniques, and hardware design that can enable specific embodiments to find a new area to cover when a robot performing an area coverage task encounters an unexpected obstacle while traversing the area. The sensory data are gathered from an operational camera and one or more auxiliary sensors.
Type: Grant
Filed: April 25, 2022
Date of Patent: April 9, 2024
Assignee: Trifo, Inc.
Inventors: Zhe Zhang, Weikai Li, Qingyu Chen, Yen-Cheng Liu
-
Patent number: 11954900
Abstract: The invention pertains to methods for monitoring the operational status of a home automation system through extrinsic visual and audible means. Initial training periods involve capturing image and audio data representative of nominal operation, which is then processed to identify operational indicators. Unsupervised machine learning models are trained with these indicators to construct a model of normalcy and identify expectation violations in the system's operational pattern. After meeting specific stopping criteria, real-time monitoring is initiated. When an expectation violation is detected, contrastive collages or sequences are generated comprising nominal and anomalous data. These are then transmitted to an end user, effectively conveying the context of the detected anomalies. Further features include providing deep links to smartphone applications for home automation configuration and the use of auditory scene analysis techniques.
Type: Grant
Filed: September 6, 2023
Date of Patent: April 9, 2024
Assignee: University of Central Florida Research Foundation, Inc.
Inventors: Gregory Welch, Gerd Bruder, Ryan Schubert, Austin Erickson
-
Patent number: 11948318
Abstract: A system and methods for attaining optimal precision direction and ranging through air and across a refractive boundary separating air from a liquid or plasma using stereo-camera and time-of-flight techniques, and employing minimum variance sub-sample offset estimation. The system and methods can also track measurement and estimation variances as they propagate through the system in order to provide a comprehensive precision analysis of all estimated quantities.
Type: Grant
Filed: May 3, 2021
Date of Patent: April 2, 2024
Inventor: Sadiki Pili Fleming-Mwanyoha
-
Patent number: 11941843
Abstract: A method for determining a location of a trailer in an image includes obtaining at least one real-time image from a vehicle. The at least one real-time image is processed with a controller on the vehicle to obtain a feature patch describing the at least one real-time image. A convolution is performed of the feature patch and each filter from a set of filters, with each filter being based on data representative of known trailers. A location of a trailer is determined in the at least one real-time image based on the convolution between the feature patch and each filter from the set of filters.
Type: Grant
Filed: July 13, 2021
Date of Patent: March 26, 2024
Assignee: Continental Autonomous Mobility US, LLC
Inventors: Eduardo Jose Ramirez Llanos, Dominik Froehlich, Brandon Herzog, Ibro Muharemovic
-
Patent number: 11941820
Abstract: A method for tracking an object in a low frame rate video is provided. Matching processes are performed between consecutive frames by using conversion feature maps acquired by converting each of the features on feature maps of the consecutive frames into feature descriptors including each corresponding feature information and each corresponding location information, thereby allowing object tracking regardless of whether the time interval per frame is long or short. The object tracking is performed by matching feature descriptors on a plurality of pyramid feature maps over the entire area of a next frame against feature descriptors on a plurality of cropped feature maps generated by cropping object areas extracted from a current frame, thereby allowing not only quick matching between the cropped areas and the entire area but also increased accuracy, since the feature searching area is not limited.
Type: Grant
Filed: October 27, 2023
Date of Patent: March 26, 2024
Assignee: Superb AI Co., Ltd.
Inventor: Kye Hyeon Kim
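The cropped-to-whole-frame matching step can be sketched as a nearest-neighbor search over descriptors. This is a simplified sketch under assumptions: the abstract does not disclose the similarity measure, so a plain dot product stands in here, and descriptors are assumed to concatenate feature and location information.

```python
def dot(a, b):
    """Dot-product similarity between two descriptor vectors."""
    return sum(x * y for x, y in zip(a, b))

def match_descriptors(cropped, full_frame):
    """Match each descriptor from the cropped object areas of the current
    frame to its most similar descriptor over the entire next frame,
    returning a mapping of cropped index -> full-frame index."""
    return {i: max(range(len(full_frame)), key=lambda j: dot(c, full_frame[j]))
            for i, c in enumerate(cropped)}

cropped = [(1.0, 0.0, 0.2)]                       # descriptor from a cropped area
full_frame = [(0.0, 1.0, 0.1), (0.9, 0.1, 0.3)]   # descriptors over the next frame
matches = match_descriptors(cropped, full_frame)  # -> {0: 1}
```

Because each descriptor already carries its location, the match result directly yields the object's position in the next frame, with no assumption that the object moved only a small distance between frames.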
-
Patent number: 11935159
Abstract: The disclosure relates to a system and method for medical imaging. The method may include: moving, by a motion controller, a phantom along an axis of a scanner to a plurality of phantom positions; acquiring, by a scanner of the imaging device, a first set of PET data relating to the phantom at the plurality of phantom positions; and storing the first set of PET data as an electronic file. The length of an axis of the phantom may be shorter than the length of an axis of the scanner, and at least one of the plurality of phantom positions may be inside a bore of the scanner.
Type: Grant
Filed: January 30, 2023
Date of Patent: March 19, 2024
Assignee: SHANGHAI UNITED IMAGING HEALTHCARE CO., LTD.
Inventors: Weiping Liu, Xiaoyue Gu, Youjun Sun
-
Patent number: 11935264
Abstract: Various implementations disclosed herein include devices, systems, and methods for pose estimation using one point correspondence, one line correspondence, and a directional measurement. In various implementations, a device includes a non-transitory memory and one or more processors coupled with the non-transitory memory. In some implementations, a method includes obtaining an image corresponding to a physical environment. A first correspondence between a first set of pixels in the image and a spatial point in the physical environment is determined. A second correspondence between a second set of pixels in the image and a spatial line in the physical environment is determined. Pose information is generated as a function of the first correspondence, the second correspondence, and a directional measurement.
Type: Grant
Filed: March 28, 2022
Date of Patent: March 19, 2024
Assignee: APPLE INC.
Inventors: Jai Prakash, Lina Maria Paz-Perez, Oliver Thomas Ruepp
-
Patent number: 11928255
Abstract: A system which, with data provided by one or more sensors, detects a user's alteration of the geometries of parts of his face, head, neck, and/or shoulders. It determines the extent of each alteration and normalizes it with respect to the maximum possible range of each alteration so as to assign to each part-specific alteration a numeric score indicative of its extent. The normalized part-specific scores are combined so as to produce a composite numeric code representative of the complete set of simultaneously-executed geometric alterations. Each composite code is translated, or interpreted, relative to an appropriate context defined by an embodiment, an application executing on an embodiment, or by the user. For example, each composite code might be interpreted as, or assigned to, a specific alphanumeric letter, a color, a musical note, etc.
Type: Grant
Filed: February 10, 2022
Date of Patent: March 12, 2024
Inventors: Brian Lee Moffat, Rin In Chen
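The normalize-then-combine step can be illustrated with a small sketch. The abstract does not disclose how the normalized scores are combined into a composite code, so mixed-radix packing of quantized scores is used here as one plausible scheme; `levels` is a hypothetical parameter.

```python
def normalize(extent, max_range):
    """Normalize a part-specific alteration against its maximum possible range."""
    return extent / max_range

def composite_code(normalized_scores, levels=4):
    """Quantize each normalized score into `levels` bins and pack the bins
    into one integer, so every simultaneous combination of alterations maps
    to a distinct composite code."""
    code = 0
    for s in normalized_scores:
        bin_index = min(int(s * levels), levels - 1)
        code = code * levels + bin_index
    return code

# Three parts at 0%, 50%, and 100% of their ranges -> bins 0, 2, 3 -> code 11.
code = composite_code([normalize(0, 10), normalize(5, 10), normalize(10, 10)])
```

The resulting integer could then be looked up in a context-specific table, e.g. mapping each code to a letter, a color, or a musical note, as the abstract suggests.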
-
Patent number: 11927057
Abstract: A sensor may detect glare from a recorded image and a shade position of a motorized window treatment may be controlled based on the position of the detected glare in the image. A luminance of a pixel may be calculated in an image and a glare condition may be detected based on the luminance of the pixel. For example, the sensor may start at a first pixel in a bottom row of pixels and step through each of the pixels on the bottom row before moving to a next row of pixels. When the sensor detects a glare condition, the sensor may cease processing the remaining pixels of the image. The sensor may calculate a background luminance of the image by reordering the pixels of the image from darkest to lightest and calculating the luminance of a pixel that is a predetermined percentage from the darkest pixel.Type: Grant
Filed: October 8, 2020
Date of Patent: March 12, 2024
Assignee: Lutron Technology Company LLC
Inventors: Craig A. Casey, Brent Protzman
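The bottom-up early-exit scan and the percentile-style background luminance both reduce to a few lines. This is a sketch under assumptions: the abstract specifies neither the luminance formula (the Rec. 709 weighting is used here) nor the percentage, and `percent` and `threshold` are hypothetical parameters.

```python
def relative_luminance(r, g, b):
    """Rec. 709 relative luminance weighting (an assumption; the abstract
    does not specify the formula used by the sensor)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def background_luminance(luminances, percent=25):
    """Reorder pixel luminances darkest to lightest and take the value a
    fixed percentage of the way up from the darkest pixel."""
    ordered = sorted(luminances)
    idx = min(len(ordered) - 1, (percent * len(ordered)) // 100)
    return ordered[idx]

def scan_for_glare(rows, threshold):
    """Step through pixels starting at the bottom row, left to right,
    stopping at the first pixel whose luminance exceeds the threshold;
    remaining pixels are never processed."""
    for i, row in enumerate(reversed(rows)):
        for c, lum in enumerate(row):
            if lum > threshold:
                return (len(rows) - 1 - i, c)  # (row, col) of the glare pixel
    return None  # no glare condition detected

bg = background_luminance([5, 1, 9, 3, 7])       # sorted: 1,3,5,7,9 -> 3
hit = scan_for_glare([[0, 0], [0, 10]], 5)       # bottom row hit -> (1, 1)
```

Stopping at the first glare pixel keeps the per-frame cost low, which matters for a battery-powered sensor; the percentile background estimate is robust to the bright glare pixels themselves.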
-
Patent number: 11928812
Abstract: The present technology is generally directed to systems and methods for inspecting an asset in a communication-denied environment. The present technology can include receiving, via an electronic device, an image of a dashboard of the asset; determining, based on the image, a working mode and one or more status identifiers of the asset; transmitting the working mode and the one or more status identifiers to an inspection system; and/or receiving information regarding the status of the asset based on the working mode and one or more status identifiers.
Type: Grant
Filed: September 3, 2021
Date of Patent: March 12, 2024
Assignee: Caterpillar Inc.
Inventors: Anatoly Belkin, Arnold Sheynman
-
Patent number: 11914133
Abstract: Apparatus and methods are described for analyzing a bodily sample. A microscope system acquires one or more microscope images of the bodily sample. A computer processor identifies elements as being candidates of a given entity, in the one or more images. At least one sample-informative feature, relating to a characteristic of the candidates of the given entity in the sample as a whole, is extracted from the one or more images. A characteristic of the sample is determined at least partially based upon the sample-informative feature, and an output is generated in response thereto. Other applications are also described.
Type: Grant
Filed: January 5, 2022
Date of Patent: February 27, 2024
Assignee: S.D. Sight Diagnostics Ltd.
Inventors: Yochay Shlomo Eshel, Natalie Lezmy, Dan Gluck, Arnon Houri Yafin, Joseph Joel Pollak
-
Patent number: 11915448
Abstract: A method and apparatus with pose determination are provided. The method includes determining first pose information of a computing apparatus dependent on motion sensor information of motion of the computing apparatus, estimating second pose information of the computing apparatus dependent on feature point position information that is pre-defined for an object, and one or more feature points of the object that are extracted from an image captured by the computing apparatus, determining respective reliability values of the first pose information and the second pose information, and determining a pose of the computing apparatus for augmented reality (AR) content based on the first pose information, the second pose information, and the respective reliability values. AR content may be generated based on the determined pose.
Type: Grant
Filed: August 3, 2021
Date of Patent: February 27, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Donghoon Sagong, Hojin Ju, Jaehwan Pi, Hyun Sung Chang
-
Patent number: 11908145
Abstract: A method for assessing whether one or more trackers are positioned on a person according to a predetermined configuration of tracker positions, comprising: transmitting, from a computing device to at least one tracker of the one or more trackers, an instruction to change an operation of the light emitter of the tracker to which the instruction is transmitted; taking one or more images by the optical sensing device; digitally processing the one or more images to digitally determine both first positions of a plurality of joints of the person on each image, and second positions of the one or more trackers positioned on the person on each image based on both a light of the light emitter of each of the one or more trackers and the transmitted instructions; digitally determining on which body member each of the one or more trackers is positioned on the person based on the first and second positions; and digitally comparing the position of each of the one or more trackers on the body members with the predetermined configuration.
Type: Grant
Filed: March 29, 2023
Date of Patent: February 20, 2024
Assignee: SWORD HEALTH, S.A.
Inventors: José Carlos Coelho Alves, Márcio Filipe Moutinho Colunas, Pedro Henrique Oliveira Santos, Virgílio António Ferro Bento
-
Patent number: 11900536
Abstract: The described positional awareness techniques employing visual-inertial sensory data gathering and analysis hardware with reference to specific example implementations implement improvements in the use of sensors, techniques and hardware design that can enable specific embodiments to provide positional awareness to machines with improved speed and accuracy.
Type: Grant
Filed: May 6, 2022
Date of Patent: February 13, 2024
Assignee: Trifo, Inc.
Inventors: Zhe Zhang, Grace Tsai, Shaoshan Liu
-
Patent number: 11900583
Abstract: A method for identifying a state of a food package by using a camera can include capturing image data depicting a section of the food package by using the camera. The section includes at least one package feature. The method can further include identifying a package feature sub-set of the image data depicting the at least one package feature provided in the section, and determining the state of the food package based on the package feature sub-set of the image data. The state is selected from a food holding state and an emptied state.
Type: Grant
Filed: March 19, 2021
Date of Patent: February 13, 2024
Assignee: Tetra Laval Holdings & Finance S.A.
Inventors: Sara Egidi, Gabriele Molari, Federico Campo
-
Patent number: 11893769
Abstract: A computer-implemented method of generating metadata from an image may comprise sending the image to an object detection service, which generates detections metadata from the image. The image may also be sent to a visual features extractor, which extracts visual features metadata from the image. The generated detections metadata may then be sent to an uncertainty score calculator, which computes an uncertainty score from the detections metadata. The uncertainty score may be related to a level of uncertainty within the detections metadata. The image, the visual features metadata, the detections metadata and the uncertainty score may then be stored in a database accessible over a computer network.
Type: Grant
Filed: June 4, 2021
Date of Patent: February 6, 2024
Assignee: VADE USA, INCORPORATED
Inventors: Mehdi Regina, Maxime Marc Meyer, Sébastien Goutal
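One plausible uncertainty score over detections metadata is the mean binary entropy of the detector's confidence values, which is highest when detections hover near 0.5 and lowest when the detector is sure either way. This is an assumption for illustration; the abstract does not disclose the calculator's actual formula.

```python
import math

def uncertainty_score(confidences):
    """Mean binary entropy of per-detection confidences in [0, 1].
    Returns 1.0 when every confidence is 0.5 (maximally ambiguous) and
    0.0 when every confidence is 0.0 or 1.0 (fully certain)."""
    if not confidences:
        return 0.0

    def h(p):
        # Binary entropy; 0 log 0 is taken as 0 at the endpoints.
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    return sum(h(p) for p in confidences) / len(confidences)

ambiguous = uncertainty_score([0.5, 0.5])   # -> 1.0
certain = uncertainty_score([1.0, 0.0])     # -> 0.0
```

Stored alongside the image and its metadata, such a score lets downstream consumers prioritize images whose detections most need human review or re-processing.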
-
Patent number: 11893691
Abstract: A method, computer program, and computer system are provided for processing point cloud data. Quantized point cloud data including a plurality of voxels is received. An occupancy map is generated for the quantized point cloud, identifying voxels from among the plurality of voxels that were lost during quantization. A point cloud is reconstructed from the quantized point cloud data by populating the lost voxels.
Type: Grant
Filed: June 11, 2021
Date of Patent: February 6, 2024
Assignee: TENCENT AMERICA LLC
Inventors: Anique Akhtar, Wen Gao, Xiang Zhang, Shan Liu
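The connection between quantization and lost voxels can be sketched in a few lines: when multiple points fall into the same voxel they collapse to one, and an occupancy map can flag where that happened so a decoder knows which voxels to repopulate. This is a sketch under assumptions; the abstract does not disclose how lost voxels are identified or repopulated.

```python
from collections import Counter

def quantize(points, step):
    """Map each 3-D point to its integer voxel index at the given grid step.
    Points that land in the same voxel are merged (and thus 'lost')."""
    return [tuple(int(c // step) for c in p) for p in points]

def occupancy_map(points, step):
    """Flag the voxels where more than one input point collapsed during
    quantization, so a decoder can repopulate the lost voxels there."""
    counts = Counter(quantize(points, step))
    return {voxel: count > 1 for voxel, count in counts.items()}

# Two points fall in voxel (0, 0, 0), so that voxel is flagged as lossy.
occ = occupancy_map([(0.1, 0.0, 0.0), (0.4, 0.2, 0.0), (2.0, 2.0, 2.0)], 1.0)
```

A reconstruction pass could then add points back inside each flagged voxel, trading a small amount of side information (the map) for a denser decoded cloud.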