Patents Examined by Stephen P. Coleman
  • Patent number: 11978264
    Abstract: Systems and methods for constructing and managing a unique road sign knowledge graph across various countries and regions are disclosed. The system utilizes machine learning methods to assist humans when comparing a new sign template with a plurality of stored sign templates to reduce or eliminate redundancy in the road sign knowledge graph. Such a machine learning method and system are also used in providing visual attributes of road signs such as sign shapes, colors, symbols, and the like. If the machine learning determines that the input road sign template is not found in the road sign knowledge graph, the input sign template can be added to the road sign knowledge graph. The road sign knowledge graph can be maintained to add sign templates that are not already in the knowledge graph but are found in the real world by integrating human annotators' feedback during ground truth generation for machine learning.
    Type: Grant
    Filed: August 17, 2021
    Date of Patent: May 7, 2024
    Assignee: Robert Bosch GmbH
    Inventors: Ji Eun Kim, Kevin H. Huang, Mohammad Sadegh Norouzzadeh, Shashank Shekhar
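    A minimal sketch, under assumed names and data, of the redundancy check described in the abstract above: a new sign template's feature embedding is compared against the embeddings of stored templates, and the template is added only if no stored template is sufficiently similar. The cosine-similarity measure and the threshold are illustrative assumptions, not Bosch's implementation.
    ```python
    import numpy as np

    def should_add_template(new_embedding, stored_embeddings, threshold=0.9):
        """Return True if the new sign template is not redundant with any stored one.

        new_embedding: 1-D feature vector for the candidate sign template.
        stored_embeddings: 2-D array, one row per template already in the graph.
        threshold: cosine-similarity cutoff above which templates count as duplicates.
        """
        if stored_embeddings.size == 0:
            return True
        # Cosine similarity between the candidate and every stored template.
        a = new_embedding / np.linalg.norm(new_embedding)
        b = stored_embeddings / np.linalg.norm(stored_embeddings, axis=1, keepdims=True)
        similarities = b @ a
        # If no stored template is similar enough, the candidate is new.
        return float(similarities.max()) < threshold

    # Example: the candidate is nearly identical to the first stored template.
    stored = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    candidate = np.array([0.98, 0.05, 0.0])
    print(should_add_template(candidate, stored))  # False: redundant, do not add
    ```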
  • Patent number: 11978186
    Abstract: An image processing apparatus and method is provided and includes one or more processors and one or more memory devices that store instructions. When the instructions are executed by the one or more processors, the one or more processors are configured to perform operations including obtaining a first image which is obtained by capturing based on a first shooting parameter, generating, by image processing on the first image, a second image corresponding to a second shooting parameter which is different from the first shooting parameter, and using at least the generated second image as a training image for a model that is used in noise estimation processing for an input image.
    Type: Grant
    Filed: October 21, 2020
    Date of Patent: May 7, 2024
    Assignees: Canon U.S.A., Inc., Canon Kabushiki Kaisha
    Inventor: Hironori Aokage
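    A hedged sketch of one way the second image could be synthesized from the first for training a noise-estimation model. Treating the second shooting parameter as a higher gain and using a Poisson-plus-Gaussian noise model are illustrative assumptions, not Canon's actual image processing.
    ```python
    import numpy as np

    def simulate_second_parameter(first_image, gain=4.0, read_noise_std=2.0, rng=None):
        """Synthesize an image as if captured at a higher gain (a hypothetical second
        shooting parameter) from an image captured at the first parameter.

        first_image: float array in [0, 255], assumed to be close to noise-free.
        gain: multiplicative exposure/ISO factor being simulated.
        read_noise_std: standard deviation of additive sensor read noise.
        """
        rng = np.random.default_rng() if rng is None else rng
        # Shot noise scales with the signal; approximate it with a Poisson model.
        shot = rng.poisson(first_image / gain) * gain
        # Read noise is additive and independent of the signal level.
        noisy = shot + rng.normal(0.0, read_noise_std, size=first_image.shape)
        return np.clip(noisy, 0.0, 255.0)

    # The (first_image, second_image) pair can then serve as a training sample.
    clean = np.full((8, 8), 128.0)
    second = simulate_second_parameter(clean)
    print(second.mean(), second.std())
    ```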
  • Patent number: 11967096
    Abstract: A depth estimation from focus method and system includes receiving input image data containing focus information, generating an intermediate attention map by an AI model, normalizing the intermediate attention map into a depth attention map via a normalization function, and deriving expected depth values for the input image data containing focus information from the depth attention map. The AI model for depth estimation can be trained in an unsupervised manner, without ground truth depth maps. The AI model of some embodiments is a shared network estimating a depth map and reconstructing an all-in-focus (AiF) image from a set of images with different focus positions.
    Type: Grant
    Filed: February 22, 2022
    Date of Patent: April 23, 2024
    Assignee: MEDIATEK INC.
    Inventors: Ren Wang, Yu-Lun Liu, Yu-Hao Huang, Ning-Hsu Wang
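    A minimal sketch of the normalization and depth-derivation steps described above: per-pixel attention scores over a focal stack are normalized with a softmax and converted into expected depth values. The softmax choice and the array shapes are assumptions for illustration only.
    ```python
    import numpy as np

    def expected_depth(attention, focus_distances):
        """Convert a per-pixel attention map over a focal stack into depth values.

        attention: array of shape (N, H, W), one score per focus position per pixel.
        focus_distances: array of shape (N,), depth associated with each focus position.
        Returns an (H, W) map of expected depth values.
        """
        # Softmax over the focus-position axis turns scores into per-pixel weights.
        shifted = attention - attention.max(axis=0, keepdims=True)
        weights = np.exp(shifted)
        weights /= weights.sum(axis=0, keepdims=True)
        # Expected depth is the attention-weighted average of the focus distances.
        return np.tensordot(focus_distances, weights, axes=(0, 0))

    focus = np.array([0.3, 0.5, 1.0, 2.0])      # metres, one per focus position
    scores = np.random.randn(4, 6, 6)           # raw attention from the model
    print(expected_depth(scores, focus).shape)  # (6, 6)
    ```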
  • Patent number: 11961289
    Abstract: Embodiments of the disclosed technologies are capable of inputting, to a machine-learned model that has been trained to recognize a horticultural product in digital imagery, digital video data comprising frames that represent a view of the horticultural product in belt-assisted transit from a picking area of a field to a harvester bin; outputting, by the machine-learned model, annotated video data; using the annotated video data, computing quantitative data comprising particular counts of the individual instances of the horticultural product associated with particular timestamp data; using the timestamp data, mapping the quantitative data to geographic location data to produce a digital yield map; causing display of the digital yield map on a field manager computing device.
    Type: Grant
    Filed: July 15, 2021
    Date of Patent: April 16, 2024
    Assignee: CLIMATE LLC
    Inventors: John M. McNichols, Daniel A. Williams, Keely Roth, Ali Hamidisepehr
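    A hypothetical sketch of the mapping step: per-frame counts keyed by timestamp are joined with harvester GPS fixes to produce points for the digital yield map. The data layout and the timestamp join are assumptions, not Climate LLC's pipeline.
    ```python
    import bisect

    def build_yield_map(counts_by_time, gps_track):
        """Join per-frame counts with GPS fixes to produce yield-map points.

        counts_by_time: list of (timestamp_seconds, count) from the annotated video.
        gps_track: list of (timestamp_seconds, lat, lon) sorted by timestamp.
        Returns a list of (lat, lon, count) points for the digital yield map.
        """
        times = [t for t, _, _ in gps_track]
        points = []
        for ts, count in counts_by_time:
            # Use the first GPS fix at or after the frame timestamp, clamped to the track.
            i = min(bisect.bisect_left(times, ts), len(gps_track) - 1)
            _, lat, lon = gps_track[i]
            points.append((lat, lon, count))
        return points

    track = [(0.0, 41.600, -93.610), (1.0, 41.601, -93.611), (2.0, 41.602, -93.612)]
    counts = [(0.4, 12), (1.6, 9)]
    print(build_yield_map(counts, track))
    ```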
  • Patent number: 11961273
    Abstract: A method of preprocessing incoming video data of at least one region of interest from a camera collecting video data having a first field of view is disclosed herein. The method includes receiving the incoming video data from the camera; preprocessing the incoming video data, by a computer processor, according to preprocessing parameters defined within a runtime configuration file, with the preprocessing including formatting the incoming video data to create first video data of a first region of interest with a second field of view that is less than the first field of view; and publishing the first video data of the first region of interest to an endpoint to allow access by a first subscriber.
    Type: Grant
    Filed: June 1, 2023
    Date of Patent: April 16, 2024
    Assignee: Insight Direct USA, Inc.
    Inventor: Amol Ajgaonkar
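    A minimal sketch of the preprocessing step, assuming the runtime configuration file is JSON holding pixel coordinates for one region of interest; the actual configuration format and the publish/subscribe mechanism are not specified in the abstract.
    ```python
    import json
    import numpy as np

    def preprocess_frame(frame, config):
        """Crop one region of interest from an incoming frame per runtime config values."""
        roi = config["roi"]
        x, y, w, h = roi["x"], roi["y"], roi["width"], roi["height"]
        return frame[y:y + h, x:x + w]

    # Hypothetical runtime configuration file contents (JSON).
    cfg = json.loads('{"roi": {"x": 100, "y": 50, "width": 640, "height": 360}}')
    frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # incoming frame, first field of view
    crop = preprocess_frame(frame, cfg)
    print(crop.shape)  # (360, 640, 3): a second, smaller field of view
    ```
    The cropped first video data would then be published to an endpoint (for example, a message queue topic) for the first subscriber to consume.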
  • Patent number: 11954144
    Abstract: An example system includes a processor to receive, a randomly generated alpha-map, a pair of training images, and a pair of training texts associated with the pair of training images. The processor is to generate a blended image based on the randomly generated alpha-map and the pair of training images. The processor is to train a visual language grounding model to separate the blended image into a pair of heatmaps identifying portions of the blended image corresponding to each of the training images using a separation loss.
    Type: Grant
    Filed: August 26, 2021
    Date of Patent: April 9, 2024
    Assignee: International Business Machines Corporation
    Inventors: Assaf Arbelle, Leonid Karlinsky, Sivan Doveh, Joseph Shtok, Amit Alfassy
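    An illustrative sketch of the blending step and one plausible separation loss. The abstract does not give the loss formulation, so measuring how well each predicted heatmap agrees with its image's alpha region (mean squared error) is an assumption.
    ```python
    import numpy as np

    def blend(image_a, image_b, alpha):
        """Alpha-blend two training images with a randomly generated per-pixel alpha map."""
        return alpha * image_a + (1.0 - alpha) * image_b

    def separation_loss(heatmap_a, heatmap_b, alpha):
        """One plausible separation loss: each predicted heatmap should match the
        portion of the blend contributed by its image (alpha and 1 - alpha)."""
        return float(np.mean((heatmap_a - alpha) ** 2 + (heatmap_b - (1.0 - alpha)) ** 2))

    rng = np.random.default_rng(0)
    alpha = rng.random((4, 4))                         # randomly generated alpha map
    img_a, img_b = rng.random((4, 4)), rng.random((4, 4))
    blended = blend(img_a, img_b, alpha)
    print(separation_loss(alpha, 1.0 - alpha, alpha))  # 0.0 for a perfect separation
    ```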
  • Patent number: 11948249
    Abstract: Disclosed are techniques for estimating a 3D bounding box (3DBB) from a 2D bounding box (2DBB). Conventional techniques to estimate 3DBB from 2DBB rely upon classifying target vehicles within the 2DBB. When the target vehicle is misclassified, the projected bounding box from the estimated 3DBB is inaccurate. To address such issues, it is proposed to estimate the 3DBB without relying upon classifying the target vehicle.
    Type: Grant
    Filed: October 7, 2022
    Date of Patent: April 2, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Young-Ki Baik, ChaeSeong Lim, Duck Hoon Kim
  • Patent number: 11948343
    Abstract: Disclosed are an image matching method and apparatus. The image matching method includes the steps of obtaining a panoramic image of at least one subspace in a 3D space and a 2D image of the 3D space; acquiring a 2D image of the at least one subspace in the 3D space; performing 3D reconstruction on the panoramic image of the at least one subspace, and procuring a projection image corresponding to the panoramic image of the at least one subspace; and attaining a matching relationship between the panoramic image of the at least one subspace and the 2D image of the at least one subspace, and establishing an association relationship between the panoramic image of the at least one subspace and the 2D image of the at least one subspace between which the matching relationship has been generated.
    Type: Grant
    Filed: July 21, 2021
    Date of Patent: April 2, 2024
    Assignee: Ricoh Company, Ltd.
    Inventors: Haijing Jia, Hong Yi, Liyan Liu, Wei Wang
  • Patent number: 11941828
    Abstract: A feature point detection apparatus for detecting feature points in image data includes an image data providing unit for providing the image data, a key point determination unit for determining key points in the image data, a feature determination unit for determining features associated with the key points, each describing a local environment of a key point in the image data, and a feature point providing unit for providing the feature points. A feature point is represented by the position of a key point in the image data and the associated features. The image data comprise intensity data and associated depth data, and the determination of the key points and the associated features is based on a local analysis of the image data in dependence on both the intensity data and the depth data.
    Type: Grant
    Filed: February 9, 2021
    Date of Patent: March 26, 2024
    Assignee: BASLER AG
    Inventor: Jens Dekarz
  • Patent number: 11941855
    Abstract: A device comprises one or more processors configured to: obtain a value for a first laser, the value for the first laser indicating a number of probes in an azimuth direction of the first laser; decode a syntax element for a second laser, wherein the syntax element for the second laser indicates a difference between the value for the first laser and a value for the second laser, the value for the second laser indicating a number of probes in the azimuth direction of the second laser; determine the value for the second laser indicating the number of probes in the azimuth direction of the second laser based on the value for the first laser and the indication of the difference between the value for the first laser and the value for the second laser; and decode a point based on the number of probes in the azimuth direction of the second laser.
    Type: Grant
    Filed: April 7, 2021
    Date of Patent: March 26, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Bappaditya Ray, Adarsh Krishnan Ramasubramonian, Geert Van der Auwera, Louis Joseph Kerofsky, Marta Karczewicz
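    A minimal sketch of the delta coding described above: a laser's probe count in the azimuth direction is reconstructed from a previously decoded value plus a signed difference. Chaining the differences across several lasers and using plain integers are illustrative assumptions; the patent concerns syntax elements decoded from a point-cloud bitstream.
    ```python
    def decode_probe_counts(first_value, deltas):
        """Reconstruct per-laser probe counts from a starting value and signed differences.

        first_value: number of probes in the azimuth direction for the first laser.
        deltas: signed differences, one per subsequent laser, as decoded syntax elements.
        """
        counts = [first_value]
        for delta in deltas:
            counts.append(counts[-1] + delta)
        return counts

    # The first laser has 8 probes; later lasers are coded as differences.
    print(decode_probe_counts(8, [+2, 0, -1]))  # [8, 10, 10, 9]
    ```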
  • Patent number: 11941895
    Abstract: A headcount determination system is disclosed that includes a first camera and a processing unit configured to receive data communications that include an image frame captured by the first camera. The processing unit includes logic that, when executed by the processing unit, causes performance of operations including a pre-processing phase that divides the image frame into sub-frames; an inference phase that scores the sub-frames with a machine learning (ML) model, performs a threshold comparison between each sub-frame's score and a threshold, and increments a headcount when the score satisfies the threshold comparison; and, once the threshold comparison has been performed for the score of each sub-frame, providing the headcount as output. The operations further include performing a scaling operation on the image frame that adjusts a size of the image frame by a scaling factor.
    Type: Grant
    Filed: April 1, 2021
    Date of Patent: March 26, 2024
    Assignee: Endera Corporation
    Inventors: John Joseph Walsh, Zichu Ye
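    A sketch of the sub-frame counting loop described above, with a stand-in scoring function in place of the trained ML model; the sub-frame size, threshold, and scoring are assumptions.
    ```python
    def headcount(frame, subframe_size, score_fn, threshold=0.5):
        """Divide an image frame into sub-frames, score each one, and count the
        sub-frames whose score satisfies the threshold comparison."""
        rows, cols = len(frame), len(frame[0])
        count = 0
        for r in range(0, rows, subframe_size):
            for c in range(0, cols, subframe_size):
                sub = [row[c:c + subframe_size] for row in frame[r:r + subframe_size]]
                if score_fn(sub) >= threshold:  # threshold comparison
                    count += 1                  # increment the headcount
        return count

    # Stand-in "model": the mean pixel value of the sub-frame.
    score = lambda sub: sum(sum(row) for row in sub) / (len(sub) * len(sub[0]))
    frame = [[0.9] * 4 + [0.1] * 4 for _ in range(8)]
    print(headcount(frame, 4, score))  # 2 of the 4 sub-frames satisfy the threshold
    ```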
  • Patent number: 11934962
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for associating objects are provided. For example, the disclosed technology can receive sensor data associated with the detection of objects over time. An association dataset can be generated and can include information associated with object detections of the objects at a most recent time interval and object tracks of the objects at time intervals in the past. A subset of the association dataset including the object detections that satisfy some association subset criteria can be determined. Association scores for the object detections in the subset of the association dataset can be determined. Further, the object detections can be associated with the object tracks based on the association scores for each of the object detections in the subset of the association dataset that satisfy some association criteria.
    Type: Grant
    Filed: April 10, 2023
    Date of Patent: March 19, 2024
    Assignee: UATC, LLC
    Inventors: Carlos Vallespi-Gonzalez, Abhishek Sen, Shivam Gautam
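    A hedged sketch of associating object detections with object tracks once association scores are available, using a simple greedy assignment. The greedy strategy and the score threshold stand in for the association criteria, which the abstract does not spell out.
    ```python
    def associate(scores, min_score=0.5):
        """Greedily associate detections with tracks using an association-score matrix.

        scores[d][t] is the association score between detection d and track t.
        Returns a dict mapping detection index -> track index for pairs whose score
        satisfies the association criterion (here: score >= min_score).
        """
        pairs = sorted(
            ((s, d, t) for d, row in enumerate(scores) for t, s in enumerate(row)),
            reverse=True,
        )
        used_d, used_t, assignment = set(), set(), {}
        for s, d, t in pairs:
            if s < min_score:
                break
            if d not in used_d and t not in used_t:
                assignment[d] = t
                used_d.add(d)
                used_t.add(t)
        return assignment

    print(associate([[0.9, 0.2], [0.3, 0.7]]))  # {0: 0, 1: 1}
    ```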
  • Patent number: 11934955
    Abstract: Systems and methods for more accurate and robust determination of subject characteristics from an image of the subject. One or more machine learning models receive as input an image of a subject, and output both facial landmarks and associated confidence values. Confidence values represent the degrees to which portions of the subject's face corresponding to those landmarks are occluded, i.e., the amount of uncertainty in the position of each landmark. These landmark points and their associated confidence values, and/or associated information, may then be input to another set of one or more machine learning models which may output any facial analysis quantity or quantities, such as the subject's gaze direction, head pose, drowsiness state, cognitive load, or distraction state.
    Type: Grant
    Filed: October 31, 2022
    Date of Patent: March 19, 2024
    Assignee: NVIDIA Corporation
    Inventors: Nuri Murat Arar, Niranjan Avadhanam, Nishant Puri, Shagan Sah, Rajath Shetty, Sujay Yadawadkar, Pavlo Molchanov
  • Patent number: 11928831
    Abstract: An information processing apparatus generates shape data on an object included in a first partial space based on one or more captured images obtained from one or more of a plurality of imaging apparatuses and a first parameter corresponding to the first partial space, the first partial space being included in a plurality of partial spaces in an imaging space which is an image capturing target for the plurality of imaging apparatuses, and generates shape data on an object included in a second partial space based on one or more captured images obtained from one or more of the plurality of imaging apparatuses and a second parameter corresponding to the second partial space, the second partial space being included in the plurality of partial spaces, the second parameter being different from the first parameter.
    Type: Grant
    Filed: September 17, 2020
    Date of Patent: March 12, 2024
    Assignee: Canon Kabushiki Kaisha
    Inventor: Yasufumi Takama
  • Patent number: 11928836
    Abstract: An electronic device mounted on a fixed or a movable apparatus is provided. The electronic device may comprise a neural processing unit (NPU), including a plurality of processing elements (PEs), configured to process an operation of an artificial neural network model trained to detect or track at least one object and output an inference result based on at least one image acquired from at least one camera; and a signal generator generating a signal applicable to the at least one camera.
    Type: Grant
    Filed: May 15, 2023
    Date of Patent: March 12, 2024
    Assignee: DEEPX CO., LTD.
    Inventors: Ha Joon Yu, You Jun Kim, Lok Won Kim
  • Patent number: 11921817
    Abstract: A computer-implemented unsupervised learning method of training a video feature extractor. The video feature extractor is configured to extract a feature representation from a video sequence. The method uses training data representing multiple training video sequences. From a training video sequence of the multiple training video sequences, a current subsequence; a preceding subsequence preceding the current subsequence; and a succeeding subsequence succeeding the current subsequence are selected. The video feature extractor is applied to the current subsequence to extract a current feature representation of the current subsequence. A training signal is derived from a joint predictability of the preceding and succeeding subsequences given the current feature representation. The parameters of the video feature extractor are updated based on the training signal.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: March 5, 2024
    Assignee: ROBERT BOSCH GMBH
    Inventors: Mehdi Noroozi, Nadine Behrmann
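    A minimal sketch of the subsequence selection described above: a current subsequence is taken from a training video together with the subsequences immediately preceding and succeeding it. The centering choice and the fixed length are assumptions; the joint-predictability training signal itself is not shown.
    ```python
    def select_subsequences(video, length):
        """Select current, preceding, and succeeding subsequences from one training video.

        video: a list of frames (any per-frame representation).
        length: number of frames per subsequence.
        """
        assert len(video) >= 3 * length, "video too short for three subsequences"
        start = (len(video) - length) // 2       # place the current subsequence mid-video
        current = video[start:start + length]
        preceding = video[start - length:start]
        succeeding = video[start + length:start + 2 * length]
        return preceding, current, succeeding

    frames = list(range(12))                     # stand-in for decoded frames
    prev, cur, nxt = select_subsequences(frames, 3)
    print(prev, cur, nxt)                        # [1, 2, 3] [4, 5, 6] [7, 8, 9]
    ```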
  • Patent number: 11919511
    Abstract: A driving support control device for a vehicle includes an acquisition unit configured to acquire, from a first detector which detects change of a brightness value of an object which occurs in accordance with displacement of the object, information indicating change of a brightness value of a partially shielded object partially shielded by an obstacle which occurs in accordance with displacement of the partially shielded object, as a first detection signal, and a control unit configured to, in a case where it is determined, by using the first detection signal, that the partially shielded object is moving, cause a driving support execution device to execute collision prevention support for preventing a collision with the partially shielded object.
    Type: Grant
    Filed: September 17, 2021
    Date of Patent: March 5, 2024
    Assignee: DENSO CORPORATION
    Inventors: Takahisa Yokoyama, Noriyuki Ido
  • Patent number: 11922700
    Abstract: A method and apparatus for aiding a police officer in writing a report by creating a depiction of an incident scene is provided herein. During operation, a drone will photograph an incident scene from above. Relevant objects will be identified, and a depiction of the incident scene will be created by overlaying the relevant photographed real-world objects onto a map retrieved from storage. The depiction of the incident scene will be made available to police officers to increase the officers' efficiency in report writing. More particularly, officers will no longer need to re-create the depiction of the incident scene by hand. Instead, officers will be able to use the depiction of the incident scene.
    Type: Grant
    Filed: January 3, 2022
    Date of Patent: March 5, 2024
    Assignee: MOTOROLA SOLUTIONS, INC.
    Inventors: Bing Qin Lim, Janet A Manzanares, Jeff Talbot, Maryam Eneim, Todd M Conklin, Jami Perkins, Jon Milan
  • Patent number: 11915486
    Abstract: A system includes one or more video capture devices and a processor coupled to each video capture device. Each processor is operable to direct its respective video capture device to obtain an image of a monitored area and process the image to identify objects of interest represented in the image. The processor is also operable to generate bounding perimeter virtual objects for the identified objects of interest, each bounding perimeter virtual object surrounding at least part of its respective object of interest. The processor is further operable to determine danger zones for the identified objects of interest based on the bounding perimeter virtual objects. The processor is further operable to determine at least one near-miss condition based at least in part on an actual or predicted overlap of danger zones for multiple objects of interest, and may optionally generate an alert at least partially in response to the near-miss condition.
    Type: Grant
    Filed: March 31, 2023
    Date of Patent: February 27, 2024
    Assignee: Ubicquia IQ LLC
    Inventors: Morné Neser, Samuel Leonard Holden, Sébastien Magnan
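    A small sketch of deriving danger zones from bounding perimeters and flagging a near-miss when two zones overlap. The fixed margin and axis-aligned boxes are simplifying assumptions; the patent leaves the danger-zone determination open.
    ```python
    def danger_zone(box, margin):
        """Expand a bounding perimeter (x1, y1, x2, y2) into a danger zone."""
        x1, y1, x2, y2 = box
        return (x1 - margin, y1 - margin, x2 + margin, y2 + margin)

    def zones_overlap(a, b):
        """True if two axis-aligned danger zones intersect (a near-miss condition)."""
        return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

    person = danger_zone((10, 10, 12, 14), margin=2)
    forklift = danger_zone((15, 11, 20, 15), margin=2)
    if zones_overlap(person, forklift):
        print("near-miss: danger zones overlap")  # an alert could be generated here
    ```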
  • Patent number: 11907848
    Abstract: This application provides a method for training a pose recognition model performed at a computer device. The method includes: inputting a sample image labeled with human body key points into a feature map model included in a pose recognition model, to output a feature map of the sample image; inputting the feature map into a two-dimensional (2D) model included in the pose recognition model, to output 2D key point parameters used for representing a 2D human body pose; inputting a target human body feature map cropped from the feature map and the 2D key point parameters into a three-dimensional (3D) model included in the pose recognition model, to output 3D pose parameters used for representing a 3D human body pose; constructing a target loss function based on the 2D key point parameters and the 3D pose parameters; and updating the pose recognition model based on the target loss function.
    Type: Grant
    Filed: May 25, 2021
    Date of Patent: February 20, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Jingmin Luo, Xiaolong Zhu, Yitong Wang, Xing Ji
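    One plausible form of the target loss that combines the 2D key point parameters and the 3D pose parameters, shown as a weighted sum of squared errors; the weights and the exact terms are assumptions, not the formulation claimed in the patent.
    ```python
    import numpy as np

    def target_loss(pred_2d, gt_2d, pred_3d, gt_3d, w2d=1.0, w3d=1.0):
        """Weighted sum of a 2D key-point error and a 3D pose-parameter error."""
        loss_2d = np.mean(np.sum((pred_2d - gt_2d) ** 2, axis=-1))  # per-key-point error
        loss_3d = np.mean((pred_3d - gt_3d) ** 2)                   # pose-parameter error
        return w2d * loss_2d + w3d * loss_3d

    gt_kp = np.zeros((17, 2)); pred_kp = gt_kp + 0.1       # 17 body key points (assumed)
    gt_pose = np.zeros(72);    pred_pose = gt_pose + 0.01  # e.g. SMPL-style parameters (assumed)
    print(target_loss(pred_kp, gt_kp, pred_pose, gt_pose))
    ```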