Patents Examined by Tracy Mangialaschi
  • Patent number: 11900635
    Abstract: A system and method of organically generating a camera-pose map is disclosed. A target image is obtained of a location deemed suitable for augmenting with a virtual augmentation or graphic. An initial camera-pose map is created having a limited number of calibrated camera-pose images with calculated camera-pose locations and homographies to the target image. Then, during the event, the system automatically obtains current images of the event venue and determines homographies to the nearest calibrated camera-pose image in the camera-pose map. The separation in camera-pose space between each current image and the camera-pose images is calculated. If this separation is less than a predetermined threshold, that current image is fully calibrated and added to the camera-pose map, thereby growing the map organically.
    Type: Grant
    Filed: May 15, 2023
    Date of Patent: February 13, 2024
    Inventors: Oran Gilad, Samuel Chenillo, Oren Steinfeld
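    The map-growing loop this abstract describes can be sketched as follows; the pose parameterization, distance metric, and threshold value are illustrative assumptions, not details taken from the patent.

```python
import math

def pose_distance(p, q):
    # Euclidean separation in a simplified camera-pose space,
    # here an assumed (x, y, z, pan, tilt) parameterization.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def grow_pose_map(pose_map, current_pose, threshold=1.0):
    """Add current_pose to the map if it lies close enough to an
    already-calibrated pose for its homography to be trusted."""
    nearest = min(pose_map, key=lambda p: pose_distance(p, current_pose))
    if pose_distance(nearest, current_pose) < threshold:
        pose_map.append(current_pose)  # fully calibrated; the map grows
        return True
    return False
```

Starting from a few seed calibrated poses, each accepted current image becomes a new anchor, so coverage of camera-pose space expands organically over the course of the event.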
  • Patent number: 11893766
    Abstract: A neural network system, includes: a processor configured to detect a plurality of object candidates included in a first image, generate metadata corresponding to the plurality of object candidates based on the first image, and set data processing orders of the plurality of object candidates based on the metadata; and at least one resource configured to perform data processing with respect to the plurality of object candidates. The processor is configured to sequentially provide pieces of information related to data processing of the plurality of object candidates to the at least one resource according to the set data processing orders, and the at least one resource is configured to sequentially perform data processing with respect to the plurality of object candidates according to an order in which a piece of information related to data processing of each of the plurality of object candidates is received.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: February 6, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Seungsoo Yang
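    A minimal sketch of the metadata-driven scheduling described above: candidates detected in a frame are ordered by a priority derived from their metadata (here an assumed area-times-confidence score), then dispatched sequentially to a processing resource.

```python
def set_processing_order(candidates):
    """Order object candidates by metadata before dispatch.
    Each candidate is a dict with assumed 'area' and 'confidence'
    metadata fields; larger, more confident detections go first."""
    return sorted(candidates,
                  key=lambda c: c["area"] * c["confidence"],
                  reverse=True)

def dispatch(candidates, resource):
    # The resource performs data processing strictly in the
    # order in which candidate information is received.
    return [resource(c) for c in set_processing_order(candidates)]
```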
  • Patent number: 11892289
    Abstract: The invention generally relates to methods for manually calibrating imaging systems such as optical coherence tomography systems. In certain aspects, an imaging system displays an image showing a target and a reference item. A user looks at the image and indicates a point within the image near the reference item. A processor detects an actual location of the reference item within an area around the indicated point. The processor can use an expected location of the reference item with the detected actual location to calculate a calibration value and provide a calibrated image. In this way, a user can identify the actual location of the reference point and a processing algorithm can give precision to the actual location.
    Type: Grant
    Filed: March 8, 2021
    Date of Patent: February 6, 2024
    Assignee: PHILIPS IMAGE GUIDED THERAPY CORPORATION
    Inventors: Andreas Johansson, Jason Y. Sproul
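    The coarse-click-then-refine step can be sketched as a local search: the user indicates an approximate point, the processor scans a small window around it for the reference item's actual position (here, the brightest pixel, an assumed detection criterion), and a calibration offset is derived from the expected location.

```python
def refine_reference_point(image, clicked, window=2):
    """Find the actual reference location near a user-indicated point.
    image: 2D list of intensities; clicked: (row, col)."""
    r0, c0 = clicked
    best, best_val = clicked, float("-inf")
    for r in range(max(0, r0 - window), min(len(image), r0 + window + 1)):
        for c in range(max(0, c0 - window), min(len(image[0]), c0 + window + 1)):
            if image[r][c] > best_val:
                best, best_val = (r, c), image[r][c]
    return best

def calibration_offset(expected, actual):
    # Calibration value as the displacement between where the
    # reference item should appear and where it was detected.
    return (actual[0] - expected[0], actual[1] - expected[1])
```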
  • Patent number: 11882784
    Abstract: Implementations are described herein for predicting soil organic carbon (“SOC”) content for agricultural fields detected in digital imagery. In various implementations, one or more digital images depicting portion(s) of one or more agricultural fields may be processed. The one or more digital images may have been acquired by a vision sensor carried through the field(s) by a ground-based vehicle. Based on the processing, one or more agricultural inferences indicating agricultural practices or conditions predicted to affect SOC content may be determined. Based on the agricultural inferences, one or more predicted SOC measurements for the field(s) may be determined.
    Type: Grant
    Filed: March 1, 2023
    Date of Patent: January 30, 2024
    Assignee: MINERAL EARTH SCIENCES LLC
    Inventors: Cheng-En Guo, Jie Yang, Zhiqiang Yuan, Elliott Grant
  • Patent number: 11881021
    Abstract: Provided is a method of providing carbon emission management information, the method comprising extracting, by a carbon emission management information providing server, an area corresponding to a company to be evaluated from satellite image data of the company to be evaluated, calculating, by the carbon emission management information providing server, a greenhouse gas concentration of the area corresponding to the company to be evaluated from the satellite image data, calculating, by the carbon emission management information providing server, a change in vegetation index around the company to be evaluated from the satellite image data, analyzing, by the carbon emission management information providing server, a relationship between carbon emission management factors input in relation to the company to be evaluated, a change in the calculated greenhouse gas concentration, and the calculated change in vegetation index, and generating, by the carbon emission management information providing server, carbon emission management information.
    Type: Grant
    Filed: March 21, 2023
    Date of Patent: January 23, 2024
    Assignee: AJOU UNIVERSITY INDUSTRY-ACADEMIC COOPERATION FOUNDATION
    Inventors: Juyoung Kang, Sehyoung Kim, Seyeon Chun
  • Patent number: 11864494
    Abstract: Systems and methods are disclosed herein for detecting impurities of harvested plants in a receptacle of a harvester. In an embodiment, a harvester controller receives, from a camera facing the contents of the receptacle, an image of the contents. The harvester controller applies the image as input to a machine learning model. The harvester controller receives, as output from the machine learning model, an identification of an impurity of the harvested plants. The harvester controller transmits a control signal based on the impurity.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: January 9, 2024
    Assignee: Landing AI
    Inventors: Dongyan Wang, Andrew Yan-Tak Ng, Yiwen Rong, Greg Frederick Diamos, Bo Tan, Beom Sik Kim, Timothy Viatcheslavovich Rosenflanz, Kai Yang, Tian Wu
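    The control loop in this entry amounts to: image → model → impurity label → control signal. A skeletal sketch, with a stub standing in for the trained machine learning model:

```python
def impurity_model(image):
    """Stub for the trained model; here it flags any pixel value
    above an assumed intensity threshold as an impurity."""
    return "impurity" if any(v > 200 for row in image for v in row) else None

def harvester_step(image, send_signal):
    """One controller cycle: run the model on the receptacle image
    and emit a control signal only when an impurity is identified."""
    label = impurity_model(image)
    if label is not None:
        send_signal(label)
    return label
```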
  • Patent number: 11861937
    Abstract: The facial verification apparatus is a mobile computing apparatus, including a camera to capture an image, a display, and one or more processors. While in a lock state, the image is captured and facial verification performed using a face image, or using a detected face and in response to the face being detected. The facial verification includes a matching with respect to the detected face, or obtained face image, and a registered face information. If the verification is successful, the lock state of the apparatus may be canceled and the user allowed access to the apparatus. The lock state may be cancelled when the verification is successful and the user has been determined to have been attempting to gain access to the apparatus. Face image feedback to the user may not be displayed during the detecting for, or obtaining of, the face and/or performing of the facial verification.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: January 2, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seungju Han, Minsu Ko, Deoksang Kim, Jae-Joon Han
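    The unlock logic above reduces to a conditional: match the detected face against registered face information and cancel the lock state only when verification succeeds and the user is judged to be attempting access. A sketch with an assumed similarity function and threshold:

```python
def similarity(a, b):
    # Assumed embedding comparison: negative squared distance,
    # so identical embeddings score highest (zero).
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def try_unlock(device, face_embedding, registered, attempting=True,
               threshold=-0.5):
    """Cancel the lock state only if the face matches a registered
    face AND the user appears to be attempting to gain access."""
    verified = any(similarity(face_embedding, r) > threshold
                   for r in registered)
    if verified and attempting:
        device["locked"] = False
    return verified
```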
  • Patent number: 11854333
    Abstract: Existing currency validation (CVAL) devices, systems, and methods are too slow, costly, intrusive, and/or bulky to be routinely used in common transaction locations (e.g., at checkout, at an automatic teller machine, etc.). Presented herein are devices, systems, and methods to facilitate optical validation of documents, merchandise, or currency at common transaction locations and to do so in an unobtrusive and convenient way. More specifically, the present invention embraces a validation device that may be used alone or integrated within a larger system (e.g., point of sale system, kiosk, etc.). The present invention also embraces methods for currency validation using the validation device, as well as methods for improving the quality and consistency of data captured by the validation device for validation.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: December 26, 2023
    Assignee: Hand Held Products, Inc.
    Inventors: Erik Van Horn, Gennady Germaine, Christopher Allen, David J. Ryder, Paul Poloniewicz, Kevin Saber, Sean Philip Kearney, Edward Hatton, Edward C. Bremer, Michael Vincent Miraglia, Robert Pierce, William Ross Rapoport, James Vincent Guiheen, Chirag Patel, Patrick Anthony Giordano, Timothy Good, Gregory M. Rueblinger
  • Patent number: 11854225
    Abstract: A method for determining a localization pose of an at least partially automated mobile platform, the mobile platform being equipped to generate ground images of an area surrounding the mobile platform, and being equipped to receive aerial images of the area surrounding the mobile platform from an aerial-image system. The method includes: providing a digital ground image of the area surrounding the mobile platform; receiving an aerial image of the area surrounding the mobile platform; generating the localization pose of the mobile platform with the aid of a trained convolutional neural network, which has a first trained encoder convolutional-neural-network part and a second trained encoder convolutional-neural-network part.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: December 26, 2023
    Assignee: ROBERT BOSCH GMBH
    Inventors: Carsten Hasberg, Piyapat Saranrittichai, Tayyab Naseer
  • Patent number: 11854218
    Abstract: In one aspect, a method for detecting terrain variations within a field includes receiving one or more images depicting an imaged portion of an agricultural field. The method also includes classifying a portion of the plurality of pixels that are associated with soil within the imaged portion of the agricultural field as soil pixels with each soil pixel being associated with a respective pixel height. Additionally, the method includes identifying each soil pixel having a pixel height that exceeds a height threshold as a candidate ridge pixel and each soil pixel having a pixel height that is less than a depth threshold as a candidate valley pixel. The method further includes determining whether a ridge or a valley is present within the imaged portion of the agricultural field based at least in part on the candidate ridge pixels or the candidate valley pixels.
    Type: Grant
    Filed: February 12, 2021
    Date of Patent: December 26, 2023
    Assignee: CNH Industrial Canada, Ltd.
    Inventors: James W. Henry, Christopher Nicholas Warwick
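    The classification in this entry is a pair of per-pixel threshold tests followed by a decision over the candidate pixels; a minimal sketch (the decision rule, counting candidates against a minimum fraction, is an assumption):

```python
def detect_terrain(soil_heights, ridge_thresh, valley_thresh,
                   min_fraction=0.1):
    """Classify soil pixels by height and decide whether a ridge or
    valley is present. soil_heights: list of per-pixel heights;
    pixels above ridge_thresh are candidate ridge pixels, pixels
    below valley_thresh are candidate valley pixels."""
    ridge = [h for h in soil_heights if h > ridge_thresh]
    valley = [h for h in soil_heights if h < valley_thresh]
    n = len(soil_heights)
    result = []
    if n and len(ridge) / n >= min_fraction:
        result.append("ridge")
    if n and len(valley) / n >= min_fraction:
        result.append("valley")
    return result
```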
  • Patent number: 11847917
    Abstract: The disclosure extends to methods, systems, and apparatuses for automated fixation generation and more particularly relates to generation of synthetic saliency maps. A method for generating saliency information includes receiving a first image and an indication of one or more sub-regions within the first image corresponding to one or more objects of interest. The method includes generating and storing a label image by creating an intermediate image having one or more random points. The random points have a first color in regions corresponding to the sub-regions and a remainder of the intermediate image having a second color. Generating and storing the label image further includes applying a Gaussian blur to the intermediate image.
    Type: Grant
    Filed: July 9, 2021
    Date of Patent: December 19, 2023
    Assignee: FORD GLOBAL TECHNOLOGIES, LLC
    Inventors: Madeline Jane Schrier, Vidya Nariyambut Murali
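    The label-image construction is: scatter random points of a first color inside the sub-regions of interest over a background of a second color, then blur. A sketch on a one-channel grid, with a pure-Python 3x3 mean filter standing in for the Gaussian blur:

```python
import random

def make_label_image(h, w, regions, n_points=20, seed=0):
    """Intermediate image: background 0.0, random points of value 1.0
    placed only inside the given (r0, r1, c0, c1) sub-regions."""
    rng = random.Random(seed)
    img = [[0.0] * w for _ in range(h)]
    for r0, r1, c0, c1 in regions:
        for _ in range(n_points):
            img[rng.randrange(r0, r1)][rng.randrange(c0, c1)] = 1.0
    return img

def box_blur(img):
    """3x3 mean filter -- a simple stand-in for the Gaussian blur."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            vals = [img[rr][cc]
                    for rr in range(max(0, r - 1), min(h, r + 2))
                    for cc in range(max(0, c - 1), min(w, c + 2))]
            out[r][c] = sum(vals) / len(vals)
    return out
```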
  • Patent number: 11830246
    Abstract: A system may be configured to collect geospatial features (in vector form) such that a software application is operable to edit an object represented by at least one vector. Some embodiments may: generate, via a trained machine learning model, a pixel map based on an aerial or satellite image; convert the pixel map into vector form; and store the vectors. This conversion may include a raster phase and a vector phase. A system may be configured to obtain another image, generate another pixel map based on the other image, convert the other pixel map into vector form, and compare the vectors to identify changes between the images. Some implementations may cause identification, based on a similarity with converted vectors, of a more trustworthy set of vectors for subsequent data source conflation.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: November 28, 2023
    Assignee: CACI, Inc.—Federal
    Inventors: Jacob A. Fleisig, Evan M. Colvin, Peter Storm Simonson, Nicholas Grant Chidsey
  • Patent number: 11823388
    Abstract: A farming machine moves through a field and includes an image sensor that captures an image of a plant in the field. A control system accesses the captured image and applies the image to a machine learned plant identification model. The plant identification model identifies pixels representing the plant and categorizes the plant into a plant group (e.g., plant species). The identified pixels are labeled as the plant group and a location of the pixels is determined. The control system actuates a treatment mechanism based on the identified plant group and location. Additionally, the images from the image sensor and the plant identification model may be used to generate a plant identification map. The plant identification map is a map of the field that indicates the locations of the plant groups identified by the plant identification model.
    Type: Grant
    Filed: January 9, 2023
    Date of Patent: November 21, 2023
    Assignee: BLUE RIVER TECHNOLOGY INC.
    Inventors: Christopher Grant Padwick, William Louis Patzoldt, Benjamin Kahn Cline, Olgert Denas, Sonali Subhash Tanna
  • Patent number: 11801029
    Abstract: A deep learning (DL) convolution neural network (CNN) reduces noise in positron emission tomography (PET) images, and is trained using a range of noise levels for the low-quality images having high noise in the training dataset to produce uniform high-quality images having low noise, independently of the noise level of the input image. The DL-CNN network can be implemented by slicing a three-dimensional (3D) PET image into 2D slices along transaxial, coronal, and sagittal planes, using three separate 2D CNN networks for each respective plane, and averaging the outputs from these three separate 2D CNN networks. Feature-oriented training can be implemented by segmenting each training image into lesion and background regions, and, in the loss function, applying greater weights to voxels in the lesion region. Other medical images (e.g. MRI and CT) can be used to enhance resolution of the PET images and provide partial volume corrections.
    Type: Grant
    Filed: December 17, 2021
    Date of Patent: October 31, 2023
    Assignee: CANON MEDICAL SYSTEMS CORPORATION
    Inventors: Chung Chan, Jian Zhou, Evren Asma
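    Two concrete pieces of this entry can be sketched: averaging the outputs of the three per-plane networks, and the lesion-weighted loss. The weighting factor below is an assumed value, not one from the patent.

```python
def fuse_plane_outputs(axial, coronal, sagittal):
    """Average the denoised volumes from the three 2D CNNs,
    voxel by voxel (volumes given as flat lists of equal length)."""
    return [(a + c + s) / 3.0 for a, c, s in zip(axial, coronal, sagittal)]

def weighted_mse(pred, target, lesion_mask, lesion_weight=4.0):
    """MSE loss in which voxels inside the lesion segmentation
    receive a greater weight than background voxels."""
    total = 0.0
    for p, t, m in zip(pred, target, lesion_mask):
        w = lesion_weight if m else 1.0
        total += w * (p - t) ** 2
    return total / len(pred)
```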
  • Patent number: 11786224
    Abstract: Apparatus and methods are described for use with a bodily emission of a subject that is disposed within a toilet bowl. While the bodily emission is disposed within the toilet bowl, light is received from the toilet bowl using one or more light sensors. Using a computer processor, intensities of at least two spectral bands that are within a range of 530 nm to 785 nm are determined, by analyzing the received light, and a ratio of the intensities of the two spectral bands is determined. In response thereto, the computer processor determines that there is a presence of blood within the bodily emission. The computer processor generates an output on an output device, at least partially in response thereto. Other applications are also described.
    Type: Grant
    Filed: August 26, 2021
    Date of Patent: October 17, 2023
    Assignee: OUTSENSE DIAGNOSTICS LTD.
    Inventor: Ishay Attar
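    The detection step is a band-intensity ratio compared against a threshold. The specific bands, the cutoff, and the "low green-to-red ratio indicates blood" rule below are illustrative assumptions, chosen within the 530 nm to 785 nm range the abstract names.

```python
def band_intensity(spectrum, lo, hi):
    """Sum sensor intensity over one spectral band.
    spectrum: dict mapping wavelength (nm) -> intensity."""
    return sum(v for wl, v in spectrum.items() if lo <= wl <= hi)

def blood_present(spectrum, band_a=(530, 590), band_b=(650, 785),
                  ratio_threshold=0.5):
    """Determine a ratio of intensities between two spectral bands
    and flag blood when the ratio falls below a threshold."""
    a = band_intensity(spectrum, *band_a)
    b = band_intensity(spectrum, *band_b)
    if b == 0:
        return False
    return a / b < ratio_threshold
```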
  • Patent number: 11786205
    Abstract: A deep learning (DL) convolution neural network (CNN) reduces noise in positron emission tomography (PET) images, and is trained using a range of noise levels for the low-quality images having high noise in the training dataset to produce uniform high-quality images having low noise, independently of the noise level of the input image. The DL-CNN network can be implemented by slicing a three-dimensional (3D) PET image into 2D slices along transaxial, coronal, and sagittal planes, using three separate 2D CNN networks for each respective plane, and averaging the outputs from these three separate 2D CNN networks. Feature-oriented training can be implemented by segmenting each training image into lesion and background regions, and, in the loss function, applying greater weights to voxels in the lesion region. Other medical images (e.g. MRI and CT) can be used to enhance resolution of the PET images and provide partial volume corrections.
    Type: Grant
    Filed: December 17, 2021
    Date of Patent: October 17, 2023
    Assignee: CANON MEDICAL SYSTEMS CORPORATION
    Inventors: Chung Chan, Jian Zhou, Evren Asma
  • Patent number: 11785328
    Abstract: A system for adjusting the pose of a camera relative to a subject in a scene is provided. The system comprises a camera operable to capture images of a scene; an identification unit configured to identify an object of interest in images of the scene; a pose processor configured to obtain a pose of the object of interest in the scene relative to the camera; a scene analyser operable to determine, based on at least one of the obtained pose of the object of interest and images captured by the camera, a scene quality associated with images captured by the camera. A controller is configured to cause the pose of the camera to be adjusted based on a determination that the scene quality of an image captured at a current pose is less than a threshold value. A corresponding device is also provided.
    Type: Grant
    Filed: March 12, 2020
    Date of Patent: October 10, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Nigel John Williams, Fabio Cappello, Rajeev Gupta, Mark Jacobus Breugelmans
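    The controller logic here is a feedback loop: score the image captured at the current pose and request an adjustment while the scene quality stays below the threshold. A sketch with an assumed scoring function and adjustment step:

```python
def adjust_camera_pose(capture, score, adjust, pose, threshold=0.8,
                       max_steps=10):
    """Repeatedly nudge the camera pose until the scene quality of
    the captured image reaches the threshold (or steps run out)."""
    for _ in range(max_steps):
        image = capture(pose)
        if score(image) >= threshold:
            return pose
        pose = adjust(pose)  # e.g. orbit toward the object of interest
    return pose
```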
  • Patent number: 11783577
    Abstract: A computing system includes a processor and a non-transitory, computer-readable medium including instructions that, when executed by the processor, cause the computing system to receive a machine data set; process the machine data set using a trained machine-learned model to generate predicted variety profile index values; and transmit the variety profile index values to a client computing device. A computer-implemented method includes receiving a machine data set; processing the machine data set using a trained machine-learned model to generate predicted variety profile index values; and transmitting the variety profile index values to a client computing device.
    Type: Grant
    Filed: November 15, 2022
    Date of Patent: October 10, 2023
    Assignee: ADVANCED AGRILYTICS HOLDINGS, LLC
    Inventors: William Kess Berg, Jon J. Fridgen, Jonathan Michael Bokmeyer, Andrew James Woodyard
  • Patent number: 11783578
    Abstract: A system includes one or more processors; and one or more non-transitory, computer-readable media including instructions that, when executed by the one or more processors, cause the computing system to: receive a machine data set; process the machine data set with a trained machine-learned model to generate predicted variety profile index values; and cause a visualization to be displayed. A computer-implemented method includes receiving a machine data set; processing the machine data set with a trained machine-learned model to generate predicted variety profile index values; and causing a visualization to be displayed. A non-transitory computer-readable medium includes computer-executable instructions that, when executed by one or more processors, cause a computer to: receive a machine data set; process the machine data set with a trained machine-learned model to generate predicted variety profile index values; and cause a visualization to be displayed.
    Type: Grant
    Filed: December 23, 2022
    Date of Patent: October 10, 2023
    Assignee: ADVANCED AGRILYTICS HOLDINGS, LLC
    Inventors: William Kess Berg, Jon J. Fridgen, Jonathan Michael Bokmeyer, Andrew James Woodyard
  • Patent number: 11771077
    Abstract: A farming machine includes one or more image sensors for capturing an image as the farming machine moves through the field. A control system accesses an image captured by the one or more sensors and identifies a distance value associated with each pixel of the image. The distance value corresponds to a distance between a point and an object that the pixel represents. The control system classifies pixels in the image as crop, plant, ground, etc. based on depth information in the pixels. The control system generates a labelled point cloud using the labels and depth information, and identifies features about the crops, plants, ground, etc. in the point cloud. The control system generates treatment actions based on any of the depth information, visual information, point cloud, and feature values. The control system actuates a treatment mechanism based on the classified pixels.
    Type: Grant
    Filed: May 7, 2021
    Date of Patent: October 3, 2023
    Assignee: BLUE RIVER TECHNOLOGY INC.
    Inventors: Chia-Chun Fu, Christopher Grant Padwick, James Patrick Ostrowski
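    The per-pixel classification by depth can be sketched as a threshold rule (assuming a downward-facing sensor, so crops sit nearest and ground farthest; the band boundaries are assumptions), followed by assembling a labelled point cloud:

```python
def classify_pixel(depth, crop_max=0.8, plant_max=1.2):
    """Label a pixel from its distance to the sensor (meters)."""
    if depth <= crop_max:
        return "crop"
    if depth <= plant_max:
        return "plant"
    return "ground"

def labelled_point_cloud(depth_image):
    """depth_image: 2D grid of depths -> list of (row, col, depth, label)."""
    return [(r, c, d, classify_pixel(d))
            for r, row in enumerate(depth_image)
            for c, d in enumerate(row)]
```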