Patents Examined by Daniel G. Mariam
  • Patent number: 12002227
    Abstract: Devices, systems, and methods are disclosed for partial point cloud registration. In some implementations, a method includes obtaining a first set of three-dimensional (3D) points corresponding to an object in a physical environment, the first set of 3D points having locations in a first coordinate system, obtaining a second set of 3D points corresponding to the object in the physical environment, the second set of 3D points having locations in a second coordinate system, predicting, via a machine learning model, locations of the first set of 3D points in the second coordinate system, and determining transform parameters relating the first set of 3D points and the second set of 3D points based on the predicted locations of the first set of 3D points in the second coordinate system.
    Type: Grant
    Filed: July 15, 2021
    Date of Patent: June 4, 2024
    Assignee: Apple Inc.
    Inventors: Donghoon Lee, Thorsten Gernoth, Onur C. Hamsici, Shuo Feng
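The final step of the abstract, recovering transform parameters from two sets of corresponding 3D points, is classically solved in closed form. A minimal sketch using the Kabsch/Procrustes method (the patent does not name a specific solver, so this is an assumption, not the claimed implementation):

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) such that R @ src_i + t ~ dst_i.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. the first
    point set and its ML-predicted locations in the second frame.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Given exact correspondences the recovery is exact; with noisy ML predictions it is the least-squares optimum.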
  • Patent number: 12002292
    Abstract: A calibration system and method for online calibration of 3D scan data from multiple viewpoints is provided. The calibration system receives a set of depth scans and a corresponding set of color images of a scene that includes a human-object as part of a foreground of the scene. The calibration system extracts a first three-dimensional (3D) representation of the foreground based on a first depth scan and spatially aligns the extracted first 3D representation with a second 3D representation of the foreground. The first 3D representation and the second 3D representation are associated with a first viewpoint and a second viewpoint, respectively, in a 3D environment. The calibration system updates the spatially aligned first 3D representation based on the set of color images and a set of structural features of the human-object and reconstructs a 3D mesh of the human-object based on the updated first 3D representation of the foreground.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: June 4, 2024
    Assignee: SONY GROUP CORPORATION
    Inventor: Kendai Furukawa
  • Patent number: 11991982
    Abstract: Tools in an automatic milking arrangement are picked up by using a robotic arm (110). The robotic arm (110) moves a camera (130) to an origin location (PC) from which the camera (130) registers three-dimensional image data (Dimg3D) of at least one tool (141, 142, 143, 144). The three-dimensional image data is processed using an image-based object identification algorithm to identify objects in the form of the tools and hoses (152). In response to identifying at least one tool, a respective tool position (PT1, PT3, PT4) is determined for each identified tool based on the origin location (PC) and the three-dimensional image data. Then, a grip device (115) is controlled exclusively to one or more of the respective tool positions (PT1, PT3, PT4) to perform a pick-up operation. Thus, futile attempts to pick up non-existent or blocked tools can be avoided.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: May 28, 2024
    Assignee: DeLaval Holding AB
    Inventor: Andreas Eriksson
  • Patent number: 11995908
    Abstract: An information processing device includes a processor configured to acquire a document image illustrating a document, acquire a related character string associated with a target character string included in the document image, and extract target information corresponding to the target character string from a region set with reference to a position of the related character string in the document image.
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: May 28, 2024
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Fumi Kosaka, Akinobu Yamaguchi, Junichi Shimizu, Shinya Nakamura, Jun Ando, Masanori Yoshizuka, Akane Abe
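The extraction step described above, taking target information from "a region set with reference to a position of the related character string", can be pictured as a small geometric search over OCR tokens. The token format, region size, and helper name below are hypothetical, not from the patent:

```python
def extract_near_anchor(tokens, anchor_text, dx=300, dy=10):
    """Collect token texts in a region to the right of an anchor token.

    tokens: list of (text, x, y) tuples from an upstream OCR step
    (a hypothetical format; the patent does not specify one).
    dx, dy: width and vertical tolerance of the search region.
    """
    _, ax, ay = next(t for t in tokens if t[0] == anchor_text)
    return [text for text, x, y in tokens
            if 0 < x - ax <= dx and abs(y - ay) <= dy]
```

For example, anchoring on a "Total:" label and reading the amount printed beside it.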
  • Patent number: 11989262
    Abstract: Approaches presented herein provide for unsupervised domain transfer learning. In particular, three neural networks can be trained together using at least labeled data from a first domain and unlabeled data from a second domain. Features of the data are extracted using a feature extraction network. A first classifier network uses these features to classify the data, while a second classifier network uses these features to determine the relevant domain. A combined loss function is used to optimize the networks, with a goal of the feature extraction network extracting features that the first classifier network is able to use to accurately classify the data, but prevent the second classifier from determining the domain for the image. Such optimization enables object classification to be performed with high accuracy for either domain, even though there may have been little to no labeled training data for the second domain.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: May 21, 2024
    Assignee: Nvidia Corporation
    Inventors: David Acuna Marrero, Guojun Zhang, Marc Law, Sanja Fidler
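The combined loss described above, rewarding accurate classification while preventing the domain classifier from succeeding, is characteristic of adversarial domain adaptation (DANN-style training). A minimal numeric sketch of such an objective, with the minus sign standing in for the gradient-reversal mechanism (an assumption; the patent text does not name the trick):

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the true labels."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def combined_loss(cls_probs, cls_labels, dom_probs, dom_labels, lam=0.5):
    """Adversarial objective: reward accurate class predictions while
    penalizing a confident domain classifier. During backpropagation a
    gradient-reversal layer plays the role of the minus sign here."""
    return (cross_entropy(cls_probs, cls_labels)
            - lam * cross_entropy(dom_probs, dom_labels))
```

Minimizing this drives the feature extractor toward features that classify well but leave the domain ambiguous.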
  • Patent number: 11989889
    Abstract: A method for determining a movement of a device relative to at least one object based on a digital image sequence of the object recorded from the location of the device. The method includes computing a plurality of optical flow fields from image pairs of the digital image sequence; finding the position of an object in a partial image region in the most current image in each case and assigning the partial image region to the object; forming a plurality of partial optical flow fields from the plurality of optical flow fields; selecting a partial flow field from the plurality of partial flow fields in accordance with at least one criterion to facilitate the estimation of a change in scale of the object; and estimating the change in scale for the at least one object using the assigned partial image region based on the selected partial flow field.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: May 21, 2024
    Assignee: ROBERT BOSCH GMBH
    Inventors: Alexander Lengsfeld, Joern Jachalsky, Marcel Brueckner, Philip Lenz
  • Patent number: 11982878
    Abstract: The local refractive power or the refractive power distribution of a spectacle lens is measured. A first image of a scene having a plurality of structure points and a left and/or a right spectacle lens of a frame front is captured with an image capturing device from a first capture position having an imaging beam path for structure points, which extends through the spectacle lens of the frame front. At least two further images of the scene are captured with the image capturing device from different capture positions, one of which can be identical with the first capture position, without the spectacle lenses of the spectacles or without the frame front containing the spectacle lenses having the structure points imaged in the first image, and the coordinates of the structure points in a coordinate system are calculated from the at least two further images of the scene by image analysis.
    Type: Grant
    Filed: October 19, 2021
    Date of Patent: May 14, 2024
    Assignee: Carl Zeiss Vision International GmbH
    Inventor: Carsten Glasenapp
  • Patent number: 11983893
    Abstract: Systems and methods for hybrid depth regularization in accordance with various embodiments of the invention are disclosed. In one embodiment of the invention, a depth sensing system comprises a plurality of cameras; a processor; and a memory containing an image processing application. The image processing application may direct the processor to obtain image data for a plurality of images from multiple viewpoints, the image data comprising a reference image and at least one alternate view image; generate a raw depth map using a first depth estimation process, and a confidence map; and generate a regularized depth map. The regularized depth map may be generated by computing a secondary depth map using a second different depth estimation process; and computing a composite depth map by selecting depth estimates from the raw depth map and the secondary depth map based on the confidence map.
    Type: Grant
    Filed: January 12, 2023
    Date of Patent: May 14, 2024
    Assignee: Adeia Imaging LLC
    Inventors: Ankit Jain, Priyam Chatterjee, Kartik Venkataraman
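The composite-depth step above reduces to a per-pixel choice gated by the confidence map. A minimal sketch (the threshold value is a stand-in, not from the patent):

```python
import numpy as np

def composite_depth(raw_depth, secondary_depth, confidence, threshold=0.7):
    """Per-pixel selection: keep the raw (first-process) depth estimate
    where confidence is high, otherwise fall back to the secondary
    estimate from the second depth estimation process."""
    return np.where(confidence >= threshold, raw_depth, secondary_depth)
```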
  • Patent number: 11983946
    Abstract: In implementations of refining element associations for form structure extraction, a computing device implements a structure system to receive estimate data describing estimated associations of elements included in a form and a digital image depicting the form. An image patch is extracted from the digital image, and the image patch depicts a pair of elements of the elements included in the form. The structure system encodes an indication of whether the pair of elements have an association of the estimated associations. An indication is generated that the pair of elements have a particular association based at least partially on the encoded indication, bounding boxes of the pair of elements, and text depicted in the image patch.
    Type: Grant
    Filed: November 2, 2021
    Date of Patent: May 14, 2024
    Assignee: Adobe Inc.
    Inventors: Shripad Deshmukh, Milan Aggarwal, Mausoom Sarkar, Hiresh Gupta
  • Patent number: 11978211
    Abstract: A phase image is formed by calculation from a hologram image of a cell, and segmentation is performed pixel by pixel on the phase image using a fully convolutional neural network to identify an undifferentiated cell region, a deviated cell region, a foreign substance region, and the like. During learning, when a learning image included in a mini-batch is read, the image is randomly flipped vertically or horizontally and then rotated by a random angle. A part of the pre-rotation image that has been lost from the frame is compensated for by a mirror-image inversion with an edge of the post-rotation image as its axis. Learning of the fully convolutional neural network is performed using the generated learning images. The same processing is repeated for all mini-batches, and the learning is repeated a predetermined number of times while shuffling the training data allocated to the mini-batches. The precision of the learning model is thus improved.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: May 7, 2024
    Assignee: SHIMADZU CORPORATION
    Inventors: Wataru Takahashi, Ayako Akazawa
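The augmentation described, random flips, rotation by a random angle, and mirror-image compensation of pixels lost from the frame, can be sketched with nearest-neighbour resampling. The implementation details below (interpolation, reflection scheme) are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def reflect_index(i, n):
    """Map an out-of-range index into [0, n) by mirror reflection at the edges."""
    period = 2 * n
    i = np.mod(i, period)
    return np.where(i < n, i, period - 1 - i)

def rotate_with_mirror_fill(img, angle_deg):
    """Nearest-neighbour rotation about the image centre; output pixels
    that fall outside the original frame are filled by mirror reflection."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    a = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse-rotate each output coordinate back into the input frame
    y0 = cy + (ys - cy) * np.cos(a) - (xs - cx) * np.sin(a)
    x0 = cx + (ys - cy) * np.sin(a) + (xs - cx) * np.cos(a)
    yi = reflect_index(np.rint(y0).astype(int), h)
    xi = reflect_index(np.rint(x0).astype(int), w)
    return img[yi, xi]

def augment(img, rng):
    """Random vertical/horizontal flip followed by a random-angle rotation."""
    if rng.random() < 0.5:
        img = img[::-1, :]
    if rng.random() < 0.5:
        img = img[:, ::-1]
    return rotate_with_mirror_fill(img, rng.uniform(0.0, 360.0))
```

Library implementations (e.g. boundary mode "mirror"/"reflect" in image-processing packages) achieve the same compensation with better interpolation.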
  • Patent number: 11978273
    Abstract: Systems and techniques are provided for automatically analyzing and processing domain-specific image artifacts and document images. A process can include obtaining a plurality of document images comprising visual representations of structured text. An OCR-free machine learning model can be trained to automatically extract text data values from different types or classes of document image, based on using a corresponding region of interest (ROI) template corresponding to the structure of the document image type for at least initial rounds of annotations and training. The extracted information included in an inference prediction of the trained OCR-free machine learning model can be reviewed and validated or corrected correspondingly before being written to a database for use by one or more downstream analytical tasks.
    Type: Grant
    Filed: November 10, 2023
    Date of Patent: May 7, 2024
    Assignee: 32Health Inc.
    Inventors: Deepak Ramaswamy, Ravindra Kompella, Shaju Puthussery
  • Patent number: 11967095
    Abstract: This image processing system is provided with: a measurement part which measures the three-dimensional shape of a target object based on a captured image obtained by capturing an image of the target object; a reliability calculation part which calculates, for each area, an index that indicates the reliability in the measurement of the three-dimensional shape; a reliability evaluation part which evaluates, for each area, whether the calculated index satisfies a predetermined criterion; and a display part which simultaneously or selectively displays the measurement result of the three-dimensional shape and a result image that shows the area that does not satisfy the criterion in the captured image.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: April 23, 2024
    Assignee: OMRON Corporation
    Inventor: Motoharu Okuno
  • Patent number: 11967166
    Abstract: The present disclosure provides a method, and system architectures for carrying out the method, of automated marine life object classification and identification utilising a Deep Neural Network (DNN) core to facilitate the operations of a post-processing module subnetwork such as instance segmentation, masking, labelling, and image overlay of an input image determined to contain one or more target marine life objects. Multiple instances of target objects from the same image data can be easily classified and labelled for post-processing through application of a masking layer over each respective object by a semantic segmentation network.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: April 23, 2024
    Inventors: Tianye Wang, Shiwei Liu, Xiaoge Cheng
  • Patent number: 11961318
    Abstract: An information processing device includes a processor configured to acquire a document image illustrating a document, acquire a related character string associated with a target character string included in the document image, and extract target information corresponding to the target character string from a region set with reference to a position of the related character string in the document image.
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: April 16, 2024
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Fumi Kosaka, Akinobu Yamaguchi, Junichi Shimizu, Shinya Nakamura, Jun Ando, Masanori Yoshizuka, Akane Abe
  • Patent number: 11954903
    Abstract: This application relates to a system for automatically recognizing geographical area information provided on an item. The system may include an optical scanner configured to capture geographical area information provided on an item, the geographical area information comprising a plurality of geographical area components. The system may also include a controller in data communication with the optical scanner and configured to recognize the captured geographical area information by running a plurality of machine learning or deep learning models separately and sequentially on the plurality of geographical area components of the captured geographical area information.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: April 9, 2024
    Assignee: United States Postal Service
    Inventor: Ryan J. Simpson
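Running "a plurality of machine learning or deep learning models separately and sequentially" on the geographical components suggests a simple pipeline in which each recognizer can pass context to the next. The component names and model interface below are hypothetical, not from the patent:

```python
def recognize_geo_info(components, models):
    """Run one recognizer per geographical component in sequence,
    passing each result forward so later models can narrow their search.

    components: ordered list of (name, image) pairs.
    models: dict mapping name -> callable(image, context) -> (value, context).
    """
    results, context = {}, None
    for name, image in components:
        results[name], context = models[name](image, context)
    return results
```

For instance, a state recognizer could run first and its output constrain the city recognizer's vocabulary.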
  • Patent number: 11954988
    Abstract: A system and method for image processing for wildlife detection is provided, consisting of object detection and object classification. An image capturing means captures one or more images. Each captured image is converted to greyscale, re-sized, and passed to a Deep Neural Network (DNN). The image classification is executed by a processor via the Deep Neural Network in two steps, the second of which is carried out by a custom Convolutional Neural Network (CNN). The CNN classifies the detected object according to certain parameters. After classifying a particular animal species in the captured image, the system sends notifications, SMS messages, and alerts to neighbours in the surrounding area. For correct image classification, the feedback data is sent to the CNN for further re-training. Periodic retraining of the model with the images captured as part of the system execution adapts the system to the specific area being monitored and the wildlife in that area.
    Type: Grant
    Filed: September 3, 2021
    Date of Patent: April 9, 2024
    Inventor: Vivek Satya Bharati
  • Patent number: 11948228
    Abstract: A color correction method for a panoramic image comprises: acquiring first and second fisheye images; expanding the first fisheye image and the second fisheye image respectively to obtain a first image and a second image in an RGB color space; calculating the overlapping areas between the images; converting the first image and the second image from the RGB color space to a Lab color space; in the Lab color space, adjusting the brightness value of the first image and the brightness value of the second image; converting the first image and the second image from the Lab color space to the RGB color space; and, according to the mean color values of the first and second overlapping areas, adjusting the color value of the second image by using the first image as a reference, or adjusting the color value of the first image by using the second image as a reference.
    Type: Grant
    Filed: March 6, 2019
    Date of Patent: April 2, 2024
    Assignee: ARASHI VISION INC.
    Inventor: Chenglong Yin
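The final colour-value adjustment, using one image's overlap statistics as the reference for the other, can be sketched as per-channel mean matching. This is a simplification (the patent also performs brightness adjustment in Lab space first), and the 8-bit value range is an assumption:

```python
import numpy as np

def match_overlap_color(img_ref, img_adj, mask_ref, mask_adj):
    """Scale each channel of img_adj so that its mean colour over its
    overlap region matches the reference image's mean colour over the
    corresponding overlap region."""
    out = img_adj.astype(float).copy()
    for c in range(3):
        ref_mean = img_ref[..., c][mask_ref].mean()
        adj_mean = img_adj[..., c][mask_adj].mean()
        out[..., c] *= ref_mean / adj_mean
    return np.clip(out, 0.0, 255.0)
```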
  • Patent number: 11948387
    Abstract: Systems and methods for training an object detection network are described. Embodiments train an object detection network using a labeled training set, wherein each element of the labeled training set includes an image and ground truth labels for object instances in the image, predict annotation data for a candidate set of unlabeled data using the object detection network, select a sample image from the candidate set using a policy network, generate a labeled sample based on the selected sample image and the annotation data, wherein the labeled sample includes labels for a plurality of object instances in the sample image, and perform additional training on the object detection network based at least in part on the labeled sample.
    Type: Grant
    Filed: February 8, 2021
    Date of Patent: April 2, 2024
    Assignee: ADOBE INC.
    Inventors: Sumit Shekhar, Bhanu Prakash Reddy Guda, Ashutosh Chaubey, Ishan Jindal, Avneet Jain
  • Patent number: 11948384
    Abstract: The present disclosure is directed to systems and methods that enable scanning of any type of card regardless of the shape and design of a given card and/or the font, shape, and format with which characters such as numbers, letters, and symbols are printed on the card, including cards with non-embossed characters. In one example, a method includes scanning a card, the card including at least an account number associated with a user of the card and an identifier of the user; detecting, by applying a machine learning model to the card after scanning the card, at least the account number printed on the card; and completing a task using the account number.
    Type: Grant
    Filed: September 17, 2021
    Date of Patent: April 2, 2024
    Assignee: Synchrony Bank
    Inventors: Brian Yang, Michael Storiale
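A common sanity check before "completing a task using the account number" is the Luhn checksum over the detected digits; the patent does not mention Luhn, so this is purely an illustrative companion step for validating a scan:

```python
def luhn_valid(account_number: str) -> bool:
    """Luhn checksum: doubling every second digit from the right (with
    digit-sum reduction), a valid card number totals to a multiple of 10."""
    digits = [int(ch) for ch in account_number if ch.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

A failed check would typically trigger a re-scan rather than completing the task.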
  • Patent number: 11941918
    Abstract: An image processing component is trained to process 2D images of human body parts, in order to extract depth information about the human body parts captured therein. Image processing parameters are learned during the training from a training set of captured 3D training images, each being a 3D training image of a human body part captured using 3D image capture equipment and comprising 2D image data and corresponding depth data, by: processing the 2D image data of each 3D training image according to the image processing parameters, so as to compute an image processing output for comparison with the corresponding depth data of that 3D image, and adapting the image processing parameters in order to match the image processing outputs to the corresponding depth data, thereby training the image processing component to extract depth information from 2D images of human body parts.
    Type: Grant
    Filed: April 14, 2023
    Date of Patent: March 26, 2024
    Assignee: Yoti Holding Limited
    Inventors: Symeon Nikitidis, Francisco Angel Garcia Rodriguez, Erlend Davidson, Samuel Neugber
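The training scheme above, adapting parameters so the image-processing outputs match the captured depth data, is ordinary supervised regression. A toy sketch with a linear map standing in for the image processing component (an assumption; the patent does not specify the model class):

```python
import numpy as np

def train_depth_regressor(images_2d, depths, lr=0.1, steps=500):
    """Fit parameters W so that the image-processing output X @ W matches
    the depth data, mirroring the patent's compare-and-adapt loop."""
    X = images_2d.reshape(len(images_2d), -1)   # flatten the 2D image data
    y = depths.reshape(len(depths), -1)         # corresponding depth data
    W = np.zeros((X.shape[1], y.shape[1]))
    for _ in range(steps):
        grad = X.T @ (X @ W - y) / len(X)       # gradient of the squared error
        W -= lr * grad                          # adapt the parameters
    return W
```

A real system would use a deep network and stochastic optimization, but the compare-outputs-to-depth, adapt-parameters loop is the same.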