Patents Examined by Daniel G. Mariam
  • Patent number: 12046067
    Abstract: Methods and systems for extracting personal data from a sensitive document are provided. The system includes a document prediction module, a cropping module, a denoising module, and an optical character recognition (OCR) module. The document prediction module predicts the document type of the sensitive document using a keypoint-matching-based approach, and the cropping module extracts the document shape and one or more fields comprising text or pictures from the sensitive document. The denoising module prepares the one or more fields for optical character recognition, and the OCR module performs OCR on the denoised fields to detect the characters they contain. A minimal sketch of this pipeline follows the entry.
    Type: Grant
    Filed: June 15, 2021
    Date of Patent: July 23, 2024
    Assignee: DATHENA SCIENCE PTE. LTD.
    Inventors: Christopher Muffat, Tetiana Kodliuk
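
The abstract describes a four-stage pipeline: keypoint-based document-type prediction, field cropping, denoising, and OCR. The following is a minimal sketch of that flow, assuming OpenCV's ORB features, its non-local-means denoiser, and pytesseract; none of these libraries are named in the patent.

```python
import cv2
import numpy as np
import pytesseract

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def predict_document_type(image, templates):
    """Return the template key whose keypoints best match the input image."""
    _, desc = orb.detectAndCompute(image, None)
    scores = {}
    for doc_type, template in templates.items():
        _, t_desc = orb.detectAndCompute(template, None)
        matches = matcher.match(desc, t_desc)
        # Count strong matches (small Hamming distance) as the score.
        scores[doc_type] = sum(1 for m in matches if m.distance < 40)
    return max(scores, key=scores.get)

def extract_field_text(image, field_box):
    """Crop one field, denoise it, and run OCR on the result."""
    x, y, w, h = field_box  # hypothetical per-document-type field location
    field = image[y:y + h, x:x + w]
    denoised = cv2.fastNlMeansDenoising(field, h=10)
    return pytesseract.image_to_string(denoised)
```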
  • Patent number: 12033075
    Abstract: This application relates to the use of transformer neural networks to generate dynamic parameters for use in convolutional neural networks. In various embodiments, received image data is encoded and the encoded signal is sent to both a decoder and a transformer neural network. The decoder outputs decoded data for input into a convolutional neural network. The transformer outputs a set of dynamic parameter values for input into the convolutional neural network. The convolutional neural network may use the decoded data and the set of dynamic parameter values to output instance image data identifying a number of objects in an image. In various embodiments, the decoded data is also used to generate semantic data. The semantic data may be combined with the instance data to form panoptic image data. A minimal sketch of the dynamic-parameter scheme follows the entry.
    Type: Grant
    Filed: July 9, 2021
    Date of Patent: July 9, 2024
    Assignee: Apple Inc.
    Inventor: Atila Orhon
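
The core idea above is a transformer branch that emits parameters consumed by a convolution over the decoded features. A minimal PyTorch sketch under assumed shapes and module sizes; this illustrates dynamic convolution generically, not Apple's actual architecture.

```python
import torch
import torch.nn.functional as F
from torch import nn

class DynamicConvNet(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Conv2d(3, channels, 3, padding=1)
        self.decoder = nn.Conv2d(channels, channels, 3, padding=1)
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=4,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, image):
        encoded = self.encoder(image)                  # shared encoding
        decoded = self.decoder(encoded)                # decoder branch
        tokens = encoded.flatten(2).transpose(1, 2)    # (B, H*W, C)
        params = self.transformer(tokens).mean(dim=1)  # (B, C) dynamic params
        b, c, _, _ = decoded.shape
        # Apply the predicted parameters as a per-image 1x1 conv kernel.
        kernels = params.view(b, 1, c, 1, 1)
        masks = [F.conv2d(decoded[i:i + 1], kernels[i]) for i in range(b)]
        return torch.cat(masks)                        # (B, 1, H, W) logits
```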
  • Patent number: 12020403
    Abstract: Certain aspects and features of this disclosure relate to semantically-aware image extrapolation. In one example, an input image is segmented to produce an input segmentation map of object instances in the input image. An object generation network is used to generate an extrapolated semantic label map for an extrapolated image. The extrapolated semantic label map includes instances in the original image and instances that will appear in an outpainted region of the extrapolated image. A panoptic label map is derived from coordinates of output instances in the extrapolated image and used to identify partial instances and boundaries. Instance-aware context normalization is used to apply one or more characteristics from the input image to the outpainted region to maintain semantic continuity. The extrapolated image includes the original image and the outpainted region and can be rendered or stored for future use. A simplified sketch of the context-normalization step follows the entry.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: June 25, 2024
    Assignee: Adobe Inc.
    Inventors: Kuldeep Kulkarni, Soumya Dash, Hrituraj Singh, Bholeshwar Khurana, Aniruddha Mahapatra, Abhishek Bhatia
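
The instance-aware context normalization above transfers appearance statistics from the input image into the outpainted region. A simplified, non-instance-aware sketch in NumPy; the per-instance variant the patent claims would run this once per object mask.

```python
import numpy as np

def context_normalize(outpainted, reference, eps=1e-6):
    """Shift/scale outpainted pixels to the reference region's statistics."""
    ref_mean = reference.mean(axis=(0, 1))
    ref_std = reference.std(axis=(0, 1))
    out_mean = outpainted.mean(axis=(0, 1))
    out_std = outpainted.std(axis=(0, 1))
    # Whiten the outpainted region, then re-color with the input statistics.
    return (outpainted - out_mean) / (out_std + eps) * ref_std + ref_mean
```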
  • Patent number: 12013647
    Abstract: A method includes the steps of receiving an image from a metrology tool, determining individual units of the image, and discriminating the units that provide accurate metrology values. The images are obtained by measuring the metrology target at multiple wavelengths. When the units are pixels in the image, the discrimination between them is based on calculating a degree of similarity between the units. A minimal sketch of this discrimination step follows the entry.
    Type: Grant
    Filed: December 24, 2019
    Date of Patent: June 18, 2024
    Assignee: ASML NETHERLANDS B.V.
    Inventors: Simon Gijsbert Josephus Mathijssen, Marc Johannes Noot, Kaustuve Bhattacharyya, Arie Jeffrey Den Boef, Grzegorz Grzela, Timothy Dugan Davis, Olger Victor Zwier, Ralph Timotheus Huijgen, Peter David Engblom, Jan-Willem Gemmink
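
With one image per wavelength, each pixel has a per-wavelength signature, and pixels whose signatures agree with the ensemble can be kept as reliable. The sketch below uses correlation against the median signature as the similarity measure; the patent claims similarity generically, so this specific measure is an assumption.

```python
import numpy as np

def reliable_pixel_mask(stack, threshold=0.9):
    """stack: (num_wavelengths, H, W) images -> (H, W) boolean mask."""
    n, h, w = stack.shape
    signatures = stack.reshape(n, -1).T        # (H*W, num_wavelengths)
    reference = np.median(signatures, axis=0)  # consensus signature
    sig_c = signatures - signatures.mean(axis=1, keepdims=True)
    ref_c = reference - reference.mean()
    similarity = (sig_c @ ref_c) / (
        np.linalg.norm(sig_c, axis=1) * np.linalg.norm(ref_c) + 1e-12)
    return (similarity > threshold).reshape(h, w)
```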
  • Patent number: 12002292
    Abstract: A calibration system and method for online calibration of 3D scan data from multiple viewpoints is provided. The calibration system receives a set of depth scans and a corresponding set of color images of a scene that includes a human-object as part of a foreground of the scene. The calibration system extracts a first three-dimensional (3D) representation of the foreground based on a first depth scan and spatially aligns the extracted first 3D representation with a second 3D representation of the foreground. The first 3D representation and the second 3D representation are associated with a first viewpoint and a second viewpoint, respectively, in a 3D environment. The calibration system updates the spatially aligned first 3D representation based on the set of color images and a set of structural features of the human-object and reconstructs a 3D mesh of the human-object based on the updated first 3D representation of the foreground.
    Type: Grant
    Filed: November 26, 2019
    Date of Patent: June 4, 2024
    Assignee: SONY GROUP CORPORATION
    Inventor: Kendai Furukawa
  • Patent number: 12002227
    Abstract: Devices, systems, and methods are disclosed for partial point cloud registration. In some implementations, a method includes obtaining a first set of three-dimensional (3D) points corresponding to an object in a physical environment, the first set of 3D points having locations in a first coordinate system; obtaining a second set of 3D points corresponding to the object in the physical environment, the second set of 3D points having locations in a second coordinate system; predicting, via a machine learning model, locations of the first set of 3D points in the second coordinate system; and determining transform parameters relating the first set of 3D points and the second set of 3D points based on the predicted locations of the first set of 3D points in the second coordinate system. A closed-form solve for the last step is sketched after the entry.
    Type: Grant
    Filed: July 15, 2021
    Date of Patent: June 4, 2024
    Assignee: Apple Inc.
    Inventors: Donghoon Lee, Thorsten Gernoth, Onur C. Hamsici, Shuo Feng
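
Once a model has predicted where the first point set should lie in the second coordinate system, the transform parameters can be recovered in closed form. A standard Kabsch/Procrustes solve is sketched below; the patent claims "determining transform parameters" without prescribing this particular solver.

```python
import numpy as np

def rigid_transform(source, predicted):
    """Least-squares R, t such that predicted ~ source @ R.T + t."""
    src_mean = source.mean(axis=0)
    pred_mean = predicted.mean(axis=0)
    h = (source - src_mean).T @ (predicted - pred_mean)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = pred_mean - r @ src_mean
    return r, t
```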
  • Patent number: 11991982
    Abstract: Tools in an automatic milking arrangement are picked up by using a robotic arm (110). The robotic arm (110) moves a camera (130) to an origin location (PC) from which the camera (130) registers three-dimensional image data (Dimg3D) of at least one tool (141, 142, 143, 144). The three-dimensional image data is processed using an image-based object identification algorithm to identify objects in the form of the tools and hoses (152). In response to identifying at least one tool, a respective tool position (PT1, PT3, PT4) is determined for each identified tool based on the origin location (PC) and the three-dimensional image data. Then, a grip device (115) is controlled exclusively to one or more of the respective tool positions (PT1, PT3, PT4) to perform a pick-up operation. Thus, futile attempts to pick up non-existent or blocked tools can be avoided.
    Type: Grant
    Filed: October 28, 2019
    Date of Patent: May 28, 2024
    Assignee: DeLaval Holding AB
    Inventor: Andreas Eriksson
  • Patent number: 11995908
    Abstract: An information processing device includes a processor configured to acquire a document image illustrating a document, acquire a related character string associated with a target character string included in the document image, and extract target information corresponding to the target character string from a region set with reference to a position of the related character string in the document image. A minimal sketch of this anchor-relative extraction follows the entry.
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: May 28, 2024
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Fumi Kosaka, Akinobu Yamaguchi, Junichi Shimizu, Shinya Nakamura, Jun Ando, Masanori Yoshizuka, Akane Abe
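
The extraction reduces to: find the related ("anchor") string, define a region relative to its bounding box, and collect the text inside that region. A minimal sketch, assuming word boxes from an upstream OCR step and a search region to the anchor's right; both are assumptions, since the patent allows any region set relative to the anchor.

```python
def extract_by_anchor(words, anchor_text, dx=300, dy=20):
    """words: list of (text, x, y, w, h) tuples from OCR.
    Returns the text found in a region to the right of the anchor."""
    anchor = next(w for w in words if w[0] == anchor_text)
    _, ax, ay, aw, ah = anchor
    x0, y0, x1, y1 = ax + aw, ay - dy, ax + aw + dx, ay + ah + dy
    hits = [w for w in words
            if x0 <= w[1] <= x1 and y0 <= w[2] <= y1]
    return " ".join(w[0] for w in sorted(hits, key=lambda w: w[1]))

# Example: pull the value printed next to a hypothetical "Invoice No." label.
# extract_by_anchor(ocr_words, "Invoice No.")
```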
  • Patent number: 11989262
    Abstract: Approaches presented herein provide for unsupervised domain transfer learning. In particular, three neural networks can be trained together using at least labeled data from a first domain and unlabeled data from a second domain. Features of the data are extracted using a feature extraction network. A first classifier network uses these features to classify the data, while a second classifier network uses these features to determine the relevant domain. A combined loss function is used to optimize the networks, with the goal that the feature extraction network extracts features the first classifier network can use to accurately classify the data while preventing the second classifier from determining the domain of the image. Such optimization enables object classification to be performed with high accuracy for either domain, even though there may be little to no labeled training data for the second domain. A gradient-reversal sketch of this setup follows the entry.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: May 21, 2024
    Assignee: Nvidia Corporation
    Inventors: David Acuna Marrero, Guojun Zhang, Marc Law, Sanja Fidler
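
This three-network arrangement with a combined loss is the classic domain-adversarial recipe, and a gradient reversal layer is one common way to make the extractor fool the domain classifier. The PyTorch sketch below assumes that mechanism; the patent does not mandate it.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Negate gradients on the way back: the feature extractor is
        # pushed to *increase* the domain classifier's loss.
        return -ctx.lam * grad_output, None

def combined_loss(features, labels, domains, cls_head, dom_head, lam=1.0):
    # Classification loss on the labeled (first-domain) samples.
    cls_loss = F.cross_entropy(cls_head(features), labels)
    # Domain loss sees reversed gradients through the extractor.
    dom_logits = dom_head(GradReverse.apply(features, lam))
    dom_loss = F.cross_entropy(dom_logits, domains)
    return cls_loss + dom_loss
```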
  • Patent number: 11989889
    Abstract: A method for determining a movement of a device relative to at least one object based on a digital image sequence of the object recorded from the location of the device. The method includes computing a plurality of optical flow fields from image pairs of the digital image sequence; finding the position of an object in a partial image region of the most current image and assigning the partial image region to the object; forming a plurality of partial optical flow fields from the plurality of optical flow fields; selecting a partial flow field from the plurality of partial flow fields in accordance with at least one criterion to facilitate the estimation of a change in scale of the object; and estimating the change in scale for the at least one object using the assigned partial image region based on the selected partial flow field. A minimal scale-change sketch follows the entry.
    Type: Grant
    Filed: October 29, 2019
    Date of Patent: May 21, 2024
    Assignee: ROBERT BOSCH GMBH
    Inventors: Alexander Lengsfeld, Joern Jachalsky, Marcel Brueckner, Philip Lenz
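
Given the selected partial flow field inside an object's image patch, the change in scale can be estimated by fitting a radial expansion model u(x) ≈ (s − 1)(x − c) to the flow vectors. The sketch below assumes OpenCV's Farneback flow as a stand-in for the patent's plurality of optical flow fields.

```python
import cv2
import numpy as np

def scale_change(prev_gray, curr_gray, box):
    """Estimate the object's scale change s between two grayscale frames."""
    x, y, w, h = box
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    part = flow[y:y + h, x:x + w]          # partial flow field for the object
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs - w / 2, ys - h / 2], axis=-1).reshape(-1, 2)
    vecs = part.reshape(-1, 2)
    # Least-squares radial fit: (s - 1) = <u, x> / <x, x>.
    return 1.0 + (pts * vecs).sum() / (pts * pts).sum()
```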
  • Patent number: 11983893
    Abstract: Systems and methods for hybrid depth regularization in accordance with various embodiments of the invention are disclosed. In one embodiment of the invention, a depth sensing system comprises a plurality of cameras, a processor, and a memory containing an image processing application. The image processing application may direct the processor to obtain image data for a plurality of images from multiple viewpoints, the image data comprising a reference image and at least one alternate view image; generate a raw depth map using a first depth estimation process, and a confidence map; and generate a regularized depth map. The regularized depth map may be generated by computing a secondary depth map using a second, different depth estimation process, and computing a composite depth map by selecting depth estimates from the raw depth map and the secondary depth map based on the confidence map. The selection step is sketched after the entry.
    Type: Grant
    Filed: January 12, 2023
    Date of Patent: May 14, 2024
    Assignee: Adeia Imaging LLC
    Inventors: Ankit Jain, Priyam Chatterjee, Kartik Venkataraman
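
The composite step reduces to a confidence-gated selection between the two per-pixel depth estimates. A one-line NumPy sketch, with an assumed scalar threshold on the confidence map:

```python
import numpy as np

def composite_depth(raw_depth, secondary_depth, confidence, threshold=0.7):
    """Keep the raw estimate where confidence is high, else fall back to
    the secondary estimate from the second depth estimation process."""
    return np.where(confidence >= threshold, raw_depth, secondary_depth)
```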
  • Patent number: 11983946
    Abstract: In implementations of refining element associations for form structure extraction, a computing device implements a structure system to receive estimate data describing estimated associations of elements included in a form and a digital image depicting the form. An image patch is extracted from the digital image, and the image patch depicts a pair of elements of the elements included in the form. The structure system encodes an indication of whether the pair of elements have an association of the estimated associations. An indication is generated that the pair of elements have a particular association based at least partially on the encoded indication, bounding boxes of the pair of elements, and text depicted in the image patch.
    Type: Grant
    Filed: November 2, 2021
    Date of Patent: May 14, 2024
    Assignee: Adobe Inc.
    Inventors: Shripad Deshmukh, Milan Aggarwal, Mausoom Sarkar, Hiresh Gupta
  • Patent number: 11982878
    Abstract: The local refractive power or the refractive power distribution of a spectacle lens is measured. A first image of a scene having a plurality of structure points and a left and/or a right spectacle lens of a frame front is captured with an image capturing device from a first capture position, with an imaging beam path for structure points that extends through the spectacle lens of the frame front. At least two further images of the scene are captured with the image capturing device from different capture positions, one of which can be identical to the first capture position, this time without the spectacle lenses or the frame front containing them but with the structure points imaged in the first image. The coordinates of the structure points in a coordinate system are then calculated from the at least two further images of the scene by image analysis.
    Type: Grant
    Filed: October 19, 2021
    Date of Patent: May 14, 2024
    Assignee: Carl Zeiss Vision International GmbH
    Inventor: Carsten Glasenapp
  • Patent number: 11978211
    Abstract: A phase image is computed from a hologram image of a cell, and per-pixel segmentation is performed on the phase image using a fully convolutional neural network to identify an undifferentiated cell region, a deviated cell region, a foreign substance region, and the like. During training, each learning image read from a mini-batch is randomly flipped vertically or horizontally and then rotated by a random angle. Any part of the frame left empty by the rotation is compensated for by mirror-image reflection about the edges of the rotated image. The fully convolutional neural network is trained on the learning images generated in this way; the same processing is repeated for all mini-batches, and training is repeated a predetermined number of times while shuffling the training data allocated to the mini-batches. The precision of the learned model is thus improved. A minimal sketch of this augmentation follows the entry.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: May 7, 2024
    Assignee: SHIMADZU CORPORATION
    Inventors: Wataru Takahashi, Ayako Akazawa
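
The augmentation loop described above is straightforward to reproduce: random flips, a random-angle rotation, and mirror reflection to fill the corners the rotation exposes. In the OpenCV sketch below, cv2.BORDER_REFLECT stands in for the mirror-image compensation; the flip probabilities and angle range are assumptions.

```python
import cv2
import numpy as np

def augment(image, rng):
    if rng.random() < 0.5:
        image = cv2.flip(image, 0)        # random vertical flip
    if rng.random() < 0.5:
        image = cv2.flip(image, 1)        # random horizontal flip
    h, w = image.shape[:2]
    angle = rng.uniform(0.0, 360.0)       # random rotation angle
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    # Mirror-reflect at the edges so rotated-in corners are not left empty.
    return cv2.warpAffine(image, m, (w, h), borderMode=cv2.BORDER_REFLECT)

rng = np.random.default_rng(0)
mini_batch = np.zeros((8, 128, 128), np.float32)  # placeholder images
augmented = [augment(img, rng) for img in mini_batch]
```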
  • Patent number: 11978273
    Abstract: Systems and techniques are provided for automatically analyzing and processing domain-specific image artifacts and document images. A process can include obtaining a plurality of document images comprising visual representations of structured text. An OCR-free machine learning model can be trained to automatically extract text data values from different types or classes of document image, using a region of interest (ROI) template matched to the structure of each document image type for at least the initial rounds of annotation and training. The extracted information included in an inference prediction of the trained OCR-free machine learning model can be reviewed and validated or corrected accordingly before being written to a database for use by one or more downstream analytical tasks. A minimal sketch of the ROI-template step follows the entry.
    Type: Grant
    Filed: November 10, 2023
    Date of Patent: May 7, 2024
    Assignee: 32Health Inc.
    Inventors: Deepak Ramaswamy, Ravindra Kompella, Shaju Puthussery
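
An ROI template records, once per document class, where each field sits; crops taken at those positions can then seed annotation and training. A minimal sketch with hypothetical field names and coordinates expressed as fractions of the page size:

```python
def crop_rois(image, template):
    """template maps field name -> (x0, y0, x1, y1) page-size fractions."""
    h, w = image.shape[:2]
    return {field: image[int(y0 * h):int(y1 * h), int(x0 * w):int(x1 * w)]
            for field, (x0, y0, x1, y1) in template.items()}

# Hypothetical template for one document class.
CLAIM_FORM_TEMPLATE = {
    "patient_name": (0.05, 0.10, 0.45, 0.16),
    "procedure_code": (0.55, 0.40, 0.75, 0.46),
}
```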
  • Patent number: 11967166
    Abstract: The present disclosure provides a method, and system architectures for carrying out the method, of automated marine life object classification and identification utilising a Deep Neural Network (DNN) core to facilitate the operations of a post-processing module subnetwork, such as instance segmentation, masking, labelling, and image overlay, on an input image determined to contain one or more target marine life objects. Multiple instances of target objects from the same image data can be easily classified and labelled for post-processing through application of a masking layer over each respective object by a semantic segmentation network.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: April 23, 2024
    Inventors: Tianye Wang, Shiwei Liu, Xiaoge Cheng
  • Patent number: 11967095
    Abstract: This image processing system includes: a measurement part that measures the three-dimensional shape of a target object based on a captured image of the object; a reliability calculation part that calculates, for each area, an index indicating the reliability of the three-dimensional shape measurement; a reliability evaluation part that evaluates, for each area, whether the calculated index satisfies a predetermined criterion; and a display part that simultaneously or selectively displays the measurement result of the three-dimensional shape and a result image showing the areas of the captured image that do not satisfy the criterion.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: April 23, 2024
    Assignee: OMRON Corporation
    Inventor: Motoharu Okuno
  • Patent number: 11961318
    Abstract: An information processing device includes a processor configured to acquire a document image illustrating a document, acquire a related character string associated with a target character string included in the document image, and extract target information corresponding to the target character string from a region set with reference to a position of the related character string in the document image.
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: April 16, 2024
    Assignee: FUJIFILM Business Innovation Corp.
    Inventors: Fumi Kosaka, Akinobu Yamaguchi, Junichi Shimizu, Shinya Nakamura, Jun Ando, Masanori Yoshizuka, Akane Abe
  • Patent number: 11954988
    Abstract: A system and method for image processing for wildlife detection is provided, consisting of object detection and object classification. An image capturing means captures one or more images. Each captured image is converted to greyscale, resized, and passed to a Deep Neural Network (DNN); a minimal sketch of this preprocessing follows the entry. Image classification is executed by a processor via the DNN in two steps, the second carried out by a custom Convolutional Neural Network (CNN) that classifies the detected object against certain parameters. After classifying a particular animal species in the captured image, the system sends notifications, SMS messages, and alerts to neighbours in the surrounding area. For correct image classification, feedback data is sent back to the CNN for further retraining. Periodic retraining of the model with the images captured as part of system execution adapts the system to the specific area being monitored and the wildlife in that area.
    Type: Grant
    Filed: September 3, 2021
    Date of Patent: April 9, 2024
    Inventor: Vivek Satya Bharati
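
The preprocessing named in the abstract is a straightforward greyscale-and-resize step ahead of the detection network. A minimal OpenCV sketch; the target size is an assumption.

```python
import cv2

def preprocess(frame, size=(224, 224)):
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # convert to greyscale
    return cv2.resize(grey, size)                   # resize for the DNN input
```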
  • Patent number: 11954903
    Abstract: This application relates to a system for automatically recognizing geographical area information provided on an item. The system may include an optical scanner configured to capture geographical area information provided on an item, the geographical area information comprising a plurality of geographical area components. The system may also include a controller in data communication with the optical scanner and configured to recognize the captured geographical area information by running a plurality of machine learning or deep learning models separately and sequentially on the plurality of geographical area components of the captured geographical area information.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: April 9, 2024
    Assignee: United States Postal Service
    Inventor: Ryan J. Simpson