Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for extracting text from an input document to generate one or more inference boxes. Each inference box may be input into a machine learning network trained on training labels. Each training label provides a human-augmented version of output from a separate machine translation engine. A first translation may be generated by the machine learning network. The first translation may be displayed in a user interface with respect to display of an original version of the input document and a translated version of a portion of the input document.
Type:
Grant
Filed:
January 3, 2022
Date of Patent:
May 2, 2023
Assignee:
Vannevar Labs, Inc.
Inventors:
Daniel Goodman, Nathanial Hartman, Nathaniel Bush, Brett Granberg
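The abstract describes feeding extracted inference boxes into a trained translation network and then displaying each original text portion alongside its translation. A minimal sketch of that pairing step, assuming a hypothetical `translate` callable standing in for the trained network (the function and variable names here are illustrative, not from the patent):

```python
def render_side_by_side(inference_boxes, translate):
    """Pair each extracted text region with its model translation,
    mirroring a UI that shows the original next to the translated portion."""
    return [(text, translate(text)) for text in inference_boxes]

# Hypothetical stand-in for the trained machine learning network:
# a lookup table mapping source text to a translation.
stub_model = {"hola mundo": "hello world", "adios": "goodbye"}
pairs = render_side_by_side(["hola mundo", "adios"], stub_model.get)
# pairs → [("hola mundo", "hello world"), ("adios", "goodbye")]
```

The same shape would apply with a real model: any callable from source text to translated text can be dropped in for `translate`.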
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for extracting text from an input document to generate one or more inference boxes. Each inference box may be input into a machine learning network trained on training labels. Each training label provides a human-augmented version of output from a separate machine translation engine. A first translation may be generated by the machine learning network. The first translation may be displayed in a user interface with respect to display of an original version of the input document and a translated version of a portion of the input document.
Type:
Grant
Filed:
April 29, 2021
Date of Patent:
January 4, 2022
Assignee:
Vannevar Labs, Inc.
Inventors:
Daniel Goodman, Nathanial Hartman, Nathaniel Bush, Brett Granberg
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for neural network-based optical character recognition. An embodiment of the system may generate a set of bounding boxes based on reshaped image portions that correspond to image data of a source image. The system may merge any intersecting bounding boxes into a merged bounding box to generate a set of merged bounding boxes indicative of image data portions that likely portray one or more words. Each merged bounding box may be fed by the system into a neural network to identify one or more words of the source image represented in the respective merged bounding box. The one or more identified words may be displayed by the system according to a standardized font and a confidence score.
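The merging step described above — collapsing any intersecting bounding boxes into a single merged box likely covering one or more whole words — can be sketched as a fixed-point merge over axis-aligned boxes. The coordinate convention `(x1, y1, x2, y2)` and the helper names are assumptions for illustration, not details from the patent:

```python
def intersects(a, b):
    # Axis-aligned boxes as (x1, y1, x2, y2); touching edges count as intersecting.
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def union(a, b):
    # Smallest box covering both inputs.
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def merge_boxes(boxes):
    """Repeatedly merge intersecting boxes until no two boxes intersect."""
    boxes = list(boxes)
    changed = True
    while changed:
        changed = False
        result = []
        while boxes:
            current = boxes.pop()
            i = 0
            while i < len(boxes):
                if intersects(current, boxes[i]):
                    current = union(current, boxes.pop(i))
                    changed = True
                else:
                    i += 1
            result.append(current)
        boxes = result
    return boxes
```

For example, `merge_boxes([(0, 0, 2, 2), (1, 1, 3, 3), (10, 10, 12, 12)])` collapses the two overlapping boxes into `(0, 0, 3, 3)` and leaves the distant box untouched. Each resulting merged box would then be fed to the recognition network as the abstract describes.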