Patents by Inventor Archy DE BERKER

Archy DE BERKER has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11875114
    Abstract: A method for extracting information from a document, comprising: receiving an identification of an entity to be extracted from the document; identifying candidates from the document, each candidate corresponding to a given element contained in the document and having a given location within the document; embedding the candidates, thereby obtaining an embedding vector for each candidate; for each candidate, comparing in a semantic space the respective embedding vector to previous embedding vectors associated with previous entity values previously chosen for the entity, thereby obtaining a first comparison result; for each candidate, comparing in a pixel space the given location within the document of the candidate to a location associated with the previous entity values previously chosen for the entity, thereby obtaining a second comparison result; sorting the candidates using the first and second comparison results obtained for each candidate, thereby obtaining sorted candidates; and outputting the sorted candidates. (A brief illustrative code sketch of this candidate-ranking step appears after the listing.)
    Type: Grant
    Filed: October 23, 2020
    Date of Patent: January 16, 2024
    Assignee: ServiceNow Canada Inc.
    Inventors: Archy De Berker, Simon Lemieux
  • Patent number: 11481605
    Abstract: There is provided a 2D document extractor for extracting entities from a structured document, the 2D document extractor comprising a first convolutional neural network (CNN), a second CNN, and a third recurrent neural network (RNN). A plurality of text sequences and structural elements indicative of location of the text sequences in the document are received. The first CNN encodes the text sequences and structural elements to obtain a 3D encoded image indicative of semantic characteristics of the text sequences and having the structure of the document. The second CNN compresses the 3D encoded image to obtain a feature vector, the feature vector being indicative of a combination of spatial characteristics and semantic characteristics of the 3D encoded image. The third RNN decodes the feature vector to extract the text entities, a given text entity being associated with a text sequence. (A brief illustrative code sketch of this CNN-CNN-RNN pipeline appears after the listing.)
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: October 25, 2022
    Assignee: ServiceNow Canada Inc.
    Inventors: Olivier Nguyen, Archy De Berker, Eniola Alese, Majid Laali
  • Publication number: 20210124874
    Abstract: A method for extracting information from a document, comprising: receiving an identification of an entity to be extracted from the document; identifying candidates from the document, each candidate corresponding to a given element contained in the document and having a given location within the document; embedding the candidates, thereby obtaining an embedding vector for each candidate; for each candidate, comparing in a semantic space the respective embedding vector to previous embedding vectors associated with previous entity values previously chosen for the entity, thereby obtaining a first comparison result; for each candidate, comparing in a pixel space the given location within the document of the candidate to a location associated with the previous entity values previously chosen for the entity, thereby obtaining a second comparison result; sorting the candidates using the first and second comparison results obtained for each candidate, thereby obtaining sorted candidates; and outputting the sorted candidates.
    Type: Application
    Filed: October 23, 2020
    Publication date: April 29, 2021
    Inventors: Archy DE BERKER, Simon LEMIEUX
  • Publication number: 20210125034
    Abstract: There is provided a 2D document extractor for extracting entities from a structured document, the 2D document extractor comprising a first convolutional neural network (CNN), a second CNN, and a third recurrent neural network (RNN). A plurality of text sequences and structural elements indicative of location of the text sequences in the document are received. The first CNN encodes the text sequences and structural elements to obtain a 3D encoded image indicative of semantic characteristics of the text sequences and having the structure of the document. The second CNN compresses the 3D encoded image to obtain a feature vector, the feature vector being indicative of a combination of spatial characteristics and semantic characteristics of the 3D encoded image. The third RNN decodes the feature vector to extract the text entities, a given text entity being associated with a text sequence.
    Type: Application
    Filed: October 25, 2019
    Publication date: April 29, 2021
    Applicant: Element AI Inc.
    Inventors: Olivier NGUYEN, Archy De Berker, Eniola Alese, Majid Laali
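
The sketch below is a minimal, hedged illustration of the candidate-ranking step described in the abstracts of patent 11875114 and publication 20210124874. The cosine-similarity semantic comparison, the Euclidean pixel-space comparison, the weighted-sum combination, and every name in the code (Candidate, rank_candidates, semantic_weight) are assumptions made for illustration; the abstract only specifies that candidates are compared in a semantic space and a pixel space against previously chosen entity values and then sorted using both results.

# Illustrative sketch only; comparison functions and score combination are
# assumptions, not the patented method as specified.
from dataclasses import dataclass
from typing import List, Tuple

import numpy as np


@dataclass
class Candidate:
    text: str
    embedding: np.ndarray          # embedding vector of the candidate element
    location: Tuple[float, float]  # (x, y) position of the element on the page


def rank_candidates(
    candidates: List[Candidate],
    previous_embeddings: List[np.ndarray],
    previous_locations: List[Tuple[float, float]],
    semantic_weight: float = 0.5,
) -> List[Candidate]:
    """Sort candidates using a semantic-space and a pixel-space comparison."""

    def semantic_score(c: Candidate) -> float:
        # Mean cosine similarity to embeddings of previously chosen entity values.
        sims = [
            float(np.dot(c.embedding, p) / (np.linalg.norm(c.embedding) * np.linalg.norm(p)))
            for p in previous_embeddings
        ]
        return float(np.mean(sims))

    def location_score(c: Candidate) -> float:
        # Negative mean Euclidean distance to previously chosen locations,
        # so closer candidates score higher.
        dists = [
            float(np.hypot(c.location[0] - x, c.location[1] - y))
            for (x, y) in previous_locations
        ]
        return -float(np.mean(dists))

    def combined(c: Candidate) -> float:
        return semantic_weight * semantic_score(c) + (1.0 - semantic_weight) * location_score(c)

    return sorted(candidates, key=combined, reverse=True)

In practice the two scores would need to be put on comparable scales before being combined; the simple weighted sum here is only to keep the sketch self-contained.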
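The second sketch illustrates, in PyTorch, the three-stage structure described in patent 11481605 and publication 20210125034: a first CNN encodes text and layout into a 3D encoded image, a second CNN compresses it into a feature vector, and an RNN decodes that vector into entity tokens. The layer sizes, the grid-of-embeddings input representation, the GRU decoder, and all class and parameter names are illustrative assumptions; only the overall CNN-CNN-RNN structure comes from the abstract.

# Illustrative sketch only; architecture details are assumptions.
import torch
import torch.nn as nn


class TwoDDocumentExtractor(nn.Module):
    def __init__(self, embed_dim: int = 64, feat_dim: int = 256, vocab_size: int = 1000):
        super().__init__()
        # First CNN: turn per-cell text embeddings laid out on a 2D grid into a
        # 3D encoded image that preserves the document's spatial structure.
        self.encoder_cnn = nn.Sequential(
            nn.Conv2d(embed_dim, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(128, 128, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Second CNN: compress the encoded image into a single feature vector
        # combining spatial and semantic characteristics.
        self.compressor_cnn = nn.Sequential(
            nn.Conv2d(128, feat_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Third network (RNN): decode the feature vector into entity tokens.
        self.decoder_rnn = nn.GRU(input_size=feat_dim, hidden_size=feat_dim, batch_first=True)
        self.output_head = nn.Linear(feat_dim, vocab_size)

    def forward(self, grid_embeddings: torch.Tensor, decode_steps: int = 32) -> torch.Tensor:
        # grid_embeddings: (batch, embed_dim, height, width), text embeddings
        # placed at the locations given by the structural elements.
        encoded_image = self.encoder_cnn(grid_embeddings)
        feature = self.compressor_cnn(encoded_image).flatten(1)   # (batch, feat_dim)
        decoder_input = feature.unsqueeze(1).repeat(1, decode_steps, 1)
        decoded, _ = self.decoder_rnn(decoder_input)
        return self.output_head(decoded)                          # (batch, steps, vocab)

A real decoder would typically run autoregressively, feeding back predicted tokens (with teacher forcing during training); repeating the feature vector at every step here is just to keep the sketch short and runnable.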