Patents by Inventor Subhradeep Kayal

Subhradeep Kayal is named as an inventor on the following patent filings. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10740560
    Abstract: Systems and methods of extracting funding information from text are disclosed herein. The method includes receiving a text document, extracting paragraphs from the text document using a natural language processing model or a machine learning model, and classifying, using a machine learning classifier, the paragraphs as having funding information or not having funding information. The method further includes labeling, using a first annotator, potential entities within the paragraphs classified as having funding information, and labeling, using a second annotator, potential entities within the paragraphs classified as having funding information, where the first annotator implements a first named-entity recognition model and the second annotator implements a second named-entity recognition model that is different from the first named-entity recognition model.
    Type: Grant
    Filed: June 27, 2018
    Date of Patent: August 11, 2020
    Assignee: Elsevier, Inc.
    Inventors: Michelle Gregory, Subhradeep Kayal, Georgios Tsatsaronis, Zubair Afzal
  • Publication number: 20190005020
    Abstract: Systems and methods of extracting funding information from text are disclosed herein. The method includes receiving a text document, extracting paragraphs from the text document using a natural language processing model or a machine learning model, and classifying, using a machine learning classifier, the paragraphs as having funding information or not having funding information. The method further includes labeling, using a first annotator, potential entities within the paragraphs classified as having funding information, and labeling, using a second annotator, potential entities within the paragraphs classified as having funding information, where the first annotator implements a first named-entity recognition model and the second annotator implements a second named-entity recognition model that is different from the first named-entity recognition model.
    Type: Application
    Filed: June 27, 2018
    Publication date: January 3, 2019
    Applicant: Elsevier, Inc.
    Inventors: Michelle Gregory, Subhradeep Kayal, Georgios Tsatsaronis, Zubair Afzal
  • Patent number: 9471912
    Abstract: Electronic system for obtaining data, via one or more digital devices, on user behavior, digital transactions, and exposure relative to digital content and services, or external exposure and associated events between the user and the environment via sensors attached to digital devices, the system being configured to collect data reflecting the content and objects that the user at least potentially perceives as rendered on one or more digital screens attached to smart devices, reconstruct the at least potentially perceived visual landscape based on the collected data, and determine the target and/or level of user attention in view of the reconstruction and associated exposure events detected therein, and to apply locally stored information about rules or fingerprints in the digital object recognition process involving the collected data and validation of the type or identity of user actions, digital content, or external objects, as reflected by the reconstruction recapturing the visual landscape.
    Type: Grant
    Filed: February 6, 2014
    Date of Patent: October 18, 2016
    Assignee: Verto Analytics Oy
    Inventors: Hannu Verkasalo, Subhradeep Kayal, Matias Kontturi, Eric Malmi
  • Publication number: 20150220814
    Abstract: Electronic system for obtaining data, via one or more digital devices, on user behavior, digital transactions, and exposure relative to digital content and services, or external exposure and associated events between the user and the environment via sensors attached to digital devices, the system being configured to collect data reflecting the content and objects that the user at least potentially perceives as rendered on one or more digital screens attached to smart devices, reconstruct the at least potentially perceived visual landscape based on the collected data, and determine the target and/or level of user attention in view of the reconstruction and associated exposure events detected therein, and to apply locally stored information about rules or fingerprints in the digital object recognition process involving the collected data and validation of the type or identity of user actions, digital content, or external objects, as reflected by the reconstruction recapturing the visual landscape.
    Type: Application
    Filed: February 6, 2014
    Publication date: August 6, 2015
    Applicant: Verto Analytics Oy
    Inventors: Hannu Verkasalo, Subhradeep Kayal, Matias Kontturi, Eric Malmi
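
The funding-extraction abstracts above (patent 10740560 and publication 20190005020) describe a pipeline: extract paragraphs, classify each as containing funding information or not, then run two different named-entity recognition annotators over the positive paragraphs. The sketch below illustrates that control flow only; the keyword classifier and the two regex "annotators" are toy stand-ins for the machine-learning classifier and NER models the patent actually claims, and all function names are invented for illustration.

```python
import re

def extract_paragraphs(text):
    # Stand-in for the paragraph-extraction step: split on blank lines.
    return [p.strip() for p in text.split("\n\n") if p.strip()]

FUNDING_CUES = ("funded by", "grant", "supported by", "award")

def has_funding_info(paragraph):
    # Toy stand-in for the machine-learning classifier: keyword cues.
    p = paragraph.lower()
    return any(cue in p for cue in FUNDING_CUES)

def annotator_a(paragraph):
    # First "NER model" stand-in: capitalized spans ending in an
    # organization-like head word (Foundation, Institute, ...).
    return re.findall(
        r"(?:[A-Z][\w&]+ ){1,4}(?:Foundation|Institute|Council|Society)",
        paragraph)

def annotator_b(paragraph):
    # Second, deliberately different "NER model" stand-in:
    # grant-identifier patterns such as NSF-1234567.
    return re.findall(r"\b[A-Z]{2,}[- ]?\d{3,}\b", paragraph)

def extract_funding(text):
    # Full pipeline: paragraphs -> classifier -> two annotators,
    # with the two annotators' labels merged per paragraph.
    results = []
    for para in extract_paragraphs(text):
        if has_funding_info(para):
            entities = sorted(set(annotator_a(para) + annotator_b(para)))
            results.append({"paragraph": para, "entities": entities})
    return results

doc = ("Methods are described.\n\n"
       "This work was funded by the National Science Foundation "
       "under grant NSF-1234567.")
print(extract_funding(doc))
```

Using two distinct annotators, as the claims describe, lets the system cross-cover entity types (here, funder names versus grant codes) that a single model might miss.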
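
The Verto Analytics abstracts (patent 9471912 and publication 20150220814) describe collecting data on rendered screen content, reconstructing the visual landscape the user could have perceived, matching objects against locally stored fingerprints or rules, and determining the target of user attention from that reconstruction. The following is a minimal sketch of that flow under heavy assumptions: the event structure, the hash-based fingerprint table, and the largest-recognized-area attention heuristic are all hypothetical simplifications, not the patented method.

```python
from dataclasses import dataclass

@dataclass
class ScreenEvent:
    timestamp: float
    region: tuple       # (x, y, w, h) of a rendered on-screen object
    pixels_hash: str    # fingerprint of the rendered content

# Locally stored fingerprint rules mapping content hashes to known objects.
FINGERPRINTS = {
    "a1f3": "news_app_article",
    "9bc2": "video_ad",
}

def reconstruct_landscape(events):
    # Rebuild the visual landscape the user at least potentially
    # perceived: keep only the most recent event per screen region.
    landscape = {}
    for ev in sorted(events, key=lambda e: e.timestamp):
        landscape[ev.region] = ev
    return list(landscape.values())

def identify(ev):
    # Apply the locally stored fingerprints to recognize the object.
    return FINGERPRINTS.get(ev.pixels_hash, "unknown")

def attention_target(events):
    # Crude attention heuristic: the recognized object occupying the
    # largest area in the reconstructed landscape.
    recognized = [(ev.region[2] * ev.region[3], identify(ev))
                  for ev in reconstruct_landscape(events)
                  if identify(ev) != "unknown"]
    return max(recognized, default=(0, None))[1]

events = [
    ScreenEvent(1.0, (0, 0, 300, 400), "a1f3"),    # large article view
    ScreenEvent(1.2, (0, 400, 300, 100), "9bc2"),  # small ad banner
]
print(attention_target(events))
```

Keeping the fingerprint table on the device, as the abstract emphasizes, means recognition can run locally on collected screen data rather than shipping raw captures off the device.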