Patents by Inventor Bisrat Zerihun

Bisrat Zerihun has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230325442
    Abstract: Systems and methods for automatic generation of free-form conversational interfaces are disclosed. In one embodiment, a system receives an input from a user device through a conversational graphical user interface (GUI). An intent of the user may be determined based on the received input. Based on the intent of the user, the system may identify, from a plurality of objects available to the system, one or more objects. Each of the plurality of objects has annotations corresponding to one or more elements of the object and one or more functions of the object. The one or more functions corresponding to the one or more elements are executable to perform an action upon corresponding elements. Based on the identified one or more objects and the annotations of the identified one or more objects, the system may generate a dynamic dialogue flow for the conversational GUI, where the dynamic dialogue flow is generated in real-time during a conversational GUI session.
    Type: Application
    Filed: April 16, 2023
    Publication date: October 12, 2023
    Inventors: Karl Anton Hennig, Ajay Aswal, Bisrat Zerihun
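The conversational-interface filings in this listing describe generating a dialogue flow at runtime from objects whose elements and functions carry annotations, rather than from a hand-authored script. A minimal sketch of that idea in Python follows; the object model, annotation format, intent names, and the Payment example are hypothetical illustrations, not details taken from the filings.

    # Sketch: deriving a dialogue flow from annotated objects. All names here
    # (AnnotatedObject, Payment, "refund") are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class AnnotatedObject:
        """An object exposed to the conversational system, annotated with its
        elements and the functions that act on those elements."""
        name: str
        elements: dict = field(default_factory=dict)   # element name -> description
        functions: dict = field(default_factory=dict)  # function name -> element it acts on

    def identify_objects(intent: str, objects: list[AnnotatedObject]) -> list[AnnotatedObject]:
        """Select objects whose annotated functions match the user's intent."""
        return [obj for obj in objects if intent in obj.functions]

    def generate_dialogue_flow(intent: str, objects: list[AnnotatedObject]) -> list[str]:
        """Build the dialogue flow (a list of prompts) on the fly from the
        annotations of the identified objects."""
        flow = []
        for obj in identify_objects(intent, objects):
            for element, description in obj.elements.items():
                flow.append(f"Please provide {description} ({obj.name}.{element}).")
            flow.append(f"Confirm: run '{intent}' on {obj.name}?")
        return flow

    # Example: a user intent of "refund" matches a Payment object whose
    # annotations name a refund function and the elements it needs.
    payment = AnnotatedObject(
        name="Payment",
        elements={"transaction_id": "the transaction ID", "amount": "the refund amount"},
        functions={"refund": "transaction_id"},
    )
    for prompt in generate_dialogue_flow("refund", [payment]):
        print(prompt)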
  • Patent number: 11704791
    Abstract: Machine learning technologies are used to identify and separate abnormal and normal subjects and to identify possible disease types from images (e.g., optical coherence tomography (OCT) images of the eye), where the machine learning technologies are trained with only normative data. In one example, a feature or a physiological structure of an image is extracted, and the image is classified based on the extracted feature. In another example, a region of the image is masked and then reconstructed, and a similarity is determined between the reconstructed region and the original region of the image. A label (indicating an abnormality) and a score (indicating a severity) can be determined based on the classification and/or the similarity.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: July 18, 2023
    Assignee: TOPCON CORPORATION
    Inventors: Qi Yang, Bisrat Zerihun, Charles A. Reisman
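The OCT filing above (and its earlier publication later in this listing) describes a mask-and-reconstruct approach: a region of the image is masked, reconstructed by a model trained only on normative data, and scored by how closely the reconstruction matches the original region. The sketch below illustrates only the scoring step; the mean-fill "reconstruction" is a placeholder standing in for a trained model, and the array sizes and values are illustrative assumptions.

    # Sketch of mask-and-reconstruct anomaly scoring. The reconstruction here is
    # a mean-fill placeholder, not the machine learning model the patent describes.
    import numpy as np

    def reconstruct_masked_region(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
        """Placeholder: fill the masked region with the mean of the unmasked
        pixels. A real system would reconstruct it with a model trained on
        normal scans."""
        recon = image.copy()
        recon[mask] = image[~mask].mean()
        return recon

    def abnormality_score(image: np.ndarray, mask: np.ndarray) -> float:
        """Mean squared error between the original and reconstructed region;
        a poor reconstruction suggests the region deviates from normal anatomy."""
        recon = reconstruct_masked_region(image, mask)
        diff = image[mask].astype(float) - recon[mask].astype(float)
        return float(np.mean(diff ** 2))

    # Toy example: a synthetic scan with a bright patch inside the masked
    # region scores higher than a uniform scan would.
    scan = np.full((64, 64), 100.0)
    scan[30:34, 30:34] = 200.0                      # simulated abnormality
    mask = np.zeros_like(scan, dtype=bool)
    mask[24:40, 24:40] = True                       # region to mask and reconstruct
    print("score:", abnormality_score(scan, mask))  # higher score -> more abnormal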
  • Patent number: 11657096
    Abstract: Systems and methods for automatic generation of free-form conversational interfaces are disclosed. In one embodiment, a system receives an input from a user device through a conversational graphical user interface (GUI). An intent of the user may be determined based on the received input. Based on the intent of the user, the system may identify, from a plurality of objects available to the system, one or more objects. Each of the plurality of objects has annotations corresponding to one or more elements of the object and one or more functions of the object. The one or more functions corresponding to the one or more elements are executable to perform an action upon corresponding elements. Based on the identified one or more objects and the annotations of the identified one or more objects, the system may generate a dynamic dialogue flow for the conversational GUI, where the dynamic dialogue flow is generated in real-time during a conversational GUI session.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: May 23, 2023
    Assignee: PAYPAL, INC.
    Inventors: Karl Anton Hennig, Ajay Aswal, Bisrat Zerihun
  • Publication number: 20220328050
    Abstract: Techniques for detecting a fraudulent attempt by an adversarial user to voice verify as a user are presented. An authenticator component can determine characteristics of voice information received in connection with a user account based on analysis of the voice information. In response to determining that the characteristics sufficiently match characteristics of a voice print associated with the user account, the authenticator component can determine a similarity score by comparing the characteristics of the voice information with other characteristics of a set of previously stored voice prints associated with the user account. The authenticator component can determine whether the similarity score is higher than a threshold similarity score, which indicates whether the voice information is a replay of a recording or a deep fake emulation of the voice of the user. A score above the threshold can indicate that the voice information is fraudulent, and a score below the threshold can indicate that the voice information is valid.
    Type: Application
    Filed: April 12, 2021
    Publication date: October 13, 2022
    Inventors: Karl Anton Hennig, Ajay Aswal, Bisrat Zerihun
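The check described in this filing runs in the opposite direction from ordinary voice verification: after the speaker's characteristics match the enrolled voice print, a sample that is too similar to a previously stored print is treated as a likely replayed recording or deep fake. A minimal sketch of that thresholding follows; the cosine-similarity embedding, the 0.995 threshold, and the toy vectors are assumptions made for illustration, not values from the filing.

    # Sketch: flag voice samples that match a stored voice print too closely.
    # The embedding representation and the threshold value are hypothetical.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def is_replay_or_deepfake(incoming: np.ndarray,
                              stored_prints: list[np.ndarray],
                              threshold: float = 0.995) -> bool:
        """Return True if the incoming sample is near-identical to any
        previously stored voice print, suggesting a replayed recording or a
        synthetic copy rather than a live speaker."""
        best = max(cosine_similarity(incoming, p) for p in stored_prints)
        return best > threshold

    # Toy example: an exact copy of an old recording is flagged; a sample with
    # small natural session-to-session variation typically is not.
    stored = [np.array([0.2, 0.8, 0.1, 0.5]), np.array([0.25, 0.75, 0.15, 0.45])]
    replay = stored[0].copy()
    live = stored[0] + np.random.normal(0, 0.05, size=4)
    print(is_replay_or_deepfake(replay, stored))  # True -> treat as fraudulent
    print(is_replay_or_deepfake(live, stored))    # usually False -> treat as valid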
  • Publication number: 20220197952
    Abstract: Systems and methods for automatic generation of free-form conversational interfaces are disclosed. In one embodiment, a system receives an input from a user device through a conversational graphical user interface (GUI). An intent of the user may be determined based on the received input. Based on the intent of the user, the system may identify, from a plurality of objects available to the system, one or more objects. Each of the plurality of objects has annotations corresponding to one or more elements of the object and one or more functions of the object. The one or more functions corresponding to the one or more elements are executable to perform an action upon corresponding elements. Based on the identified one or more objects and the annotations of the identified one or more objects, the system may generate a dynamic dialogue flow for the conversational GUI, where the dynamic dialogue flow is generated in real-time during a conversational GUI session.
    Type: Application
    Filed: December 18, 2020
    Publication date: June 23, 2022
    Inventors: Karl Anton Hennig, Ajay Aswal, Bisrat Zerihun
  • Publication number: 20200074622
    Abstract: Machine learning technologies are used to identify and separate abnormal and normal subjects and to identify possible disease types from images (e.g., optical coherence tomography (OCT) images of the eye), where the machine learning technologies are trained with only normative data. In one example, a feature or a physiological structure of an image is extracted, and the image is classified based on the extracted feature. In another example, a region of the image is masked and then reconstructed, and a similarity is determined between the reconstructed region and the original region of the image. A label (indicating an abnormality) and a score (indicating a severity) can be determined based on the classification and/or the similarity.
    Type: Application
    Filed: August 27, 2019
    Publication date: March 5, 2020
    Inventors: Qi Yang, Bisrat Zerihun, Charles A. Reisman