Patents by Inventor Omar URIBE

Omar URIBE has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12334221
    Abstract: An automated, quantitative facial weakness screening framework that uses a Bi-LSTM network to model the temporal dynamics of shape and appearance features. The technique helps paramedics and other users identify facial weakness in the field or, more importantly, whenever neurology expertise is not available, whether for emergency patient triage (e.g., pre-hospital stroke care) or chronic disease management (e.g., Bell's palsy rehabilitation screening), leading to increased coverage and earlier treatment. The technique provides visualizable, interpretable results to increase its transparency, and offers an inexpensive solution that non-neurologists can use in underserved areas to more readily identify neurological deficits such as facial weakness in the field or other environments. (An illustrative sketch of such a Bi-LSTM model follows this entry.)
    Type: Grant
    Filed: February 4, 2022
    Date of Patent: June 17, 2025
    Assignee: University of Virginia Patent Foundation
    Inventors: Gustavo Rohde, Andrew M. Southerland, Yan Zhuang, Mark McDonald, Omar Uribe, Chad M. Aldridge, Mohamed Abul Hassan
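
The following is a minimal, hypothetical sketch of the kind of Bi-LSTM temporal model the abstract above describes, written in PyTorch. The feature dimension, hidden size, and the binary weakness/no-weakness output are illustrative assumptions, not details taken from the patent.

    # Hypothetical sketch: a Bi-LSTM over per-frame shape/appearance features.
    # Feature dimension, hidden size, and the binary output are assumptions.
    import torch
    import torch.nn as nn

    class FacialWeaknessBiLSTM(nn.Module):
        def __init__(self, feature_dim=136, hidden_dim=64, num_classes=2):
            super().__init__()
            # Bidirectional LSTM models temporal dynamics across video frames.
            self.lstm = nn.LSTM(feature_dim, hidden_dim,
                                batch_first=True, bidirectional=True)
            self.classifier = nn.Linear(2 * hidden_dim, num_classes)

        def forward(self, x):
            # x: (batch, frames, feature_dim) sequence of per-frame features
            _, (h_n, _) = self.lstm(x)
            # h_n: (2, batch, hidden_dim); concatenate forward and backward states.
            h = torch.cat([h_n[0], h_n[1]], dim=1)
            return self.classifier(h)

    # Example: a batch of 4 clips, 90 frames each, 136-dimensional features.
    model = FacialWeaknessBiLSTM()
    logits = model(torch.randn(4, 90, 136))   # -> (4, 2) class scores
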
  • Patent number: 11540749
    Abstract: The disclosed embodiments provide systems and methods for predicting the presence of one or more neurological deficits. The system may include a microphone, a camera, one or more memory devices storing instructions, and one or more processors configured to execute the instructions to extract audio information, including a period density entropy coefficient and a mel frequency cepstral coefficient, from an audio feed received from the microphone. Additionally, the instructions may cause the processors to determine position and depth information of eye movement from a video feed received from the camera and to detect features of interest, including facial landmarks, spatial orientation of limbs, and positional information of limb movements, from the video feed. The one or more processors may further extract the features of interest from the video feed and process them by aligning them to a common reference. (An illustrative audio feature-extraction sketch follows this entry.)
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: January 3, 2023
    Assignee: University of Virginia Patent Foundation
    Inventors: Omar Uribe, Mark McDonald, Andrew M. Southerland, Gustavo Rohde, Yan Zhuang
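
As a rough illustration of the audio pathway described above, the sketch below computes mel frequency cepstral coefficients with librosa and a simple entropy over estimated pitch periods as a stand-in for the period density entropy coefficient. The sample rate, coefficient count, and the entropy stand-in are assumptions for illustration, not the patent's specification.

    # Hypothetical sketch: MFCCs plus a rough pitch-period entropy feature.
    # Parameter values and the entropy stand-in are illustrative assumptions.
    import numpy as np
    import librosa

    def extract_audio_features(path, n_mfcc=13):
        y, sr = librosa.load(path, sr=16000)                     # mono waveform
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)

        # Stand-in for a period density entropy coefficient: Shannon entropy of
        # the distribution of estimated pitch periods (not the patent's exact
        # definition).
        f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)            # frame-wise pitch
        periods = sr / f0[f0 > 0]                                # pitch periods (samples)
        hist, _ = np.histogram(periods, bins=50)
        p = hist / hist.sum()
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))

        return mfcc.mean(axis=1), entropy                        # summary features
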
  • Publication number: 20220319707
    Abstract: An automated, quantitative facial weakness screening framework that uses a Bi-LSTM network to model the temporal dynamics of shape and appearance features. The technique helps paramedics and other users identify facial weakness in the field or, more importantly, whenever neurology expertise is not available, whether for emergency patient triage (e.g., pre-hospital stroke care) or chronic disease management (e.g., Bell's palsy rehabilitation screening), leading to increased coverage and earlier treatment. The technique provides visualizable, interpretable results to increase its transparency, and offers an inexpensive solution that non-neurologists can use in underserved areas to more readily identify neurological deficits such as facial weakness in the field or other environments. (An illustrative sketch of constructing per-frame shape features for such a network follows this entry.)
    Type: Application
    Filed: February 4, 2022
    Publication date: October 6, 2022
    Applicant: University of Virginia Patent Foundation
    Inventors: Gustavo Rohde, Andrew M. Southerland, Yan Zhuang, Mark McDonald, Omar Uribe, Chad M. Aldridge, Mohamed Abul Hassan
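
Complementing the model sketch given after the granted patent above, the snippet below illustrates one plausible way to turn per-frame facial landmarks into a shape-feature sequence for such a Bi-LSTM, by measuring left/right asymmetry about a facial midline. The 68-point landmark layout and the mirroring scheme are assumptions for illustration only.

    # Hypothetical shape-feature construction: per-frame left/right asymmetry
    # of facial landmarks, producing one feature vector per video frame.
    import numpy as np

    def asymmetry_features(landmarks, left_idx, right_idx):
        """landmarks: (frames, 68, 2) array of (x, y) points per frame.
        left_idx/right_idx: index lists pairing mirrored landmarks."""
        feats = []
        for frame in landmarks:
            midline_x = frame[:, 0].mean()               # crude vertical midline
            left = frame[left_idx]
            right = frame[right_idx].copy()
            right[:, 0] = 2.0 * midline_x - right[:, 0]  # mirror right side
            # Distance between each landmark and its mirrored counterpart.
            feats.append(np.linalg.norm(left - right, axis=1))
        return np.stack(feats)                           # (frames, len(left_idx))

    # Example with random data: 90 frames, 68 landmarks, 10 mirrored pairs.
    lm = np.random.rand(90, 68, 2)
    seq = asymmetry_features(lm, list(range(0, 10)), list(range(16, 6, -1)))  # (90, 10)
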
  • Publication number: 20210093231
    Abstract: The disclosed embodiments provide systems and methods for predicting the presence of one or more neurological deficits. The system may include a microphone, a camera, one or more memory devices storing instructions, and one or more processors configured to execute the instructions to extract audio information, including a period density entropy coefficient and a mel frequency cepstral coefficient, from an audio feed received from the microphone. Additionally, the instructions may cause the processors to determine position and depth information of eye movement from a video feed received from the camera and to detect features of interest, including facial landmarks, spatial orientation of limbs, and positional information of limb movements, from the video feed. The one or more processors may further extract the features of interest from the video feed and process them by aligning them to a common reference. (An illustrative common-reference alignment sketch follows this entry.)
    Type: Application
    Filed: January 22, 2019
    Publication date: April 1, 2021
    Inventors: Omar URIBE, Mark MCDONALD, Andrew M. SOUTHERLAND, Gustavo ROHDE, Yan ZHUANG
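
The abstract above mentions aligning the extracted features of interest to a common reference. The sketch below shows one standard way to do that for landmark-like point sets, a similarity (Procrustes) alignment to a reference template; the template and the choice of Procrustes alignment are assumptions for illustration, not the method claimed in the application.

    # Hypothetical sketch: align 2-D landmarks to a common reference template
    # with a similarity (Procrustes) transform. Reflections are not handled.
    import numpy as np

    def procrustes_align(points, template):
        """Align points (N, 2) to template (N, 2) via translation, scale, rotation."""
        t_mean = template.mean(axis=0)
        p = points - points.mean(axis=0)                 # remove translation
        t = template - t_mean
        p_norm, t_norm = np.linalg.norm(p), np.linalg.norm(t)
        p, t = p / p_norm, t / t_norm                    # remove scale
        # Optimal rotation R = U V^T from the SVD of p^T t; optimal scale is
        # the sum of singular values (both sets are unit-normalized here).
        u, s, vt = np.linalg.svd(p.T @ t)
        r = u @ vt
        return (p @ r) * (s.sum() * t_norm) + t_mean     # map into template frame

    # Example: align scaled, shifted, noisy landmarks to a reference shape.
    reference = np.random.rand(68, 2)
    observed = reference * 1.3 + 0.05 * np.random.randn(68, 2) + np.array([2.0, -1.0])
    aligned = procrustes_align(observed, reference)      # close to `reference`
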