Patents by Inventor Madiha Ijaz

Madiha Ijaz has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11955127
    Abstract: An embodiment extracts a set of designated entities and a set of relationships between designated entities from speech content of an audio feed of a plurality of participants of a current web conference using a machine learning model trained to classify parts of speech content. The embodiment generates a list of current action items based on the extracted set of designated entities and relationships between designated entities. The embodiment identifies a first current action item that is an updated version of an ongoing action item on a progress list of ongoing action items from past web conferences. The embodiment also identifies a second current action item that is unrelated to any of the ongoing action items on the progress list. The embodiment updates the progress list by incorporating the updates for the first current action item and by adding the second current action item.
    Type: Grant
    Filed: April 8, 2021
    Date of Patent: April 9, 2024
    Assignee: KYNDRYL, INC.
    Inventors: Muhammad Ammar Ahmed, Madiha Ijaz, Sreekrishnan Venkateswaran
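The progress-list update described in this abstract can be sketched as a merge of current action items into an ongoing list. This is a minimal illustration only: the patent's entity/relationship extraction is performed by a trained classifier, which is stubbed out here, and the `ActionItem` fields and matching-by-task rule are assumptions for the sketch, not the patented method.

```python
from dataclasses import dataclass

# Hypothetical action-item record; in the patent, items are derived from
# entities and relationships extracted from conference speech content.
@dataclass
class ActionItem:
    task: str
    assignee: str
    status: str = "open"

def update_progress_list(progress, current_items):
    """Merge current action items into the ongoing progress list.

    An item matching an ongoing item (same task, an assumed matching rule)
    is treated as an updated version of that item; an item unrelated to
    any ongoing item is appended as new.
    """
    by_task = {item.task: item for item in progress}
    for item in current_items:
        if item.task in by_task:
            # Updated version of an ongoing action item: incorporate updates.
            by_task[item.task].status = item.status
            by_task[item.task].assignee = item.assignee
        else:
            # Unrelated to all ongoing items: add as a new action item.
            progress.append(item)
    return progress

progress = [ActionItem("draft report", "Alice", "open")]
current = [ActionItem("draft report", "Alice", "in review"),
           ActionItem("schedule demo", "Bob")]
progress = update_progress_list(progress, current)
```

After the merge, the ongoing "draft report" item carries its updated status and the new "schedule demo" item has been added.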
  • Patent number: 11750671
    Abstract: An embodiment includes identifying which of a plurality of participants of a web conference is an identified participant associated with a selected cluster of a plurality of clusters of audio feed data of an audio feed of the web conference based on a self-introduction in the selected cluster. The embodiment also generates a first preliminary leadership score for the identified participant based on a speaking duration value associated with the identified participant and generates a second preliminary leadership score for the identified participant using a selected video segment as an input for a machine learning classifier model. The embodiment calculates a final leadership score for the identified participant based on the first and second preliminary leadership scores. The final leadership score is representative of a likelihood that the identified participant is a supervisor, and is indicative of the identified participant being a supervisor if it exceeds a designated threshold value.
    Type: Grant
    Filed: April 8, 2021
    Date of Patent: September 5, 2023
    Assignee: KYNDRYL, INC.
    Inventors: Muhammad Ammar Ahmed, Madiha Ijaz, Sreekrishnan Venkateswaran
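The score-fusion step in this abstract reduces to combining two preliminary leadership scores and comparing the result to a threshold. A minimal sketch, assuming an illustrative weighted average and threshold value; the patent does not specify the combination function, and the weights and threshold here are placeholders:

```python
def final_leadership_score(speaking_duration_score, video_model_score,
                           w_audio=0.5, w_video=0.5):
    """Combine the two preliminary leadership scores.

    speaking_duration_score: first preliminary score, from the
        participant's speaking duration in the audio feed.
    video_model_score: second preliminary score, from a machine learning
        classifier applied to a selected video segment.
    The weighted average and the 50/50 weights are assumptions.
    """
    return w_audio * speaking_duration_score + w_video * video_model_score

def is_likely_supervisor(final_score, threshold=0.7):
    """The final score indicates a supervisor if it exceeds a designated
    threshold value (0.7 is an illustrative choice)."""
    return final_score > threshold

score = final_leadership_score(0.9, 0.6)  # -> 0.75
```

With these placeholder weights, a participant scoring 0.9 on speaking duration and 0.6 on the video classifier would exceed the 0.7 threshold.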
  • Patent number: 11694443
    Abstract: Machine-based video classifying to identify misleading videos by training a model using a video corpus, obtaining a subject video from a content server, generating respective feature vectors of a title, a thumbnail, a description, and a content of the subject video, determining first semantic similarities between ones of the feature vectors, determining a second semantic similarity between the title of the subject video and titles of videos in the misleading video corpus in a same domain as the subject video, determining a third semantic similarity between comments of the subject video and comments of videos in the misleading video corpus in the same domain as the subject video, classifying the subject video using the model and based on the first semantic similarities, the second semantic similarity, and the third semantic similarity, and outputting the classification of the subject video to a user.
    Type: Grant
    Filed: August 21, 2020
    Date of Patent: July 4, 2023
    Assignee: KYNDRYL, INC.
    Inventors: Madiha Ijaz, Muhammad Ammar Ahmed, Sreekrishnan Venkateswaran
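The similarity signals this abstract enumerates can be illustrated with plain cosine similarity over feature vectors. A sketch under stated assumptions: the real feature vectors for title, thumbnail, description, and content come from a trained model, whereas here they are hand-written lists of floats, and cosine similarity is one common choice, not necessarily the measure the patent uses.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def pairwise_similarities(vectors):
    """First signal: semantic similarities between pairs of the subject
    video's own feature vectors (title, thumbnail, description, content)."""
    names = list(vectors)
    return {(a, b): cosine_similarity(vectors[a], vectors[b])
            for i, a in enumerate(names) for b in names[i + 1:]}

def max_corpus_similarity(subject_vec, corpus_vecs):
    """Second/third signals: closest match between a subject-video vector
    and vectors from the misleading video corpus in the same domain
    (e.g. titles for the second signal, comments for the third)."""
    return max((cosine_similarity(subject_vec, v) for v in corpus_vecs),
               default=0.0)

# Toy feature vectors (assumed, for illustration only).
subject = {"title": [1.0, 0.0], "content": [1.0, 0.0], "description": [0.0, 1.0]}
sims = pairwise_similarities(subject)
title_signal = max_corpus_similarity(subject["title"], [[0.0, 1.0], [1.0, 0.0]])
```

In the patent, signals like these feed the trained model, which produces the misleading/not-misleading classification output to the user.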
  • Publication number: 20220272132
    Abstract: An embodiment includes identifying which of a plurality of participants of a web conference is an identified participant associated with a selected cluster of a plurality of clusters of audio feed data of an audio feed of the web conference based on a self-introduction in the selected cluster. The embodiment also generates a first preliminary leadership score for the identified participant based on a speaking duration value associated with the identified participant and generates a second preliminary leadership score for the identified participant using a selected video segment as an input for a machine learning classifier model. The embodiment calculates a final leadership score for the identified participant based on the first and second preliminary leadership scores. The final leadership score is representative of a likelihood that the identified participant is a supervisor, and is indicative of the identified participant being a supervisor if it exceeds a designated threshold value.
    Type: Application
    Filed: April 8, 2021
    Publication date: August 25, 2022
    Applicant: Kyndryl, Inc.
    Inventors: Muhammad Ammar Ahmed, Madiha Ijaz, Sreekrishnan Venkateswaran
  • Publication number: 20220270612
    Abstract: An embodiment extracts a set of designated entities and a set of relationships between designated entities from speech content of an audio feed of a plurality of participants of a current web conference using a machine learning model trained to classify parts of speech content. The embodiment generates a list of current action items based on the extracted set of designated entities and relationships between designated entities. The embodiment identifies a first current action item that is an updated version of an ongoing action item on a progress list of ongoing action items from past web conferences. The embodiment also identifies a second current action item that is unrelated to any of the ongoing action items on the progress list. The embodiment updates the progress list by incorporating the updates for the first current action item and by adding the second current action item.
    Type: Application
    Filed: April 8, 2021
    Publication date: August 25, 2022
    Applicant: Kyndryl, Inc.
    Inventors: Muhammad Ammar Ahmed, Madiha Ijaz, Sreekrishnan Venkateswaran
  • Publication number: 20210397845
    Abstract: Machine-based video classifying to identify misleading videos by training a model using a video corpus, obtaining a subject video from a content server, generating respective feature vectors of a title, a thumbnail, a description, and a content of the subject video, determining first semantic similarities between ones of the feature vectors, determining a second semantic similarity between the title of the subject video and titles of videos in the misleading video corpus in a same domain as the subject video, determining a third semantic similarity between comments of the subject video and comments of videos in the misleading video corpus in the same domain as the subject video, classifying the subject video using the model and based on the first semantic similarities, the second semantic similarity, and the third semantic similarity, and outputting the classification of the subject video to a user.
    Type: Application
    Filed: August 21, 2020
    Publication date: December 23, 2021
    Inventors: Madiha Ijaz, Muhammad Ammar Ahmed, Sreekrishnan Venkateswaran