Patents by Inventor Mark P. Delaney

Mark P. Delaney has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10714144
    Abstract: Systems and methods for tagging video content are disclosed. A method includes: receiving a video stream from a user computer device, the video stream including audio data and video data; determining a candidate audio tag based on analyzing the audio data; establishing an audio confidence score of the candidate audio tag based on the analyzing of the audio data; determining a candidate video tag based on analyzing the video data; establishing a video confidence score of the candidate video tag based on the analyzing of the video data; determining a correlation factor of the candidate audio tag relative to the candidate video tag; and assigning a tag to a portion in the video stream based on the correlation factor exceeding a correlation threshold value and at least one of the audio confidence score exceeding an audio threshold value and the video confidence score exceeding a video threshold value.
    Type: Grant
    Filed: November 6, 2017
    Date of Patent: July 14, 2020
    Assignee: International Business Machines Corporation
    Inventors: Mark P. Delaney, Robert H. Grant, Trudy L. Hewitt, Martin A. Oberhofer
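    The tag-assignment rule described in this abstract can be illustrated with a minimal sketch. This is not the patented implementation; the threshold values and the example tag are hypothetical, chosen only to show the decision logic (correlation must exceed its threshold, and at least one modality's confidence must exceed its own threshold).

    ```python
    # Illustrative sketch of the abstract's decision rule; all threshold
    # values below are hypothetical.
    AUDIO_THRESHOLD = 0.6
    VIDEO_THRESHOLD = 0.6
    CORRELATION_THRESHOLD = 0.5

    def assign_tag(candidate_tag, audio_score, video_score, correlation):
        """Assign the tag when the audio/video correlation exceeds its
        threshold AND at least one modality's confidence score exceeds
        that modality's threshold; otherwise assign nothing."""
        if correlation > CORRELATION_THRESHOLD and (
            audio_score > AUDIO_THRESHOLD or video_score > VIDEO_THRESHOLD
        ):
            return candidate_tag
        return None

    # A segment where audio is confident and the candidate tags correlate:
    print(assign_tag("dog barking", audio_score=0.8, video_score=0.4, correlation=0.7))
    ```

    Note that a high correlation alone is not enough: with both confidence scores below their thresholds the same call returns `None`.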
  • Publication number: 20200162278
    Abstract: A computer-implemented method for controlling one or more devices within a network. The method detects that the one or more devices, within the network associated with a user, are in-use. The method further detects that the user is in an inactive state, and obtains a plurality of information associated with a plurality of factors for the detected one or more devices. The method further controls the one or more devices within the network associated with the user based on the detected inactive state of the user and the obtained plurality of information. The method further assigns a score for each of the plurality of factors for the one or more devices, aggregates the assigned scores for each of the plurality of factors for the one or more devices, and deactivates the one or more devices within the network of the user based on the aggregated score exceeding a threshold value.
    Type: Application
    Filed: November 15, 2018
    Publication date: May 21, 2020
    Inventors: Mark P. Delaney, Robert H. Grant, Charlotte J. Hutchinson
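    The deactivation step in this abstract (score each factor, aggregate, deactivate when the aggregate exceeds a threshold) can be sketched as follows. The factor names, scores, and threshold are hypothetical; the sketch assumes the user has already been detected as inactive.

    ```python
    # Illustrative sketch of the aggregate-score deactivation rule; the
    # factors and threshold below are hypothetical.
    def should_deactivate(factor_scores, threshold):
        """Aggregate the per-factor scores assigned to a device and
        return True when the aggregate exceeds the threshold."""
        return sum(factor_scores.values()) > threshold

    # Hypothetical per-factor scores for one in-use device:
    scores = {"idle_time": 0.5, "power_draw": 0.3, "schedule": 0.4}
    print(should_deactivate(scores, threshold=1.0))  # aggregate 1.2 exceeds 1.0
    ```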
  • Patent number: 10540445
    Abstract: A mechanism is provided for intelligently integrating descriptions of images into surrounding text for a screen reader. A natural language understanding image description is determined for an image in a document. For each sentence of a set of sentences in the text of the document, a relatedness score between the sentence and the natural language understanding image description is determined thereby forming a set of relatedness scores. A highest relatedness score is determined from the set of relatedness scores. The natural language image description is inserted in close proximity to a sentence associated with the highest relatedness score, such that, when the text is read out by the screen reader, the natural language image description of the image is read out in close proximity to the sentence.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: January 21, 2020
    Assignee: International Business Machines Corporation
    Inventors: Shadi E. Albouyeh, Mark P. Delaney, Robert H. Grant, Trudy L. Hewitt
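    The insertion logic in this abstract can be sketched in a few lines. The patent relies on a natural language understanding relatedness score; as a stand-in, this sketch uses simple token overlap (Jaccard similarity), which is an assumption for illustration only.

    ```python
    # Illustrative sketch: insert an image description after the most
    # related sentence. Token overlap stands in for the NLU relatedness
    # score used in the patent.
    def insert_description(sentences, description):
        """Score each sentence's relatedness to the description, then
        insert the description immediately after the highest-scoring
        sentence."""
        desc_tokens = set(description.lower().split())

        def relatedness(sentence):
            tokens = set(sentence.lower().split())
            return len(tokens & desc_tokens) / len(tokens | desc_tokens)

        best = max(range(len(sentences)), key=lambda i: relatedness(sentences[i]))
        return sentences[:best + 1] + [description] + sentences[best + 1:]
    ```

    With this placement, a screen reader walking the resulting sentence list reads the description in close proximity to its most related sentence.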
  • Publication number: 20190155934
    Abstract: The context in which a user generates a search query is analyzed to generate an improved search query. Search query context may be determined with reference to a user profile or content collected from Internet of Things (IoT) or non-IoT devices located proximate the user. Content may be collected when the search query is generated or at a time before the search query is generated. Content collected for context analysis includes visual display content (screen capture), audio content, and data content.
    Type: Application
    Filed: November 22, 2017
    Publication date: May 23, 2019
    Inventors: Mark P. Delaney, Robert H. Grant, Charlotte Hutchinson
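    The query-improvement idea in this abstract can be sketched as merging context terms into the raw query. The profile field, the signal structure, and the keyword lists are hypothetical; real collected content (screen captures, audio) would first be reduced to terms by separate analysis.

    ```python
    # Illustrative sketch: augment a raw search query with context terms
    # drawn from a user profile and from signals collected off nearby
    # devices. Field names are hypothetical.
    def augment_query(query, profile, device_signals):
        """Append de-duplicated context terms (profile interests plus
        keywords extracted from device content) to the raw query."""
        context_terms = list(profile.get("interests", []))
        for signal in device_signals:
            context_terms.extend(signal.get("keywords", []))
        # dict.fromkeys de-duplicates while preserving first-seen order
        return query + " " + " ".join(dict.fromkeys(context_terms))

    print(augment_query(
        "best trail shoes",
        {"interests": ["hiking"]},
        [{"keywords": ["rain", "hiking"]}],
    ))
    ```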
  • Publication number: 20190138598
    Abstract: A mechanism is provided for intelligently integrating descriptions of images into surrounding text for a screen reader. A natural language understanding image description is determined for an image in a document. For each sentence of a set of sentences in the text of the document, a relatedness score between the sentence and the natural language understanding image description is determined thereby forming a set of relatedness scores. A highest relatedness score is determined from the set of relatedness scores. The natural language image description is inserted in close proximity to a sentence associated with the highest relatedness score, such that, when the text is read out by the screen reader, the natural language image description of the image is read out in close proximity to the sentence.
    Type: Application
    Filed: November 3, 2017
    Publication date: May 9, 2019
    Inventors: Shadi E. Albouyeh, Mark P. Delaney, Robert H. Grant, Trudy L. Hewitt
  • Publication number: 20190139576
    Abstract: Systems and methods for tagging video content are disclosed. A method includes: receiving a video stream from a user computer device, the video stream including audio data and video data; determining a candidate audio tag based on analyzing the audio data; establishing an audio confidence score of the candidate audio tag based on the analyzing of the audio data; determining a candidate video tag based on analyzing the video data; establishing a video confidence score of the candidate video tag based on the analyzing of the video data; determining a correlation factor of the candidate audio tag relative to the candidate video tag; and assigning a tag to a portion in the video stream based on the correlation factor exceeding a correlation threshold value and at least one of the audio confidence score exceeding an audio threshold value and the video confidence score exceeding a video threshold value.
    Type: Application
    Filed: November 6, 2017
    Publication date: May 9, 2019
    Inventors: Mark P. Delaney, Robert H. Grant, Trudy L. Hewitt, Martin A. Oberhofer
  • Publication number: 20190121911
    Abstract: Disclosed embodiments provide techniques for automatically suggesting members of a social media system as recipients for sharing an item of social media content. Computerized analysis of the social media content includes image analysis, audio analysis, and language analysis for classifying the content. Profiles of participants within a social media system are examined to determine candidates for sharing. A subset of participants having profiles with metadata deemed relevant to the analyzed social media content is obtained. For each participant in the subset, an interaction score is computed, indicative of the amount of interaction that has previously occurred between the participant and the sender of the content. The subset is ranked based on the interaction score. Participants having an interaction score above a predetermined threshold are deemed to be candidates for sharing the item of social media content, and are presented to a user on an electronic display.
    Type: Application
    Filed: October 25, 2017
    Publication date: April 25, 2019
    Inventors: Mark P. Delaney, Robert H. Grant, Trudy L. Hewitt
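    The filter-score-rank-threshold pipeline in this abstract can be sketched compactly. The profile field `interests`, the tag lists, and the use of a prior-interaction count as the interaction score are all hypothetical simplifications of the analysis the patent describes.

    ```python
    # Illustrative sketch of the recipient-suggestion pipeline; field
    # names and the interaction-count score are hypothetical.
    def suggest_recipients(participants, content_tags, sender_history, threshold):
        """Keep participants whose profile metadata overlaps the content
        tags, score each by prior interactions with the sender, rank by
        score, and return those above the threshold."""
        candidates = [
            p for p in participants
            if set(p["interests"]) & set(content_tags)
        ]
        scored = [(p["name"], sender_history.get(p["name"], 0)) for p in candidates]
        scored.sort(key=lambda pair: pair[1], reverse=True)
        return [name for name, score in scored if score > threshold]

    participants = [
        {"name": "Ana", "interests": ["travel"]},
        {"name": "Bo", "interests": ["cooking"]},
    ]
    print(suggest_recipients(participants, ["travel", "beach"], {"Ana": 5, "Bo": 9}, threshold=2))
    ```

    Here Bo is excluded despite the higher interaction count, because the relevance filter runs before scoring: profile metadata must match the analyzed content first.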