Patents by Inventor Willie L. Scott

Willie L. Scott has filed for patents to protect the following inventions. The listing covers both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11947437
    Abstract: Provided is a method, computer program product, and system for automatically assigning robotic devices to users based on need using predictive analytics. A processor may monitor activities performed by one or more users. The processor may determine, based on the monitoring, a set of activities that require assistance from a robotic device when being performed by the one or more users. The processor may match the set of activities to a set of capabilities related to a plurality of robotic devices. The processor may identify, based on the matching, a first robotic device that is capable of assisting the one or more users in performing a first activity of the set of activities. The processor may deploy the first robotic device to assist the one or more users in performing the first activity.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: April 2, 2024
    Assignee: International Business Machines Corporation
    Inventors: Willie L. Scott, II, Charu Pandhi, Seema Nagar, Kuntal Dey
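    Illustration (not part of the patent record): a minimal Python sketch of the matching step described in the abstract above, assuming a simple capability-lookup table. The names (ROBOT_CAPABILITIES, find_robot_for) and the first-match policy are invented for illustration; the patent's predictive analytics are not modeled.

      # Toy matching of needed activities to robot capability sets.
      ROBOT_CAPABILITIES = {
          "robot-a": {"lifting", "fetching"},
          "robot-b": {"fetching", "opening-doors"},
      }

      def find_robot_for(activity):
          """Return the first robot whose capabilities cover the activity."""
          for robot, capabilities in ROBOT_CAPABILITIES.items():
              if activity in capabilities:
                  return robot
          return None

      # Activities flagged by the monitoring step:
      for activity in ["opening-doors", "lifting"]:
          robot = find_robot_for(activity)
          print(activity, "->", robot or "no capable robot")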
  • Patent number: 11526801
    Abstract: In an approach for a conversational search in a content management system, a processor trains a deep learning model to learn semantic analysis of a plurality of user queries to identify intents and entities in the user queries. A processor analyzes the content management system to extract content keywords to generate a domain ontology. A processor augments the domain ontology based on the intents and entities identified in the user queries by the deep learning model. A processor tags the content keywords with metadata based on the domain ontology. A processor maps the intents and entities extracted from a current user query of a user to the content keywords extracted from the content management system to form a metadata keyword. A processor searches the content management system for content based on the metadata keyword. A processor returns a search result for the current user query.
    Type: Grant
    Filed: May 30, 2019
    Date of Patent: December 13, 2022
    Assignee: International Business Machines Corporation
    Inventors: Willie L. Scott, II, Sharon D. Snider
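    Illustration (not part of the patent record): a rough sketch of the mapping step, assuming a hand-built ontology tag table in place of the trained deep learning model. ONTOLOGY_TAGS, DOCUMENTS, and search are invented names.

      # Map query entities to ontology-derived metadata tags, then match
      # documents whose tags intersect the query's tags.
      ONTOLOGY_TAGS = {
          "invoice": "billing",
          "statement": "billing",
          "contract": "legal",
      }
      DOCUMENTS = {
          "doc-1": {"billing"},
          "doc-2": {"legal"},
      }

      def search(entities):
          """Form metadata keywords from entities and search on them."""
          tags = {ONTOLOGY_TAGS[e] for e in entities if e in ONTOLOGY_TAGS}
          return [d for d, dtags in DOCUMENTS.items() if tags & dtags]

      print(search(["invoice"]))  # -> ['doc-1']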
  • Patent number: 11373373
    Abstract: Techniques for translating air writing to augmented reality (AR) devices are described. In one embodiment, a method includes receiving indications of gestures from an originator. The indications identify movement in three dimensions that correspond to an emphasis conferred on one or more words that are air-written by the originator and are configured to be displayed by a plurality of AR devices. The method includes analyzing the identified movement to determine a gesture type associated with the emphasis, where the gesture type includes a first emphasis to be conferred on the air-written words. The method includes providing a display of the air-written words on a first AR device of the plurality of AR devices. The first emphasis is conferred on the words on the display of the first AR device using a first gesture display style based on a profile of a first user utilizing the first AR device.
    Type: Grant
    Filed: October 22, 2019
    Date of Patent: June 28, 2022
    Assignee: International Business Machines Corporation
    Inventors: Willie L. Scott, II, Charu Pandhi, Seema Nagar, Kuntal Dey
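    Illustration (not part of the patent record): a minimal sketch of the emphasis pipeline, assuming the depth (z-axis) extent of a gesture decides the emphasis level and each viewer's profile decides how it renders. The threshold and the style names are invented.

      # Classify gesture depth into an emphasis level, then render it
      # per-user according to a display-style profile.
      def classify_emphasis(z_range):
          return "strong" if z_range > 0.5 else "normal"

      USER_PROFILES = {
          "user-1": {"strong": "bold", "normal": "plain"},
          "user-2": {"strong": "highlight", "normal": "plain"},
      }

      def render(words, z_range, user):
          style = USER_PROFILES[user][classify_emphasis(z_range)]
          return "[" + style + "] " + words

      print(render("hello", 0.8, "user-1"))  # -> [bold] hello
      print(render("hello", 0.8, "user-2"))  # -> [highlight] hello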
  • Patent number: 11354514
    Abstract: A content clarification server receives at least one language element entered by a user into a client computer, where the user works in a first area of specialization. The content clarification server extracts a set of concepts found in the at least one language element, and launches an auction bidding process, among content clarification providers who provide replacement language that clarifies a meaning of the at least one language element, for replacing original language in the at least one language element. The content clarification server filters out replacement language from content clarification providers that work in a second area of specialization that is different from the first area of specialization in which the user works, and identifies winning replacement language for the original language from the remaining replacement language of one of the content clarification providers. The content clarification server replaces the original language with the winning replacement language.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: June 7, 2022
    Assignee: International Business Machines Corporation
    Inventors: Kuntal Dey, Anil U. Joshi, Puthukode G. Ramachandran, Willie L. Scott, II
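    Illustration (not part of the patent record): a toy sketch of the filtering step under one plausible reading of the abstract, where bids from providers outside the user's specialization are discarded and a winner is picked from the rest. The Bid structure and the highest-score rule are assumptions, not the patent's design.

      # Discard bids from providers outside the user's specialization,
      # then pick a winner from the remaining bids.
      from dataclasses import dataclass

      @dataclass
      class Bid:
          provider: str
          specialization: str
          replacement: str
          score: float  # e.g., a clarity rating from past auctions

      def winning_replacement(bids, user_specialization):
          eligible = [b for b in bids if b.specialization == user_specialization]
          return max(eligible, key=lambda b: b.score).replacement

      bids = [
          Bid("p1", "medicine", "cardiac arrest", 0.9),
          Bid("p2", "law", "heart stoppage", 0.7),
      ]
      print(winning_replacement(bids, "medicine"))  # -> cardiac arrest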
  • Patent number: 11294943
    Abstract: Systems, methods, and computer-readable media are disclosed for associating and reconciling disparate key-value pairs corresponding to a target entity across multiple organizational entities using a distributed match. A shared output mapping may be generated that associates and reconciles common and/or conceptually aligned key-value pairs across the multiple organizational entities. The shared output mapping allows any given organizational entity to leverage information known to other organizational entities about a target entity. In this manner, the organizational entities participate in an information sharing ecosystem that enables each organizational entity to provide a user with a more fully customized user experience based on the greater breadth of information available through the shared output mapping.
    Type: Grant
    Filed: December 8, 2017
    Date of Patent: April 5, 2022
    Assignee: International Business Machines Corporation
    Inventors: Thomas A. Brunet, Pushpalatha M. Hiremath, Soma Shekar Naganna, Willie L. Scott, II
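    Illustration (not part of the patent record): a minimal sketch of the reconciliation idea, assuming a key-alias table maps each organization's key names onto canonical keys. The alias table and record shapes are invented; the distributed match itself is not modeled.

      # Reconcile organization-specific keys into canonical keys, merging
      # records about the same target entity into one shared view.
      KEY_ALIASES = {
          "tel": "phone", "phone_number": "phone",
          "addr": "address", "home_address": "address",
      }

      def shared_mapping(records):
          merged = {}
          for record in records:
              for key, value in record.items():
                  merged.setdefault(KEY_ALIASES.get(key, key), value)
          return merged

      org_a = {"tel": "555-0100", "name": "J. Doe"}
      org_b = {"phone_number": "555-0100", "home_address": "1 Main St"}
      print(shared_mapping([org_a, org_b]))
      # -> {'phone': '555-0100', 'name': 'J. Doe', 'address': '1 Main St'}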
  • Publication number: 20220035727
    Abstract: Provided is a method, computer program product, and system for automatically assigning robotic devices to users based on need using predictive analytics. A processor may monitor activities performed by one or more users. The processor may determine, based on the monitoring, a set of activities that require assistance from a robotic device when being performed by the one or more users. The processor may match the set of activities to a set of capabilities related to a plurality of robotic devices. The processor may identify, based on the matching, a first robotic device that is capable of assisting the one or more users in performing a first activity of the set of activities. The processor may deploy the first robotic device to assist the one or more users in performing the first activity.
    Type: Application
    Filed: July 29, 2020
    Publication date: February 3, 2022
    Inventors: Willie L. Scott, II, Charu Pandhi, Seema Nagar, Kuntal Dey
  • Patent number: 11199900
    Abstract: A method modifies a user interface based on a user having nystagmus. One or more processors collect eye gaze data points of a viewer of a user interface that displays content. The processor(s) determine, based on the eye gaze data points, that the viewer of the user interface has nystagmus. Responsive to determining that the viewer of the user interface has nystagmus based on the eye gaze data points, the processor(s) modifies the user interface.
    Type: Grant
    Filed: March 8, 2020
    Date of Patent: December 14, 2021
    Assignee: International Business Machines Corporation
    Inventors: Kuntal Dey, Anil U. Joshi, Puthukode G. Ramachandran, Willie L. Scott, II
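    Illustration (not part of the patent record): a simplified sketch of the detection idea. Nystagmus produces rapid, repetitive oscillation of gaze position, so a high rate of direction reversals in the gaze trace is treated as the signal; the reversal heuristic and threshold are assumptions for illustration only.

      # Flag a gaze trace whose rate of direction reversals exceeds a
      # threshold; rapid oscillation is the signal of interest here.
      def looks_like_nystagmus(xs, threshold=0.5):
          deltas = [b - a for a, b in zip(xs, xs[1:])]
          reversals = sum(1 for d1, d2 in zip(deltas, deltas[1:]) if d1 * d2 < 0)
          return reversals / max(len(deltas) - 1, 1) > threshold

      print(looks_like_nystagmus([0.0, 0.1, 0.2, 0.3, 0.4]))       # False
      print(looks_like_nystagmus([0.0, 0.3, 0.0, 0.3, 0.0, 0.3]))  # True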
  • Publication number: 20210209365
    Abstract: Embodiments herein provide an augmented reality (AR) system that uses sound localization to identify sounds that may be of interest to a user and generates an audio description of the source of the sound as well as AR content that can be magnified and displayed to the user. In one embodiment, an AR device captures images that have the source of the sound within their field of view. Using machine learning (ML) techniques, the AR device can identify the object creating the sound (i.e., the sound source). A description of the sound source and its actions can be output to the user. In parallel, the AR device can also generate AR content for the sound source. For example, the AR device can magnify the sound source to a size that is viewable to the user and create AR content that is then superimposed onto a display.
    Type: Application
    Filed: January 2, 2020
    Publication date: July 8, 2021
    Inventors: Willie L. Scott, II, Seema Nagar, Charu Pandhi, Kuntal Dey
  • Patent number: 11055533
    Abstract: Embodiments herein provide an augmented reality (AR) system that uses sound localization to identify sounds that may be of interest to a user and generates an audio description of the source of the sound as well as AR content that can be magnified and displayed to the user. In one embodiment, an AR device captures images that have the source of the sound within their field of view. Using machine learning (ML) techniques, the AR device can identify the object creating the sound (i.e., the sound source). A description of the sound source and its actions can be output to the user. In parallel, the AR device can also generate AR content for the sound source. For example, the AR device can magnify the sound source to a size that is viewable to the user and create AR content that is then superimposed onto a display.
    Type: Grant
    Filed: January 2, 2020
    Date of Patent: July 6, 2021
    Assignee: International Business Machines Corporation
    Inventors: Willie L. Scott, II, Seema Nagar, Charu Pandhi, Kuntal Dey
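    Illustration (not part of the patent record): a high-level sketch of the two parallel outputs described in the abstracts above, with the ML classifier stubbed out by fixed values. Every name and value here is a stand-in for illustration.

      # Two parallel outputs for a localized sound source: a spoken-style
      # description and magnified AR content.
      def describe(label, action, direction_deg):
          """Stand-in for the audio description of the sound source."""
          return "A %s is %s, about %d degrees to your right." % (label, action, direction_deg)

      def magnify(label, factor=4.0):
          """Stand-in for generating enlarged AR content for the overlay."""
          return {"label": label, "scale": factor}

      # Pretend sound localization and the ML classifier produced these:
      label, action, direction = "dog", "barking", 35
      print(describe(label, action, direction))  # audio description path
      print(magnify(label))                      # AR overlay path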
  • Patent number: 11042259
    Abstract: An approach is provided in which the approach deconstructs a user interface into user interface elements, each of which is assigned an importance score. The approach compares a user eye gaze pattern of a user viewing the user interface against an expected eye gaze pattern corresponding to the user interface, and determines that the user requires assistance navigating the user interface. The approach selects one of the user interface elements based on its importance score, generates an augmented reality overlay of the selected user interface element, and displays the augmented reality overlay on the user interface using an augmented reality device.
    Type: Grant
    Filed: August 18, 2019
    Date of Patent: June 22, 2021
    Assignee: International Business Machines Corporation
    Inventors: Willie L. Scott, II, Charu Pandhi, Mohit Jain, Kuntal Dey
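    Illustration (not part of the patent record): a minimal sketch of the assistance flow, assuming divergence is a simple mean distance between the user's gaze trace and the expected one. The element list, scores, and tolerance are invented; the patent does not specify this scoring.

      # If the user's gaze diverges from the expected pattern, highlight
      # the highest-importance element with an AR overlay.
      UI_ELEMENTS = [("search-box", 0.9), ("nav-menu", 0.4)]

      def needs_assistance(user_gaze, expected, tolerance=0.2):
          diffs = [abs(u - e) for u, e in zip(user_gaze, expected)]
          return sum(diffs) / len(diffs) > tolerance

      if needs_assistance([0.1, 0.9, 0.2], [0.1, 0.2, 0.3]):
          target = max(UI_ELEMENTS, key=lambda el: el[1])[0]
          print("AR overlay on:", target)  # -> search-box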
  • Publication number: 20210118232
    Abstract: Techniques for translating air writing to augmented reality (AR) devices are described. In one embodiment, a method includes receiving indications of gestures from an originator. The indications identify movement in three dimensions that correspond to an emphasis conferred on one or more words that are air-written by the originator and are configured to be displayed by a plurality of AR devices. The method includes analyzing the identified movement to determine a gesture type associated with the emphasis, where the gesture type includes a first emphasis to be conferred on the air-written words. The method includes providing a display of the air-written words on a first AR device of the plurality of AR devices. The first emphasis is conferred on the words on the display of the first AR device using a first gesture display style based on a profile of a first user utilizing the first AR device.
    Type: Application
    Filed: October 22, 2019
    Publication date: April 22, 2021
    Applicant: International Business Machines Corporation
    Inventors: Willie L. Scott, II, Charu Pandhi, Seema Nagar, Kuntal Dey
  • Publication number: 20210048938
    Abstract: An approach is provided in which the approach deconstructs a user interface into user interface elements, each of which is assigned an importance score. The approach compares a user eye gaze pattern of a user viewing the user interface against an expected eye gaze pattern corresponding to the user interface, and determines that the user requires assistance navigating the user interface. The approach selects one of the user interface elements based on its importance score, generates an augmented reality overlay of the selected user interface element, and displays the augmented reality overlay on the user interface using an augmented reality device.
    Type: Application
    Filed: August 18, 2019
    Publication date: February 18, 2021
    Inventors: Willie L. Scott, II, Charu Pandhi, Mohit Jain, Kuntal Dey
  • Publication number: 20200380402
    Abstract: In an approach for a conversational search in a content management system, a processor trains a deep learning model to learn semantic analysis of a plurality of user queries to identify intents and entities in the user queries. A processor analyzes the content management system to extract content keywords to generate a domain ontology. A processor augments the domain ontology based on the intents and entities identified in the user queries by the deep learning model. A processor tags the content keywords with metadata based on the domain ontology. A processor maps the intents and entities extracted from a current user query of a user to the content keywords extracted from the content management system to form a metadata keyword. A processor searches the content management system for content based on the metadata keyword. A processor returns a search result for the current user query.
    Type: Application
    Filed: May 30, 2019
    Publication date: December 3, 2020
    Inventors: Willie L. Scott, II, Sharon D. Snider
  • Patent number: 10782777
    Abstract: Provided are systems, methods, and media for real-time alteration of video. An example method includes presenting a video to a user. The method includes monitoring a gaze point of the user as the user views one or more frames of the video. The method includes, in response to a determination that the monitored gaze point of the user is different from a predetermined target gaze point, changing the orientation of the video to reposition the target gaze point of the video to the monitored gaze point of the user, in which the orientation of the video is changed during the presentation of the video to the user.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: September 22, 2020
    Assignee: International Business Machines Corporation
    Inventors: Willie L. Scott, II, Kuntal Dey, Mohit Jain, Charu Pandhi
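    Illustration (not part of the patent record): a minimal sketch of the reorientation rule, assuming normalized [0, 1] frame coordinates and a small dead zone. The pan math is an invented simplification of the repositioning described in the abstract.

      # Pan the frame so the target gaze point moves toward where the
      # viewer is actually looking; do nothing inside a small dead zone.
      def pan_offset(target, gaze, deadzone=0.05):
          dx, dy = gaze[0] - target[0], gaze[1] - target[1]
          if (dx * dx + dy * dy) ** 0.5 <= deadzone:
              return (0.0, 0.0)
          return (dx, dy)

      # Viewer looks lower-left of a centered target (normalized coords):
      print(pan_offset((0.5, 0.5), (0.25, 0.75)))  # -> (-0.25, 0.25)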
  • Patent number: 10739864
    Abstract: A gesture to speech conversion device may receive indications of user gestures via at least one sensor, the indications identifying movement in three dimensions. A 2-dimensional (2D) plane on which the beginning and the end of the movement are substantially planar, and a third dimension orthogonal to the 2D plane, may be determined. A change of the movement in a direction of the third dimension in the course of the movement occurring on the 2D plane is detected. The change of the movement in the third dimension is mapped to an emphasis in the movement. The movement is transformed into speech with emphasis on a part of the speech corresponding to a part of the movement having the detected change.
    Type: Grant
    Filed: December 31, 2018
    Date of Patent: August 11, 2020
    Assignee: International Business Machines Corporation
    Inventors: Willie L. Scott, II, Charu Pandhi, Seema Nagar, Kuntal Dey
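    Illustration (not part of the patent record): a sketch of the emphasis mapping, assuming the gesture is mostly planar and a spike in the out-of-plane (z) component marks the aligned part of the speech for emphasis. The threshold and the one-z-sample-per-word alignment are invented simplifications.

      # Uppercase the words aligned with gesture segments whose
      # out-of-plane (z) motion spikes past a threshold.
      def speak(words, z_samples, threshold=0.3):
          return " ".join(
              w.upper() if abs(z) > threshold else w
              for w, z in zip(words, z_samples)
          )

      print(speak(["i", "really", "mean", "it"], [0.0, 0.6, 0.1, 0.0]))
      # -> i REALLY mean it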
  • Publication number: 20200209976
    Abstract: A gesture to speech conversion device may receive indications of user gestures via at least one sensor, the indications identifying movement in three dimensions. A 2-dimensional (2D) plane on which the beginning and the end of the movement are substantially planar, and a third dimension orthogonal to the 2D plane, may be determined. A change of the movement in a direction of the third dimension in the course of the movement occurring on the 2D plane is detected. The change of the movement in the third dimension is mapped to an emphasis in the movement. The movement is transformed into speech with emphasis on a part of the speech corresponding to a part of the movement having the detected change.
    Type: Application
    Filed: December 31, 2018
    Publication date: July 2, 2020
    Inventors: Willie L. Scott, II, Charu Pandhi, Seema Nagar, Kuntal Dey
  • Publication number: 20200209962
    Abstract: A method modifies a user interface based on a user having nystagmus. One or more processors collect eye gaze data points of a viewer of a user interface that displays content. The processor(s) determine, based on the eye gaze data points, that the viewer of the user interface has nystagmus. Responsive to determining that the viewer of the user interface has nystagmus based on the eye gaze data points, the processor(s) modifies the user interface.
    Type: Application
    Filed: March 8, 2020
    Publication date: July 2, 2020
    Inventors: Kuntal Dey, Anil U. Joshi, Puthukode G. Ramachandran, Willie L. Scott, II
  • Publication number: 20200174559
    Abstract: Provided are systems, methods, and media for real-time alteration of video. An example method includes presenting a video to a user. The method includes monitoring a gaze point of the user as the user views one or more frames of the video. The method includes, in response to a determination that the monitored gaze point of the user is different from a predetermined target gaze point, changing the orientation of the video to reposition the target gaze point of the video to the monitored gaze point of the user, in which the orientation of the video is changed during the presentation of the video to the user.
    Type: Application
    Filed: November 29, 2018
    Publication date: June 4, 2020
    Inventors: Willie L. Scott, II, Kuntal Dey, Mohit Jain, Charu Pandhi
  • Publication number: 20200175231
    Abstract: A method replaces original language in a language element with replacement language. A content clarification server receives at least one language element, and launches an auction bidding process for replacing original language in the at least one language element. The content clarification server determines a subject matter expertise of a user who provided the at least one language element, and restricts the auction bidding process to only content clarification providers who have the subject matter expertise of the user who provided the at least one language element. The content clarification server receives replacement language for the original language from one of the content clarification providers who have the subject matter expertise of the user who provided the at least one language element, such that the replacement language is a winning replacement bid in the auction bidding process. The content clarification server then replaces the original language with the replacement language.
    Type: Application
    Filed: February 7, 2020
    Publication date: June 4, 2020
    Inventors: Kuntal Dey, Anil U. Joshi, Puthukode G. Ramachandran, Willie L. Scott, II
  • Patent number: 10656706
    Abstract: A method modifies a computer-based interaction based on gaze data. One or more processors collect eye gaze data points to create an eye gaze corpus of information, where the eye gaze data points describe an eye gaze of viewers of a first set of at least one user interface. The processor(s) generate a plurality of clusters of viewers, and determine a target action performance for each of the plurality of clusters. The processor(s) collect, from a device having eye tracking technology, real time eye gaze data from a plurality of current users who are viewing a second set of at least one user interface, and segment the plurality of current users. The processor(s) then modify a computer-based interaction for at least one segment in order to maximize target action performance of the second set of at least one user interface.
    Type: Grant
    Filed: December 4, 2017
    Date of Patent: May 19, 2020
    Assignee: International Business Machines Corporation
    Inventors: Kuntal Dey, Anil U. Joshi, Puthukode G. Ramachandran, Willie L. Scott, II
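    Illustration (not part of the patent record): a sketch of the segmentation step, assuming viewers are bucketed by a single gaze feature (mean fixation duration) and the interaction is modified for the cluster with the weakest target-action rate. The feature, the two-bucket split, and the rates are all invented for illustration.

      # Bucket viewers by mean fixation duration, then modify the
      # interaction for the cluster with the weakest target-action rate.
      def segment(viewers, cutoff=0.4):
          clusters = {"scanners": [], "readers": []}
          for viewer, mean_fixation_s in viewers.items():
              key = "scanners" if mean_fixation_s < cutoff else "readers"
              clusters[key].append(viewer)
          return clusters

      PERFORMANCE = {"scanners": 0.12, "readers": 0.31}  # target-action rates

      clusters = segment({"u1": 0.2, "u2": 0.7, "u3": 0.3})
      worst = min(PERFORMANCE, key=PERFORMANCE.get)
      print("modify interaction for", worst, clusters[worst])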