Patents by Inventor Aaron K. Baughman

Aaron K. Baughman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220300517
    Abstract: From a set of natural language text documents, a concept tree is constructed. For a node in the concept tree, a polarity of the subset of documents represented by the node is scored. A second set of natural language text documents is added to the subset, the adding resulting in a modified subset of natural language text documents having a polarity score within a predefined neutral polarity score range. From the modified subset, a bin of sentences is selected according to a sentence selection parameter, a sentence in the bin of sentences being extracted from a selected document in the modified subset. A sentence having a factuality score below a threshold factuality score is removed from the bin of sentences. From the filtered bin of sentences, a new natural language text document is generated using a transformer deep learning narration generation model.
    Type: Application
    Filed: March 17, 2021
    Publication date: September 22, 2022
    Applicant: International Business Machines Corporation
    Inventors: Aaron K. Baughman, Gray Franklin Cannon, Stephen C. Hammer, Shikhar Kwatra
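The filtering step in this abstract lends itself to a short illustration. Below is a minimal Python sketch, assuming invented sentence text, scores, and function names (none of which come from the patent): sentences scoring below a factuality threshold are dropped from the bin before any narration model would run.

```python
# Sketch of the factuality-filtering step: sentences whose factuality score
# falls below the threshold are removed from the bin. All names and scores
# here are hypothetical illustrations, not the patented implementation.

def filter_bin(sentences, factuality_scores, threshold):
    """Keep only sentences whose factuality score meets the threshold."""
    return [s for s, score in zip(sentences, factuality_scores)
            if score >= threshold]

bin_of_sentences = ["The match lasted three sets.",
                    "The crowd was allegedly enormous.",
                    "Play began at noon."]
scores = [0.9, 0.4, 0.8]

filtered = filter_bin(bin_of_sentences, scores, threshold=0.7)
# the low-factuality middle sentence is removed; the surviving bin would
# feed the narration generation model (not shown)
```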
  • Patent number: 11431668
    Abstract: Systems and methods for dynamically managing figments are disclosed. A computer-implemented method includes: receiving, by a computing device, a question from a user; answering, by the computing device, the question using a first degree figment; classifying, by the computing device, the question based on topics; forwarding, by the computing device, the question to a set of second degree figments; receiving, by the computing device, answers to the question from the set of second degree figments; ranking, by the computing device, the answers received from the set of second degree figments; and providing, by the computing device, the ranked answers to the user.
    Type: Grant
    Filed: July 15, 2019
    Date of Patent: August 30, 2022
    Assignee: Kyndryl, Inc.
    Inventors: Aaron K. Baughman, Christian Eggenberger, Peter K. Malkin, Diwesh Pandey
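The fan-out-and-rank flow in this abstract can be sketched simply. In this hypothetical Python fragment (the answer strings and confidence scores are invented), answers collected from second-degree figments are ranked before being returned to the user:

```python
# Rank answers gathered from second-degree figments by a confidence score.
# The figments themselves (question answering, topic classification,
# forwarding) are out of scope here; only the ranking step is sketched.

def rank_answers(answers):
    """Sort (answer, confidence) pairs by descending confidence."""
    return sorted(answers, key=lambda pair: pair[1], reverse=True)

second_degree_answers = [("Answer from figment A", 0.62),
                         ("Answer from figment B", 0.91),
                         ("Answer from figment C", 0.47)]

ranked = rank_answers(second_degree_answers)
# ranked[0] is figment B's answer, the highest-confidence response
```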
  • Patent number: 11416743
    Abstract: Fair deep reinforcement learning is provided. A microstate of an environment and reaction of items in a plurality of microstates within the environment are observed after an agent performs an action in the environment. Semi-supervised training is utilized to determine bias weights corresponding to the action for the microstate of the environment and the reaction of the items in the plurality of microstates within the environment. The bias weights from the semi-supervised training are merged with non-bias weights using an artificial neural network. Over time, it is determined where bias is occurring in the semi-supervised training based on merging the bias weights with the non-bias weights in the artificial neural network. A deep reinforcement learning model that decreases reliance on the bias weights is generated based on determined bias to increase fairness.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: August 16, 2022
    Assignee: International Business Machines Corporation
    Inventors: Aaron K. Baughman, Stephen C. Hammer, Gray Cannon, Shikhar Kwatra
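One plausible reading of "decreases reliance on the bias weights" is a blend whose bias contribution decays as training proceeds. The sketch below is an assumption for illustration only, not the patented mechanism, and every name and constant in it is invented:

```python
# Blend bias weights with non-bias weights, shrinking the bias term's
# influence over training steps. A hypothetical illustration of reducing
# reliance on bias weights; not the method claimed in the patent.

def merge_weights(non_bias, bias, step, decay=0.1):
    """Blend two weight vectors; bias influence decays toward zero."""
    alpha = 1.0 / (1.0 + decay * step)   # bias influence at this step
    return [(1 - alpha) * nb + alpha * b for nb, b in zip(non_bias, bias)]

w_early = merge_weights([0.5, 0.2], [0.9, 0.1], step=0)    # pure bias weights
w_late = merge_weights([0.5, 0.2], [0.9, 0.1], step=100)   # bias nearly ignored
```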
  • Patent number: 11412271
    Abstract: A method, computer system, and computer program product for AI response to live stream video are provided. The embodiment may include receiving a live video stream. The embodiment may also include capturing a plurality of messages from a user group in a social media chat discussion corresponding to the received live video stream. The embodiment may further include determining a discussion pattern within the plurality of captured messages using natural language processing techniques. The embodiment may also include analyzing the live video stream for one or more questions or comments related to the determined discussion pattern. The embodiment may further include generating a response to the one or more questions or comments related to the determined discussion pattern. The embodiment may also include transmitting the generated response to the one or more questions or comments to the social media chat discussion.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: August 9, 2022
    Assignee: International Business Machines Corporation
    Inventors: Aaron K. Baughman, Sarbajit K. Rakshit, John M. Ganci, Jr., Martin G. Keen, James E. Bostick
  • Publication number: 20220246130
    Abstract: A method, computer system, and a computer program product for speech synthesis is provided. The present invention may include generating one or more final voiceprints. The present invention may include generating one or more voice clones based on the one or more final voiceprints. The present invention may include classifying the one or more voice clones into a grouping using a language model, wherein the language model is trained using manually classified uncloned voice samples. The present invention may include identifying a cluster within the grouping, wherein the cluster is identified by determining that a difference between corresponding vectors of the one or more voice clones is below a similarity threshold. The present invention may include generating a new archetypal voice by blending the one or more voice clones of the cluster where the difference between the corresponding vectors is below the similarity threshold.
    Type: Application
    Filed: January 29, 2021
    Publication date: August 4, 2022
    Inventors: Aaron K. Baughman, Gray Franklin Cannon, Sara Perelman, Gary William Reiss, Corey B. Shelton
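The cluster-and-blend step can be illustrated with plain vectors. In this hypothetical sketch (the distance metric, threshold, and blending rule are assumptions, not taken from the patent), clone vectors within a distance threshold of a seed are averaged into an archetypal voice vector:

```python
# Cluster voice-clone vectors whose pairwise distance to a seed falls below
# a threshold, then blend the cluster by element-wise averaging. All vectors
# and parameters are invented for illustration.

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def blend_cluster(vectors, threshold):
    """Group vectors near the first vector, then average the group."""
    seed = vectors[0]
    cluster = [v for v in vectors if euclidean(seed, v) < threshold]
    dim = len(seed)
    return [sum(v[i] for v in cluster) / len(cluster) for i in range(dim)]

clones = [[1.0, 2.0], [1.1, 2.1], [9.0, 9.0]]
archetype = blend_cluster(clones, threshold=0.5)
# the outlier [9.0, 9.0] is excluded; archetype is close to [1.05, 2.05]
```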
  • Patent number: 11379725
    Abstract: Using a simple cue to reduce the number of sequential frames of a video that need to be analyzed by an artificial neural network to predict information corresponding to a projectile depicted within the video is provided. A timing of the simple cue associated with the video is detected. The number of sequential frames within the video is reduced down to only those frames that are within a specified range of the simple cue. The artificial neural network is used to analyze the reduced number of sequential frames. The information corresponding to the projectile is predicted based on analyzing the reduced number of sequential frames using the artificial neural network.
    Type: Grant
    Filed: June 29, 2018
    Date of Patent: July 5, 2022
    Assignee: International Business Machines Corporation
    Inventors: Aaron K. Baughman, Stephen C. Hammer, Micah Forster, John C. Newell
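The frame-reduction idea maps directly onto a timestamp filter. A minimal sketch, assuming invented frame timestamps and a hypothetical window parameter; cue detection and the neural network itself are omitted:

```python
# Keep only the frames whose timestamps fall within a window around the
# detected cue; only this reduced set would be passed to the neural network.

def frames_near_cue(frame_times, cue_time, window):
    """Return indices of frames within `window` seconds of the cue."""
    return [i for i, t in enumerate(frame_times)
            if abs(t - cue_time) <= window]

times = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]   # frame timestamps in seconds
kept = frames_near_cue(times, cue_time=1.5, window=0.5)
# kept == [2, 3, 4]: only frames near the cue remain to be analyzed
```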
  • Publication number: 20220198140
    Abstract: An audio stream of a speaker can be isolated from a received audio signal. Based on the audio stream, an attribute of the speaker can be identified. This attribute can be presented to a user, allowing for a user input. Based on a received user input (and on the audio stream), the audio stream can be modified.
    Type: Application
    Filed: December 21, 2020
    Publication date: June 23, 2022
    Inventors: Craig M. Trim, Melissa Restrepo Conde, Shikhar Kwatra, Aaron K. Baughman
  • Publication number: 20220188563
    Abstract: An embodiment includes determining an experiential state of a first user participating in a mixed-reality experience. The embodiment also includes creating a first driver model that maps a relationship between the experiential state of the first user and a parameter of the mixed-reality experience. The embodiment also includes aggregating the first driver model with a plurality of driver models associated with experiential states and parameters of respective other users. The embodiment also includes creating a first cohort experience model using the aggregated driver models. The embodiment also includes deriving a first cohort experience parameter for the first cohort experience model. The embodiment also includes initiating an automated remedial action for participants in the mixed-reality system who are associated with the first cohort experience model and the first cohort experience parameter.
    Type: Application
    Filed: December 11, 2020
    Publication date: June 16, 2022
    Applicant: International Business Machines Corporation
    Inventors: Aaron K. Baughman, Hernan A. Cunico, Martin G. Keen, John M. Ganci, Jr.
  • Publication number: 20220188516
    Abstract: A computer receives multimedia data, where the multimedia data comprises a plurality of frames. The computer converts the multimedia data into a signal wave having a plurality of frequencies and a plurality of amplitudes. The computer determines a frame from the plurality of frames having a pronoun. The computer identifies a topic of the frame. The computer searches for a frame in a media repository having a highest correlation coefficient with the topic of the frame, where the frame from the media repository comprises a bag of objects, and resolves the anaphoric ambiguity by substituting the pronoun with an object from the bag of objects.
    Type: Application
    Filed: December 10, 2020
    Publication date: June 16, 2022
    Inventors: Aaron K. Baughman, Mauro Marzorati, Gary Francis Diamanti, Nicholas Michael Wilkin
  • Publication number: 20220187902
    Abstract: According to one embodiment, a method, computer system, and computer program product for customizing a mixed reality experience based on the experiential state of users in a queue to join the mixed reality experience or immersed in the mixed reality experience is provided. The present invention may include modeling the experiential state of the at least one user participating in the mixed-reality experience; modeling one or more relationships between the experiential state of the at least one user and one or more physical or virtual experience parameters comprising the mixed-reality experience; based on the one or more modeled relationships, predicting one or more alterations to the one or more physical or virtual experience parameters to enhance the experiential state of the at least one user; and operating a mixed reality system to perform one or more remedial actions to execute the one or more predicted alterations.
    Type: Application
    Filed: December 11, 2020
    Publication date: June 16, 2022
    Inventors: Aaron K. Baughman, Hernan A. Cunico, John M. Ganci, Jr., Martin G. Keen
  • Publication number: 20220188564
    Abstract: Using a first trained generative adversarial network, a first multimedia content is transformed into a text description of the first multimedia content. The text description is adjusted according to a constraint using a trained attention layer, the adjusting creating an adjusted text description. Using a trained model, the adjusted text description is transformed into a second multimedia content, the second multimedia content comprising an adjustment of the first multimedia content according to the constraint.
    Type: Application
    Filed: December 15, 2020
    Publication date: June 16, 2022
    Applicant: International Business Machines Corporation
    Inventors: Sai Krishna Reddy Gudimetla, Aaron K. Baughman, Micah Forster, Craig M. Trim
  • Publication number: 20220171665
    Abstract: A processor may analyze, using an AI system, an application, where the application includes one or more application modules. The processor may determine, using the AI system, that an application module is critical based on a contextual scenario. The AI system may be trained utilizing data regarding heat generation of hardware on which the application module is operating. The processor may identify, using the AI system, required resources of the hardware for the application module to function during the contextual scenario. The processor may allocate an availability of the required resources for the application module.
    Type: Application
    Filed: December 2, 2020
    Publication date: June 2, 2022
    Inventors: Aaron K. Baughman, Shikhar Kwatra, Jennifer L. Szkatulski, Sarbajit K. Rakshit
  • Publication number: 20220172436
    Abstract: A method, a structure, and a computer system for tangible mixed reality. Exemplary embodiments may include identifying one or more real objects within an environment and identifying one or more interactions between the one or more real objects, one or more reactive objects, and one or more animations therebetween. The exemplary embodiments may further include generating one or more learning activities for a user of a mixed reality headset that include at least one of the one or more interactions, and deploying the one or more learning activities to the mixed reality headset.
    Type: Application
    Filed: November 30, 2020
    Publication date: June 2, 2022
    Inventors: Aaron K. Baughman, Shikhar Kwatra, Pritesh Patel, Vijay Ekambaram, Prasenjit Dey
  • Publication number: 20220164457
    Abstract: From a first model parameter, an autoencoder network is generated. A reconstruction error for the autoencoder network is measured, the reconstruction error comprising a difference between an input to the autoencoder network and a corresponding output from the autoencoder network, the input to the autoencoder network comprising a portion of an initial set of data. The reconstruction error and a confidence score corresponding to a complexity level of the autoencoder network are aggregated into a level of difficulty score of the autoencoder network. From the level of difficulty score and an initial data access policy level corresponding to the initial set of data, a derived data access policy level corresponding to the initial data access policy level is generated, the derived data access policy level enforcing access to a transformed set of data generated by applying a transformation to the initial set of data.
    Type: Application
    Filed: November 24, 2020
    Publication date: May 26, 2022
    Applicant: International Business Machines Corporation
    Inventors: Aaron K. Baughman, Shikhar Kwatra, Vijay Ekambaram, Smitkumar Narotambhai Marvaniya
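The scoring step this abstract describes can be sketched numerically. In the fragment below, reconstruction error is taken to be the mean absolute difference between autoencoder input and output, and it is aggregated with a confidence score by a simple weighted sum; the weights, data, and all names are illustrative assumptions, not the patented formula:

```python
# Compute a reconstruction error for an autoencoder and aggregate it with a
# complexity-based confidence score into a level-of-difficulty score. The
# autoencoder itself and the policy-level derivation are omitted.

def reconstruction_error(inputs, outputs):
    """Mean absolute difference between autoencoder inputs and outputs."""
    return sum(abs(x - y) for x, y in zip(inputs, outputs)) / len(inputs)

def difficulty_score(recon_error, confidence, w_error=0.5, w_conf=0.5):
    """Weighted aggregation of reconstruction error and confidence."""
    return w_error * recon_error + w_conf * confidence

err = reconstruction_error([1.0, 2.0, 3.0], [1.2, 1.8, 3.0])
score = difficulty_score(err, confidence=0.8)
# higher scores would map to stricter derived data access policy levels
```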
  • Publication number: 20220164360
    Abstract: An embodiment for cognitively enhancing a search query is provided. The embodiment may include receiving a voice query from a user. The embodiment may also include analyzing the voice query. The embodiment may further include identifying an object within a focus area of the user based on the voice query. The embodiment may also include determining whether the identification of the object is confident, and in response to determining the identification of the object is not confident, receiving feedback from the user. In response to determining the identification of the object is confident, the embodiment may further include generating a relationship between a word in the voice query and the identified object. The embodiment may also include delivering an enhanced response to the user based on the identified object and the received feedback.
    Type: Application
    Filed: November 25, 2020
    Publication date: May 26, 2022
    Inventors: Aaron K. Baughman, Indervir Singh Banipal, Shikhar Kwatra, Victor Povar
  • Patent number: 11341689
    Abstract: One or more computer processors create a user-event localization model for an identified remote audience member in a plurality of identified remote audience members for an event. The one or more computer processors generate a virtual audience member based on the identified remote audience member utilizing a trained generative adversarial network and one or more user preferences. The one or more computer processors present the generated virtual audience member in a location associated with the event. The one or more computer processors dynamically adjust a presented virtual audience member responsive to one or more event occurrences utilizing the created user-event localization model.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: May 24, 2022
    Assignee: International Business Machines Corporation
    Inventors: Aaron K. Baughman, Sai Krishna Reddy Gudimetla, Stephen C. Hammer, Jeffrey D. Amsterdam, Sherif A. Goma
  • Patent number: 11334634
    Abstract: Methods, computer program products, and systems are presented. The methods, computer program products, and systems can include, for instance: receiving conversation data of a user from a data source, the data source being provided by a voice enabled personal assistant (VEPA); processing the conversation data to return a sentiment parameter value and a topic parameter value for the conversation data; updating one or more functional settings of a computing environment in dependence on the sentiment parameter value and the topic parameter value; receiving subsequent conversation data from the data source; and processing the subsequent conversation data in accordance with the updated one or more functional settings.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: May 17, 2022
    Assignee: International Business Machines Corporation
    Inventors: Aaron K. Baughman, Martin G. Keen
  • Patent number: 11335131
    Abstract: A computer-implemented method includes: receiving, by a computer device, sensor data for a plurality of UAVs in a fleet of UAVs; applying, by the computer device, logistic regression to the sensor data; predicting, by the computer device, a probability of malfunction of each UAV in the fleet of UAVs based on the applying; combining, by the computer device, the probability of malfunction of each UAV with a pre-existing malfunction data set to produce an intermediate malfunction data set; generating, by the computer device, additional cases of predicted UAV malfunctions with a GAN, the GAN using the intermediate malfunction data set as initial training data for the GAN; combining, by the computer device, the additional cases with the intermediate malfunction data set to produce a combined malfunction data set; and comparing, by the computer device, the sensor data for a first UAV of the UAVs to the combined malfunction data set.
    Type: Grant
    Filed: November 19, 2019
    Date of Patent: May 17, 2022
    Assignee: International Business Machines Corporation
    Inventors: Aaron K. Baughman, Shikhar Kwatra, Gray Cannon, Gary William Reiss
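The logistic-regression step lends itself to a small sketch. The feature values, weights, and bias below are invented, and the GAN-based augmentation of the malfunction data set is omitted:

```python
# Map a UAV's sensor readings to a malfunction probability via logistic
# regression: the sigmoid of a weighted sum of features. The weights here
# are hypothetical; in practice they would be fit to the training data.

import math

def malfunction_probability(sensor_data, weights, bias):
    """Sigmoid of the weighted sum of sensor features plus a bias term."""
    z = bias + sum(w * x for w, x in zip(weights, sensor_data))
    return 1.0 / (1.0 + math.exp(-z))

p = malfunction_probability([0.7, 1.2], weights=[2.0, -0.5], bias=-1.0)
# p is a probability in (0, 1); higher values would flag a UAV for
# comparison against the combined malfunction data set
```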
  • Patent number: 11328342
    Abstract: An approach is provided in which an information handling system receives a request for a product from a first entity that includes multiple components. The information handling system determines that the components include a set of printable 3D components and a set of non-printable 3D components. Next, the information handling system sends a set of component instructions to a second entity to print the set of printable 3D components and, in turn, sends a set of assembly instructions to the second entity to assemble the product using the set of 3D printable components and the set of non-printable 3D components.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: May 10, 2022
    Assignee: Kyndryl, Inc.
    Inventors: Garfield W. Vaughn, Moncef Benboubakeur, Julija Narodicka, Aaron K. Baughman
  • Publication number: 20220138473
    Abstract: A computer-implemented method, system and computer program product for embedding contextual information in an image or video frames. A generative adversarial network (GAN) is trained to provide contextual information to be embedded in an image or video frames, where the contextual information includes text, sound and/or video frames that provide context to the image or video frames. After training the GAN, an image or video frames are received, to be embedded with contextual information if necessary. Features are then extracted from the received image/video frames. An image(s) or video frame(s) are identified in a database using the GAN, associated with features whose similarity to the extracted features of the received image/video frames exceeds a threshold value. Such identified images and/or video frames are associated with "references" containing contextual information, which are extracted. The received image/video frames are then augmented with the extracted references to provide context.
    Type: Application
    Filed: November 5, 2020
    Publication date: May 5, 2022
    Inventors: Shikhar Kwatra, Mauro Marzorati, Aaron K. Baughman, Kimberly Greene Starks