Patents by Inventor Margaret H. Szymanski

Margaret H. Szymanski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11817086
    Abstract: Digitized media is received that records a conversation between individuals. Cues are extracted from the digitized media that indicate properties of the conversation. The cues are entered as training data into a machine learning module to create a trained machine learning model. The trained machine learning model is used in a processor to detect other misalignments in subsequent digitized conversations.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: November 14, 2023
    Assignee: XEROX CORPORATION
    Inventors: Evgeniy Bart, Margaret H. Szymanski
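
The cue-extraction step described in this abstract can be illustrated with a minimal sketch. The turn representation, the choice of cues (inter-turn gap and speaker change), and all names and timings below are assumptions for illustration, not details taken from the patent:

```python
def extract_cues(turns):
    """Turn (speaker, start_sec, end_sec) tuples into per-transition cues:
    the inter-turn gap (negative means overlapping talk) and whether the
    speaker changed. Cues like these could then serve as training data."""
    cues = []
    for prev, cur in zip(turns, turns[1:]):
        cues.append({"gap": round(cur[1] - prev[2], 2),
                     "speaker_change": cur[0] != prev[0]})
    return cues

turns = [("agent", 0.0, 2.5), ("caller", 3.9, 5.0), ("agent", 4.8, 6.0)]
cues = extract_cues(turns)
```

A long gap or an overlap at a speaker change is the kind of conversational property a downstream model could learn to associate with misalignment.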
  • Publication number: 20230325453
    Abstract: A system and method for providing Website navigation recommendations is provided. A Web page of interest is identified as a destination Web page. A domain of Web pages related to the destination Web page is determined. Information is extracted from each Web page in the domain and a recommendation comprising instructions for navigating to the destination Web page is generated based on the extracted information.
    Type: Application
    Filed: June 13, 2023
    Publication date: October 12, 2023
    Applicant: PALO ALTO RESEARCH CENTER INCORPORATED
    Inventors: Kristian Lyngbaek, Lester D. Nelson, Eric A. Bier, Margaret H. Szymanski
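
The recommendation idea in this abstract can be sketched as a path search over extracted link information. The site structure, page titles, and instruction wording below are invented for illustration; the patent does not specify this particular algorithm:

```python
from collections import deque

def navigation_steps(link_graph, home, destination):
    """Breadth-first search from the site's entry page to the destination,
    returning human-readable click-through instructions."""
    # link_graph maps each page title to the titles it links to.
    queue = deque([[home]])
    seen = {home}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return [f"From '{a}', follow the link to '{b}'"
                    for a, b in zip(path, path[1:])]
        for nxt in link_graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []  # destination not reachable from home

# Hypothetical site structure (titles and links invented for illustration)
site = {
    "Home": ["Products", "Support"],
    "Products": ["Printers"],
    "Support": ["Drivers", "Printers"],
    "Printers": ["Model X Datasheet"],
}
steps = navigation_steps(site, "Home", "Model X Datasheet")
```

Here the "extracted information" is reduced to a link graph; the breadth-first search yields the shortest click-through route, which is then phrased as step-by-step instructions.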
  • Patent number: 11727218
    Abstract: According to one embodiment, a computer-implemented method for dynamically modifying placeholder text in a conversational interface includes: processing a conversation log reflecting a conversation between a human user and an automated agent; determining, based at least in part on the processing: one or more capabilities of the automated agent; and/or a trajectory of the conversation; and dynamically modifying placeholder text in the conversational interface based at least in part on: the one or more capabilities of the automated agent; the trajectory of the conversation; or both the one or more capabilities of the automated agent and the trajectory of the conversation. Other embodiments in the form of systems and computer program products are also disclosed.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: August 15, 2023
    Assignee: International Business Machines Corporation
    Inventors: Raphael I. Arar, Robert J. Moore, Guangjie Ren, Margaret H. Szymanski, Eric Y. Liu
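
One way to picture the placeholder-modification logic is a lookup driven by the agent's capabilities and the conversation's most recent intent. The intents, capability names, and placeholder strings below are hypothetical, not taken from the patent:

```python
def placeholder_text(capabilities, trajectory):
    """Choose placeholder text for the input box from the agent's
    capabilities and the last detected intent in the conversation log."""
    followups = {"order_status": "Ask about a different order...",
                 "returns": "Describe the item you want to return..."}
    if trajectory and trajectory[-1] in followups:
        return followups[trajectory[-1]]
    # No usable trajectory: advertise a couple of known capabilities.
    return "Try: " + ", ".join(sorted(capabilities)[:2]) + "..."

caps = {"returns", "order_status", "billing"}
hint = placeholder_text(caps, ["greeting", "order_status"])  # trajectory-driven
fallback = placeholder_text(caps, ["greeting"])              # capability-driven
```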
  • Patent number: 11334709
    Abstract: A computer-implemented method according to one embodiment includes identifying a topic associated with a received notification, determining a plurality of policies associated with the topic, determining a current environmental context, determining a generalization level utilizing the plurality of policies and the current environmental context, modifying the notification based on the generalization level, and presenting the modified notification.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: May 17, 2022
    Assignee: International Business Machines Corporation
    Inventors: Nathalie Baracaldo-Angel, Margaret H. Szymanski, Eric K. Butler, Heiko H. Ludwig
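
The method's pipeline (topic, policies, environmental context, generalization level, modified notification) can be sketched as follows. The topics, contexts, levels, and message strings are invented for illustration:

```python
def generalize_notification(notification, topic, policies, context):
    """Pick a generalization level from the topic's policies and the current
    environmental context, then rewrite the notification accordingly."""
    level = policies.get(topic, {}).get(context, 0)
    if level == 0:                        # private context: full detail
        return notification["detail"]
    if level == 1:                        # semi-public: topic only
        return f"New {topic} notification"
    return "You have a new notification"  # public: fully generalized

policies = {"health": {"home": 0, "office": 1, "public": 2}}
note = {"detail": "Your blood-test results are ready"}

private = generalize_notification(note, "health", policies, "home")
office = generalize_notification(note, "health", policies, "office")
public = generalize_notification(note, "health", policies, "public")
```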
  • Publication number: 20210287664
    Abstract: Digitized media is received that records a conversation between individuals. Cues are extracted from the digitized media that indicate properties of the conversation. The cues are entered as training data into a machine learning module to create a trained machine learning model. The trained machine learning model is used in a processor to detect other misalignments in subsequent digitized conversations.
    Type: Application
    Filed: March 13, 2020
    Publication date: September 16, 2021
    Inventors: Evgeniy Bart, Margaret H. Szymanski
  • Patent number: 10936823
    Abstract: One embodiment provides a method comprising generating a conversational interface for display on an electronic device. The conversational interface facilitates a communication session between a user and an automated conversational agent. The method further comprises performing a real-time analysis of a portion of a user input in response to the user constructing the user input during the communication session, and updating the conversational interface to include real-time feedback indicative of whether the automated conversational agent understands the portion of the user input based on the analysis. The real-time feedback allows the user to adjust the user input before completing the user input.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: March 2, 2021
    Assignee: International Business Machines Corporation
    Inventors: Robert J. Moore, Raphael Arar, Guangjie Ren, Margaret H. Szymanski
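
A minimal sketch of the real-time feedback loop: as the user types, each partial input is matched against what the agent can understand, and the interface renders an indicator before the message is sent. The intents and the word-overlap matching rule are assumptions for illustration:

```python
KNOWN_INTENTS = {"reset password": "I can help reset your password.",
                 "track order": "I can track your order."}

def live_feedback(partial_input):
    """Match a partial user input against the agent's known intents and
    return an understanding indicator the UI can show while typing."""
    words = set(partial_input.lower().split())
    for intent in KNOWN_INTENTS:
        if set(intent.split()) & words:  # any intent keyword typed so far
            return ("understood", intent)
    return ("not_understood", None)

state, matched = live_feedback("can you reset my pass")
```

Because feedback arrives per keystroke, the user can rephrase (say, toward "track order") before committing an input the agent would not understand.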
  • Patent number: 10832679
    Abstract: One embodiment provides a computer program product for improving accuracy of a transcript of a spoken interaction. The computer program product comprises a computer readable storage medium having program instructions embodied therewith. The program instructions are executable by a processor to cause the processor to identify a plurality of patterns in the transcript. The plurality of patterns are indicative of a group of acoustically similar words in the transcript and a corresponding local, sequential context of the group of acoustically similar words. The program instructions are further executable by the processor to cause the processor to predict conditional probabilities for the group of acoustically similar words based on a predictive model and the plurality of patterns, detect one or more transcription errors in the transcript based on the conditional probabilities, and correct the one or more transcription errors by applying a multi-pass correction on the one or more transcription errors.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: November 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Margaret H. Szymanski, Robert J. Moore, Sunhwan Lee, Pawan Chowdhary, Shun Jiang, Guangjie Ren, Raphael Arar
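
The correction scheme in this abstract can be approximated with confusion sets of acoustically similar words and a bigram table standing in for the predictive model. The word sets, counts, and single-left-neighbor context below are simplifications invented for illustration:

```python
# Confusion sets group acoustically similar words; the bigram counts stand
# in for the predictive model. All data below is invented for illustration.
CONFUSION = [{"their", "there", "they're"}, {"two", "too", "to"}]
BIGRAM = {("over", "there"): 50, ("over", "their"): 2,
          ("there", "too"): 20, ("there", "two"): 1}

def correct(tokens, max_passes=3):
    """Multi-pass correction: swap each word for the acoustically similar
    alternative with the highest conditional probability given its left
    neighbor, repeating until the transcript stops changing."""
    for _ in range(max_passes):
        changed = False
        for i in range(1, len(tokens)):
            sets = [s for s in CONFUSION if tokens[i] in s]
            if not sets:
                continue
            best = max(sets[0], key=lambda w: BIGRAM.get((tokens[i - 1], w), 0))
            if BIGRAM.get((tokens[i - 1], best), 0) > \
                    BIGRAM.get((tokens[i - 1], tokens[i]), 0):
                tokens[i], changed = best, True
        if not changed:
            break
    return tokens

fixed = correct("look over their two".split())
```

Each pass re-evaluates the transcript, so a correction early in a sentence changes the context, and hence the best choice, for later words; passes repeat until the transcript stabilizes.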
  • Publication number: 20200160866
    Abstract: One embodiment provides a computer program product for improving accuracy of a transcript of a spoken interaction. The computer program product comprises a computer readable storage medium having program instructions embodied therewith. The program instructions are executable by a processor to cause the processor to identify a plurality of patterns in the transcript. The plurality of patterns are indicative of a group of acoustically similar words in the transcript and a corresponding local, sequential context of the group of acoustically similar words. The program instructions are further executable by the processor to cause the processor to predict conditional probabilities for the group of acoustically similar words based on a predictive model and the plurality of patterns, detect one or more transcription errors in the transcript based on the conditional probabilities, and correct the one or more transcription errors by applying a multi-pass correction on the one or more transcription errors.
    Type: Application
    Filed: November 20, 2018
    Publication date: May 21, 2020
    Inventors: Margaret H. Szymanski, Robert J. Moore, Sunhwan Lee, Pawan Chowdhary, Shun Jiang, Guangjie Ren, Raphael Arar
  • Publication number: 20200151240
    Abstract: A computer-implemented method according to one embodiment includes identifying a topic associated with a received notification, determining a plurality of policies associated with the topic, determining a current environmental context, determining a generalization level utilizing the plurality of policies and the current environmental context, modifying the notification based on the generalization level, and presenting the modified notification.
    Type: Application
    Filed: November 13, 2018
    Publication date: May 14, 2020
    Inventors: Nathalie Baracaldo-Angel, Margaret H. Szymanski, Eric K. Butler, Heiko H. Ludwig
  • Publication number: 20200134021
    Abstract: One embodiment provides a method comprising generating a conversational interface for display on an electronic device. The conversational interface facilitates a communication session between a user and an automated conversational agent. The method further comprises performing a real-time analysis of a portion of a user input in response to the user constructing the user input during the communication session, and updating the conversational interface to include real-time feedback indicative of whether the automated conversational agent understands the portion of the user input based on the analysis. The real-time feedback allows the user to adjust the user input before completing the user input.
    Type: Application
    Filed: October 30, 2018
    Publication date: April 30, 2020
    Inventors: Robert J. Moore, Raphael Arar, Guangjie Ren, Margaret H. Szymanski
  • Publication number: 20200134017
    Abstract: According to one embodiment, a computer-implemented method for dynamically modifying placeholder text in a conversational interface includes: processing a conversation log reflecting a conversation between a human user and an automated agent; determining, based at least in part on the processing: one or more capabilities of the automated agent; and/or a trajectory of the conversation; and dynamically modifying placeholder text in the conversational interface based at least in part on: the one or more capabilities of the automated agent; the trajectory of the conversation; or both the one or more capabilities of the automated agent and the trajectory of the conversation. Other embodiments in the form of systems and computer program products are also disclosed.
    Type: Application
    Filed: October 26, 2018
    Publication date: April 30, 2020
    Inventors: Raphael I. Arar, Robert J. Moore, Guangjie Ren, Margaret H. Szymanski, Eric Y. Liu
  • Patent number: 10592611
    Abstract: Embodiments of the present invention provide a system for automatically extracting conversational structure from a voice record based on lexical and acoustic features. The system also aggregates business-relevant statistics and entities from a collection of spoken conversations. The system may infer a coarse-level conversational structure based on fine-level activities identified from extracted acoustic features. The system improves significantly over previous systems by extracting structure based on lexical and acoustic features. This enables extracting conversational structure on a larger scale and at a finer level of detail than previous systems, and can feed an analytics and business intelligence platform, e.g., for customer service phone calls. During operation, the system obtains a voice record. The system then extracts a lexical feature using automatic speech recognition (ASR). The system extracts an acoustic feature.
    Type: Grant
    Filed: October 24, 2016
    Date of Patent: March 17, 2020
    Assignee: Conduent Business Services, LLC
    Inventors: Jesse Vig, Harish Arsikere, Margaret H. Szymanski, Luke R. Plurkowski, Kyle D. Dent, Daniel G. Bobrow, Daniel Davies, Eric Saund
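
The coarse/fine structure idea can be pictured with a purely lexical toy version: fine-level cues (greeting and farewell phrases) induce a coarse opening/body/closing segmentation. The keyword lists and utterances are invented, and the patented system also uses acoustic features, which this sketch omits:

```python
OPENING = {"hello", "hi", "thank you for calling"}
CLOSING = {"bye", "goodbye", "have a great day"}

def conversation_structure(utterances):
    """Map each utterance to a coarse phase using lexical cues: everything
    after the opening cues and before the first closing cue is the body."""
    phases = []
    phase = "opening"
    for text in utterances:
        low = text.lower()
        if any(k in low for k in CLOSING):
            phase = "closing"
        elif phase == "opening" and not any(k in low for k in OPENING):
            phase = "body"
        phases.append(phase)
    return phases

call = ["Hello, thank you for calling.", "My printer is jammed.",
        "Let me check that.", "Goodbye!"]
phases = conversation_structure(call)
```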
  • Publication number: 20180113854
    Abstract: Embodiments of the present invention provide a system for automatically extracting conversational structure from a voice record based on lexical and acoustic features. The system also aggregates business-relevant statistics and entities from a collection of spoken conversations. The system may infer a coarse-level conversational structure based on fine-level activities identified from extracted acoustic features. The system improves significantly over previous systems by extracting structure based on lexical and acoustic features. This enables extracting conversational structure on a larger scale and at a finer level of detail than previous systems, and can feed an analytics and business intelligence platform, e.g., for customer service phone calls. During operation, the system obtains a voice record. The system then extracts a lexical feature using automatic speech recognition (ASR). The system extracts an acoustic feature.
    Type: Application
    Filed: October 24, 2016
    Publication date: April 26, 2018
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Jesse Vig, Harish Arsikere, Margaret H. Szymanski, Luke R. Plurkowski, Kyle D. Dent, Daniel G. Bobrow, Daniel Davies, Eric Saund
  • Patent number: 9412377
    Abstract: A system and method for enhancing visual representation to individuals participating in a conversation is provided. Visual data for a plurality of individuals participating in one or more conversations is analyzed. Possible conversational configurations of the individuals are generated. Each possible conversational configuration includes one or more pair-wise probabilities of at least two of the individuals. A probability weight is assigned to each of the pair-wise probabilities, representing a likelihood that the individuals of that pair-wise probability are participating in a conversation. A probability of each possible conversational configuration is determined by combining the probability weights for the pair-wise probabilities of that possible conversational configuration. The possible conversational configuration with the highest probability is selected as a most probable configuration.
    Type: Grant
    Filed: October 18, 2013
    Date of Patent: August 9, 2016
    Assignee: III HOLDINGS 6, LLC
    Inventors: Paul M. Aoki, Margaret H. Szymanski, James Thornton, Daniel H. Wilson, Allison G. Woodruff
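
The configuration-scoring step can be made concrete: enumerate every partition of the individuals into conversations, combine the pair-wise weights (here, the weight for pairs grouped together and one minus the weight for pairs kept apart), and keep the argmax. Names, weights, and the exact combining rule are invented for illustration:

```python
from itertools import combinations

def partitions(people):
    """Enumerate every way of splitting `people` into conversational groups."""
    if not people:
        yield []
        return
    first, rest = people[0], people[1:]
    for part in partitions(rest):
        for i in range(len(part)):           # join an existing group
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part               # or start a new group

def most_probable_configuration(people, weight):
    """Score each configuration by combining pair-wise weights, then pick
    the configuration with the highest probability."""
    def score(config):
        same = {frozenset(p) for g in config for p in combinations(g, 2)}
        s = 1.0
        for pair, w in weight.items():
            s *= w if pair in same else 1.0 - w
        return s
    return max(partitions(people), key=score)

# Hypothetical pair-wise likelihoods (invented for illustration)
w = {frozenset(p): v for p, v in [
    (("ann", "bob"), 0.9), (("ann", "eve"), 0.1), (("bob", "eve"), 0.2)]}
best = most_probable_configuration(["ann", "bob", "eve"], w)
```

With these weights, grouping ann with bob while leaving eve alone scores 0.9 × 0.9 × 0.8 = 0.648, higher than any other partition of the three.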
  • Patent number: 9232180
    Abstract: A method and apparatus for controlling data transmission via user-maintained modes is provided. A first audio data stream is recorded on a transmitting electronic apparatus. A second audio data stream is stored on the transmitting electronic apparatus. Transmission of one of the first and the second audio data streams is controlled via a first user-maintained mode when at least the other of the first and the second audio data streams is being transmitted to the electronic apparatus. The one of the first and the second audio data streams is transmitted to a receiving electronic apparatus and the other of the first and the second audio data streams is suspended.
    Type: Grant
    Filed: October 27, 2010
    Date of Patent: January 5, 2016
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Paul M. Aoki, Rebecca E. Grinter, Margaret H. Szymanski, James D. Thornton, Allison G. Woodruff
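
The user-maintained mode can be sketched as per-timeslice stream selection: while the mode is held, the stored stream is transmitted and the live stream suspended. The chunk labels and mode predicate below are illustrative:

```python
def mix_outgoing(live, stored, mode_held_at):
    """For each timeslice, transmit the stored audio stream while the user
    holds the mode (suspending the live stream); otherwise transmit the
    live stream. Streams are modeled as equal-length lists of chunks."""
    out = []
    for i, (live_chunk, stored_chunk) in enumerate(zip(live, stored)):
        out.append(stored_chunk if mode_held_at(i) else live_chunk)
    return out

live = ["L0", "L1", "L2", "L3"]
stored = ["S0", "S1", "S2", "S3"]
sent = mix_outgoing(live, stored, lambda i: 1 <= i <= 2)  # mode held mid-call
```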
  • Publication number: 20140136508
    Abstract: A system and method for providing Web site navigation recommendations is provided. A Web page of interest is identified as a destination Web page. A domain of Web pages related to the destination Web page is determined. Information is extracted from each Web page in the domain and a recommendation comprising instructions for navigating to the destination Web page is generated based on the extracted information.
    Type: Application
    Filed: November 9, 2012
    Publication date: May 15, 2014
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Kristian Lyngbaek, Lester D. Nelson, Eric A. Bier, Margaret H. Szymanski
  • Patent number: 8676572
    Abstract: A computer-implemented system and method for enhancing audio to individuals participating in a conversation is provided. Audio data for individuals participating in one or more conversations is analyzed. Possible conversational configurations of the individuals are generated based on the audio data, and each possible conversational configuration includes one or more subconfigurations of at least two of the individuals. A probability weight is assigned to each of the subconfigurations and includes a likelihood that the individuals of that subconfiguration are participating in one of the conversations. A probability of each possible conversational configuration is determined by combining the probability weights for the subconfigurations of that possible conversational configuration. The possible conversational configuration with the highest probability is selected as a most probable configuration. The individuals participating in the conversations are determined based on the most probable configuration.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: March 18, 2014
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Paul M. Aoki, Margaret H. Szymanski, James D. Thornton, Daniel H. Wilson, Allison G. Woodruff
  • Publication number: 20140046665
    Abstract: A system and method for enhancing visual representation to individuals participating in a conversation is provided. Visual data for a plurality of individuals participating in one or more conversations is analyzed. Possible conversational configurations of the individuals are generated. Each possible conversational configuration includes one or more pair-wise probabilities of at least two of the individuals. A probability weight is assigned to each of the pair-wise probabilities, representing a likelihood that the individuals of that pair-wise probability are participating in a conversation. A probability of each possible conversational configuration is determined by combining the probability weights for the pair-wise probabilities of that possible conversational configuration. The possible conversational configuration with the highest probability is selected as a most probable configuration.
    Type: Application
    Filed: October 18, 2013
    Publication date: February 13, 2014
    Applicant: Palo Alto Research Center Incorporated
    Inventors: Paul M. Aoki, Margaret H. Szymanski, James Thornton, Daniel H. Wilson, Allison G. Woodruff
  • Publication number: 20130204616
    Abstract: A computer-implemented system and method for enhancing audio to individuals participating in a conversation is provided. Audio data for individuals participating in one or more conversations is analyzed. Possible conversational configurations of the individuals are generated based on the audio data, and each possible conversational configuration includes one or more subconfigurations of at least two of the individuals. A probability weight is assigned to each of the subconfigurations and includes a likelihood that the individuals of that subconfiguration are participating in one of the conversations. A probability of each possible conversational configuration is determined by combining the probability weights for the subconfigurations of that possible conversational configuration. The possible conversational configuration with the highest probability is selected as a most probable configuration. The individuals participating in the conversations are determined based on the most probable configuration.
    Type: Application
    Filed: March 14, 2013
    Publication date: August 8, 2013
    Applicant: PALO ALTO RESEARCH CENTER INCORPORATED
    Inventors: Paul M. Aoki, Margaret H. Szymanski, James D. Thornton, Daniel H. Wilson, Allison G. Woodruff
  • Patent number: 8463600
    Abstract: A system and method for automatically adjusting floor controls based on conversational characteristics is provided. Audio streams are received, which each originate from an audio source. Floor controls for a current configuration including at least a portion of the audio streams are maintained. Conversational characteristics shared by two or more of the audio sources are determined. Possible configurations for the audio streams are identified based on the conversational characteristics. An analysis of the current configuration and the possible configurations is performed. A change threshold comprising a minimum number of timeslices for at least one of the current configuration and one of the possible configurations is applied to the analysis. When the analysis satisfies the change threshold, the floor controls are automatically adjusted. The audio streams are mixed into one or more outputs based on the adjusted floor controls.
    Type: Grant
    Filed: February 27, 2012
    Date of Patent: June 11, 2013
    Assignee: Palo Alto Research Center Incorporated
    Inventors: Paul Masami Aoki, Margaret H. Szymanski, James D. Thornton, Daniel H. Wilson, Allison Gyle Woodruff
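
The change threshold described in this abstract is a hysteresis rule: the floor configuration switches only after a challenger has been the most likely configuration for a minimum number of consecutive timeslices. The sketch below abstracts configurations to labels and assumes the per-timeslice winner has already been computed from the conversational characteristics:

```python
def adjust_floor(timeslices, change_threshold=3):
    """Hysteresis over per-timeslice best configurations: switch the active
    floor configuration only after a challenger has won for at least
    `change_threshold` consecutive timeslices."""
    active, challenger, run = None, None, 0
    history = []
    for best in timeslices:          # `best` = most likely config this slice
        if active is None:
            active, run = best, 0
        elif best != active:
            if best == challenger:
                run += 1
            else:
                challenger, run = best, 1
            if run >= change_threshold:
                active, challenger, run = best, None, 0
        else:                        # current config reconfirmed
            challenger, run = None, 0
        history.append(active)
    return history

slices = ["A", "A", "B", "A", "B", "B", "B", "B"]  # "A"/"B" = configurations
out = adjust_floor(slices, change_threshold=3)
```

The isolated "B" at the third slice does not flip the floor; only the later run of three consecutive "B" slices does, which keeps the mixed output stable against momentary misclassifications.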