Patents by Inventor Margaret H. Szymanski
Margaret H. Szymanski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11817086
Abstract: Digitized media is received that records a conversation between individuals. Cues are extracted from the digitized media that indicate properties of the conversation. The cues are entered as training data into a machine learning module to create a trained machine learning model. The trained machine learning model is used in a processor to detect misalignments in subsequent digitized conversations.
Type: Grant
Filed: March 13, 2020
Date of Patent: November 14, 2023
Assignee: XEROX CORPORATION
Inventors: Evgeniy Bart, Margaret H. Szymanski
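The pipeline this abstract describes (extract cues from recorded conversations, use them as training data, then flag misalignments in later conversations) can be sketched roughly as follows. The cue names, the nearest-centroid classifier, and all numbers are illustrative assumptions, not the patented method:

```python
# Hypothetical sketch: conversational cues become feature vectors, a minimal
# nearest-centroid model is trained on labeled examples, and new turns are
# classified against the learned centroids.

def extract_cues(turn):
    """Map one conversational turn (a dict) to a cue feature vector."""
    return (turn["pause_sec"], turn["overlap_sec"], turn["restarts"])

def train(turns, labels):
    """Train a minimal nearest-centroid model: one centroid per class."""
    sums, counts = {}, {}
    for turn, label in zip(turns, labels):
        cues = extract_cues(turn)
        acc = sums.setdefault(label, [0.0] * len(cues))
        for i, c in enumerate(cues):
            acc[i] += c
        counts[label] = counts.get(label, 0) + 1
    return {lbl: tuple(s / counts[lbl] for s in acc) for lbl, acc in sums.items()}

def detect_misalignment(model, turn):
    """Classify a new turn by its nearest class centroid."""
    cues = extract_cues(turn)
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(cues, centroid))
    return min(model, key=lambda lbl: dist(model[lbl]))

training = [
    {"pause_sec": 0.2, "overlap_sec": 0.0, "restarts": 0},  # smooth exchange
    {"pause_sec": 2.5, "overlap_sec": 1.0, "restarts": 3},  # troubled exchange
]
model = train(training, ["aligned", "misaligned"])
print(detect_misalignment(model, {"pause_sec": 2.0, "overlap_sec": 0.8, "restarts": 2}))
```

A production system would of course learn from many labeled conversations and a richer cue set; the point here is only the train-then-detect shape of the claim.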
-
Publication number: 20230325453
Abstract: A system and method for providing Website navigation recommendations is provided. A Web page of interest is identified as a destination Web page. A domain of Web pages related to the destination Web page is determined. Information is extracted from each Web page in the domain and a recommendation comprising instructions for navigating to the destination Web page is generated based on the extracted information.
Type: Application
Filed: June 13, 2023
Publication date: October 12, 2023
Applicant: PALO ALTO RESEARCH CENTER INCORPORATED
Inventors: Kristian Lyngbaek, Lester D. Nelson, Eric A. Bier, Margaret H. Szymanski
-
Patent number: 11727218
Abstract: According to one embodiment, a computer-implemented method for dynamically modifying placeholder text in a conversational interface includes: processing a conversation log reflecting a conversation between a human user and an automated agent; determining, based at least in part on the processing, one or more capabilities of the automated agent and/or a trajectory of the conversation; and dynamically modifying placeholder text in the conversational interface based at least in part on the one or more capabilities of the automated agent, the trajectory of the conversation, or both. Other embodiments in the form of systems and computer program products are also disclosed.
Type: Grant
Filed: October 26, 2018
Date of Patent: August 15, 2023
Assignee: International Business Machines Corporation
Inventors: Raphael I. Arar, Robert J. Moore, Guangjie Ren, Margaret H. Szymanski, Eric Y. Liu
-
Patent number: 11334709
Abstract: A computer-implemented method according to one embodiment includes identifying a topic associated with a received notification, determining a plurality of policies associated with the topic, determining a current environmental context, determining a generalization level utilizing the plurality of policies and the current environmental context, modifying the notification based on the generalization level, and presenting the modified notification.
Type: Grant
Filed: November 13, 2018
Date of Patent: May 17, 2022
Assignee: International Business Machines Corporation
Inventors: Nathalie Baracaldo-Angel, Margaret H. Szymanski, Eric K. Butler, Heiko H. Ludwig
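As a rough illustration of the claimed flow (identify the topic, look up its policies, combine them with the environmental context to pick a generalization level, then modify the notification), the sketch below uses an invented policy table, invented topic keywords, and three invented generalization levels:

```python
# Illustrative sketch: a notification about a sensitive topic is generalized
# more aggressively in a public context than in a private one. All tables and
# keywords here are assumptions for the example, not the patent's own.

POLICIES = {
    "health": {"public": 2, "private": 0},    # generalization level per context
    "calendar": {"public": 1, "private": 0},
}

GENERALIZATIONS = {
    0: lambda text, topic: text,                           # verbatim
    1: lambda text, topic: f"New {topic} notification",    # topic only
    2: lambda text, topic: "You have a new notification",  # fully generic
}

def identify_topic(notification):
    """Toy topic identification from keywords."""
    if "appointment" in notification or "doctor" in notification:
        return "health"
    return "calendar"

def present(notification, context):
    """Pick a generalization level from policy and context, then apply it."""
    topic = identify_topic(notification)
    level = POLICIES[topic][context]
    return GENERALIZATIONS[level](notification, topic)

print(present("Reminder: doctor appointment at 3pm", "public"))
print(present("Reminder: doctor appointment at 3pm", "private"))
```

The same notification thus surfaces verbatim in a private context but is stripped to a generic alert in public, which is the behavior the abstract's generalization level controls.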
-
Publication number: 20210287664
Abstract: Digitized media is received that records a conversation between individuals. Cues are extracted from the digitized media that indicate properties of the conversation. The cues are entered as training data into a machine learning module to create a trained machine learning model. The trained machine learning model is used in a processor to detect misalignments in subsequent digitized conversations.
Type: Application
Filed: March 13, 2020
Publication date: September 16, 2021
Inventors: Evgeniy Bart, Margaret H. Szymanski
-
Patent number: 10936823
Abstract: One embodiment provides a method comprising generating a conversational interface for display on an electronic device. The conversational interface facilitates a communication session between a user and an automated conversational agent. The method further comprises performing a real-time analysis of a portion of a user input in response to the user constructing the user input during the communication session, and updating the conversational interface to include real-time feedback indicative of whether the automated conversational agent understands the portion of the user input based on the analysis. The real-time feedback allows the user to adjust the user input before completing the user input.
Type: Grant
Filed: October 30, 2018
Date of Patent: March 2, 2021
Assignee: International Business Machines Corporation
Inventors: Robert J. Moore, Raphael Arar, Guangjie Ren, Margaret H. Szymanski
-
Patent number: 10832679
Abstract: One embodiment provides a computer program product for improving accuracy of a transcript of a spoken interaction. The computer program product comprises a computer readable storage medium having program instructions embodied therewith. The program instructions are executable by a processor to cause the processor to identify a plurality of patterns in the transcript. The plurality of patterns are indicative of a group of acoustically similar words in the transcript and a corresponding local, sequential context of the group of acoustically similar words. The program instructions are further executable by the processor to cause the processor to predict conditional probabilities for the group of acoustically similar words based on a predictive model and the plurality of patterns, detect one or more transcription errors in the transcript based on the conditional probabilities, and correct the one or more transcription errors by applying a multi-pass correction on the one or more transcription errors.
Type: Grant
Filed: November 20, 2018
Date of Patent: November 10, 2020
Assignee: International Business Machines Corporation
Inventors: Margaret H. Szymanski, Robert J. Moore, Sunhwan Lee, Pawan Chowdhary, Shun Jiang, Guangjie Ren, Raphael Arar
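The core move in this abstract, treating acoustically similar words as a group and re-picking each occurrence by its conditional probability given the local context, then repeating until stable, can be sketched as follows. The confusion group {"two", "to", "too"} and the bigram table are invented for the example; the patent's predictive model would be trained, not hand-written:

```python
# Hedged sketch: each confusable word is replaced by whichever member of its
# confusion group has the highest probability given the previous word, and
# the pass repeats (multi-pass correction) until the transcript stabilizes.

CONFUSION_GROUPS = [{"two", "to", "too"}]

# P(word | previous word), estimated here from made-up bigram statistics.
BIGRAM = {
    ("want", "to"): 0.9, ("want", "two"): 0.05, ("want", "too"): 0.05,
    ("chapter", "two"): 0.8, ("chapter", "to"): 0.1, ("chapter", "too"): 0.1,
}

def correct_pass(tokens):
    """One correction pass: re-pick each confusable word from its group."""
    out = list(tokens)
    for i in range(1, len(out)):
        for group in CONFUSION_GROUPS:
            if out[i] in group:
                out[i] = max(group, key=lambda w: BIGRAM.get((out[i - 1], w), 0.0))
    return out

def correct(tokens, passes=3):
    """Multi-pass correction: repeat until stable or the pass budget runs out."""
    for _ in range(passes):
        fixed = correct_pass(tokens)
        if fixed == tokens:
            break
        tokens = fixed
    return tokens

print(correct(["i", "want", "two", "read", "chapter", "to"]))
```

Both misrecognitions are repaired in the first pass; the second pass only confirms that nothing else changes, which is what terminates the loop.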
-
Publication number: 20200160866
Abstract: One embodiment provides a computer program product for improving accuracy of a transcript of a spoken interaction. The computer program product comprises a computer readable storage medium having program instructions embodied therewith. The program instructions are executable by a processor to cause the processor to identify a plurality of patterns in the transcript. The plurality of patterns are indicative of a group of acoustically similar words in the transcript and a corresponding local, sequential context of the group of acoustically similar words. The program instructions are further executable by the processor to cause the processor to predict conditional probabilities for the group of acoustically similar words based on a predictive model and the plurality of patterns, detect one or more transcription errors in the transcript based on the conditional probabilities, and correct the one or more transcription errors by applying a multi-pass correction on the one or more transcription errors.
Type: Application
Filed: November 20, 2018
Publication date: May 21, 2020
Inventors: Margaret H. Szymanski, Robert J. Moore, Sunhwan Lee, Pawan Chowdhary, Shun Jiang, Guangjie Ren, Raphael Arar
-
Publication number: 20200151240
Abstract: A computer-implemented method according to one embodiment includes identifying a topic associated with a received notification, determining a plurality of policies associated with the topic, determining a current environmental context, determining a generalization level utilizing the plurality of policies and the current environmental context, modifying the notification based on the generalization level, and presenting the modified notification.
Type: Application
Filed: November 13, 2018
Publication date: May 14, 2020
Inventors: Nathalie Baracaldo-Angel, Margaret H. Szymanski, Eric K. Butler, Heiko H. Ludwig
-
Publication number: 20200134021
Abstract: One embodiment provides a method comprising generating a conversational interface for display on an electronic device. The conversational interface facilitates a communication session between a user and an automated conversational agent. The method further comprises performing a real-time analysis of a portion of a user input in response to the user constructing the user input during the communication session, and updating the conversational interface to include real-time feedback indicative of whether the automated conversational agent understands the portion of the user input based on the analysis. The real-time feedback allows the user to adjust the user input before completing the user input.
Type: Application
Filed: October 30, 2018
Publication date: April 30, 2020
Inventors: Robert J. Moore, Raphael Arar, Guangjie Ren, Margaret H. Szymanski
-
Publication number: 20200134017
Abstract: According to one embodiment, a computer-implemented method for dynamically modifying placeholder text in a conversational interface includes: processing a conversation log reflecting a conversation between a human user and an automated agent; determining, based at least in part on the processing, one or more capabilities of the automated agent and/or a trajectory of the conversation; and dynamically modifying placeholder text in the conversational interface based at least in part on the one or more capabilities of the automated agent, the trajectory of the conversation, or both. Other embodiments in the form of systems and computer program products are also disclosed.
Type: Application
Filed: October 26, 2018
Publication date: April 30, 2020
Inventors: Raphael I. Arar, Robert J. Moore, Guangjie Ren, Margaret H. Szymanski, Eric Y. Liu
-
Patent number: 10592611
Abstract: Embodiments of the present invention provide a system for automatically extracting conversational structure from a voice record based on lexical and acoustic features. The system also aggregates business-relevant statistics and entities from a collection of spoken conversations. The system may infer a coarse-level conversational structure based on fine-level activities identified from extracted acoustic features. The system improves significantly over previous systems by extracting structure based on lexical and acoustic features. This enables extracting conversational structure on a larger scale and finer level of detail than previous systems, and can feed an analytics and business intelligence platform, e.g. for customer service phone calls. During operation, the system obtains a voice record. The system then extracts a lexical feature using automatic speech recognition (ASR). The system extracts an acoustic feature.
Type: Grant
Filed: October 24, 2016
Date of Patent: March 17, 2020
Assignee: Conduent Business Services, LLC
Inventors: Jesse Vig, Harish Arsikere, Margaret H. Szymanski, Luke R. Plurkowski, Kyle D. Dent, Daniel G. Bobrow, Daniel Davies, Eric Saund
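The inference step this abstract mentions, deriving a coarse-level conversational structure from fine-level activities, might look roughly like the sketch below. The keyword cue lists, activity labels, and phase names (opening / body / closing) are assumptions for illustration; the patent works over lexical and acoustic features, not keyword matching:

```python
# Illustrative sketch: each utterance is tagged with a fine-level activity
# from simple lexical cues, and the activity sequence is then mapped onto a
# coarse phase structure for the whole conversation.

FINE_CUES = {
    "greet": {"hello", "hi", "thanks for calling"},
    "farewell": {"goodbye", "bye", "thank you"},
}

def fine_activity(utterance):
    """Tag one utterance with a fine-level activity via keyword cues."""
    text = utterance.lower()
    for activity, cues in FINE_CUES.items():
        if any(cue in text for cue in cues):
            return activity
    return "talk"

def coarse_structure(utterances):
    """Map the fine-level activity sequence onto coarse phases."""
    phases, current = [], "body"
    for u in utterances:
        act = fine_activity(u)
        if act == "greet":
            current = "opening"
        elif act == "farewell":
            current = "closing"
        elif current == "opening":
            current = "body"          # first non-greeting turn starts the body
        phases.append((act, current))
    return phases

call = [
    "Hello, thanks for calling",
    "My printer is broken",
    "Let me check",
    "Goodbye",
]
print(coarse_structure(call))
```

Even this toy version shows the layered design: fine activities are cheap, local decisions, while the coarse structure is recovered from how they run in sequence.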
-
Publication number: 20180113854
Abstract: Embodiments of the present invention provide a system for automatically extracting conversational structure from a voice record based on lexical and acoustic features. The system also aggregates business-relevant statistics and entities from a collection of spoken conversations. The system may infer a coarse-level conversational structure based on fine-level activities identified from extracted acoustic features. The system improves significantly over previous systems by extracting structure based on lexical and acoustic features. This enables extracting conversational structure on a larger scale and finer level of detail than previous systems, and can feed an analytics and business intelligence platform, e.g. for customer service phone calls. During operation, the system obtains a voice record. The system then extracts a lexical feature using automatic speech recognition (ASR). The system extracts an acoustic feature.
Type: Application
Filed: October 24, 2016
Publication date: April 26, 2018
Applicant: Palo Alto Research Center Incorporated
Inventors: Jesse Vig, Harish Arsikere, Margaret H. Szymanski, Luke R. Plurkowski, Kyle D. Dent, Daniel G. Bobrow, Daniel Davies, Eric Saund
-
Patent number: 9412377
Abstract: A system and method for enhancing visual representation to individuals participating in a conversation is provided. Visual data for a plurality of individuals participating in one or more conversations is analyzed. Possible conversational configurations of the individuals are generated. Each possible conversational configuration includes one or more pair-wise probabilities of at least two of the individuals. A probability weight is assigned to each of the pair-wise probabilities having a likelihood that the individuals of that pair-wise probability are participating in a conversation. A probability of each possible conversational configuration is determined by combining the probability weights for the pair-wise probabilities of that possible conversational configuration. The possible conversational configuration with the highest probability is selected as a most probable configuration.
Type: Grant
Filed: October 18, 2013
Date of Patent: August 9, 2016
Assignee: III HOLDINGS 6, LLC
Inventors: Paul M. Aoki, Margaret H. Szymanski, James Thornton, Daniel H. Wilson, Allison G. Woodruff
-
Patent number: 9232180
Abstract: A method and apparatus for controlling data transmission via user-maintained modes is provided. A first audio data stream is recorded on a transmitting electronic apparatus. A second audio data stream is stored on the transmitting electronic apparatus. Transmission of one of the first and the second audio data streams is controlled via a first user-maintained mode when at least the other of the first and the second audio data streams is being transmitted to the electronic apparatus. The one of the first and the second audio data streams is transmitted to a receiving electronic apparatus and the other of the first and the second audio data streams is suspended.
Type: Grant
Filed: October 27, 2010
Date of Patent: January 5, 2016
Assignee: Palo Alto Research Center Incorporated
Inventors: Paul M. Aoki, Rebecca E. Grinter, Margaret H. Szymanski, James D. Thornton, Allison G. Woodruff
-
Publication number: 20140136508
Abstract: A system and method for providing Web site navigation recommendations is provided. A Web page of interest is identified as a destination Web page. A domain of Web pages related to the destination Web page is determined. Information is extracted from each Web page in the domain and a recommendation comprising instructions for navigating to the destination Web page is generated based on the extracted information.
Type: Application
Filed: November 9, 2012
Publication date: May 15, 2014
Applicant: Palo Alto Research Center Incorporated
Inventors: Kristian Lyngbaek, Lester D. Nelson, Eric A. Bier, Margaret H. Szymanski
-
Patent number: 8676572
Abstract: A computer-implemented system and method for enhancing audio to individuals participating in a conversation is provided. Audio data for individuals participating in one or more conversations is analyzed. Possible conversational configurations of the individuals are generated based on the audio data, and each possible conversational configuration includes one or more subconfigurations of at least two of the individuals. A probability weight is assigned to each of the subconfigurations and includes a likelihood that the individuals of that subconfiguration are participating in one of the conversations. A probability of each possible conversational configuration is determined by combining the probability weights for the subconfigurations of that possible conversational configuration. The possible conversational configuration with the highest probability is selected as a most probable configuration. The individuals participating in the conversations are determined based on the most probable configuration.
Type: Grant
Filed: March 14, 2013
Date of Patent: March 18, 2014
Assignee: Palo Alto Research Center Incorporated
Inventors: Paul M. Aoki, Margaret H. Szymanski, James D. Thornton, Daniel H. Wilson, Allison G. Woodruff
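The selection step shared by this family of patents (score each candidate grouping of individuals by combining pairwise weights, then pick the highest-scoring configuration) can be sketched as below. The weights, the individuals, and the choice of multiplication as the combining rule are assumptions for the example:

```python
# Minimal sketch: a configuration is a partition of individuals into
# conversations; its score combines (here: multiplies) the weight of every
# pair that shares a conversation, and the best-scoring configuration wins.

from itertools import combinations

# Invented pairwise likelihoods that two individuals are talking to each other.
PAIR_WEIGHT = {
    frozenset({"A", "B"}): 0.9,
    frozenset({"A", "C"}): 0.2,
    frozenset({"B", "C"}): 0.1,
    frozenset({"C", "D"}): 0.8,
    frozenset({"A", "D"}): 0.1,
    frozenset({"B", "D"}): 0.1,
}

def score(configuration):
    """Combine pairwise weights for every pair inside each conversation."""
    total = 1.0
    for conversation in configuration:
        for pair in combinations(sorted(conversation), 2):
            total *= PAIR_WEIGHT[frozenset(pair)]
    return total

def most_probable(configurations):
    """Select the candidate configuration with the highest combined score."""
    return max(configurations, key=score)

candidates = [
    [{"A", "B"}, {"C", "D"}],   # two separate conversations
    [{"A", "B", "C", "D"}],     # one four-way conversation
    [{"A", "C"}, {"B", "D"}],
]
print(most_probable(candidates))
```

With these weights the split into {A, B} and {C, D} wins easily, because the four-way grouping is penalized by every weak cross-pair it has to include.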
-
Publication number: 20140046665
Abstract: A system and method for enhancing visual representation to individuals participating in a conversation is provided. Visual data for a plurality of individuals participating in one or more conversations is analyzed. Possible conversational configurations of the individuals are generated. Each possible conversational configuration includes one or more pair-wise probabilities of at least two of the individuals. A probability weight is assigned to each of the pair-wise probabilities having a likelihood that the individuals of that pair-wise probability are participating in a conversation. A probability of each possible conversational configuration is determined by combining the probability weights for the pair-wise probabilities of that possible conversational configuration. The possible conversational configuration with the highest probability is selected as a most probable configuration.
Type: Application
Filed: October 18, 2013
Publication date: February 13, 2014
Applicant: Palo Alto Research Center Incorporated
Inventors: Paul M. Aoki, Margaret H. Szymanski, James Thornton, Daniel H. Wilson, Allison G. Woodruff
-
Publication number: 20130204616
Abstract: A computer-implemented system and method for enhancing audio to individuals participating in a conversation is provided. Audio data for individuals participating in one or more conversations is analyzed. Possible conversational configurations of the individuals are generated based on the audio data, and each possible conversational configuration includes one or more subconfigurations of at least two of the individuals. A probability weight is assigned to each of the subconfigurations and includes a likelihood that the individuals of that subconfiguration are participating in one of the conversations. A probability of each possible conversational configuration is determined by combining the probability weights for the subconfigurations of that possible conversational configuration. The possible conversational configuration with the highest probability is selected as a most probable configuration. The individuals participating in the conversations are determined based on the most probable configuration.
Type: Application
Filed: March 14, 2013
Publication date: August 8, 2013
Applicant: PALO ALTO RESEARCH CENTER INCORPORATED
Inventors: Paul M. Aoki, Margaret H. Szymanski, James D. Thornton, Daniel H. Wilson, Allison G. Woodruff
-
Patent number: 8463600
Abstract: A system and method for automatically adjusting floor controls based on conversational characteristics is provided. Audio streams are received, which each originate from an audio source. Floor controls for a current configuration including at least a portion of the audio streams are maintained. Conversational characteristics shared by two or more of the audio sources are determined. Possible configurations for the audio streams are identified based on the conversational characteristics. An analysis of the current configuration and the possible configurations is performed. A change threshold comprising a minimum number of timeslices for at least one of the current configuration and one of the possible configurations is applied to the analysis. When the analysis satisfies the change threshold, the floor controls are automatically adjusted. The audio streams are mixed into one or more outputs based on the adjusted floor controls.
Type: Grant
Filed: February 27, 2012
Date of Patent: June 11, 2013
Assignee: Palo Alto Research Center Incorporated
Inventors: Paul Masami Aoki, Margaret H. Szymanski, James D. Thornton, Daniel H. Wilson, Allison Gyle Woodruff
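The change threshold in this abstract acts as hysteresis: a rival configuration only takes over after winning the per-timeslice analysis for a minimum number of timeslices, so brief fluctuations do not reshuffle the floor. A minimal sketch of that gating logic, with invented names and a simplified "winner per timeslice" input:

```python
# Hedged sketch: the per-timeslice analysis is assumed to have already picked
# a best-scoring configuration for each timeslice; this function only applies
# the change threshold that decides when the floor controls actually switch.

def adjust_floor_controls(current, winners_per_timeslice, change_threshold):
    """Return the active configuration after applying the change threshold."""
    streak, candidate = 0, None
    for winner in winners_per_timeslice:
        if winner == current:
            streak, candidate = 0, None          # current config still holds
        elif winner == candidate:
            streak += 1                          # challenger keeps winning
        else:
            candidate, streak = winner, 1        # new challenger appears
        if streak >= change_threshold:           # sustained change: switch
            current, streak, candidate = winner, 0, None
    return current

# A one-timeslice blip ("B") is ignored; a sustained run of "C" switches.
print(adjust_floor_controls("A", ["A", "B", "A", "C", "C", "C"], change_threshold=3))
```

Without the threshold the floor would flip on the lone "B" timeslice; with it, only the three-timeslice run of "C" is treated as a real configuration change.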