Patents by Inventor John H. Meiners

John H. Meiners has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220180257
    Abstract: A communication between parties over a network may be performed to complete a specific workflow and thereby complete a task. Portions of the workflow may be performed during the communication and others after the communication has ended. However, a standardized workflow may have variations, such as when portions intended to be completed after the call has ended were instead completed during the communication, or when an agent provides additional or alternative tasks. By analyzing the conversation, such as with a neural network or other artificial intelligence system, the portion of the workflow to be completed after the communication has ended may be produced so that it accurately reflects the tasks that remain.
    Type: Application
    Filed: December 8, 2020
    Publication date: June 9, 2022
    Inventors: Valentine C. Matula, Manish Negi, John H. Meiners
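
As a rough illustration of the idea in the abstract above, the sketch below uses a trivial keyword matcher as a stand-in for the neural-network or other AI analysis of the conversation; the workflow steps, phrases, and function names are all invented for the example, not taken from the application.

```python
# Illustrative sketch only: a keyword matcher standing in for the conversation
# analysis described in publication 20220180257 (which could be a neural
# network or other AI system). Steps and phrases are invented.

POST_CALL_WORKFLOW = [
    ("send_confirmation_email", ["i'll email you a confirmation", "confirmation sent"]),
    ("schedule_follow_up", ["i've scheduled a follow-up", "follow-up is booked"]),
    ("update_billing_address", ["address has been updated"]),
]

def remaining_post_call_steps(transcript: str) -> list[str]:
    """Return the portion of the workflow still to be completed after the call.

    A step is dropped when the transcript indicates it was already handled
    during the communication; everything else stays in the post-call portion.
    """
    text = transcript.lower()
    return [
        step
        for step, completion_phrases in POST_CALL_WORKFLOW
        if not any(phrase in text for phrase in completion_phrases)
    ]

if __name__ == "__main__":
    call = "Thanks for calling. I'll email you a confirmation right away."
    print(remaining_post_call_steps(call))
    # ['schedule_follow_up', 'update_billing_address']
```
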
  • Patent number: 8990537
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for managing free chains of compute resources. A system configured to practice the method divides a free chain of compute resources into a usable part (UP), which contains resources available for immediate allocation, and an unusable part (UUP), which contains resources not available for immediate allocation but which become available after a certain minimum number of allocations. The system sorts resources in the UP by block number and maintains a last used object (LUO) vector, indexed by block number, which records the last object in the UP for each block. Each time the system frees a resource, the system adds the freed resource to the tail of the UUP and promotes the oldest resource in the UUP to the UP. This approach manages free chains in a manner that is flaw tolerant while maintaining relatively high performance.
    Type: Grant
    Filed: April 22, 2013
    Date of Patent: March 24, 2015
    Assignee: Avaya Inc.
    Inventor: John H. Meiners
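
A minimal sketch of the free-chain scheme this abstract (and the related publications below) describes, assuming each resource is identified by a (block number, object id) pair; the class name, the UUP delay length, and the bookkeeping details are invented, and the patented implementation may differ.

```python
import bisect
from collections import deque

class FreeChain:
    """Sketch of a free chain split into a usable part (UP) and unusable part (UUP)."""

    UUP_MIN = 4  # invented: how many frees a resource waits in the UUP

    def __init__(self, resources):
        # Usable part: available for immediate allocation, sorted by block number.
        self.up = sorted(resources)
        # Unusable part: recently freed resources, not yet available (FIFO).
        self.uup = deque()
        # Last used object (LUO) vector, indexed by block number: the last
        # object recorded in the UP for each block.
        self.luo = {block: obj for block, obj in self.up}

    def allocate(self):
        """Hand out a resource from the usable part."""
        if not self.up:
            raise MemoryError("no resource available for immediate allocation")
        block, obj = self.up.pop(0)
        if self.luo.get(block) == obj:
            del self.luo[block]  # forget the LUO entry if we just removed it
        return block, obj

    def free(self, resource):
        """Return a resource; it waits in the UUP before re-entering the UP."""
        self.uup.append(resource)              # freed resource goes to the UUP tail
        if len(self.uup) > self.UUP_MIN:
            promoted = self.uup.popleft()      # oldest UUP resource is promoted
            bisect.insort(self.up, promoted)   # keep the UP sorted by block number
            block, obj = promoted
            self.luo[block] = obj              # update the LUO vector for this block
```

Delaying reuse this way means a stray late or double free tends to land on a resource still parked in the UUP rather than one that has already been re-allocated, which is the flaw tolerance the abstract refers to.
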
  • Patent number: 8752054
    Abstract: An apparatus and methods are disclosed for intelligently determining when to merge transactions to backup storage. In particular, in accordance with the illustrative embodiment, queued transactions may be merged based on a variety of criteria, including, but not limited to, one or more of the following: the number of queued transactions; the rate of growth of the number of queued transactions; the calendrical time; estimates of the time required to execute the individual transactions; a measure of importance of the individual transactions; the transaction types of the individual transactions; a measure of importance of one or more data items updated by the individual transactions; a measure of availability of one or more resources; a current estimate of the time penalty associated with shadowing a page of memory; and the probability of rollback for the individual transactions and for the merged transaction.
    Type: Grant
    Filed: March 11, 2010
    Date of Patent: June 10, 2014
    Assignee: Avaya Inc.
    Inventors: Jon Louis Bentley, Frank John Boyle, III, Anjur Sundaresan Krishnakumar, Parameshwaran Krishnan, John H. Meiners, Navjot Singh, Shalini Yajnik
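
As a toy illustration of the merge decision this abstract describes, the sketch below uses just two of the listed criteria, the number of queued transactions and its rate of growth; the thresholds and names are invented, and a real implementation would weigh the other criteria as well.

```python
import time

MAX_QUEUED = 100          # invented threshold on queue depth
MAX_GROWTH_PER_SEC = 20   # invented threshold on queue growth rate

class BackupQueue:
    """Sketch: decide when to merge queued transactions into one backup write."""

    def __init__(self):
        self.pending = []
        self._last_len = 0
        self._last_check = time.monotonic()

    def enqueue(self, txn):
        self.pending.append(txn)

    def should_merge(self):
        # Merge when the queue is deep or growing quickly; the other criteria
        # from the abstract (importance, rollback probability, ...) are omitted.
        now = time.monotonic()
        elapsed = max(now - self._last_check, 1e-6)
        growth = (len(self.pending) - self._last_len) / elapsed
        self._last_len, self._last_check = len(self.pending), now
        return len(self.pending) > MAX_QUEUED or growth > MAX_GROWTH_PER_SEC

    def flush(self, write_to_backup):
        # Combine all queued transactions into a single write to backup storage.
        if self.pending:
            write_to_backup(self.pending)
            self.pending = []
```
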
  • Publication number: 20130238866
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for managing free chains of compute resources. A system configured to practice the method divides a free chain of compute resources into a usable part (UP), which contains resources available for immediate allocation, and an unusable part (UUP), which contains resources not available for immediate allocation but which become available after a certain minimum number of allocations. The system sorts resources in the UP by block number and maintains a last used object (LUO) vector, indexed by block number, which records the last object in the UP for each block. Each time the system frees a resource, the system adds the freed resource to the tail of the UUP and promotes the oldest resource in the UUP to the UP. This approach manages free chains in a manner that is flaw tolerant while maintaining relatively high performance.
    Type: Application
    Filed: April 22, 2013
    Publication date: September 12, 2013
    Applicant: Avaya Inc.
    Inventor: John H. Meiners
  • Patent number: 8499133
    Abstract: An apparatus and method for improving performance in high-availability systems are disclosed. In accordance with the illustrative embodiment, pages of memory of a primary system that are to be shadowed are initially copied to a backup system's memory, as well as to a cache in the primary system. A duplication manager process maintains the cache in an intelligent manner that significantly reduces the overhead required to keep the backup system in sync with the primary system, as well as the cache size needed to achieve a given level of performance. Advantageously, the duplication manager is executed on a different processor core than the application process executing transactions, further improving performance.
    Type: Grant
    Filed: November 12, 2012
    Date of Patent: July 30, 2013
    Assignee: Avaya Inc.
    Inventors: Jon Louis Bentley, Frank John Boyle, Anjur Sundaresan Krishnakumar, Parameshwaran Krishnan, John H. Meiners, Navjot Singh, Shalini Yajnik
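
A minimal sketch of the caching idea in this abstract: a duplication manager that ships a page to the backup only when it differs from the copy it last saw. The digest comparison, cache size, and eviction policy are invented stand-ins, and the separate processor core the abstract mentions is not modeled here.

```python
import hashlib
from collections import OrderedDict

CACHE_SIZE = 1024  # invented: number of page digests the manager remembers

class DuplicationManager:
    """Sketch: copy a shadowed page to the backup only when it has changed."""

    def __init__(self, send_to_backup):
        self.send_to_backup = send_to_backup   # callable(page_id, data)
        self.cache = OrderedDict()             # page_id -> digest, LRU order

    def shadow_page(self, page_id, data: bytes) -> bool:
        digest = hashlib.sha256(data).digest()
        if self.cache.get(page_id) == digest:
            self.cache.move_to_end(page_id)
            return False                       # unchanged: skip the copy
        self.send_to_backup(page_id, data)     # new or changed: ship to backup
        self.cache[page_id] = digest
        self.cache.move_to_end(page_id)
        if len(self.cache) > CACHE_SIZE:
            self.cache.popitem(last=False)     # evict the least recently used
        return True
```
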
  • Patent number: 8429371
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for managing free chains of compute resources. A system configured to practice the method divides a free chain of compute resources into a usable part (UP), which contains resources available for immediate allocation, and an unusable part (UUP), which contains resources not available for immediate allocation but which become available after a certain minimum number of allocations. The system sorts resources in the UP by block number and maintains a last used object (LUO) vector, indexed by block number, which records the last object in the UP for each block. Each time the system frees a resource, the system adds the freed resource to the tail of the UUP and promotes the oldest resource in the UUP to the UP. This approach manages free chains in a manner that is flaw tolerant while maintaining relatively high performance.
    Type: Grant
    Filed: March 23, 2010
    Date of Patent: April 23, 2013
    Assignee: Avaya Inc.
    Inventor: John H. Meiners
  • Patent number: 8312239
    Abstract: An apparatus and method for improving performance in high-availability systems are disclosed. In accordance with the illustrative embodiment, pages of memory of a primary system that are to be shadowed are initially copied to a backup system's memory, as well as to a cache in the primary system. A duplication manager process maintains the cache in an intelligent manner that significantly reduces the overhead required to keep the backup system in sync with the primary system, as well as the cache size needed to achieve a given level of performance. Advantageously, the duplication manager is executed on a different processor core than the application process executing transactions, further improving performance.
    Type: Grant
    Filed: September 30, 2009
    Date of Patent: November 13, 2012
    Assignee: Avaya Inc.
    Inventors: Jon Louis Bentley, Frank John Boyle, III, Anjur Sundaresan Krishnakumar, Parameshwaran Krishnan, John H. Meiners, Navjot Singh, Shalini Yajnik
  • Publication number: 20110238942
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for managing free chains of compute resources. A system configured to practice the method divides a free chain of compute resources into a usable part (UP), which contains resources available for immediate allocation, and an unusable part (UUP), which contains resources not available for immediate allocation but which become available after a certain minimum number of allocations. The system sorts resources in the UP by block number and maintains a last used object (LUO) vector, indexed by block number, which records the last object in the UP for each block. Each time the system frees a resource, the system adds the freed resource to the tail of the UUP and promotes the oldest resource in the UUP to the UP. This approach manages free chains in a manner that is flaw tolerant while maintaining relatively high performance.
    Type: Application
    Filed: March 23, 2010
    Publication date: September 29, 2011
    Applicant: Avaya Inc.
    Inventor: John H. Meiners
  • Publication number: 20110225586
    Abstract: An apparatus and methods are disclosed for intelligently determining when to merge transactions to backup storage. In particular, in accordance with the illustrative embodiment, queued transactions may be merged based on a variety of criteria, including, but not limited to, one or more of the following: the number of queued transactions; the rate of growth of the number of queued transactions; the calendrical time; estimates of the time required to execute the individual transactions; a measure of importance of the individual transactions; the transaction types of the individual transactions; a measure of importance of one or more data items updated by the individual transactions; a measure of availability of one or more resources; a current estimate of the time penalty associated with shadowing a page of memory; and the probability of rollback for the individual transactions and for the merged transaction.
    Type: Application
    Filed: March 11, 2010
    Publication date: September 15, 2011
    Applicant: Avaya Inc.
    Inventors: Jon Louis Bentley, Frank John Boyle, III, Anjur Sundaresan Krishnakumar, Parameshwaran Krishnan, John H. Meiners, Navjot Singh, Shalini Yajnik
  • Patent number: 7983410
    Abstract: A method and an apparatus are disclosed that improve how an incoming call is handled across multiple data-processing systems, without some of the disadvantages of the prior art. Specifically, in a telecommunications call when a called telephone number is not associated with a particular in-service terminal, an enhanced terminating system of the call refrains from transmitting ringback to the calling terminal until an appropriate event occurs, such as the receiving of an asynchronous response from the auxiliary data-processing system to which the call has been directed. Depending on the response received, the terminating system might redirect the call or provide other treatment to the call. In some embodiments of the present invention, the terminating system also transmits a feedback signal to the calling terminal to provide status to the calling party on the progress of the call attempt, wherein the feedback signal is different from the ringback signal.
    Type: Grant
    Filed: June 28, 2005
    Date of Patent: July 19, 2011
    Assignee: Avaya Inc.
    Inventors: Sandra R. Abramson, Stephen M. Milton, C. Joanne McMillen, John H. Meiners
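
A toy sketch of the call handling described above: ringback toward the caller is withheld until the auxiliary data-processing system responds asynchronously (or a timeout fires), while a distinct feedback signal keeps the caller informed. The signal names, timeout, and treatment choices are invented for the example.

```python
import asyncio

AUX_TIMEOUT_S = 2.0  # invented: how long to wait for the auxiliary system

async def handle_incoming_call(send_to_caller, query_aux_system):
    # Give the caller feedback, but do not send ringback yet.
    await send_to_caller("feedback-tone")
    try:
        response = await asyncio.wait_for(query_aux_system(), AUX_TIMEOUT_S)
    except asyncio.TimeoutError:
        response = None
    if response == "accept":
        await send_to_caller("ringback")     # a terminal will be alerted
    elif response == "redirect":
        await send_to_caller("redirecting")  # other treatment for the call
    else:
        await send_to_caller("busy")         # no usable asynchronous response
```
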
  • Publication number: 20110078383
    Abstract: An apparatus and method for improving performance in high-availability systems are disclosed. In accordance with the illustrative embodiment, pages of memory of a primary system that are to be shadowed are initially copied to a backup system's memory, as well as to a cache in the primary system. A duplication manager process maintains the cache in an intelligent manner that significantly reduces the overhead required to keep the backup system in sync with the primary system, as well as the cache size needed to achieve a given level of performance. Advantageously, the duplication manager is executed on a different processor core than the application process executing transactions, further improving performance.
    Type: Application
    Filed: September 30, 2009
    Publication date: March 31, 2011
    Applicant: Avaya Inc.
    Inventors: Jon Louis Bentley, Frank John Boyle, III, Anjur Sundaresan Krishnakumar, Parameshwaran Krishnan, John H. Meiners, Navjot Singh, Shalini Yajnik