Patents by Inventor Jeffrey Clifford Mogul

Jeffrey Clifford Mogul has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20040111706
    Abstract: Analysis of latencies in a multi-node system includes creating call chains that are inferred from a temporal relationship of calls and returns between the nodes.
    Type: Application
    Filed: December 7, 2002
    Publication date: June 10, 2004
    Inventors: Jeffrey Clifford Mogul, Athicha Muthitacharoen, Janet Lynn Wiener
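
The application above (20040111706) infers call chains purely from the temporal relationship of calls and returns observed between nodes. As a minimal sketch of that idea, assuming a hypothetical trace of (event, timestamp, caller, callee) records, a call that begins while another call is still outstanding can be treated as nested inside it:

```python
from dataclasses import dataclass, field

@dataclass
class Call:
    """One inferred call in a chain (hypothetical trace record)."""
    caller: str
    callee: str
    start: float
    end: float = 0.0
    children: list = field(default_factory=list)

def infer_call_chains(events):
    """Infer nested call chains from time-ordered
    ('call' | 'return', timestamp, caller, callee) records.

    A call that starts while another call is still outstanding is
    assumed to be nested inside it -- a purely temporal inference.
    """
    roots, stack = [], []
    for kind, ts, caller, callee in sorted(events, key=lambda e: e[1]):
        if kind == "call":
            node = Call(caller, callee, start=ts)
            (stack[-1].children if stack else roots).append(node)
            stack.append(node)
        else:  # a 'return' closes the most recently opened call
            stack.pop().end = ts
    return roots

# Example: node A calls B; while waiting, B calls C.
trace = [
    ("call",   0.0, "A", "B"),
    ("call",   1.0, "B", "C"),
    ("return", 2.0, "B", "C"),
    ("return", 5.0, "A", "B"),
]
for root in infer_call_chains(trace):
    print(root.callee, "latency:", root.end - root.start)   # B latency: 5.0
```
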
  • Patent number: 6704798
    Abstract: Information returned by a server to a client includes instructions, executable by either a proxy server or the client, for converting the returned information from a first representation to a second representation. The representation conversion may be made by a proxy server, for example, to make transmission of the returned information to the client more efficient, and/or to render the returned information in a format suitable for display by the client. By having the server embed representation conversion information in the query response, the representation conversion can take into account the type and other characteristics of information being returned, as well as the computational and display characteristics of the client.
    Type: Grant
    Filed: February 8, 2000
    Date of Patent: March 9, 2004
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Jeffrey Clifford Mogul
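
Patent 6,704,798 above has the server embed representation-conversion instructions in the information it returns, so that a proxy server or the client can perform the conversion, for example to cut transmission cost or to match the client's display. The abstract does not specify a wire format; the sketch below assumes a simple envelope that names a conversion and its parameters, which a proxy looks up and applies:

```python
import zlib

# Conversions a proxy or client knows how to execute; the server only names one.
CONVERSIONS = {
    "deflate":  lambda body, params: zlib.compress(body, params.get("level", 6)),
    "truncate": lambda body, params: body[: params["max_bytes"]],
}

def server_response(body: bytes) -> dict:
    """Server side: return the information plus embedded conversion
    instructions (a hypothetical envelope, chosen here for illustration)."""
    return {"body": body, "convert": "deflate", "params": {"level": 9}}

def proxy_forward(response: dict, client_converts: bool) -> bytes:
    """Proxy side: apply the server-specified conversion before forwarding,
    unless the client will execute the instructions itself."""
    if client_converts:
        return response["body"]                 # forward unconverted; client converts
    convert = CONVERSIONS[response["convert"]]
    return convert(response["body"], response["params"])

resp = server_response(b"<html>" + b"x" * 10_000 + b"</html>")
print(len(proxy_forward(resp, client_converts=False)))   # far smaller than the original
```
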
  • Patent number: 6647419
    Abstract: A server computer handles multiple data flows between itself and other devices. The server has one or more applications that allocate bandwidth to respective flows and a network protocol stack that implements those allocations. When bandwidth allocations are made in accordance with a bandwidth allocation policy, the protocol stack in the network server enforces the policy. The network protocol stack provides a programming interface that allows the application to specify the bandwidth allocation for a flow. The network protocol stack then enforces this allocation whenever available bandwidth is in short supply.
    Type: Grant
    Filed: September 22, 1999
    Date of Patent: November 11, 2003
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Jeffrey Clifford Mogul
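
Patent 6,647,419 above describes a protocol stack that exposes a programming interface for per-flow bandwidth allocations and enforces them when bandwidth is scarce. The abstract publishes no API, so the sketch below uses a hypothetical set_allocation() call and an ordinary token bucket to illustrate one way such enforcement could look:

```python
import time

class FlowShaper:
    """Hypothetical per-flow rate enforcement via a token bucket.

    Illustrative only: the patent describes an interface by which an
    application tells the protocol stack a flow's allocation; the names
    and mechanism here are assumptions, not the patented design.
    """
    def __init__(self):
        self._allocations = {}   # flow_id -> [bytes_per_sec, tokens, last_refill]

    def set_allocation(self, flow_id, bytes_per_sec):
        """The 'programming interface': record a flow's bandwidth allocation."""
        self._allocations[flow_id] = [bytes_per_sec, bytes_per_sec, time.monotonic()]

    def may_send(self, flow_id, nbytes):
        """Ask the stack whether the flow may transmit nbytes right now."""
        if flow_id not in self._allocations:
            return True                          # unallocated flows are not policed
        rate, tokens, last = self._allocations[flow_id]
        now = time.monotonic()
        tokens = min(rate, tokens + (now - last) * rate)   # refill, capped at a 1 s burst
        if tokens >= nbytes:
            self._allocations[flow_id] = [rate, tokens - nbytes, now]
            return True
        self._allocations[flow_id] = [rate, tokens, now]
        return False

shaper = FlowShaper()
shaper.set_allocation("client-42", bytes_per_sec=125_000)   # roughly 1 Mbit/s
print(shaper.may_send("client-42", 1500))                    # True: the bucket starts full
```
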
  • Publication number: 20030126232
    Abstract: A computer system uses a prefetch prediction model having energy usage parameters to predict the impact of prefetching specified files on the system's energy usage. A prefetch prediction engine utilizes the prefetch prediction model to evaluate the specified files with respect to prefetch criteria, including energy efficiency prefetch criteria, and generates a prefetch decision with respect to each file of the specified files. For each specified file for which the prefetch prediction engine generates an affirmative prefetch decision, an identifying entry is stored in a queue. The computer system fetches files identified by entries in the queue, although some or all of the entries in the queue at any one time may be deleted if it is determined that the identified files are no longer likely to be needed by the computer system.
    Type: Application
    Filed: December 27, 2001
    Publication date: July 3, 2003
    Inventors: Jeffrey Clifford Mogul, Keith Istvan Farkas, Parthasarathy Ranganathan, Eduardo S. Pinheiro
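
Publication 20030126232 above weighs energy usage alongside the usual prefetch criteria before queuing a file for prefetching. The decision rule below is an assumed simplification, prefetching only when the expected latency benefit outweighs the estimated energy cost; the constants and parameter names are hypothetical:

```python
from collections import deque

# Hypothetical energy-usage parameters of the prefetch prediction model.
JOULES_PER_BYTE_FETCHED = 2e-7      # assumed network + disk energy per byte
JOULES_PER_SECOND_SAVED = 0.05      # value assigned to a second of latency avoided

prefetch_queue = deque()            # identifying entries for files to fetch

def prefetch_decision(size_bytes, access_probability, fetch_latency_s):
    """Affirmative when expected latency savings justify the energy spent.

    A sketch of 'energy efficiency prefetch criteria', not the patented model.
    """
    expected_benefit = access_probability * fetch_latency_s * JOULES_PER_SECOND_SAVED
    energy_cost = size_bytes * JOULES_PER_BYTE_FETCHED
    return expected_benefit > energy_cost

for name, size, prob, latency in [("index.html", 20_000, 0.9, 0.4),
                                  ("video.mp4", 50_000_000, 0.1, 3.0)]:
    if prefetch_decision(size, prob, latency):
        prefetch_queue.append(name)             # identifying entry stored in the queue

print(list(prefetch_queue))                     # ['index.html']: the video costs too much energy
```
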
  • Patent number: 6560243
    Abstract: A receiving system receives a flow of data packets via a communication link and determines a target bandwidth to be allocated to the flow on that link. In response to the flow, the receiving system transmits data to the sending system. The transmitted data control the sending system such that when the sending system transmits subsequent data packets to the receiving system, such subsequent data packets are transmitted at a rate approximating the target bandwidth allocated to the flow. In one embodiment, the rate at which the transmitted data from the receiving system arrive at the sending system determines the rate at which the sending system transmits the subsequent data packets. The receiving system can control the rate by delaying its response to the sending system according to a calculated delay factor. In another embodiment, the data transmitted from the receiving system to the sending system indicate a maximum amount of data that the receiving system will accept from the sending system in a subsequent data transmission.
    Type: Grant
    Filed: April 30, 1999
    Date of Patent: May 6, 2003
    Assignee: Hewlett-Packard Development Company
    Inventor: Jeffrey Clifford Mogul
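
Patent 6,560,243 above paces a sender toward a target bandwidth from the receiving side, either by delaying responses according to a calculated delay factor or by advertising a maximum amount of data the receiver will accept per exchange. The arithmetic behind both knobs is straightforward; the sketch below shows assumed calculations (real receivers would do this inside the protocol stack, not in application code):

```python
def ack_delay_seconds(bytes_acknowledged, target_bandwidth_bps, base_rtt_s):
    """Delay factor: hold the response long enough that a sender pacing
    itself on returning acknowledgments averages the target bandwidth."""
    ideal_interval = bytes_acknowledged * 8 / target_bandwidth_bps
    return max(0.0, ideal_interval - base_rtt_s)

def advertised_window_bytes(target_bandwidth_bps, rtt_s):
    """Alternative knob: cap the data accepted per round trip so that
    window / RTT approximates the target bandwidth."""
    return int(target_bandwidth_bps / 8 * rtt_s)

# Example: limit a flow to 1 Mbit/s over a 50 ms round-trip path.
print(ack_delay_seconds(65_535, 1_000_000, 0.050))    # ~0.47 s delay per full 64 KB window
print(advertised_window_bytes(1_000_000, 0.050))      # a 6250-byte window has the same effect
```
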
  • Publication number: 20020073338
    Abstract: Undesirable behavior patterns of computers on a network impact network performance. A system and method are provided for limiting the impact of undesirable behavior of computers on the network. The network, through which packets of data are interchanged between the computers, includes one or more forwarding devices that are controlled or instructed by one or more packet traffic monitors. Each of the packet traffic monitors is configured for monitoring the packets; for determining whether the information about the pattern of behavior from any of the computers is trustworthy; for determining, upon discovering that one or more of the patterns of behavior is undesirable, the type of the undesirable pattern of behavior; and for determining a proper action for mitigating that type of undesirable behavior. The proper action is performed by mitigation means controlling the one or more forwarding devices.
    Type: Application
    Filed: November 16, 2001
    Publication date: June 13, 2002
    Applicant: Compaq Information Technologies Group, L.P.
    Inventors: Michael Burrows, Raymond P. Stata, Jeffrey Clifford Mogul
  • Patent number: 6052764
    Abstract: An assembly, and an associated method, that facilitates restoration of data to a computer data storage subsystem subsequent to failure and repair of the subsystem. An identification indicia memory contains an up-to-date listing of the file name, or other identification indicia, of data stored at the data storage subsystem. The listing is accessed and used to retrieve a copy of data stored at the storage subsystem prior to its failure. Recovery operations write the copy of the data to the repaired or replaced storage medium of the data storage subsystem.
    Type: Grant
    Filed: December 19, 1997
    Date of Patent: April 18, 2000
    Assignee: Compaq Computer Corporation
    Inventor: Jeffrey Clifford Mogul
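
Patent 6,052,764 above keeps an up-to-date listing of identification indicia (such as file names) for data on a storage subsystem and, after failure and repair, uses that listing to retrieve copies and write them back to the repaired medium. A minimal sketch of that recovery loop, assuming the copies live in a hypothetical backup directory keyed by file name:

```python
import shutil
from pathlib import Path

def record_indicia(listing_path, stored_files):
    """Keep the identification-indicia listing current: one file name per line."""
    Path(listing_path).write_text("\n".join(sorted(stored_files)))

def restore_subsystem(listing_path, backup_dir, repaired_dir):
    """After repair, walk the listing and write each named copy back."""
    restored = []
    for name in Path(listing_path).read_text().splitlines():
        src = Path(backup_dir) / name            # copy retrieved from elsewhere
        dst = Path(repaired_dir) / name          # repaired or replaced storage medium
        if src.exists():
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            restored.append(name)
    return restored
```

Calling restore_subsystem("indicia.txt", "/backups", "/mnt/repaired") with hypothetical paths would walk the listing and copy back every file for which a copy survives.
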
  • Patent number: 5953503
    Abstract: In a distributed network, client computers are connected to server computers. The server computers store a plurality of Web pages. The Web pages are partitioned into sets, where each set includes Web pages that are substantially similar in content. A preset compression dictionary is generated for each set of Web pages. In addition, a fingerprint is generated for each preset dictionary. The fingerprints uniquely identify each of the preset dictionaries. When one of the client computers requests one of the Web pages, a compressed form of the Web page is sent along with the fingerprint of the dictionary that was used to compress the Web page. The client computer can then request the preset dictionary in order to decompress the Web page when the client does not have a copy of the preset dictionary.
    Type: Grant
    Filed: October 29, 1997
    Date of Patent: September 14, 1999
    Assignee: Digital Equipment Corporation
    Inventors: Michael David Mitzenmacher, Andrei Zary Broder, Jeffrey Clifford Mogul
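
Patent 5,953,503 above compresses similar Web pages against a shared preset dictionary and tags each compressed page with a fingerprint identifying that dictionary, so a client lacking the dictionary can request it. zlib's preset-dictionary support makes this easy to sketch; the fingerprint below is simply a truncated SHA-256 of the dictionary bytes, an assumption rather than the scheme in the patent:

```python
import hashlib
import zlib

# Preset dictionary built from content common to a set of similar pages.
preset_dict = b'<html><head><title></title></head><body><div class="nav">'
fingerprint = hashlib.sha256(preset_dict).hexdigest()[:16]   # identifies the dictionary

def compress_page(page_bytes):
    """Server side: compress against the preset dictionary; send the fingerprint along."""
    comp = zlib.compressobj(zdict=preset_dict)
    return fingerprint, comp.compress(page_bytes) + comp.flush()

def decompress_page(fp, payload, dictionaries):
    """Client side: look up the dictionary by fingerprint (requesting it if missing)."""
    decomp = zlib.decompressobj(zdict=dictionaries[fp])
    return decomp.decompress(payload) + decomp.flush()

page = b'<html><head><title>Example</title></head><body><div class="nav">links</div></body></html>'
fp, payload = compress_page(page)
assert decompress_page(fp, payload, {fingerprint: preset_dict}) == page
print(len(page), "->", len(payload), "bytes using dictionary", fp)
```
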
  • Patent number: 5802292
    Abstract: A method for predictive prefetching of objects over a computer network including the steps of providing a client computer system, providing a server computer system, the server computer system having a memory, a network link to the client computer system, the network link also providing connection of the server computer system to the computer network, requesting from the server computer system by the client computer system a retrieval of a plurality of objects, retrieving the plurality of objects by the server system, storing the retrieval and an identity of the client computer system in the memory of the server computer system, sending the plurality of objects from the server computer system to the client computer system over the network link, predicting in the server computer system a plurality of subsequent retrieval requests from the client computer system according to a predetermined criteria, sending the prediction to the client computer system, and prefetching by the client computer system an object based on the prediction.
    Type: Grant
    Filed: April 28, 1995
    Date of Patent: September 1, 1998
    Assignee: Digital Equipment Corporation
    Inventor: Jeffrey Clifford Mogul
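
Patent 5,802,292 above has the server record each client's retrievals, predict likely follow-on requests, and send those predictions to the client, which prefetches accordingly. A common way to form such predictions, assumed here rather than taken from the patent, is a first-order transition table over the server's access history:

```python
from collections import defaultdict

class PrefetchPredictor:
    """Server-side sketch: remember per-client retrievals and predict
    follow-on requests from pairwise transition counts (an assumed model,
    not necessarily the 'predetermined criteria' of the patent)."""

    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))  # obj -> next obj -> count
        self.last_request = {}                                    # client -> last object

    def record(self, client, obj):
        prev = self.last_request.get(client)
        if prev is not None:
            self.transitions[prev][obj] += 1
        self.last_request[client] = obj

    def predict(self, client, limit=3):
        """Objects this client is most likely to ask for next."""
        prev = self.last_request.get(client)
        followers = self.transitions.get(prev, {})
        return sorted(followers, key=followers.get, reverse=True)[:limit]

predictor = PrefetchPredictor()
for client, obj in [("c1", "/index.html"), ("c1", "/news.html"),
                    ("c2", "/index.html"), ("c2", "/news.html"),
                    ("c3", "/index.html")]:
    predictor.record(client, obj)

print(predictor.predict("c3"))   # ['/news.html'] -- sent to c3 so it can prefetch
```
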
  • Patent number: 5790799
    Abstract: In a computer network, a method of random sampling of network packets is provided, including the steps of providing a network switch, providing a monitoring device, the monitoring device having a memory and a data storage unit, providing a network interface to connect the network switch to the monitoring device, selecting a reference error check code value in the monitoring device, receiving a first network packet from the network switch, comparing, in the monitoring device, the reference error check code value with an error check code value of the first network packet, storing the first network packet in the monitoring device if the error check code value of the first network packet matches the reference error check code value, and repeating the steps of receiving, comparing and storing for subsequent network packets.
    Type: Grant
    Filed: June 9, 1997
    Date of Patent: August 4, 1998
    Assignee: Digital Equipment Corporation
    Inventor: Jeffrey Clifford Mogul
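
Patent 5,790,799 above samples packets by comparing each packet's error check code against a reference value chosen in the monitoring device, giving a pseudo-random sample without per-packet bookkeeping. The sketch below uses a CRC-32 over the packet bytes as the error check code and matches only its low bits, so that roughly one packet in 2^k is stored; the bit-masking detail is an assumption:

```python
import zlib

REFERENCE_CODE = 0x3A          # reference error check code value, chosen once
MASK = 0xFF                    # match the low 8 bits -> sample ~1 packet in 256

def should_store(packet: bytes) -> bool:
    """Store the packet only if its error check code matches the reference."""
    return (zlib.crc32(packet) & MASK) == REFERENCE_CODE

stored = []
for i in range(100_000):
    packet = f"packet-{i}".encode()          # stand-in for frames from the switch
    if should_store(packet):
        stored.append(packet)

print(f"stored {len(stored)} of 100000 packets (expected about {100_000 // 256})")
```
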
  • Patent number: 5675763
    Abstract: A cache memory system and method for selectively removing stale "aliased" entries, which arise when portions of several address spaces are mapped into a single region of real memory, from a virtually addressed cache, are described. The cache memory system includes a central processor unit (CPU) and a first-level cache on an integrated circuit chip. The CPU receives tag and data information from the first level cache via virtual address lines and data lines respectively. An off-chip second level cache is additionally coupled to provide data to the data lines. The CPU is coupled to a translation lookaside buffer (TLB) via the virtual address lines, while the second level cache is coupled to the TLB via physical address lines. The first and second level caches each comprise a plurality of entries. Each of the entries includes a status bit, indicating possible membership in a class of entries that might require flushing.
    Type: Grant
    Filed: August 4, 1995
    Date of Patent: October 7, 1997
    Assignee: Digital Equipment Corporation
    Inventor: Jeffrey Clifford Mogul
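
Patent 5,675,763 above marks first-level cache entries with a status bit indicating possible membership in the class of entries that might require flushing (virtual-address aliases of a shared physical page), so only those entries need to be removed rather than the whole cache. The sketch below models that selective flush in software; the structure and names are illustrative, not the hardware design:

```python
class VirtualCache:
    """Toy virtually indexed cache whose entries carry a 'possibly aliased' status bit."""

    def __init__(self):
        self.entries = {}          # virtual address -> (data, possibly_aliased)

    def fill(self, vaddr, data, possibly_aliased):
        # The status bit is set at fill time, e.g. when translation shows the
        # physical page is mapped into more than one address space.
        self.entries[vaddr] = (data, possibly_aliased)

    def flush_possible_aliases(self):
        """Remove only the entries that might be stale aliases,
        instead of flushing the entire cache."""
        self.entries = {va: (d, bit) for va, (d, bit) in self.entries.items() if not bit}

cache = VirtualCache()
cache.fill(0x1000, "private data", possibly_aliased=False)
cache.fill(0x2000, "shared page",  possibly_aliased=True)
cache.flush_possible_aliases()
print(sorted(hex(v) for v in cache.entries))   # ['0x1000']: the unshared entry survives
```
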