Search Patents
  • Patent number: 6879999
    Abstract: A method and system for responding to requests for static web documents including saving the response as a packet train comprising one or more IP compliant packets. Upon a subsequent request for the static web document, the saved packet train may be retrieved and the header information updated. In this manner, the network protocol processing required to respond to the request is reduced. The server may include code for determining whether a referenced web object is static, a directory of recently accessed static web objects, and copies of the corresponding packet trains. The web server may be configured to consult the directory to determine whether an object is a static object that has been recently accessed. If so, the server may retrieve the corresponding packet train from its system memory or from disk and update the packet headers prior to transmission.
    Type: Grant
    Filed: July 26, 2001
    Date of Patent: April 12, 2005
    Assignee: International Business Machines Corporation
    Inventor: Elmootazbellah Nabil Elnozahy
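    A minimal Python sketch of the caching idea in the abstract above: a dictionary stands in for the directory of recently accessed static objects, each entry holds a saved packet train, and a repeat request only refreshes per-packet header fields before reuse. The class name, packet layout, and header fields are invented for illustration and are not the patent's actual wire format.

    ```python
    # Minimal sketch of a packet-train cache for static web objects.
    # The packet layout and header fields here are illustrative only.
    import time

    class PacketTrainCache:
        def __init__(self):
            self._trains = {}  # path -> list of packet dicts

        def save(self, path, payload, mtu=1400):
            """Split a generated response into an IP-style packet train and keep it."""
            train = [{"ip_id": 0, "timestamp": 0.0, "data": payload[i:i + mtu]}
                     for i in range(0, len(payload), mtu)]
            self._trains[path] = train

        def respond(self, path):
            """Return a saved train with refreshed headers, or None on a miss."""
            train = self._trains.get(path)
            if train is None:
                return None
            for seq, pkt in enumerate(train):
                pkt["ip_id"] = seq            # refresh per-request header fields
                pkt["timestamp"] = time.time()
            return train

    cache = PacketTrainCache()
    cache.save("/index.html", b"<html>hello</html>" * 200)
    print(len(cache.respond("/index.html") or []), "packets reused without re-marshalling")
    ```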
  • Publication number: 20030041118
    Abstract: A web server that integrates portions of operating system code to execute substantially within user space to reduce context switching. The web server includes an application level interpreter, such as an HTTP interpreter, configured to process client requests. The web server typically includes a network interface dedicated to processing traffic to and from the web server. The web server may include, within its user space, kernel device driver extensions enabling it to communicate directly with the network interface. The server may implement a polling architecture in which the server periodically monitors the interface for new requests. The web server typically includes a user space transmission protocol library that enables the server to perform its own network processing of requests and responses. The library may include TCP/IP drivers that are optimized or streamlined for processing HTTP requests.
    Type: Application
    Filed: August 23, 2001
    Publication date: February 27, 2003
    Applicant: International Business Machines Corporation
    Inventors: Elmootazbellah Nabil Elnozahy, Ramakrishnan Rajamony
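    A toy Python illustration of the polling architecture described in the abstract above: the server loops over a user-space receive queue instead of blocking in the kernel. The deque stands in for a memory-mapped NIC ring, and all names are invented for the sketch.

    ```python
    # The server polls a user-space "device" queue for new requests rather than
    # relying on kernel interrupts; a deque stands in for the NIC receive ring.
    from collections import deque

    nic_rx_ring = deque(["GET /a HTTP/1.1", "GET /b HTTP/1.1"])  # pretend DMA'd requests

    def poll_once(ring):
        """Drain whatever the interface has received since the last poll."""
        responses = []
        while ring:
            request = ring.popleft()
            path = request.split()[1]
            responses.append(f"HTTP/1.1 200 OK\r\n\r\nserved {path}")
        return responses

    for r in poll_once(nic_rx_ring):
        print(r)
    ```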
  • Publication number: 20110292597
    Abstract: A modular processing module is provided. The modular processing module comprises a set of processing module sides. Each processing module side comprises a circuit board, a plurality of connectors coupled to the circuit board, and a plurality of processing nodes coupled to the circuit board. Each processing module side in the set of processing module sides couples to another processing module side using at least one connector in the plurality of connectors such that, when all of the set of processing module sides are coupled together, the modular processing module is formed. The modular processing module comprises an exterior connection to a power source and a communication system.
    Type: Application
    Filed: May 28, 2010
    Publication date: December 1, 2011
    Applicant: International Business Machines Corporation
    Inventors: John B. Carter, Wael R. El-Essawy, Elmootazbellah N. Elnozahy, Wesley M. Felter, Madhusudan K. Iyengar, Thomas W. Keller, Jr., Karthick Rajamani, Juan C. Rubio, William E. Speight, Lixin Zhang
  • Patent number: 8650554
    Abstract: A mechanism is provided for improving single-thread performance for a multi-threaded, in-order processor core. In a first phase, a compiler analyzes application code to identify instructions that can be executed in parallel with focus on instruction-level parallelism and removing any register interference between the threads. The compiler inserts, as appropriate, synchronization instructions supported by the apparatus to ensure that the resulting execution of the threads is equivalent to the execution of the application code in a single thread. In a second phase, an operating system schedules the threads produced in the first phase on the hardware threads of a single processor core such that they execute simultaneously. In a third phase, the microprocessor core executes the threads specified by the second phase such that there is one hardware thread executing an application thread.
    Type: Grant
    Filed: April 27, 2010
    Date of Patent: February 11, 2014
    Assignee: International Business Machines Corporation
    Inventors: Elmootazbellah N. Elnozahy, Ahmed Gheith
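    A rough software analogue of the three-phase idea in the abstract above, sketched in Python: two "compiler-produced" threads run register-disjoint halves of a computation and meet at an explicit synchronization point, so the combined result matches a single-threaded run. This is a conceptual sketch only, not the hardware mechanism in the patent, and all names are invented.

    ```python
    # Two threads compute independent partial results and synchronize at a
    # barrier standing in for the inserted synchronization instructions.
    import threading

    a = [1, 2, 3, 4]
    b = [10, 20, 30, 40]
    partial = [0, 0]
    sync = threading.Barrier(2)  # explicit synchronization point

    def worker(tid, data):
        partial[tid] = sum(x * x for x in data)  # register-disjoint work per thread
        sync.wait()                              # both threads reach the sync point

    t0 = threading.Thread(target=worker, args=(0, a))
    t1 = threading.Thread(target=worker, args=(1, b))
    t0.start(); t1.start(); t0.join(); t1.join()
    print(partial[0] + partial[1])  # equals the single-threaded result
    ```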
  • Patent number: 8370517
    Abstract: A data processing network and method for conserving energy in which an initial negotiation between a network server and a switch to which the server is connected is performed to establish an initial operating frequency of the server-switch link. An effective data rate of the server is determined based on network traffic at the server. Responsive to determining that the effective data rate is materially different than the current operating frequency, a subsequent negotiation is performed to establish a modified operating frequency where the modified operating frequency is closer to the effective data rate than the initial operating frequency. The determination of the effective data rate and the contingent initiation of a subsequent negotiation may be repeated periodically during operation of the network. In one embodiment, the initial and subsequent negotiations are compliant with the IEEE 802.3 standard.
    Type: Grant
    Filed: September 27, 2001
    Date of Patent: February 5, 2013
    Assignee: International Business Machines Corporation
    Inventors: Patrick Joseph Bohrer, Bishop Chapman Brock, Elmootazbellah Nabil Elnozahy, Ramakrishnan Rajamony, Freeman Leigh Rawson, III
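    A hedged Python sketch of the renegotiation policy described in the abstract above: pick the lowest standard link speed that still covers the measured effective data rate, and renegotiate only when that target differs from the current speed. The speed table, function names, and sample rates are illustrative assumptions.

    ```python
    # Choose the lowest standard Ethernet speed that covers the effective rate
    # and renegotiate the link only when the target changes.
    STANDARD_SPEEDS_MBPS = [10, 100, 1000, 10000]

    def choose_link_speed(effective_rate_mbps):
        for speed in STANDARD_SPEEDS_MBPS:
            if speed >= effective_rate_mbps:
                return speed
        return STANDARD_SPEEDS_MBPS[-1]

    def maybe_renegotiate(current_speed_mbps, effective_rate_mbps):
        target = choose_link_speed(effective_rate_mbps)
        if target != current_speed_mbps:
            print(f"renegotiate link: {current_speed_mbps} -> {target} Mbps")
            return target
        return current_speed_mbps

    link = 1000
    for measured in [40, 40, 700, 9000]:   # periodic effective-rate samples
        link = maybe_renegotiate(link, measured)
    ```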
  • Publication number: 20030004948
    Abstract: A system and method of retrieving data from a disk includes determining the network transfer rate of a network connection between a client and a server. A first portion of the requested data is retrieved from the disk responsive to a data request received by the server from the client via a network connection and transmission of the first portion of data to the client via the network is initiated. The time required to transmit the first portion of data to the client is calculated based upon the network transfer rate and a determination of when to retrieve a subsequent portion of the requested data from disk is made based, in part, on whether the calculated time has expired. The determination of when to retrieve subsequent portions of data from disk may be further based on a desire to minimize a system parameter such as memory usage or disk energy consumption and heat dissipation.
    Type: Application
    Filed: June 29, 2001
    Publication date: January 2, 2003
    Applicant: International Business Machines Corporation
    Inventors: Patrick Joseph Bohrer, Bishop Chapman Brock, Elmootazbellah Nabil Elnozahy, Ramakrishnan Rajamony
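    A short Python sketch of the pacing idea in the abstract above, under assumed values: the time needed to transmit the current portion over the network bounds when the next disk read is scheduled, which keeps buffered data small. The chunk size, rate, and function names are placeholders.

    ```python
    # Pace disk reads by the time the network needs to drain each portion.
    import time

    def paced_read(total_bytes, chunk_bytes, net_rate_bytes_per_s):
        sent = 0
        while sent < total_bytes:
            chunk = min(chunk_bytes, total_bytes - sent)     # "retrieve from disk"
            transmit_time = chunk / net_rate_bytes_per_s     # time the NIC will need
            # Begin transmission, then wait roughly until the buffer drains
            # before scheduling the next disk read (keeps memory use low).
            time.sleep(transmit_time)
            sent += chunk
            print(f"sent {sent}/{total_bytes} bytes; next read scheduled")

    paced_read(total_bytes=4 * 1024, chunk_bytes=1024, net_rate_bytes_per_s=64 * 1024)
    ```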
  • Publication number: 20110004875
    Abstract: A method, a system, an apparatus, and a computer program product for allocating resources of one or more shared devices to one or more partitions of a virtualization environment within a data processing system. At least one user defined resource assignment is received for one or more devices associated with the data processing system. One or more registers associated with the one or more partitions are dynamically set to execute the at least one resource assignment, whereby the at least one resource assignment enables a user defined quantitative measure (number and/or percentage) of devices to operate when the one or more transactions are executed via the partition. The system enables the one or more devices to execute one or more transactions at a bandwidth/capacity that is less than or equal to the user defined resource assignment and minimizes performance interference among partitions.
    Type: Application
    Filed: July 1, 2009
    Publication date: January 6, 2011
    Applicant: International Business Machines Corporation
    Inventors: Elmootazbellah N. Elnozahy, Ramakrishnan Rajamony, William E. Speight, Lixin Zhang
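    An illustrative Python model of the per-partition resource-assignment registers described in the abstract above: each partition gets a cap (here a percentage of device bandwidth), and requests are clipped to that cap. The class and field names are invented for the sketch.

    ```python
    # Model per-partition assignment registers as caps that clip requests.
    class PartitionRegisters:
        def __init__(self):
            self._cap_percent = {}  # partition id -> allowed share of the device

        def set_assignment(self, partition, percent):
            self._cap_percent[partition] = percent

        def grant(self, partition, requested_percent):
            cap = self._cap_percent.get(partition, 100)
            return min(requested_percent, cap)  # never exceed the user-defined share

    regs = PartitionRegisters()
    regs.set_assignment("LPAR1", 30)
    regs.set_assignment("LPAR2", 70)
    print(regs.grant("LPAR1", 50))  # 30: clipped to its assignment
    print(regs.grant("LPAR2", 50))  # 50: within its assignment
    ```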
  • Patent number: 8358503
    Abstract: A modular processing module is provided. The modular processing module comprises a set of processing module sides. Each processing module side comprises a circuit board, a plurality of connectors coupled to the circuit board, and a plurality of processing nodes coupled to the circuit board. Each processing module side in the set of processing module sides couples to another processing module side using at least one connector in the plurality of connectors such that, when all of the set of processing module sides are coupled together, the modular processing module is formed. The modular processing module comprises an exterior connection to a power source and a communication system.
    Type: Grant
    Filed: May 28, 2010
    Date of Patent: January 22, 2013
    Assignee: International Business Machines Corporation
    Inventors: John B. Carter, Wael R. El-Essawy, Elmootazbellah N. Elnozahy, Wesley M. Felter, Madhusudan K. Iyengar, Thomas W. Keller, Jr., Karthick Rajamani, Juan C. Rubio, William E. Speight, Lixin Zhang
  • Publication number: 20020078136
    Abstract: In one embodiment, an improved method for crawling a web site is provided. At least one page of the web site has a reference that is executed by a browser to produce the address of a next page. The crawler program crawls the web site, which includes querying the web site server. The crawler parses such a reference from one of the web pages and sends the reference to an applet running in the browser. The address of the next page is determined by the browser responsive to the reference. The address is then sent to the crawler. In an application of the improved crawler, the crawler is used to reduce dynamic data generation on the web site server. In this application, at least some of the web pages are dynamically generated responsive to the crawler queries. The server-generated web pages are processed to generate corresponding processed versions of the web pages, so that the processed versions can be served in response to future queries, reducing dynamic generation of web pages by the server.
    Type: Application
    Filed: December 14, 2000
    Publication date: June 20, 2002
    Applicant: International Business Machines Corporation
    Inventors: Elizabeth Adleberg Brodsky, Elmootazbellah Nabil Elnozahy, Ramakrishnan Rajamony
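    A conceptual Python sketch of the crawler/browser hand-off described in the abstract above: the crawler cannot evaluate a script reference itself, so it ships it to a browser-side helper (here a stub standing in for the applet) and gets back the next page's address, while storing processed versions of the pages. The resolver, fetch function, and page store are invented placeholders.

    ```python
    # The crawler delegates script references to a browser-side resolver and
    # caches processed versions of each dynamically generated page.
    def browser_applet_resolve(script_reference):
        """Stub for the applet: in reality the browser evaluates the reference."""
        return "/catalog?page=2" if "nextPage" in script_reference else None

    def crawl(start_page, fetch, processed_store):
        page = start_page
        while page is not None:
            html = fetch(page)
            processed_store[page] = html.upper()      # stand-in for "processing" the page
            reference = "javascript:nextPage()" if page == start_page else ""
            page = browser_applet_resolve(reference)  # browser computes the next address

    store = {}
    crawl("/catalog?page=1", fetch=lambda url: f"<html>{url}</html>", processed_store=store)
    print(sorted(store))  # processed versions served in place of dynamic generation
    ```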
  • Publication number: 20120320524
    Abstract: A modular processing module allowing in-situ maintenance is provided. The modular processing module comprises a set of processing module sides. Each processing module side comprises a circuit board, a plurality of connectors, and a plurality of processing nodes. Each processing module side couples to another processing module side using at least one connector in the plurality of connectors such that, when all of the set of processing module sides are coupled together, the modular processing module is formed. The modular processing module comprises an exterior connection to a power source and a communication system and at least one heatsink that couples to at least a portion of the plurality of processing nodes on one of the processing module sides and is designed such that, when a set of heatsinks in the modular processing module are installed, an empty space is left in a center of the modular processing module.
    Type: Application
    Filed: August 28, 2012
    Publication date: December 20, 2012
    Applicant: International Business Machines Corporation
    Inventors: Wael R. El-Essawy, Elmootazbellah N. Elnozahy, Madhusudan K. Iyengar, Thomas W. Keller, Jr., Juan C. Rubio
  • Patent number: 6925529
    Abstract: A method and a computer usable medium including a program for operating disks having units, comprising: providing a first tier of at least one disk, the first tier storing at least one popular unit, providing a second tier of at least one disk, the second tier storing at least one unpopular unit, powering on at least one first tier disk, powering down the second tier, determining whether a request for a unit requires processing on the first tier or second tier, accessing the requested unit if the requested unit requires processing on the first tier, and powering on a second tier disk to copy the requested unit from the second tier disk to a first tier disk, if the requested unit is stored on the second tier.
    Type: Grant
    Filed: July 12, 2001
    Date of Patent: August 2, 2005
    Assignee: International Business Machines Corporation
    Inventors: Patrick J. Bohrer, Elmootazbellah N. Elnozahy, Charles R. Lefurgy, Ramakrishnan Rajamony, Bruce A. Smith
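    A minimal Python sketch of the two-tier policy in the abstract above: popular units live on an always-on tier, unpopular units on a powered-down tier, and a miss powers the second tier up just long enough to copy the unit to the first tier. The class, dictionaries, and power flag are illustrative stand-ins for real disks.

    ```python
    # Two-tier disk model: tier 1 stays powered, tier 2 spins up only on a miss.
    class TieredDisks:
        def __init__(self, popular, unpopular):
            self.tier1 = dict(popular)      # powered on
            self.tier2 = dict(unpopular)    # powered down
            self.tier2_powered = False

        def read(self, unit):
            if unit in self.tier1:
                return self.tier1[unit]
            if unit in self.tier2:
                self.tier2_powered = True               # spin up the second tier
                self.tier1[unit] = self.tier2[unit]     # copy (promote) the unit
                data = self.tier1[unit]
                self.tier2_powered = False              # power it back down
                return data
            raise KeyError(unit)

    disks = TieredDisks(popular={"a": b"hot"}, unpopular={"z": b"cold"})
    print(disks.read("a"), disks.read("z"), "z" in disks.tier1)
    ```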
  • Patent number: 8179674
    Abstract: A scalable space-optimized and energy-efficient computing system is provided. The computing system comprises a plurality of modular compartments in at least one level of a frame configured in a hexahedron configuration. The computing system also comprises an air inlet, an air mixing plenum, and at least one fan. In the computing system the plurality of modular compartments are affixed above the air inlet, the air mixing plenum is affixed above the plurality of modular compartments, and the at least one fan is affixed above the air mixing plenum. When at least one module is inserted into one of the plurality of modular compartments, the module couples to a backplane within the frame.
    Type: Grant
    Filed: May 28, 2010
    Date of Patent: May 15, 2012
    Assignee: International Business Machines Corporation
    Inventors: John B. Carter, Wael R. El-Essawy, Elmootazbellah N. Elnozahy, Madhusudan K. Iyengar, Thomas W. Keller, Jr., Jian Li, Karthick Rajamani, Juan C. Rubio, William E. Speight, Lixin Zhang
  • Patent number: 6988125
    Abstract: A method and system for responding to a client request for information from a service device in a data processing network. Initially, the server receives the information request from the client and determines that the requested information is not in the server's cache. The server then generates a storage request and sends the storage request to a network attached storage device. The storage device retrieves the requested information and generates a set of packets containing the requested information. The storage device then sends the generated packets simultaneously to the client to satisfy the client request and to the server to refresh the server cache. The storage request may include protocol information corresponding to the client-server connection that the storage device uses to replicate the client-server protocol stack in the generated packets. Sending the generated packets may comprise including a multicast address in the packets.
    Type: Grant
    Filed: July 26, 2001
    Date of Patent: January 17, 2006
    Assignee: International Business Machines Corporation
    Inventors: Elmootazbellah Nabil Elnozahy, Michael David Kistler
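    A hedged Python sketch of the dual-delivery idea in the abstract above: the storage device builds the response packets once and hands the same packets to both the client and the server's cache, as a multicast send would. Real sockets are replaced by plain callables, and all names are invented.

    ```python
    # One packet generation, two destinations: the client and the server cache.
    def storage_respond(request, deliver_to_client, refresh_server_cache):
        payload = b"contents of " + request.encode()
        packets = [payload[i:i + 8] for i in range(0, len(payload), 8)]
        for pkt in packets:
            deliver_to_client(pkt)
            refresh_server_cache(request, pkt)

    client_rx, server_cache = [], {}
    storage_respond(
        "report.pdf",
        deliver_to_client=client_rx.append,
        refresh_server_cache=lambda k, p: server_cache.setdefault(k, []).append(p),
    )
    print(len(client_rx), len(server_cache["report.pdf"]))  # same packets in both places
    ```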
  • Patent number: 8756608
    Abstract: A method, a system, an apparatus, and a computer program product for allocating resources of one or more shared devices to one or more partitions of a virtualization environment within a data processing system. At least one user defined resource assignment is received for one or more devices associated with the data processing system. One or more registers associated with the one or more partitions are dynamically set to execute the at least one resource assignment, whereby the at least one resource assignment enables a user defined quantitative measure (number and/or percentage) of devices to operate when the one or more transactions are executed via the partition. The system enables the one or more devices to execute one or more transactions at a bandwidth/capacity that is less than or equal to the user defined resource assignment and minimizes performance interference among partitions.
    Type: Grant
    Filed: July 1, 2009
    Date of Patent: June 17, 2014
    Assignee: International Business Machines Corporation
    Inventors: Elmootazbellah N. Elnozahy, Ramakrishnan Rajamony, William E. Speight, Lixin Zhang
  • Publication number: 20110296212
    Abstract: A mechanism is provided for scheduling application tasks. A scheduler receives a task that identifies a desired frequency and a desired maximum number of competing hardware threads. The scheduler determines whether a user preference designates either maximization of performance or minimization of energy consumption. Responsive to the user preference designating the performance, the scheduler determines whether there is an idle processor core in a plurality of processor cores available. Responsive to no idle processor being available, the scheduler identifies a subset of processor cores having a smallest load coefficient. From the subset of processor cores, the scheduler determines whether there is at least one processor core that matches desired parameters of the task. Responsive to at least one processor core matching the desired parameters of the task, the scheduler assigns the task to one of the at least one processor core that matches the desired parameters.
    Type: Application
    Filed: May 26, 2010
    Publication date: December 1, 2011
    Applicant: International Business Machines Corporation
    Inventors: Elmootazbellah N. Elnozahy, Heather L. Hanson, Freeman L. Rawson, III, Malcolm S. Ware
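    A short Python sketch of the scheduling decision described in the abstract above: prefer an idle core when the user asks for performance; otherwise fall back to the least-loaded cores and look for one matching the task's desired frequency and maximum number of competing hardware threads. The core and task fields are invented for the sketch.

    ```python
    # Pick a core per the user preference and the task's desired parameters.
    def schedule(task, cores, prefer_performance=True):
        if prefer_performance:
            idle = [c for c in cores if c["load"] == 0.0]
            if idle:
                return idle[0]["id"]
        least_load = min(c["load"] for c in cores)
        candidates = [c for c in cores if c["load"] == least_load]
        for c in candidates:
            if (c["freq"] >= task["desired_freq"]
                    and c["hw_threads"] <= task["max_competing_threads"]):
                return c["id"]
        return candidates[0]["id"]  # fall back to any least-loaded core

    cores = [{"id": 0, "load": 0.4, "freq": 2.0, "hw_threads": 4},
             {"id": 1, "load": 0.1, "freq": 3.0, "hw_threads": 2}]
    print(schedule({"desired_freq": 2.5, "max_competing_threads": 2}, cores))
    ```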
  • Publication number: 20120173906
    Abstract: A mechanism is provided for scheduling application tasks. A scheduler receives a task that identifies a desired frequency and a desired maximum number of competing hardware threads. The scheduler determines whether a user preference designates either maximization of performance or minimization of energy consumption. Responsive to the user preference designating the performance, the scheduler determines whether there is an idle processor core in a plurality of processor cores available. Responsive to no idle processor being available, the scheduler identifies a subset of processor cores having a smallest load coefficient. From the subset of processor cores, the scheduler determines whether there is at least one processor core that matches desired parameters of the task. Responsive to at least one processor core matching the desired parameters of the task, the scheduler assigns the task to one of the at least one processor core that matches the desired parameters.
    Type: Application
    Filed: March 9, 2012
    Publication date: July 5, 2012
    Applicant: International Business Machines Corporation
    Inventors: Elmootazbellah N. Elnozahy, Heather L. Hanson, Freeman L. Rawson, III, Malcolm S. Ware
  • Patent number: 7676795
    Abstract: A compiler for incorporating error detection into executable code generates conventional assembler language object code from a source code file. The compiler identifies an error detection segment (EDS) in the assembler code, where the EDS includes a subset of basic blocks in the assembler code. The compiler also identifies register and memory references in the EDS and inserts a set of instructions into the EDS. The inserted instructions record an entry state and an exit state of the referenced registers and memory locations. The state information is stored in a checkpoint portion of system memory. The compiler may generate shadow EDS code including instructions mirroring the instructions in the main EDS and verifying instructions that compare results produced by the mirroring instructions with results produced by the main EDS. The shadow EDS initiates an error recovery process if results produced by the shadow EDS and the main EDS differ.
    Type: Grant
    Filed: January 13, 2005
    Date of Patent: March 9, 2010
    Assignee: International Business Machines Corporation
    Inventor: Elmootazbellah Nabil Elnozahy
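    A purely illustrative Python analogue of the shadow error-detection segment in the abstract above: the entry state is checkpointed, the "main" and "shadow" versions of the segment both run, and a mismatch triggers recovery from the checkpoint. The function names and the sample segment are assumptions, not the patent's compiler machinery.

    ```python
    # Run a segment and its shadow, compare results, recover on mismatch.
    import copy

    def run_with_shadow(segment, state):
        checkpoint = copy.deepcopy(state)              # record entry state
        main_result = segment(copy.deepcopy(state))
        shadow_result = segment(copy.deepcopy(state))  # mirrored instructions
        if main_result != shadow_result:               # verify, then recover
            print("mismatch detected, restoring checkpoint")
            return checkpoint, None
        return main_result, main_result

    def eds(state):
        state["x"] = state["x"] * 2 + 1
        return state

    print(run_with_shadow(eds, {"x": 3}))
    ```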
  • Patent number: 7437517
    Abstract: Methods, systems, and media for reducing memory latency seen by processors by providing a measure of control over on-chip memory (OCM) management to software applications, implicitly and/or explicitly, via an operating system are contemplated. Many embodiments allow part of the OCM to be managed by software applications via an application program interface (API), and part managed by hardware. Thus, the software applications can provide guidance regarding address ranges to maintain close to the processor to reduce unnecessary latencies typically encountered when dependent upon cache controller policies. Several embodiments utilize a memory internal to the processor or on a processor node so the memory block used for this technique is referred to as OCM.
    Type: Grant
    Filed: January 11, 2005
    Date of Patent: October 14, 2008
    Assignee: International Business Machines Corporation
    Inventors: Dilma Menezes da Silva, Elmootazbellah Nabil Elnozahy, Orran Yaakov Krieger, Hazim Shafi, Xiaowei Shen, Balaram Sinharoy, Robert Brett Tremaine
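    A hedged Python sketch of the split OCM idea in the abstract above: the application pins a few address ranges into a small software-managed region, and everything else is left to the hardware-managed part (modelled here as an ordinary lookup path). The API and class names are invented for illustration and are not the patent's actual interface.

    ```python
    # Part of the OCM is managed by application hints, the rest by "hardware".
    class OnChipMemory:
        def __init__(self, sw_slots):
            self.sw_managed = {}   # ranges the application asked to keep close
            self.sw_slots = sw_slots

        def pin_range(self, start, length):
            """Application hint: keep [start, start+length) in the software part."""
            if len(self.sw_managed) < self.sw_slots:
                self.sw_managed[(start, length)] = bytearray(length)
                return True
            return False           # no room: fall back to the hardware policy

        def access(self, addr):
            for (start, length), _buf in self.sw_managed.items():
                if start <= addr < start + length:
                    return "OCM (software-managed)"
            return "hardware-managed path"

    ocm = OnChipMemory(sw_slots=2)
    ocm.pin_range(0x1000, 256)
    print(ocm.access(0x1010), "|", ocm.access(0x9000))
    ```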
  • Publication number: 20110231030
    Abstract: A mechanism is provided for minimizing system power in a data processing system. A management control unit determines whether a convergence has been reached in the data processing system. If convergence fails to be reached, the management control unit determines whether a maximum fan flag is set to indicate that a fan is operating at a maximum speed. Responsive to the maximum fan flag failing to be set, a thermal threshold of the data processing system is either increased or decreased and thereby a fan speed of the data processing system is either increased or decreased based on whether the system power of the data processing system has either increased or decreased and based on whether a temperature of the data processing system has either increased or decreased. Thus, a new thermal threshold and a new fan speed are formed. The process is then repeated until convergence has been met.
    Type: Application
    Filed: March 18, 2010
    Publication date: September 22, 2011
    Applicant: International Business Machines Corporation
    Inventors: John B. Carter, Elmootazbellah N. Elnozahy, Malcolm S. Ware, Wei Huang
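    A minimal Python sketch of the convergence loop in the abstract above: while power and temperature keep moving and the fan is not yet at its maximum, nudge the thermal threshold (and with it the fan speed) in the indicated direction. The step sizes, convergence tolerances, and the read_power/read_temp stubs are invented assumptions.

    ```python
    # Adjust the thermal threshold and fan speed until power and temperature settle.
    def tune(read_power, read_temp, threshold, fan, fan_max=100, step=2, iters=20):
        prev_power, prev_temp = read_power(), read_temp()
        for _ in range(iters):
            power, temp = read_power(), read_temp()
            if abs(power - prev_power) < 1 and abs(temp - prev_temp) < 0.5:
                break                              # convergence reached
            if fan >= fan_max:
                break                              # maximum-fan flag set
            direction = 1 if (power > prev_power or temp > prev_temp) else -1
            threshold -= direction * step          # lower threshold -> faster fan
            fan = min(fan_max, max(0, fan + direction * 5))
            prev_power, prev_temp = power, temp
        return threshold, fan

    samples_p = iter([300, 320, 330, 331, 331, 331])
    samples_t = iter([60, 63, 65, 65, 65, 65])
    print(tune(lambda: next(samples_p), lambda: next(samples_t), threshold=70, fan=40))
    ```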
  • Patent number: 7934061
    Abstract: Methods, systems, and media for reducing memory latency seen by processors by providing a measure of control over on-chip memory (OCM) management to software applications, implicitly and/or explicitly, via an operating system are contemplated. Many embodiments allow part of the OCM to be managed by software applications via an application program interface (API), and part managed by hardware. Thus, the software applications can provide guidance regarding address ranges to maintain close to the processor to reduce unnecessary latencies typically encountered when dependent upon cache controller policies. Several embodiments utilize a memory internal to the processor or on a processor node so the memory block used for this technique is referred to as OCM.
    Type: Grant
    Filed: June 24, 2008
    Date of Patent: April 26, 2011
    Assignee: International Business Machines Corporation
    Inventors: Dilma Menezes da Silva, Elmootazbellah Nabil Elnozahy, Orran Yaakov Krieger, Hazim Shafi, Xiaowei Shen, Balaram Sinharoy, Robert Brett Tremaine