With Age List, e.g., Queue, MRU-LRU List, etc. (EPO) Patents (Class 711/E12.072)
-
Patent number: 12164433
Abstract: A computer implemented method includes receiving a first request at a cache for first data and checking the cache for the first data. In response to the first data residing in the cache, the first data is provided from the cache. In response to the first data not residing in the cache, a first memory request is sent to memory for the first data, a first request pending bit is set to indicate the first request is pending, and the cache proceeds to process a next request for second data.
Type: Grant
Filed: March 29, 2022
Date of Patent: December 10, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ahmed Abdelsalam, Ezzeldin Hamed, Robert Groza, Jr.
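The hit-under-miss behavior described in this abstract can be pictured in a few lines. The following Python is an illustrative approximation under assumed names (`Cache`, `backing_store.issue_read`), not the claimed implementation:

```python
# Illustrative hit-under-miss cache: on a miss, set a pending bit for the request,
# issue an asynchronous memory request, and continue with the next request.
class Cache:
    def __init__(self, backing_store):
        self.lines = {}            # address -> data currently cached
        self.pending = set()       # addresses with an outstanding memory request
        self.backing_store = backing_store

    def request(self, address):
        if address in self.lines:          # hit: serve directly from the cache
            return self.lines[address]
        if address not in self.pending:    # miss: set the pending bit once
            self.pending.add(address)
            self.backing_store.issue_read(address)  # assumed non-blocking memory request
        return None                        # caller proceeds to the next request

    def fill(self, address, data):
        # Called when memory responds; clear the pending bit and install the line.
        self.pending.discard(address)
        self.lines[address] = data
```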
-
Patent number: 12153541
Abstract: Embodiments are generally directed to cache structure and utilization. An embodiment of an apparatus includes one or more processors including a graphics processor; a memory for storage of data for processing by the one or more processors; and a cache to cache data from the memory; wherein the apparatus is to provide for dynamic overfetching of cache lines for the cache, including receiving a read request and accessing the cache for the requested data, and upon a miss in the cache, overfetching data from memory or a higher level cache in addition to fetching the requested data, wherein the overfetching of data is based at least in part on a current overfetch boundary, and provides for data to be prefetched extending to the current overfetch boundary.
Type: Grant
Filed: February 17, 2022
Date of Patent: November 26, 2024
Assignee: INTEL CORPORATION
Inventors: Altug Koker, Lakshminarayanan Striramassarma, Aravindh Anantaraman, Valentin Andrei, Abhishek R. Appu, Sean Coleman, Varghese George, Pattabhiraman K, Mike MacPherson, Subramaniam Maiyuran, Elmoustapha Ould-Ahmed-Vall, Vasanth Ranganathan, Joydeep Ray, Jayakrishna P S, Prasoonkumar Surti
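As a rough illustration of overfetching up to a boundary: the abstract does not specify how the boundary is computed, so the alignment rule below is an assumption made only to show the idea of fetching lines "extending to the current overfetch boundary".

```python
# Illustrative sketch: on a cache miss, fetch the requested line and overfetch
# the following lines up to a current overfetch boundary (expressed in cache lines).
LINE_SIZE = 64

def lines_to_fetch(miss_address, overfetch_boundary):
    """Return addresses to fetch: the missed line plus lines up to the boundary."""
    base = (miss_address // LINE_SIZE) * LINE_SIZE
    # Overfetch extends from the missed line to the next aligned boundary (assumed rule).
    region = overfetch_boundary * LINE_SIZE
    boundary = ((base // region) + 1) * region
    return list(range(base, boundary, LINE_SIZE))

# Example: a miss at 0x1040 with a 4-line boundary fetches the lines up to the
# next 256-byte aligned boundary.
print([hex(a) for a in lines_to_fetch(0x1040, 4)])
```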
-
Patent number: 12141614
Abstract: Embodiments relate to storing hierarchically structured sub-items of scene entities in a flattened list of sub-items and performing time-constrained tasks on the sub-items in the flattened list. By storing the sub-items in the flattened list, the approximate time for processing the sub-items can be estimated more accurately, which reduces the likelihood of making an overly conservative estimate of the time for processing the sub-items. One or more sub-items of updated scene entities are extracted by a plurality of collectors that are executed in parallel to store the one or more sub-items in the flattened list. The sub-items are then accessed by multiple tasks executed in parallel to determine priority information associated with inclusion and rendering in subsequent frames. Sub-items with higher priority according to the priority information are given higher priority for retrieval from secondary memory and storage in primary memory.
Type: Grant
Filed: October 12, 2021
Date of Patent: November 12, 2024
Assignee: Square Enix Ltd.
Inventor: Lucas Magder
-
Patent number: 12066975
Abstract: Embodiments are generally directed to cache structure and utilization. An embodiment of an apparatus includes one or more processors including a graphics processor; a memory for storage of data for processing by the one or more processors; and a cache to cache data from the memory; wherein the apparatus is to provide for dynamic overfetching of cache lines for the cache, including receiving a read request and accessing the cache for the requested data, and upon a miss in the cache, overfetching data from memory or a higher level cache in addition to fetching the requested data, wherein the overfetching of data is based at least in part on a current overfetch boundary, and provides for data to be prefetched extending to the current overfetch boundary.
Type: Grant
Filed: March 14, 2020
Date of Patent: August 20, 2024
Assignee: INTEL CORPORATION
Inventors: Altug Koker, Lakshminarayanan Striramassarma, Aravindh Anantaraman, Valentin Andrei, Abhishek R. Appu, Sean Coleman, Varghese George, K Pattabhiraman, Mike MacPherson, Subramaniam Maiyuran, ElMoustapha Ould-Ahmed-Vall, Vasanth Ranganathan, Joydeep Ray, S Jayakrishna P, Prasoonkumar Surti
-
Patent number: 11907128
Abstract: A technique for managing a storage system involves determining, in response to a first write operation on a first data block on a persistent storage device, whether a first group of data corresponding to the first data block is included in a cache; updating the first group of data in the cache if it is determined that the first group of data is included in the cache; and adding the first group of data to an associated data set of the cache to serve as a first record. Accordingly, such a technique can associatively manage different types of cached data corresponding to a data block, thereby optimizing the system performance.
Type: Grant
Filed: May 10, 2022
Date of Patent: February 20, 2024
Assignee: EMC IP Holding Company LLC
Inventors: Ming Zhang, Chen Gong, Qiaosheng Zhou
-
Patent number: 11893269
Abstract: A memory system includes a memory device and a controller. The memory device includes plural storage regions including plural non-volatile memory cells. The plural storage regions have different data input/output speeds. The controller is coupled to the memory device via at least one data path. The controller performs a readahead operation in response to a read request input from an external device, determines a data attribute regarding readahead data, obtained by the readahead operation, based on a time difference between reception of the read request and completion of the readahead operation, and stores the readahead data in one of the plural storage regions based on the data attribute.
Type: Grant
Filed: March 4, 2022
Date of Patent: February 6, 2024
Assignee: SK hynix Inc.
Inventors: Jun Hee Ryu, Kwang Jin Ko, Young Pyo Joo
-
Patent number: 11829302
Abstract: Examples described herein relate to detecting sequential access patterns and proactively prefetching nodes into a cache. In some examples, a least recently used (LRU) list used to evict nodes from a cache is traversed, the nodes referencing data blocks storing data in a storage device and being leaf nodes in a tree representing a file system. For each traversed node, a stride length is determined as the absolute value of the difference between the block offset value of the traversed node and the block offset value of a previously traversed node, the stride length is compared to a proximity threshold, and a sequential access pattern counter is updated based at least in part on the comparison. Nodes are proactively prefetched from the storage device to the cache when the sequential access pattern counter indicates a detected pattern of sequential accesses to nodes.
Type: Grant
Filed: September 3, 2021
Date of Patent: November 28, 2023
Assignee: Hewlett Packard Enterprise Development LP
Inventors: Annmary Justine Koomthanam, Jothivelavan Sivashanmugam
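A minimal sketch of the stride-based pattern detection described above; the threshold values and the counter-decay rule on non-sequential strides are assumptions, not the claimed policy:

```python
# Walk the block offsets of LRU-listed leaf nodes, compare each offset with the
# previous one, and count near-sequential strides; a high count triggers prefetch.
PROXIMITY_THRESHOLD = 8      # assumed: largest stride still considered "sequential"
PATTERN_THRESHOLD = 4        # assumed: counter value that indicates a pattern

def detect_sequential_pattern(lru_block_offsets):
    counter = 0
    prev = None
    for offset in lru_block_offsets:
        if prev is not None:
            stride = abs(offset - prev)           # stride length between nodes
            if stride <= PROXIMITY_THRESHOLD:
                counter += 1                      # looks sequential
            else:
                counter = max(counter - 1, 0)     # assumed decay on a jump
        prev = offset
    return counter >= PATTERN_THRESHOLD           # prefetch proactively if True

print(detect_sequential_pattern([100, 104, 108, 112, 400, 404, 408]))
```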
-
Patent number: 11797452
Abstract: Various implementations described herein relate to systems and methods for dynamically managing buffers of a storage device, including receiving, by a controller of the storage device from a host, information indicative of a frequency by which data stored in the storage device is accessed, and, in response to receiving the information, determining, by the controller, the order in which read buffers of the storage device are allocated for a next read command. The NAND read count of virtual Word-Lines (WLs) is also used to cache more frequently accessed WLs, thus proactively reducing read disturb and consequently increasing NAND reliability and NAND life.
Type: Grant
Filed: July 18, 2022
Date of Patent: October 24, 2023
Assignee: KIOXIA CORPORATION
Inventors: Saswati Das, Manish Kadam, Neil Buxton
-
Patent number: 11758203
Abstract: Devices, computer-readable media, and methods for making a cache admission decision regarding a video chunk are described. For instance, a processing system including at least one processor may obtain a request for a first chunk of a first video, determine that the first chunk is not stored in a cache, and apply, in response to the determining that the first chunk is not stored in the cache, a classifier to predict whether the first chunk will be re-requested within a time horizon, where the classifier is trained in accordance with a set of features associated with a plurality of chunks of a plurality of videos. When it is predicted via the classifier that the first chunk will be re-requested within the time horizon, the processing system may store the first chunk in the cache.
Type: Grant
Filed: December 13, 2019
Date of Patent: September 12, 2023
Assignees: AT&T Intellectual Property I, L.P., University of Southern California
Inventors: Shuai Hao, Subhabrata Sen, Emir Halepovic, Zahaib Akhtar, Ramesh Govindan, Yaguang Li
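Conceptually, the admission decision can be sketched as below; the `classifier.predict` interface and the `features` helper are placeholders, not the trained model or feature set described in the patent:

```python
# Illustrative classifier-driven cache admission: on a miss, a trained classifier
# predicts whether the chunk will be re-requested within a time horizon, and the
# chunk is admitted into the cache only if a re-request is predicted.
def handle_chunk_request(chunk_id, cache, fetch_from_origin, classifier, features):
    if chunk_id in cache:
        return cache[chunk_id]                  # cache hit
    data = fetch_from_origin(chunk_id)          # miss: fetch the chunk
    if classifier.predict(features(chunk_id)):  # predicted re-request within the horizon
        cache[chunk_id] = data                  # admit into the cache
    return data
```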
-
Patent number: 11726922
Abstract: Methods, systems, and computer program products for memory protection in hypervisor environments are provided herein. A method includes maintaining, by a memory management layer of a hypervisor environment, a blockchain-based hash chain associated with a page table of the memory management layer, the page table corresponding to a plurality of memory pages; and verifying, by the memory management layer, content obtained in connection with a read operation for a given one of the plurality of memory pages based at least in part on hashes maintained for the given memory page in the blockchain-based hash chain.
Type: Grant
Filed: February 25, 2020
Date of Patent: August 15, 2023
Assignee: International Business Machines Corporation
Inventors: Akshar Kaul, Krishnasuri Narayanam, Ken Kumar, Pankaj S. Dayama
-
Patent number: 11567878
Abstract: An apparatus to facilitate data cache security is disclosed. The apparatus includes a cache memory to store data; and prefetch hardware to pre-fetch data to be stored in the cache memory, including a cache set monitor hardware to determine critical cache addresses to monitor to determine processes that retrieve data from the cache memory; and pattern monitor hardware to monitor cache access patterns to the critical cache addresses to detect potential side-channel cache attacks on the cache memory by an attacker process.
Type: Grant
Filed: December 23, 2020
Date of Patent: January 31, 2023
Assignee: Intel Corporation
Inventors: Abhishek Basak, Erdem Aktas
-
Patent number: 11561712
Abstract: The present technology relates to an electronic device. According to the present technology, a storage device having an improved physical address obtainment speed may include a nonvolatile memory device configured to store map data including a plurality of map segments including mapping information, and a volatile memory device including a first map cache area temporarily storing the map data configured by map entries each corresponding to one logical address, and a second map cache area temporarily storing the map data configured by map indexes each corresponding to a plurality of logical addresses.
Type: Grant
Filed: April 23, 2021
Date of Patent: January 24, 2023
Assignee: SK hynix Inc.
Inventor: Eu Joon Byun
-
Patent number: 11467964
Abstract: A system includes a first counter configured to increment or decrement in response to a triggering event. The first counter is sized to overflow. The system also includes a second counter configured to increment or decrement in response to a triggering event. The first counter and the second counter are merged to form a third counter in response to detecting an overflow triggering event for the first counter. A merge bit indicative of whether the first counter and the second counter are merged changes value in response to merging the first counter and the second counter.
Type: Grant
Filed: August 31, 2020
Date of Patent: October 11, 2022
Assignee: Marvell Asia Pte Ltd
Inventors: Nagesh Bangalore Lakshminarayana, Pranith Kumar Denthumdas, Rabin Sugumar
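One way to picture the merge-on-overflow behavior; the counter width and the high/low-order merge policy below are assumptions made purely for illustration:

```python
# Illustrative sketch: a small counter that, on overflow, is merged with a second
# counter into one wider counter; a merge bit records that the pair is merged.
SMALL_BITS = 4                       # assumed width of each small counter

class MergeableCounters:
    def __init__(self):
        self.c1 = 0
        self.c2 = 0
        self.merged = False          # the "merge bit"

    def increment_first(self):
        if self.merged:
            self.c1 += 1             # merged: c1 now acts as the wide counter
            return
        if self.c1 + 1 >= (1 << SMALL_BITS):           # overflow triggering event
            # Assumed merge policy: c1 becomes the high-order part, c2 the low-order part.
            self.c1 = ((self.c1 + 1) << SMALL_BITS) | self.c2
            self.merged = True
        else:
            self.c1 += 1

c = MergeableCounters()
for _ in range(20):
    c.increment_first()
print(c.merged, c.c1)
```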
-
Patent number: 11394690
Abstract: A hypervisor receives an outbound network packet from a first virtual machine for secure communication with a second virtual machine, wherein the network packet includes source and destination logical network addresses, a first payload, and a network packet integrity value determined using a cryptographic session key for a current secure session between the first and second virtual machines. The hypervisor transforms the outbound network packet by replacing the logical network addresses with current physical network addresses and subsequently recalculating the network packet integrity value. The transformed outbound network packet is then transmitted onto a network for delivery to the second virtual machine.
Type: Grant
Filed: September 30, 2020
Date of Patent: July 19, 2022
Inventors: Bogdan Cosmin Chifor, Andrei Ion Bunghez
-
Patent number: 8930630
Abstract: The present disclosure relates to a cache memory controller for controlling a set-associative cache memory, in which two or more blocks are arranged in the same set, the cache memory controller including a content modification status monitoring unit for monitoring whether some of the blocks arranged in the same set of the cache memory have been modified in contents, and a cache block replacing unit for replacing a block, which has not been modified in contents, if some of the blocks arranged in the same set have been modified in contents.
Type: Grant
Filed: September 2, 2009
Date of Patent: January 6, 2015
Assignee: Sejong University Industry Academy Cooperation Foundation
Inventor: Gi Ho Park
-
Patent number: 8856278
Abstract: A system and method for storage and retrieval of pervasive and mobile content is provided. The system may be comprised of a controller and a plurality of storage devices. The plurality of storage devices may include a first storage device located in a first geographic location and a second storage device located in a second geographic location. The controller may be operably connected to each storage device. The controller may also be capable of locating a first storage device containing data and transferring the data between the first storage device and a second storage device. The second storage device may be capable of transferring data to a host, which may be operably connected to the second storage device.
Type: Grant
Filed: November 16, 2005
Date of Patent: October 7, 2014
Assignee: Netapp, Inc.
Inventor: Manu Rohani
-
Patent number: 8769195
Abstract: A save control section included in a storage apparatus continuously performs writeback, by which a data group is read out from a plurality of storage sections of the storage apparatus and by which the data group is saved in a data group storage section of the storage apparatus, or staging, by which a data group saved in the data group storage section is distributed and stored in the plurality of storage sections according to storage areas of the data group storage section which store a plurality of data groups. An output section of the storage apparatus outputs in block a data group including the data stored in each of the plurality of storage sections. The data group storage section has the storage areas for storing a data group.
Type: Grant
Filed: January 19, 2011
Date of Patent: July 1, 2014
Assignee: Fujitsu Limited
Inventors: Hidenori Yamada, Takashi Kawada, Yoshinari Shinozaki, Shinichi Nishizono, Koji Uchida
-
Patent number: 8533425
Abstract: A shared resource management system and method are described. In one embodiment, a shared resource management system facilitates age based miss replay. In one exemplary implementation, a shared resource management system includes a plurality of engines, a shared resource, and a shared resource management unit. The plurality of engines perform processing. The shared resource supports the processing. The shared resource management unit handles multiple outstanding miss requests.
Type: Grant
Filed: November 1, 2006
Date of Patent: September 10, 2013
Assignee: Nvidia Corporation
Inventor: Lingfeng Yuan
-
Patent number: 8499117
Abstract: A method for writing and reading data in memory cells comprises the steps of: defining a virtual memory, defining write commands and read commands of data (DT) in the virtual memory, providing a first nonvolatile physical memory zone (A1), providing a second nonvolatile physical memory zone (A2), and, in response to a write command of an initial data, searching for a first erased location in the first memory zone, writing the initial data (DT1a) in the first location (PB1(DPP0)), and writing, in the metadata (DSC0), an information (DS(PB1)) allowing the first location to be found and an information (LPA, DS(PB1)) forming a link between the first location and the location of the data in the virtual memory.
Type: Grant
Filed: September 21, 2010
Date of Patent: July 30, 2013
Assignee: STMicroelectronics (Rousset) SAS
Inventor: Hubert Rousseau
-
Patent number: 8495333
Abstract: A system including a communication interface, a memory, and a processor. The communication interface is configured to receive data. The memory is divided into a first retention region and a second retention region, wherein the first retention region is configured to store data for a first predetermined period of time, and the second retention region is configured to store data for a second predetermined period of time. The processor is configured to i) initially store, within the first retention region of the memory, the data that is received, and ii) in response to the data that is received having been stored in the first retention region of the memory for a time limit that exceeds the first predetermined period of time, transfer the data that is received from the first retention region of the memory to the second retention region of the memory.
Type: Grant
Filed: July 13, 2012
Date of Patent: July 23, 2013
Assignee: Marvell International Ltd.
Inventors: Mark Montierth, Randall Briggs, Douglas Keithley, David Bartle
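A simple sketch of the two-region retention policy described above, with an assumed one-hour limit for the first region; the timing source and migration trigger are assumptions:

```python
# Illustrative two-region store: data lands in the short-retention region first
# and is transferred to the long-retention region once it has resided there
# longer than the first region's time limit.
import time

FIRST_RETENTION_LIMIT = 60 * 60           # assumed: 1 hour in the first region

class TwoRegionStore:
    def __init__(self):
        self.first_region = {}             # key -> (data, time stored)
        self.second_region = {}            # key -> data

    def store(self, key, data):
        self.first_region[key] = (data, time.time())

    def migrate_expired(self):
        now = time.time()
        for key in list(self.first_region):
            data, stored_at = self.first_region[key]
            if now - stored_at > FIRST_RETENTION_LIMIT:   # exceeded the time limit
                self.second_region[key] = data            # transfer to the second region
                del self.first_region[key]
```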
-
Publication number: 20120303904
Abstract: Provided are a computer program product, system, and method for managing unmodified tracks maintained in both a first cache and a second cache. The first cache has unmodified tracks in the storage subject to Input/Output (I/O) requests. Unmodified tracks are demoted from the first cache to a second cache. An inclusive list indicates unmodified tracks maintained in both the first cache and a second cache. An exclusive list indicates unmodified tracks maintained in the second cache but not the first cache. The inclusive list and the exclusive list are used to determine whether to promote to the second cache an unmodified track demoted from the first cache.
Type: Application
Filed: May 21, 2012
Publication date: November 29, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Kevin J. Ash, Michael T. Benhase, Lokesh M. Gupta, Matthew J. Kalos, Kenneth W. Todd
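The role of the two lists in the promotion decision can be sketched as follows; the exact bookkeeping shown is an assumption for illustration, not the claimed method:

```python
# Illustrative demotion handling: a track demoted from the first cache is
# promoted to the second cache only if the inclusive/exclusive lists show it is
# not already held there.
def demote_from_first_cache(track, inclusive_list, exclusive_list, second_cache):
    if track in inclusive_list:
        # Already in both caches: it remains valid in the second cache; just move
        # the bookkeeping from the inclusive list to the exclusive list.
        inclusive_list.discard(track)
        exclusive_list.add(track)
        return
    if track in exclusive_list:
        return                        # already in the second cache only; nothing to do
    # Not present in the second cache: promote the demoted unmodified track.
    second_cache.add(track)
    exclusive_list.add(track)
```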
-
Patent number: 8312241
Abstract: Within a serial buffer, request packets are written to available memory blocks of a memory buffer, which are identified by a free buffer pointer list. When a request packet is written to a memory block, the memory block is removed from the free buffer pointer list, and added to a used buffer pointer list. Memory blocks in the used buffer pointer list are read, thereby transmitting the associated request packets from the serial buffer. When a request packet is read from a memory block, the memory block is removed from the used buffer pointer list and added to a request buffer pointer list. If a corresponding response packet is received within a timeout period, the memory block is transferred from the request buffer pointer list to the free buffer pointer list. Otherwise, the memory block is transferred from the request buffer pointer list to the used buffer pointer list.
Type: Grant
Filed: March 6, 2008
Date of Patent: November 13, 2012
Assignee: Integrated Device Technology, Inc.
Inventors: Chi-Lie Wang, Jason Z. Mo
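The life cycle of a memory block through the three pointer lists can be sketched like this; the block count and helper names are assumptions, and the list transitions follow the abstract:

```python
# Illustrative pointer-list state machine: a block moves free -> used when a
# request packet is written, used -> request when the packet is transmitted, and
# request -> free (response in time) or request -> used (timeout, retransmit).
from collections import deque

free_list, used_list, request_list = deque(range(8)), deque(), deque()

def write_request_packet():
    block = free_list.popleft()        # take an available memory block
    used_list.append(block)            # it now holds a packet awaiting transmission
    return block

def transmit_next_packet():
    block = used_list.popleft()        # packet is read out and transmitted
    request_list.append(block)         # wait for the corresponding response
    return block

def handle_response(block, received_within_timeout):
    request_list.remove(block)
    if received_within_timeout:
        free_list.append(block)        # response arrived: block can be reused
    else:
        used_list.append(block)        # timeout: requeue the packet for retransmission

b = write_request_packet()
transmit_next_packet()
handle_response(b, received_within_timeout=False)
print(list(free_list), list(used_list), list(request_list))
```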
-
Patent number: 8230197
Abstract: Imaging devices incorporating semi-volatile memory are described herein. According to various embodiments, a communication interface may receive image data that is stored in a NAND flash memory device divided into three regions. Other embodiments may be described and claimed.
Type: Grant
Filed: February 15, 2012
Date of Patent: July 24, 2012
Assignee: Marvell International Ltd.
Inventors: Mark D. Montierth, Randall D. Briggs, Douglas G. Keithley, David A. Bartle
-
Patent number: 8122220
Abstract: Imaging devices incorporating semi-volatile memory are described herein. According to various embodiments, a communication interface may receive image data that is stored in a semi-volatile NAND flash memory device divided into three regions. Other embodiments may be described and claimed.
Type: Grant
Filed: December 14, 2007
Date of Patent: February 21, 2012
Assignee: Marvell International Ltd.
Inventors: Mark D. Montierth, Randall D. Briggs, Douglas G. Keithley, David A. Bartle
-
Patent number: 8117396
Abstract: Methods and apparatuses provide a multi-level buffer cache having queues corresponding to different priority levels of queuing within the buffer cache. One or more data blocks are buffered in the buffer cache. In one embodiment, an initial level of queue is identified for a data block to be buffered in the buffer cache. The initial level of queue can be modified higher or lower depending on a value of a cache property associated with the data block. In one embodiment, the data block is monitored for data access in a queue, and the data block is aged and moved to higher level(s) of queuing based on rules for the data block. The rules can apply to the queue in which the data block is buffered, to a data type of the data block, or to a logical partition to which the data block belongs.
Type: Grant
Filed: October 10, 2006
Date of Patent: February 14, 2012
Assignee: Network Appliance, Inc.
Inventors: Robert L. Fair, Matti A. Vanninen
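A compact sketch of multi-level queuing with upward aging on re-access; the number of levels, the promote-by-one rule, and eviction from the lowest non-empty queue are assumptions used to illustrate the idea:

```python
# Illustrative multi-level buffer cache: a block enters at an initial queue level
# (adjusted by a cache property), is promoted to a higher queue when re-accessed,
# and eviction takes the least recently used entry of the lowest non-empty queue.
from collections import OrderedDict

NUM_LEVELS = 3

class MultiLevelBufferCache:
    def __init__(self):
        self.queues = [OrderedDict() for _ in range(NUM_LEVELS)]  # LRU order per level

    def insert(self, block_id, data, initial_level=0, priority_boost=0):
        level = min(NUM_LEVELS - 1, max(0, initial_level + priority_boost))
        self.queues[level][block_id] = data

    def access(self, block_id):
        for level, queue in enumerate(self.queues):
            if block_id in queue:
                data = queue.pop(block_id)
                new_level = min(level + 1, NUM_LEVELS - 1)   # age upward on re-access
                self.queues[new_level][block_id] = data
                return data
        return None                                          # miss

    def evict(self):
        for queue in self.queues:                            # lowest level first
            if queue:
                return queue.popitem(last=False)             # least recently used entry
        return None
```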
-
Patent number: 7996621
Abstract: According to embodiments of the invention, a step value and a step-interval cache coherency protocol may be used to update and invalidate data stored within cache memory. A step value may be an integer value and may be stored within a cache directory entry associated with data in the memory cache. Upon reception of a cache read request, along with the normal address comparison to determine if the data is located within the cache, a current step value may be compared with the stored step value to determine if the data is current. If the step values match, the data may be current and a cache hit may occur. However, if the step values do not match, the requested data may be provided from another source. Furthermore, an application may update the current step value to invalidate old data stored within the cache and associated with a different step value.
Type: Grant
Filed: July 12, 2007
Date of Patent: August 9, 2011
Assignee: International Business Machines Corporation
Inventors: Jeffrey Douglas Brown, Russell Dean Hoover, Eric Oliver Mejdrich, Kenneth Michael Valk
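The step-value check can be illustrated as follows; this is a sketch under assumed data structures, not IBM's implementation:

```python
# Illustrative step-interval coherency: a read hits only when the address matches
# AND the step value stored with the entry equals the current step value; bumping
# the current step value invalidates all older entries at once.
class StepCache:
    def __init__(self):
        self.entries = {}            # address -> (data, step value at fill time)
        self.current_step = 0

    def read(self, address):
        entry = self.entries.get(address)
        if entry is not None:
            data, step = entry
            if step == self.current_step:   # step values match: data is current
                return data
        return None                         # miss or stale: fetch from another source

    def fill(self, address, data):
        self.entries[address] = (data, self.current_step)

    def advance_step(self):
        self.current_step += 1              # invalidates everything filled earlier
```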
-
Patent number: 7970997
Abstract: A program section layout method capable of improving space efficiency of a cache memory. A grouping unit groups program sections into section groups so that the total size of the program sections composing each section group does not exceed cache memory size. A layout optimization unit optimizes the layout of section group storage regions by combining each section group and a program section that does not belong to any section groups or by combining section groups while keeping the ordering relations of the program sections composing each section group.
Type: Grant
Filed: December 23, 2004
Date of Patent: June 28, 2011
Assignee: Fujitsu Semiconductor Limited
Inventor: Manabu Watanabe
-
Publication number: 20110066785
Abstract: A memory management system and method include and use a cache buffer (such as a table look-aside buffer, TLB), a memory mapping table, a scratchpad cache, and a memory controller. The cache buffer is configured to store a plurality of data structures. The memory mapping table is configured to store a plurality of addresses of the data structures. The scratchpad cache is configured to store the base address of the data structures. The memory controller is configured to control reading and writing in the cache buffer and the scratchpad cache. The components are operable together under control of the memory controller to facilitate effective searching of the data structures in the memory management system.
Type: Application
Filed: January 27, 2010
Publication date: March 17, 2011
Applicant: VIA TECHNOLOGIES, INC.
Inventors: JIAN LI, JIIN LAI, SHAN-NA PANG, ZHI-QIANG HUI, DI DAI
-
Patent number: 7840751
Abstract: Apparatus and method for command queue management of back watered requests. A selected request is released from a command queue, and further release of requests from the queue is interrupted when a total number of subsequently completed requests reaches a predetermined threshold.
Type: Grant
Filed: June 29, 2007
Date of Patent: November 23, 2010
Assignee: Seagate Technology LLC
Inventors: Clark Edward Lubbers, Robert Michael Lester
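A rough sketch of interrupting further releases once the count of subsequently completed requests reaches a threshold; the threshold value and class names are assumptions:

```python
# Illustrative sketch: after a selected request is released, the controller counts
# completions; when the count of subsequently completed requests reaches the
# threshold, further releases from the command queue are interrupted.
from collections import deque

THRESHOLD = 4     # assumed predetermined threshold

class ThrottledCommandQueue:
    def __init__(self):
        self.queue = deque()
        self.releases_interrupted = False
        self.completed_since_selected = 0

    def release_selected(self):
        # Release the selected request and start counting completions after it.
        request = self.queue.popleft()
        self.completed_since_selected = 0
        self.releases_interrupted = False
        return request

    def release_next(self):
        if self.releases_interrupted or not self.queue:
            return None                    # releases are currently held back
        return self.queue.popleft()

    def on_request_completed(self):
        self.completed_since_selected += 1
        if self.completed_since_selected >= THRESHOLD:
            self.releases_interrupted = True   # interrupt further releases
```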
-
Publication number: 20100191925
Abstract: A method, system, and computer program product for managing modified metadata in a storage controller cache pursuant to a recovery action by a processor in communication with a memory device is provided. A count of modified metadata tracks for a storage rank is compared against a predetermined criterion. If the predetermined criterion is met, a storage volume having the storage rank is designated with a metadata invalidation flag to defer metadata invalidation of the modified metadata tracks until after the recovery action is performed.
Type: Application
Filed: January 28, 2009
Publication date: July 29, 2010
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Lawrence Carter BLOUNT, Lokesh Mohan GUPTA, Carol Santich MELLGREN, Kenneth Wayne TODD
-
Publication number: 20100169588
Abstract: A method and system writes data to a memory device including writing data to varying types of physical write blocks. The method includes receiving a request to write data for a logical block address within an LBA range to the memory device. Depending on whether the quantity of valid data in the memory device meets a predetermined criterion, the data is written to a specific chaotic block, a general chaotic block, or a mapped block. The mapped block is assigned for writing data for the LBA range, the specific chaotic block is assigned for writing data for contiguous LBA ranges including the LBA range, and the general chaotic block is assigned for writing data for any LBA range. Lower fragmentation and write amplification ratios may result from using this method and system.
Type: Application
Filed: December 30, 2008
Publication date: July 1, 2010
Inventor: Alan W. Sinclair
-
Patent number: 7689776
Abstract: Systems and methods for the implementation of more efficient cache locking mechanisms are disclosed. These systems and methods may alleviate the need to present both a virtual address (VA) and a physical address (PA) to a cache mechanism. A translation table is utilized to store both the address and the locking information associated with a virtual address, and this locking information is passed to the cache along with the address of the data. The cache can then lock data based on this information. Additionally, this locking information may be used to override the replacement mechanism used with the cache, thus keeping locked data in the cache. The translation table may also store translation table lock information such that entries in the translation table are locked as well.
Type: Grant
Filed: June 6, 2005
Date of Patent: March 30, 2010
Assignees: Kabushiki Kaisha Toshiba, International Business Machines Corporation
Inventors: Takeki Osanai, Kimberly Fernsler
-
Patent number: 7664916
Abstract: Methods and apparatuses are provided for use with smartcards or other like shared computing resources. A global smartcard cache is maintained on one or more computers to reduce the burden on the smartcard. The global smartcard cache data is associated with a freshness indicator that is compared to the current freshness indicator from the smartcard to verify that the cached item data is current.
Type: Grant
Filed: January 6, 2004
Date of Patent: February 16, 2010
Assignee: Microsoft Corporation
Inventors: Daniel C. Griffin, Eric C. Perlin, Klaus U. Schutz
-
Publication number: 20090228667
Abstract: A method to perform a least recently used (LRU) algorithm for a co-processor is described, which co-processor, in order to directly use instructions of a core processor and to directly access a main storage by virtual addresses of said core processor, comprises a TLB for virtual-to-absolute address translations plus a dedicated memory storage also including said TLB, wherein said TLB consists of at least two zones which can be assigned in a flexible manner, more than one at a time. Said method to perform an LRU algorithm is characterized in that one or more zones are replaced dependent on an actual compression service call (CMPSC) instruction.
Type: Application
Filed: March 6, 2009
Publication date: September 10, 2009
Applicant: International Business Machines Corporation
Inventors: Thomas Koehler, Siegmund Schlechter
-
Publication number: 20090193205
Abstract: A method of regeneration of a recording state of digital data stored in a node of a data network, the method including the steps of classifying files stored in the node, periodically writing a digital file from the node to a temporary memory, the temporary memory being a component of said node, and writing the digital file from the temporary memory to the same node.
Type: Application
Filed: July 2, 2008
Publication date: July 30, 2009
Applicant: ATM S.A.
Inventor: Jerzy Piotr Walczak
-
Publication number: 20090055595
Abstract: Provided are a method, system, and article of manufacture for adjusting parameters used to prefetch data from storage into cache. Data units are added from a storage to a cache, wherein requested data from the storage is returned from the cache. A degree of prefetch is processed indicating a number of data units to prefetch into the cache. A trigger distance is processed indicating a prefetched trigger data unit in the cache. The number of data units indicated by the degree of prefetch is prefetched in response to processing the trigger data unit. The degree of prefetch and the trigger distance are adjusted based on a rate at which data units are accessed from the cache.
Type: Application
Filed: August 22, 2007
Publication date: February 26, 2009
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Binny Sher Gill, Luis Angel Daniel Bathen, Steven Robert Lowe, Thomas Charles Jarvis
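An illustrative sketch of trigger-distance prefetching with a simple adjustment rule; the adjustment from the observed access rate shown here is an assumption, not the claimed algorithm:

```python
# Illustrative prefetch controller: reaching the trigger data unit starts the next
# prefetch of `degree` units; degree and trigger distance are then adjusted from
# the observed access rate so prefetches finish before the data is needed.
class PrefetchController:
    def __init__(self, degree=4, trigger_distance=2):
        self.degree = degree                       # number of data units to prefetch
        self.trigger_distance = trigger_distance   # how early the trigger unit sits

    def on_access(self, unit, trigger_unit, prefetch):
        if unit == trigger_unit:
            # Reaching the trigger unit starts the next prefetch of `degree` units.
            prefetch(start=unit + 1, count=self.degree)

    def adjust(self, accesses_per_second, prefetch_latency_seconds):
        # Assumed rule: prefetch far enough ahead to cover the prefetch latency
        # at the observed rate of accesses from the cache.
        needed = int(accesses_per_second * prefetch_latency_seconds) + 1
        self.degree = max(self.degree, needed)
        self.trigger_distance = min(self.degree, needed)
```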
-
Patent number: 7409461
Abstract: Methods and systems consistent with the present invention provide broadband subscribers with dynamic, automatic routing information for accessing services offered by a service provider network (35) and/or content provider networks (40, 45). Client software on the subscriber's computer (10) retrieves a file such as a HyperText Markup Language (HTML) document from a predetermined server containing connection-oriented routing information for gaining access to various network services. The client software parses the HTML document and extracts the routing information. The client software then uses this routing information to populate and manipulate the subscriber computer (10) routing table.
Type: Grant
Filed: November 6, 2002
Date of Patent: August 5, 2008
Assignee: Efficient Networks, Inc.
Inventors: Akkamapet P. Sundarraj, James R. Pickering, Douglas Moe, Melvin Paul Perinchery