Patents by Inventor Sumit Roy

Sumit Roy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20030229761
    Abstract: A computer system is provided including a processor, a persistent storage device, and a main memory connected to the processor and the persistent storage device. The main memory includes a compressed cache for storing data retrieved from the persistent storage device after compression and an operating system. The operating system includes a plurality of interconnected software modules for accessing the persistent storage device and a filter driver interconnected between two of the plurality of software modules for managing memory capacity of the compressed cache and the buffer cache.
    Type: Application
    Filed: June 10, 2002
    Publication date: December 11, 2003
    Inventors: Sujoy Basu, Sumit Roy, Rajendra Kumar
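The abstract for publication 20030229761 above describes keeping compressed copies of disk blocks in a main-memory cache that a filter driver consults before going to persistent storage. A minimal sketch of that read path, assuming hypothetical names (CompressedCache, read_block) and zlib as a stand-in compressor:

```python
import zlib

class CompressedCache:
    """Toy in-memory cache that stores disk blocks compressed (illustrative only)."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.blocks = {}  # block_id -> compressed bytes

    def put(self, block_id, data):
        payload = zlib.compress(data)
        # Evict entries until the new payload fits; a real filter driver would
        # balance this capacity against the uncompressed buffer cache instead.
        while self.used + len(payload) > self.capacity and self.blocks:
            _, evicted = self.blocks.popitem()
            self.used -= len(evicted)
        self.blocks[block_id] = payload
        self.used += len(payload)

    def get(self, block_id):
        payload = self.blocks.get(block_id)
        return zlib.decompress(payload) if payload is not None else None


def read_block(cache, disk, block_id):
    """Filter-driver-style read path: try the compressed cache before the disk."""
    data = cache.get(block_id)
    if data is None:
        data = disk[block_id]  # stand-in for a persistent-storage read
        cache.put(block_id, data)
    return data


disk = {7: b"hello world" * 100}           # stand-in for the persistent storage device
cache = CompressedCache(capacity_bytes=1024)
print(read_block(cache, disk, 7)[:5])      # b'hello'; a second call would hit the cache
```

A real driver would also rebalance capacity between the compressed cache and the uncompressed buffer cache, which the eviction loop above only gestures at.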
  • Patent number: 6654867
    Abstract: A method and system for parallel fetch and decompression of compressed data blocks is disclosed. A method first accesses a table of pointers specifying the location of compressed data to obtain a pointer. Using the pointer, the method reads a pointer in the first block of data, the pointer specifying the location of the next block of compressed data in a chain of compressed data blocks. The method also transfers the rest of the first compressed data block to be decompressed. The method then fetches the next compressed data block using the second pointer while decompressing the first compressed data block. Using a pointer in each successive compressed data block in the chain, the method pre-fetches the next compressed data block while the previous compressed data block is being decompressed.
    Type: Grant
    Filed: May 22, 2001
    Date of Patent: November 25, 2003
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Kenneth Mark Wilson, Robert Bruce Aglietti, Sumit Roy
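The abstract for patent 6654867 above describes overlapping the fetch of the next compressed block in a chain with the decompression of the current one. A minimal sketch of that pipeline, assuming a hypothetical fetch(ptr) callback that returns (next_ptr, compressed_bytes) and using a single worker thread in place of dedicated hardware:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def decompress_chain(fetch, first_ptr):
    """Decompress a chain of compressed blocks, prefetching each block's
    successor while the current block is being decompressed.

    fetch(ptr) is assumed to return (next_ptr, compressed_bytes); next_ptr is
    None at the end of the chain. Names and block layout are illustrative only.
    """
    out = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        next_ptr, payload = fetch(first_ptr)
        while True:
            # Start fetching the successor before decompressing this block.
            pending = pool.submit(fetch, next_ptr) if next_ptr is not None else None
            out.append(zlib.decompress(payload))
            if pending is None:
                break
            next_ptr, payload = pending.result()
    return b"".join(out)


# Tiny demo: a 3-block chain in a dict keyed by "pointer".
blocks = {0: (1, zlib.compress(b"aaa")),
          1: (2, zlib.compress(b"bbb")),
          2: (None, zlib.compress(b"ccc"))}
print(decompress_chain(lambda p: blocks[p], 0))   # b'aaabbbccc'
```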
  • Publication number: 20030188121
    Abstract: In a method for optimizing performance in a memory system, a data structure configured to provide at least one free block of memory is received in the memory system. At least one bucket of memory is released in a swap device of the memory system corresponding to at least one free block of memory provided by the data structure.
    Type: Application
    Filed: March 27, 2002
    Publication date: October 2, 2003
    Inventors: Sumit Roy, Kenneth Mark Wilson
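Publication 20030188121 above describes walking a data structure of freed memory blocks and releasing the corresponding buckets on the swap device. A minimal sketch, with hypothetical names (release_swap_buckets, block_to_bucket) standing in for the kernel structures:

```python
def release_swap_buckets(free_blocks, block_to_bucket, swap_allocated):
    """Given a collection of newly freed memory blocks, release the swap-device
    buckets that back them. All names are hypothetical; a real implementation
    would live in the kernel's swap layer."""
    released = []
    for block in free_blocks:
        bucket = block_to_bucket.pop(block, None)
        if bucket is not None and bucket in swap_allocated:
            swap_allocated.discard(bucket)   # bucket may be reused for new swap-outs
            released.append(bucket)
    return released


# Example: blocks 3 and 7 were freed; their swap buckets 12 and 40 are released.
mapping = {3: 12, 7: 40, 9: 41}
allocated = {12, 40, 41}
print(release_swap_buckets([3, 7], mapping, allocated))   # [12, 40]
```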
  • Publication number: 20030161401
    Abstract: A method and system for reducing the resolution of media data. Input data at a first resolution are received from a source. The input data are compressed. The input data can be downsampled to generate compressed downsampled data at a reduced resolution. The compressed downsampled data can be used to generate a frame at the reduced resolution. When the frame is needed as a reference for another frame, the compressed downsampled data can be decoded to generate decompressed downsampled data at the reduced resolution. The decompressed downsampled data can be upsampled to generate decompressed data at a resolution corresponding to the first resolution. Thus, a larger amount of data can be processed while the data are compressed. As such, data processing operations such as transcoding can be accomplished quickly and effectively while saving computing resources.
    Type: Application
    Filed: February 27, 2002
    Publication date: August 28, 2003
    Inventors: Bo Shen, Sumit Roy
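Publication 20030161401 above keeps reference frames at reduced resolution and upsamples them only when a full-resolution reference is needed. The sketch below shows only that control flow; it uses plain pixel arrays and simple averaging in place of the compressed-domain downsampling the patent actually targets, and all names are made up:

```python
import numpy as np

def downsample(frame):   # 2x2 averaging as a stand-in for compressed-domain downsampling
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(frame):     # nearest-neighbour stand-in for interpolation back to full size
    return np.repeat(np.repeat(frame, 2, axis=0), 2, axis=1)

class ReducedResTranscoder:
    """Illustrative flow only: frames are stored at reduced resolution and
    upsampled only when a full-resolution reference is required."""

    def __init__(self):
        self.reduced_frames = {}

    def ingest(self, idx, frame):
        self.reduced_frames[idx] = downsample(frame)

    def reference(self, idx):
        # Reconstruct a full-resolution reference from the reduced-resolution store.
        return upsample(self.reduced_frames[idx])


t = ReducedResTranscoder()
t.ingest(0, np.arange(16.0).reshape(4, 4))
print(t.reference(0).shape)   # (4, 4): stored small, served at full size when referenced
```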
  • Publication number: 20030137947
    Abstract: A method for hand-off of a data session in a server. Data is received from a data source. At least a portion of the data is transmitted to an electronic device located in a first location. Notification is received that the electronic device is moving toward a second location. A first message is transmitted to a second server notifying the second server that the electronic device is moving toward the second location, wherein the second server is located proximate to the second location. A second message is received from the second server that the second server is prepared to communicate with the electronic device. The server then stops transmission of the data.
    Type: Application
    Filed: January 23, 2002
    Publication date: July 24, 2003
    Inventors: Sumit Roy, Bo Shen, Vijay Sundaram
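Publication 20030137947 above is essentially a four-step message exchange between the serving server and the server near the client's destination. A toy trace of that sequence, with hypothetical Server methods standing in for network messages:

```python
class Server:
    """Toy streaming server used only to trace the hand-off messages."""

    def __init__(self, name):
        self.name = name

    def on_client_moving(self, client):
        print(f"{self.name}: {client} is moving toward the second location")

    def prepare_for(self, client):
        print(f"{self.name}: preparing to serve {client}")

    def on_peer_ready(self, client):
        print(f"{self.name}: peer is ready to serve {client}")

    def stop_streaming(self, client):
        print(f"{self.name}: stopping transmission to {client}")

    def start_streaming(self, client):
        print(f"{self.name}: streaming to {client}")


def handoff_session(first_server, second_server, client):
    """The message sequence described in the abstract (method names hypothetical)."""
    first_server.on_client_moving(client)    # notification that the client is moving
    second_server.prepare_for(client)        # first message, to the second server
    first_server.on_peer_ready(client)       # second message, from the second server
    first_server.stop_streaming(client)      # first server stops transmission
    second_server.start_streaming(client)    # second server takes over


handoff_session(Server("server-A"), Server("server-B"), "device-1")
```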
  • Patent number: 6516397
    Abstract: A method of operating a data processing system having a main memory divided into memory pages that are swapped into and out of main memory when main memory runs short. The data processing system has an operating system that sends page store commands specifying memory pages to be stored in a swap file and page retrieve commands specifying memory pages to be retrieved from the swap file and stored in the main memory. The present invention provides a swap driver that utilizes compression code for converting one of the memory pages that is to be swapped out of main memory to a compressed memory page. The data processing system's memory includes a compressed page region that is used to store the compressed memory pages. A page table in the compressed page region specifies the location of each compressed page and the page address corresponding to that page.
    Type: Grant
    Filed: April 9, 2001
    Date of Patent: February 4, 2003
    Assignee: Hewlett-Packard Company
    Inventors: Sumit Roy, Rajendra Kumar, Milos Prvulovic, Kenneth Mark Wilson
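Patent 6516397 above describes a swap driver that compresses pages on the way out and tracks them in a page table inside a compressed page region. A minimal sketch, assuming hypothetical names (CompressedSwapDriver, store_page, retrieve_page) and zlib in place of the driver's compression code:

```python
import zlib

class CompressedSwapDriver:
    """Sketch of a swap driver that compresses swapped-out pages and tracks them
    with a page table (page address -> compressed payload). A real driver manages
    a physical compressed page region and falls back to disk; names are illustrative."""

    def __init__(self):
        self.page_table = {}   # page address -> compressed page

    def store_page(self, addr, page):
        # Page-store command: compress the page before "swapping it out" to the region.
        self.page_table[addr] = zlib.compress(page)

    def retrieve_page(self, addr):
        # Page-retrieve command: decompress the page back into main memory.
        return zlib.decompress(self.page_table.pop(addr))


drv = CompressedSwapDriver()
drv.store_page(0x1000, b"\x00" * 4096)        # a highly compressible 4 KiB page
print(len(drv.page_table[0x1000]))            # far smaller than 4096 once compressed
print(len(drv.retrieve_page(0x1000)))         # 4096 after decompression
```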
  • Publication number: 20030009589
    Abstract: A method for assigning servers to provide multiple description bitstreams to a mobile client (in a mobile client environment) or to a fixed client (in a fixed client environment). In one embodiment, the present invention, upon receiving a request from a mobile client to have media data streamed thereto, analyzes a plurality of servers to determine a first candidate server for providing a first multiple description bitstream to the base station along a first path. The present method also determines a second candidate server for providing a second multiple description bitstream to the base station along a second path. The present method then sends a request to the first candidate server to provide the first multiple description bitstream to a mobile client through a base station along the first path, and also sends a request to the second candidate server to provide the second multiple description bitstream to the mobile client through the same base station along a second path.
    Type: Application
    Filed: July 3, 2001
    Publication date: January 9, 2003
    Inventors: John G. Apostolopoulos, Sujoy Basu, Gene Cheung, Rajendra Kumar, Sumit Roy, Wai-Tan Tan, Susie J. Wee, Tina Wong, Bo Shen
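Publication 20030009589 above selects two candidate servers so that the two description bitstreams reach the client over different paths. A minimal sketch of one plausible selection rule, assuming hypothetical path_cost and shared_links inputs; the patent's actual selection criteria may differ:

```python
def assign_md_servers(servers, path_cost, shared_links):
    """Pick one server per description, preferring low path cost and penalising
    a second path that shares links with the first, so the two descriptions
    travel over different routes. Inputs and weights are assumptions."""
    first = min(servers, key=lambda s: path_cost[s])
    rest = [s for s in servers if s != first]
    second = min(rest, key=lambda s: path_cost[s] + 10 * shared_links.get((first, s), 0))
    return first, second


servers = ["s1", "s2", "s3"]
cost = {"s1": 1, "s2": 2, "s3": 3}
overlap = {("s1", "s2"): 4, ("s1", "s3"): 0}   # number of links shared between paths
print(assign_md_servers(servers, cost, overlap))   # ('s1', 's3'): the disjoint pair wins
```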
  • Publication number: 20030007515
    Abstract: A fixed client and a mobile client for receiving multiple description media streams. In one embodiment, the client comprises a multiple description receiving portion which is adapted to receive a plurality of multiple description bitstreams. The client includes memory coupled to the multiple description receiving portion for storing the plurality of multiple description bitstreams in respective portions thereof. The client of the present embodiment also includes a synchronization module coupled to the memory for blending the plurality of multiple description bitstreams. In one embodiment, a decoder is coupled to the synchronization module for decoding the plurality of multiple description bitstreams. A source control module for determining appropriate operation characteristics of the client is also coupled to the synchronization module. Also, a user interface device is coupled to the decoder to present to a user media previously encoded into the plurality of multiple description bitstreams.
    Type: Application
    Filed: July 3, 2001
    Publication date: January 9, 2003
    Inventors: John G. Apostolopoulos, Sujoy Basu, Gene Cheung, Rajendra Kumar, Sumit Roy, Wai-Tan Tan, Susie J. Wee, Tina Wong, Bo Shen
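Publication 20030007515 above has the client's synchronization module blend the received description bitstreams before decoding. A toy sketch of the blending step, assuming each description is a dict of frame index to frame data (names and representation are made up):

```python
def blend_descriptions(desc1, desc2):
    """Merge two multiple-description streams into one playable sequence.
    When both descriptions carry a frame, either copy can be used; here
    description 1 is preferred. Purely illustrative."""
    frames = {}
    frames.update(desc2)
    frames.update(desc1)          # prefer description 1 when both are present
    return [frames[i] for i in sorted(frames)]


# Description 1 carries even frames, description 2 carries odd frames.
d1 = {0: "f0", 2: "f2"}
d2 = {1: "f1", 3: "f3"}
print(blend_descriptions(d1, d2))    # ['f0', 'f1', 'f2', 'f3']
```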
  • Publication number: 20030009578
    Abstract: A method and system for streaming media data to a fixed client and/or a mobile client. In one method embodiment, the present invention recites encoding media data to be streamed to a client into a first multiple description bitstream and into a second multiple description bitstream. The present embodiment then recites distributing the first and second multiple description bitstreams to a plurality of servers placed at intermediate nodes throughout a network such that a client is provided with access to the media data via a plurality of transmission paths.
    Type: Application
    Filed: July 3, 2001
    Publication date: January 9, 2003
    Inventors: John G. Apostolopoulos, Sujoy Basu, Gene Cheung, Raj Kumar, Sumit Roy, Bo Shen, Wai-Tian Tan, Susie J. Wee, Tina Wong
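Publication 20030009578 above encodes media into two description bitstreams and spreads them across intermediate servers. The sketch below uses a deliberately trivial even/odd frame split as a stand-in for a real multiple-description encoder, just to show the distribution step:

```python
def encode_two_descriptions(frames):
    """Trivial stand-in for a multiple-description encoder: even-indexed frames
    form description 1, odd-indexed frames form description 2."""
    return frames[0::2], frames[1::2]


def distribute(descriptions, servers):
    """Place each description on a different intermediate server (round-robin),
    so a client can reach the media over more than one path."""
    placement = {}
    for i, desc in enumerate(descriptions):
        placement[servers[i % len(servers)]] = desc
    return placement


d1, d2 = encode_two_descriptions(["f0", "f1", "f2", "f3"])
print(distribute([d1, d2], ["edge-1", "edge-2"]))
# {'edge-1': ['f0', 'f2'], 'edge-2': ['f1', 'f3']}
```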
  • Publication number: 20030009535
    Abstract: A method and system for streaming media data to a fixed client and/or a mobile client. In one method embodiment, the present invention recites encoding media data to be streamed to a client into a first multiple description bitstream and into a second multiple description bitstream. The present method then determines the appropriate plurality of servers from a network of servers onto which the first and second multiple description bitstreams should be distributed. The present embodiment then recites distributing the first and second multiple description bitstreams to the appropriate plurality of servers positioned at intermediate nodes throughout a network such that a client is provided with access to the media data via a plurality of transmission paths. The present method is also well suited to redistribution of multiple description bitstreams to servers based upon time-varying demand, client movement, and the like.
    Type: Application
    Filed: July 3, 2001
    Publication date: January 9, 2003
    Inventors: John G. Apostolopoulos, Sujoy Basu, Gene Cheung, Raj Kumar, Sumit Roy, Bo Shen, Wai-Tan Tan, Susie J. Wee, Tina Wong
  • Publication number: 20030009576
    Abstract: A method for performing a soft-handoff in a mobile streaming media system and a method for performing a hard-handoff in a mobile streaming media system are disclosed. In the soft-handoff embodiment, the present invention detects that a channel quality between a mobile client and a first base station remains above a drop threshold and that a channel quality between the mobile client and a second base station increases from below to above an add threshold. The present embodiment then sends a first multiple description bitstream from the first base station to the mobile client and sends a complementary second multiple description bitstream from the second base station to the mobile client. This method thereby provides improved utilization of wireless bandwidth during soft-handoffs, in contrast to conventional systems where the same bitstream is transmitted from each base station.
    Type: Application
    Filed: July 3, 2001
    Publication date: January 9, 2003
    Inventors: John G. Apostolopoulos, Sujoy Basu, Gene Cheung, Rajendra Kumar, Sumit Roy, Wai-Tan Tan, Susie J. Wee, Tina Wong, Bo Shen
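Publication 20030009576 above triggers the soft-handoff when the first base station's channel stays above a drop threshold while the second station's channel rises through an add threshold, then streams complementary descriptions from the two stations. A minimal sketch of that decision, with made-up threshold values and labels:

```python
DROP_THRESHOLD = 0.3   # illustrative values only
ADD_THRESHOLD = 0.6

def soft_handoff_plan(q_first, q_second_prev, q_second_now):
    """Decide which base stations stream which description, following the
    soft-handoff condition in the abstract: the first station stays above the
    drop threshold while the second rises from below to above the add threshold."""
    crossing_up = q_second_prev < ADD_THRESHOLD <= q_second_now
    if q_first > DROP_THRESHOLD and crossing_up:
        # Complementary descriptions from the two stations instead of duplicates.
        return {"base-1": "description-1", "base-2": "description-2"}
    return {"base-1": "both descriptions"}   # no soft-handoff yet


print(soft_handoff_plan(0.8, 0.5, 0.7))    # complementary streams from both stations
print(soft_handoff_plan(0.8, 0.5, 0.55))   # second station not yet above the add threshold
```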
  • Publication number: 20030009577
    Abstract: A method for handing off to a second server, in either a fixed or mobile streaming media system, a multiple description streaming session between a first server and either a fixed or mobile client. In one embodiment, the present invention recites selecting a second server to receive a handoff of a multiple description streaming media session between the first server and the client. In this embodiment, the multiple description streaming media session is comprised of a first multiple description bitstream and a second multiple description bitstream. The present embodiment further recites receiving, at the second server, the second multiple description bitstream for streaming to the client. This embodiment further recites sending the second multiple description bitstream from the second server to the client.
    Type: Application
    Filed: July 3, 2001
    Publication date: January 9, 2003
    Inventors: John G. Apostolopoulos, Sujoy Basu, Gene Cheung, Rajendra Kumar, Sumit Roy, Wai-Tan Tan, Susie J. Wee, Tina Wong, Bo Shen
  • Publication number: 20020178332
    Abstract: A method and system for parallel fetch and decompression of compressed data blocks is disclosed. A method first accesses a table of pointers specifying the location of compressed data to obtain a pointer. Using the pointer, the method reads a pointer in the first block of data, the pointer specifying the location of the next block of compressed data in a chain of compressed data blocks. The method also transfers the rest of the first compressed data block to be decompressed. The method then fetches the next compressed data block using the second pointer while decompressing the first compressed data block. Using a pointer in each successive compressed data block in the chain, the method pre-fetches the next compressed data block while the previous compressed data block is being decompressed.
    Type: Application
    Filed: May 22, 2001
    Publication date: November 28, 2002
    Inventors: Kenneth Mark Wilson, Robert Bruce Aglietti, Sumit Roy
  • Publication number: 20020147893
    Abstract: A method of operating a data processing system having a main memory divided into memory pages that are swapped into and out of main memory when main memory runs short. The data processing system has an operating system that sends page store commands specifying memory pages to be stored in a swap file and page retrieve commands specifying memory pages to be retrieved from the swap file and stored in the main memory. The present invention provides a swap driver that utilizes compression code for converting one of the memory pages that is to be swapped out of main memory to a compressed memory page. The data processing system's memory includes a compressed page region that is used to store the compressed memory pages. A page table in the compressed page region specifies the location of each compressed page and the page address corresponding to that page.
    Type: Application
    Filed: April 9, 2001
    Publication date: October 10, 2002
    Inventors: Sumit Roy, Rajendra Kumar, Milos Prvulovic, Kenneth Mark Wilson
  • Publication number: 20020112121
    Abstract: Locating large caches of memory at the network server platform can reduce traffic on the network trunks or Internet backbone. In some instances these memory caches might be located at the facilities management platform. Those users supported on a specific network server platform no longer would be required to download regularly used information from the Internet backbone, minimizing congestion on the network. These memory caches can be supplemented or refreshed with new data on a regular basis based on the requirements or changing requirements of the users. The close location of regularly accessed data allows for faster downloads and minimizes congestion on the communication network. In addition to user requested information, push information can be stored in these caches for fast downloading to the users.
    Type: Application
    Filed: April 16, 2002
    Publication date: August 15, 2002
    Applicant: AT&T Corp.
    Inventors: Irwin Gerszberg, Kenny Xiaojian Huang, Christopher K. Kwabi, Sumit Roy, Gabriel Valdez
  • Patent number: 6385693
    Abstract: Locating large caches of memory at the network server platform can reduce traffic on the network trunks or Internet backbone. In some instances these memory caches might be located at the facilities management platform. Those users supported on a specific network server platform no longer would be required to download regularly used information from the Internet backbone, minimizing congestion on the network. These memory caches can be supplemented or refreshed with new data on a regular basis based on the requirements or changing requirements of the users. The close location of regularly accessed data allows for faster downloads and minimizes congestion on the communication network. In addition to user requested information, push information can be stored in these caches for fast downloading to the users.
    Type: Grant
    Filed: December 31, 1997
    Date of Patent: May 7, 2002
    Assignee: AT&T Corp.
    Inventors: Irwin Gerszberg, Kenny Xiaojian Huang, Christopher K. Kwabi, Sumit Roy, Gabriel Valdez
  • Patent number: 6269101
    Abstract: This invention provides a network server platform forming part of a new local loop network architecture designed to overcome the limitations of current art local access loop technologies. This invention allows end users to seamlessly connect to the numerous disparate networks in order to access the multiplicity of services that these networks have to offer. The network server platform allows interconnection between networks with varying networking protocols. The network server platform is a key component of the new architecture and interacts to allow for easy and seamless integration with network components at both the local access level and the core network level. The network server platform offers external networking capabilities to the local access network. As a result, the local access network terminates on the network server platform. The network server platform provides subscribers or end users the capabilities to access services from a multiplicity of disparate networks offering a variety of services.
    Type: Grant
    Filed: February 23, 2000
    Date of Patent: July 31, 2001
    Assignee: AT&T Corporation
    Inventors: Irwin Gerszberg, Kenny Xiaojian Huang, Christopher K. Kwabi, Sumit Roy
  • Patent number: 6229810
    Abstract: This invention provides a network server platform forming part of a new local loop network architecture designed to overcome the limitations of current art local access loop technologies. This invention allows end users to seamlessly connect to the numerous disparate networks in order to access the multiplicity of services that these networks have to offer. The network server platform allows interconnection between networks with varying networking protocols. The network server platform is a key component of the new architecture and interacts to allow for easy and seamless integration with network components at both the local access level and the core network level. The network server platform offers external networking capabilities to the local access network. As a result, the local access network terminates on the network server platform. The network server platform provides subscribers or end users the capabilities to access services from a multiplicity of disparate networks offering a variety of services.
    Type: Grant
    Filed: December 31, 1997
    Date of Patent: May 8, 2001
    Assignee: AT&T Corp.
    Inventors: Irwin Gerszberg, Kenny Xiaojian Huang, Christopher K. Kwabi, Sumit Roy, Gabriel Valdez
  • Patent number: 6023566
    Abstract: Provided are a method, article of manufacture, and apparatus for matching candidate clusters to cells in a technology library. An automated design system comprises a computer configured to use second order signatures in generating candidate permutations of each permutation group in a canonical form of the candidate function. The system selects first and second symmetric subgroups, determines a second order signature for the candidate function and the first and second symmetric subgroups, and compares the second order signature to a corresponding second order signature for a library cell function. If the signatures match, the permutation is continued with the first and second symmetric subgroups being included in an intermediate permutation. If not, the system produces no more intermediate permutations beginning with the first and second symmetric subgroups. Further symmetric subgroups are added to the intermediate permutation.
    Type: Grant
    Filed: April 14, 1997
    Date of Patent: February 8, 2000
    Assignee: Cadence Design Systems
    Inventors: Krishna Belkhale, Sumit Roy, Devadas Varma
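Patent 6023566 above prunes candidate input permutations by comparing "second order signatures" of the candidate function and the library cell. The sketch below uses a deliberately crude signature (minterm counts under each joint value of a pair of inputs) purely to illustrate the prune-or-continue decision; the signatures in the patent are more involved:

```python
from itertools import product

def truth_table(fn, n):
    """Evaluate a Boolean function on all 2^n input assignments."""
    return {bits: fn(*bits) for bits in product((0, 1), repeat=n)}

def pair_signature(table, i, j):
    """Crude stand-in for a second order signature of inputs (i, j): the number
    of minterms under each joint value of those two inputs."""
    sig = {}
    for bits, out in table.items():
        sig[(bits[i], bits[j])] = sig.get((bits[i], bits[j]), 0) + out
    return sig

def compatible(cand_table, lib_table, cand_pair, lib_pair):
    """Keep extending a permutation only if the paired-input signatures match."""
    return pair_signature(cand_table, *cand_pair) == pair_signature(lib_table, *lib_pair)


cand = truth_table(lambda a, b, c: (a & b) | c, 3)
lib = truth_table(lambda x, y, z: (y & z) | x, 3)   # same cell with inputs permuted
print(compatible(cand, lib, (0, 1), (1, 2)))   # True: (a, b) may map to (y, z), keep going
print(compatible(cand, lib, (0, 1), (0, 1)))   # False: prune permutations starting this way
```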
  • Patent number: 5991524
    Abstract: Provided are a method, article of manufacture, and apparatus for identifying candidate clusters for matching to cells in a technology library. An automated design system comprises a computer configured to extract a portion of a circuit, levelize it, select a first node, identify the realizable clusters at the inputs of the first node, and combine the first node with realizable clusters at the inputs to produce candidate clusters. A dummy cluster is used at each input to represent using the input as a fanin. The system takes the cross product of the sets, and the first node is merged with each element of the cross product to produce a set of candidate clusters. The candidate clusters are then checked for realizability by comparing them to cells in the technology library, which includes dummy cells to facilitate mapping to large cells in the technology library. A set of realizable clusters is produced for the first node.
    Type: Grant
    Filed: April 14, 1997
    Date of Patent: November 23, 1999
    Assignee: Cadence Design Systems
    Inventors: Krishna Belkhale, Sumit Roy, Devadas Varma
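Patent 5991524 above forms candidate clusters by taking the cross product of the realizable clusters at a node's inputs, with a dummy cluster representing "use this input as a fanin". A minimal sketch of that combination step, with hypothetical names:

```python
from itertools import product

DUMMY = "fanin"   # dummy cluster: treat this input as a plain fanin

def candidate_clusters(node, realizable_at_input):
    """Combine a node with every choice of realizable cluster (or the dummy
    'fanin' cluster) at each of its inputs, i.e. the cross product described in
    the abstract. Clusters are nested tuples here; a real mapper would then
    check each candidate for realizability against the technology library."""
    choices = [clusters + [DUMMY] for clusters in realizable_at_input]
    return [(node, combo) for combo in product(*choices)]


# Node n3 has two inputs, each with one realizable cluster already computed.
for cand in candidate_clusters("n3", [["cluster_a"], ["cluster_b"]]):
    print(cand)
# ('n3', ('cluster_a', 'cluster_b')), ('n3', ('cluster_a', 'fanin')),
# ('n3', ('fanin', 'cluster_b')), ('n3', ('fanin', 'fanin'))
```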