Patents by Inventor Sandip Chattopadhya

Sandip Chattopadhya has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20090213843
    Abstract: An embodiment of the invention is directed to a method in which input is received, at an origin device, from a first telephonic device via an origin telephonic landline. An initiation request based on the input is then output to a destination device, wherein the destination device is configured to output a call request to a second telephonic device via a destination telephonic landline. Another embodiment of the invention is directed to a method including receiving, at a server, a request from an origin device, wherein the origin device is configured to receive input from a first telephonic device via an origin telephonic landline. Information regarding a destination device is then output, based on the input, to the origin device, wherein the destination device is configured to output a call request to a second telephonic device via a destination telephonic landline.
    Type: Application
    Filed: February 20, 2009
    Publication date: August 27, 2009
    Inventors: Sandip Chattopadhya, Magdalena Kruk-Chattopadhya, Harminder Sandhu, Elbert Shiang, Atiq Raza
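
The call flow described in this entry lends itself to a short illustration. Below is a minimal Python sketch of the bridging idea, assuming hypothetical Server, OriginDevice, and DestinationDevice classes (none of these names come from the application); it models only the lookup and hand-off between devices, not any actual telephony signaling.

    # Illustrative sketch only; class and method names are not from the application.

    class Server:
        """Maps a dialed number to the destination device serving that number."""
        def __init__(self):
            self.registry = {}          # destination number -> DestinationDevice

        def register(self, number, device):
            self.registry[number] = device

        def lookup(self, number):
            # Return information about the destination device (here, the object itself).
            return self.registry.get(number)

    class DestinationDevice:
        """Bridges an incoming initiation request onto its local landline."""
        def __init__(self, landline):
            self.landline = landline

        def handle_initiation(self, dialed_number):
            # Output a call request to the second telephonic device via the
            # destination telephonic landline.
            print(f"{self.landline}: placing local call to {dialed_number}")

    class OriginDevice:
        """Receives input (a dialed number) from the first telephonic device."""
        def __init__(self, landline, server):
            self.landline = landline
            self.server = server

        def on_input(self, dialed_number):
            # Ask the server which destination device serves this number,
            # then send it an initiation request.
            destination = self.server.lookup(dialed_number)
            if destination is None:
                print(f"{self.landline}: no destination device for {dialed_number}")
                return
            destination.handle_initiation(dialed_number)

    if __name__ == "__main__":
        server = Server()
        remote = DestinationDevice(landline="destination-landline")
        server.register("555-0100", remote)
        local = OriginDevice(landline="origin-landline", server=server)
        local.on_input("555-0100")      # first phone dials; the call is bridged remotely
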
  • Publication number: 20020126690
    Abstract: The present invention is directed toward methods and apparatus for packet transmission scheduling in a data communication network. In one aspect, received data packets are assigned to an appropriate one of a plurality of scheduling heap data structures. Each scheduling heap data structure is percolated to identify a most eligible data packet in each heap data structure. A highest-priority one of the most eligible data packets is identified by prioritizing among the most eligible data packets. This is useful because the scheduling tasks may be distributed among the hierarchy of schedulers to efficiently handle data packet scheduling. Another aspect provides a technique for combining priority schemes, such as strict priority and weighted fair queuing. This is useful because packets may have equal priorities or no priorities, such as in the case of certain legacy equipment.
    Type: Application
    Filed: February 26, 2002
    Publication date: September 12, 2002
    Applicant: Maple Optical Systems, Inc.
    Inventors: Pidugu Narayana, Makarand Dharmapurikar, Sandip Chattopadhya
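
A minimal Python sketch of the scheduling structure described in this entry: packets are pushed into one of several scheduling heaps, each heap surfaces its most eligible packet, and strict priority is applied across the heap tops. The class names, the numeric eligibility key, and the two-level strict-priority policy are illustrative assumptions, not details taken from the application.

    # Illustrative sketch; eligibility key and priority policy are assumed.
    import heapq
    import itertools

    class SchedulingHeap:
        def __init__(self):
            self._heap = []
            self._seq = itertools.count()   # tie-breaker keeps ordering stable

        def push(self, eligibility, packet):
            # heapq "percolates" the new entry into place; the smallest
            # eligibility value is the most eligible packet.
            heapq.heappush(self._heap, (eligibility, next(self._seq), packet))

        def peek(self):
            return self._heap[0] if self._heap else None

        def pop(self):
            return heapq.heappop(self._heap)[2]

    class HierarchicalScheduler:
        """Strict priority across heaps; eligibility ordering within each heap."""
        def __init__(self, num_priorities):
            self.heaps = [SchedulingHeap() for _ in range(num_priorities)]

        def enqueue(self, priority, eligibility, packet):
            self.heaps[priority].push(eligibility, packet)

        def dequeue(self):
            # Scan priorities from highest (0) to lowest and serve the most
            # eligible packet of the first non-empty heap.
            for heap in self.heaps:
                if heap.peek() is not None:
                    return heap.pop()
            return None

    if __name__ == "__main__":
        sched = HierarchicalScheduler(num_priorities=2)
        sched.enqueue(priority=1, eligibility=5.0, packet="best-effort A")
        sched.enqueue(priority=0, eligibility=9.0, packet="real-time B")
        sched.enqueue(priority=0, eligibility=2.0, packet="real-time C")
        while (p := sched.dequeue()) is not None:
            print(p)   # real-time C, real-time B, best-effort A
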
  • Publication number: 20020085545
    Abstract: A non-blocking virtual switch architecture for a data communication network. The switch includes a plurality of input ports and output ports. Each input port may be connected to each output port by a directly connected network or by a mesh network. Thus, data packets may traverse the switch simultaneously with other packets. At each output port, buffer space is dedicated for queuing packets received from each of the input ports. An arbitration scheme is utilized to forward data from the buffers to the network. Accordingly, a crossbar array and its associated traffic bottlenecks are avoided. Rather, the system advantageously provides separate buffer space at each output port for every input port.
    Type: Application
    Filed: October 9, 2001
    Publication date: July 4, 2002
    Applicant: Maple Optical Systems, Inc.
    Inventors: Ed Ku, Piyush Kothary, Sandip Chattopadhya, Steffen Hagene
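
A minimal Python sketch of the buffering scheme described in this entry, assuming a simple round-robin arbiter: each output port keeps a dedicated queue for every input port and drains those queues onto the outgoing link. The arbiter choice and class names are illustrative, not prescribed by the application.

    # Illustrative sketch; the round-robin arbiter is an assumption.
    from collections import deque

    class OutputPort:
        def __init__(self, name, num_inputs):
            self.name = name
            # One dedicated buffer per input port, so inputs never contend
            # for buffer space at the output.
            self.queues = [deque() for _ in range(num_inputs)]
            self._next = 0          # round-robin pointer

        def enqueue(self, input_port, packet):
            self.queues[input_port].append(packet)

        def arbitrate(self):
            """Pick the next packet to transmit, or None if all queues are empty."""
            for i in range(len(self.queues)):
                idx = (self._next + i) % len(self.queues)
                if self.queues[idx]:
                    self._next = (idx + 1) % len(self.queues)
                    return self.queues[idx].popleft()
            return None

    if __name__ == "__main__":
        port = OutputPort("out0", num_inputs=3)
        port.enqueue(0, "pkt-from-in0")
        port.enqueue(2, "pkt-from-in2")
        port.enqueue(0, "pkt-from-in0-b")
        while (pkt := port.arbitrate()) is not None:
            print(pkt)   # pkt-from-in0, pkt-from-in2, pkt-from-in0-b
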
  • Publication number: 20020085567
    Abstract: A metro switch and method for transporting data configured according to multiple different formats. In one aspect, a network system and method that provides for point-to-point communication of data of various different formats such as ATM, frame relay, PPP, Ethernet, etc. Accordingly, the invention may interface disparate network devices, such as private networks and other entities that operate according to various different protocols and that use various different media. At ingress points to the system, the data is received from data sources and configured according to a universal format. This allows data from origins that use different data formats and/or transmission media to be mixed and transported onto the same media. The data is then transported to one or more destinations using this format. At egress points of the system, the data is reconverted to its original format for use at its destination.
    Type: Application
    Filed: October 9, 2001
    Publication date: July 4, 2002
    Applicant: Maple Optical Systems, Inc.
    Inventors: Ed Ku, Piyush Kothary, Sandip Chattopadhya
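
A minimal Python sketch of the ingress/egress conversion described in this entry: frames of any original format are wrapped in a common representation at ingress and unwrapped at egress. The dict-based "universal format" used here is a stand-in for illustration, not the encoding defined in the application.

    # Illustrative sketch; the universal format shown is an assumption.

    def ingress_convert(payload: bytes, original_format: str) -> dict:
        """Wrap a frame in the universal format, remembering its original format."""
        return {"format": original_format, "payload": payload}

    def egress_reconvert(universal_frame: dict) -> tuple[str, bytes]:
        """Recover the original format and payload at the egress point."""
        return universal_frame["format"], universal_frame["payload"]

    if __name__ == "__main__":
        # Frames from disparate sources share the same transport representation.
        frames = [
            ingress_convert(b"\x05\x03cell", "ATM"),
            ingress_convert(b"\x7e\xfframe", "frame relay"),
            ingress_convert(b"\x00\x1aeth", "Ethernet"),
        ]
        for f in frames:                      # transported over common media
            fmt, payload = egress_reconvert(f)
            print(fmt, payload)
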
  • Publication number: 20020085565
    Abstract: A technique for time division multiplex (TDM) forwarding of data streams. The system uses a common switch fabric resource for TDM and packet switching. In operation, large packets or data streams are divided into smaller portions upon entering a switch. Each portion is assigned a high priority for transmission and a tracking header for tracking it through the switch. Prior to exiting the switch, the portions are reassembled into the data stream. Thus, the smaller portions are passed using a “store-and-forward” technique. Because the portions are each assigned a high priority, the data stream is effectively “cut-through” the switch. That is, the switch may still be receiving later portions of the stream while the switch is forwarding earlier portions of the stream. This technique of providing “cut-through” using a store-and-forward switch mechanism reduces transmission delay and buffer over-runs that otherwise would occur in transmitting large packets or data streams.
    Type: Application
    Filed: October 9, 2001
    Publication date: July 4, 2002
    Applicant: Maple Optical Systems, Inc.
    Inventors: Ed Ku, Piyush Kothary, Sandip Chattopadhya, Ramesh Yarlagadda
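
A minimal Python sketch of the segmentation-and-reassembly idea described in this entry: a large stream is divided into small, high-priority portions, each carrying a tracking header, and the portions are reassembled before leaving the switch. The header fields and portion size are assumptions for illustration.

    # Illustrative sketch; header fields and cell size are assumed.
    from dataclasses import dataclass

    HIGH_PRIORITY = 0
    CELL_SIZE = 4           # illustrative portion size in bytes

    @dataclass
    class Cell:
        stream_id: int
        seq: int
        last: bool
        priority: int
        data: bytes

    def segment(stream_id: int, data: bytes):
        """Divide a data stream into high-priority cells with tracking headers."""
        chunks = [data[i:i + CELL_SIZE] for i in range(0, len(data), CELL_SIZE)]
        return [Cell(stream_id, seq, seq == len(chunks) - 1, HIGH_PRIORITY, chunk)
                for seq, chunk in enumerate(chunks)]

    def reassemble(cells):
        """Rebuild the original stream from its cells (store-and-forward per cell)."""
        ordered = sorted(cells, key=lambda c: c.seq)
        assert ordered and ordered[-1].last, "stream is incomplete"
        return b"".join(c.data for c in ordered)

    if __name__ == "__main__":
        stream = b"a long TDM data stream"
        cells = segment(stream_id=7, data=stream)
        # Earlier cells could already be forwarded while later ones are still
        # arriving; here we simply reassemble them at the egress side.
        assert reassemble(cells) == stream
        print(f"{len(cells)} cells reassembled into {len(stream)} bytes")
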
  • Publication number: 20020085548
    Abstract: A quality of service technique for a data communication network. Using a combination of Time Division Multiplexing (TDM) and packet switching, the system is configured to guarantee a predefined bandwidth for a client, which, in turn, helps manage delay and jitter in the data transmission. An ingress processor operates as a bandwidth filter, transmitting packet bursts to distribution channels for queuing in a queuing engine. The queuing engine holds the data packets for subsequent scheduled transmission over the network, which is governed by predetermined priorities. These priorities may be established by several factors, including pre-allocated bandwidth and system conditions. A scheduler then transmits the data received by the queuing engine using a self-clocked fair queuing method.
    Type: Application
    Filed: October 9, 2001
    Publication date: July 4, 2002
    Applicant: Maple Optical Systems, Inc.
    Inventors: Ed Ku, Piyush Kothary, Sandip Chattopadhya
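
A minimal Python sketch of self-clocked fair queuing, the scheduling method named at the end of this entry's abstract. Each packet receives a finish tag equal to the maximum of the current virtual time and the flow's previous finish tag, plus the packet length divided by the flow's weight; packets are served in finish-tag order, and virtual time advances to the tag of the packet in service. The flow weights and in-memory queue are illustrative assumptions.

    # Illustrative sketch of self-clocked fair queuing; weights are assumed.
    import heapq
    import itertools

    class SCFQScheduler:
        def __init__(self, weights):
            self.weights = weights            # flow id -> pre-allocated share
            self.last_finish = {f: 0.0 for f in weights}
            self.virtual_time = 0.0
            self._heap = []
            self._seq = itertools.count()

        def enqueue(self, flow, length, packet):
            finish = (max(self.virtual_time, self.last_finish[flow])
                      + length / self.weights[flow])
            self.last_finish[flow] = finish
            heapq.heappush(self._heap, (finish, next(self._seq), packet))

        def dequeue(self):
            if not self._heap:
                return None
            finish, _, packet = heapq.heappop(self._heap)
            # "Self-clocked": virtual time advances to the finish tag of the
            # packet being transmitted.
            self.virtual_time = finish
            return packet

    if __name__ == "__main__":
        sched = SCFQScheduler(weights={"voice": 3.0, "bulk": 1.0})
        for i in range(3):
            sched.enqueue("voice", length=100, packet=f"voice-{i}")
            sched.enqueue("bulk", length=100, packet=f"bulk-{i}")
        while (p := sched.dequeue()) is not None:
            print(p)   # voice packets are served roughly 3x as often as bulk
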
  • Publication number: 20020085507
    Abstract: An address learning technique in a data communication network. A packet may be received by the network system when the ingress equipment does not yet have information to look up the appropriate path for the packet based upon its destination media access control (MAC) address. The packet may then be broadcast or multicast to all possible or probable destinations for the packet. Each such destination adds an entry to its lookup table using the source MAC address and an identification of the path by which it received the packet. Then, when one of those destinations (acting as ingress equipment) receives a packet intended for that MAC address, it will have the necessary information in its lookup table to send the packet along the appropriate path. Thus, table entries are formed and stored in the remote destinations rather than in the ingress equipment.
    Type: Application
    Filed: October 9, 2001
    Publication date: July 4, 2002
    Applicant: Maple Optical Systems, Inc.
    Inventors: Ed Ku, Sandip Chattopadhya
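
A minimal Python sketch of the learning behavior described in this entry: a packet with an unknown destination is flooded, each destination that receives it records the source MAC address together with the path it arrived on, and later packets toward that address are forwarded directly. The node and path representation is an illustrative assumption.

    # Illustrative sketch; node and path representation are assumed.

    class SwitchNode:
        def __init__(self, name):
            self.name = name
            self.table = {}                 # MAC address -> path it was learned on

        def receive_flooded(self, src_mac, path):
            # Learning happens at the remote destination, not at the ingress node.
            self.table[src_mac] = path

        def forward(self, dst_mac, payload):
            path = self.table.get(dst_mac)
            if path is None:
                return f"{self.name}: {dst_mac} unknown, flooding {payload!r}"
            return f"{self.name}: sending {payload!r} to {dst_mac} via {path}"

    if __name__ == "__main__":
        edge_b = SwitchNode("B")
        # Node A floods a packet from host 00:11 because it has no entry for
        # the target; B learns how to reach 00:11 from the flood's arrival path.
        edge_b.receive_flooded(src_mac="00:11", path="path-B-to-A")
        print(edge_b.forward("00:11", b"reply"))    # uses the learned path
        print(edge_b.forward("00:22", b"data"))     # still unknown, so flood
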
  • Patent number: 5134702
    Abstract: A serial-to-parallel and parallel-to-serial data format converter has a plurality of first-in, first-out (FIFO) buffer memory devices, an input circuit for receiving serial data bits, an output circuit for outputting serial data bits and a clocking circuit for clocking selected ones of the data bits into and out of selected ones of the FIFO buffer memory devices. The clocking circuit clocks serial data bits either into or out of each of the FIFO buffer memory devices at a rate slower than the rate of the receipt of the serial data bits by the input circuit, or the rate of the outputting of serial data bits by the output circuit, respectively.
    Type: Grant
    Filed: April 21, 1986
    Date of Patent: July 28, 1992
    Assignee: NCR Corporation
    Inventors: Harold Charych, Sandip Chattopadhya
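
A minimal Python sketch of the serial-to-parallel direction of this patent: incoming serial bits are distributed across several FIFOs, so each FIFO is clocked at only a fraction of the serial bit rate, and a parallel word is formed by reading one bit from every FIFO. The round-robin bit-to-FIFO mapping here is an assumed illustration, not the patent's exact clocking scheme.

    # Illustrative sketch; the round-robin bit distribution is an assumption.
    from collections import deque

    class SerialToParallel:
        def __init__(self, width):
            self.fifos = [deque() for _ in range(width)]   # one FIFO per output bit
            self._next = 0

        def clock_in(self, bit):
            # Each individual FIFO only sees every Nth bit, i.e. a slower clock.
            self.fifos[self._next].append(bit)
            self._next = (self._next + 1) % len(self.fifos)

        def clock_out_word(self):
            """Pop one bit from every FIFO to form a parallel word, oldest first."""
            if not all(self.fifos):
                return None                                # not enough bits yet
            return [fifo.popleft() for fifo in self.fifos]

    if __name__ == "__main__":
        conv = SerialToParallel(width=8)
        for bit in [1, 0, 1, 1, 0, 0, 1, 0]:               # one serial byte
            conv.clock_in(bit)
        print(conv.clock_out_word())                        # [1, 0, 1, 1, 0, 0, 1, 0]
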
  • Patent number: 4722051
    Abstract: A data processing system has a plurality of peripheral devices and a main memory, a direct memory access controller for controlling the transfer of data between the main memory and the peripheral devices including a local memory connected to the peripheral devices for storing data written to and read from the peripheral devices, a sequencer for controlling the transfer of data between the main memory and the local memory, a local address register connected to the sequencer for providing the local memory address for memory operations of the local memory, a system address register connected to the sequencer for providing the main memory address for memory operations of the main memory, and a data register for holding data transferred between the main memory and the local memory.
    Type: Grant
    Filed: July 26, 1985
    Date of Patent: January 26, 1988
    Assignee: NCR Corporation
    Inventor: Sandip Chattopadhya
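
A minimal Python sketch of the controller structure described in this patent: a local memory buffers peripheral data, a sequencer moves words between main memory and local memory, local and system address registers supply the addresses on each side, and a data register holds the word in flight. Memory sizes, register behavior, and method names are illustrative assumptions.

    # Illustrative sketch; sizes and method names are assumed.

    class DMAController:
        def __init__(self, main_memory, local_size=256):
            self.main_memory = main_memory          # shared with the CPU
            self.local_memory = [0] * local_size    # buffers peripheral data
            self.local_address = 0                  # local address register
            self.system_address = 0                 # system (main memory) address register
            self.data_register = 0                  # holds the word being transferred

        def sequencer_transfer(self, system_addr, local_addr, count, to_main):
            """Move `count` words between main memory and local memory."""
            self.system_address, self.local_address = system_addr, local_addr
            for _ in range(count):
                if to_main:
                    self.data_register = self.local_memory[self.local_address]
                    self.main_memory[self.system_address] = self.data_register
                else:
                    self.data_register = self.main_memory[self.system_address]
                    self.local_memory[self.local_address] = self.data_register
                # The sequencer advances both address registers after each word.
                self.system_address += 1
                self.local_address += 1

    if __name__ == "__main__":
        main_memory = [0] * 1024
        dma = DMAController(main_memory)
        dma.local_memory[0:4] = [10, 20, 30, 40]            # data read from a peripheral
        dma.sequencer_transfer(system_addr=512, local_addr=0, count=4, to_main=True)
        print(main_memory[512:516])                          # [10, 20, 30, 40]
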