Patents by Inventor David Trossell

David Trossell has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10264105
    Abstract: A method comprises: initialising a value of an input transfer size parameter; initialising a value of a transfer data segment parameter; requesting data from a data source; storing the data in a cache; using the value of the transfer data segment parameter to form a data segment; transmitting the data segment using multiple logical connections of a transfer path; deleting the data segment from the cache; measuring performance of transmission of data over the path and identifying a first optimum value of the transfer data segment parameter; measuring performance of transmission of data over the path and identifying a first optimum value of the input transfer size parameter; measuring performance of transmission of data over the path and identifying a second optimum value of the transfer data segment parameter; and requesting data from the data source using the first optimised value of the input transfer size parameter.
    Type: Grant
    Filed: May 26, 2016
    Date of Patent: April 16, 2019
    Assignee: Bridgeworks Limited
    Inventor: David Trossell
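    Illustrative sketch: the method above amounts to a staged search over two transfer parameters. The following Python fragment is a minimal sketch of that staged tuning, not the patented implementation; the measure_throughput() stub, the candidate sizes and the simulated optimum are all invented for illustration, standing in for real timed transmissions over the multi-connection path.

        # Minimal sketch of the staged optimisation: tune the transfer data
        # segment size, then the input transfer size, then re-tune the segment
        # size, keeping the best-measured value at each stage.

        def measure_throughput(input_size: int, segment_size: int) -> float:
            """Hypothetical stand-in for timing real transmissions (higher is better)."""
            best_in, best_seg = 8 * 2**20, 256 * 2**10   # assumed sweet spot
            return 1.0 / (1.0 + abs(input_size - best_in) / 1e7
                          + abs(segment_size - best_seg) / 1e6)

        def optimise(candidate_inputs, candidate_segments):
            input_size = candidate_inputs[0]             # initialise parameters
            segment_size = candidate_segments[0]

            def best(values, score):
                return max(values, key=score)

            # Stage 1: first optimum segment size for the initial input size.
            segment_size = best(candidate_segments,
                                lambda s: measure_throughput(input_size, s))
            # Stage 2: optimum input transfer size, holding that segment size.
            input_size = best(candidate_inputs,
                              lambda i: measure_throughput(i, segment_size))
            # Stage 3: second optimum segment size for the chosen input size.
            segment_size = best(candidate_segments,
                                lambda s: measure_throughput(input_size, s))
            return input_size, segment_size

        if __name__ == "__main__":
            inputs = [n * 2**20 for n in (1, 2, 4, 8, 16)]        # 1-16 MiB
            segments = [n * 2**10 for n in (64, 128, 256, 512)]   # 64-512 KiB
            print(optimise(inputs, segments))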
  • Publication number: 20180302503
    Abstract: A method comprises: initialising a value of an input transfer size parameter; initialising a value of a transfer data segment parameter; requesting data from a data source, the requesting using the value of the input transfer size parameter to indicate an amount of data requested; storing data received from the data source in a cache; using the value of the transfer data segment parameter to form a data segment from data stored in the cache; transmitting the data segment using multiple logical connections of a transfer path, each logical connection carrying a different part of the data segment; when it is confirmed that the data segment has been transmitted over the transfer path, deleting the data segment from the cache; measuring performance of transmission of data over the path using different values of the transfer data segment parameter and identifying a first optimum value of the transfer data segment parameter; whilst transmitting using the first optimum value of the transfer data segment parameter, …
    Type: Application
    Filed: May 26, 2016
    Publication date: October 18, 2018
    Inventor: David Trossell
  • Patent number: 10084699
    Abstract: Apparatus has at least one processor and at least one memory having computer-readable code stored therein which when executed controls the at least one processor to perform a method comprising: maintaining plural logical connections on a communications path; transmitting data packets on different ones of the logical connections; monitoring acknowledgements received in respect of the data packets transmitted over the different ones of the logical connections; reusing a logical connection for which an acknowledgement for a transmitted data packet has been received; creating a new logical connection when there is a data packet to transmit over the path and there are no logical connections available for reuse; and destroying excess logical connections. This can result in the maintenance and use of a number of logical connections that is most appropriate for the link conditions and the data transmission requirements, thereby potentially maximizing transmission speed and minimizing system resource requirements.
    Type: Grant
    Filed: November 28, 2014
    Date of Patent: September 25, 2018
    Assignee: Bridgeworks Limited
    Inventors: Paul Burgess, David Trossell
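    Illustrative sketch: the connection management described above can be read as a small pool policy: reuse an acknowledged connection when one exists, open a new one only when none is free, and destroy the excess. The Python sketch below is a simplification under assumed interfaces; LogicalConnection and its send/on_ack methods stand in for real socket handling, and max_idle is an invented threshold for destroying excess connections.

        # Reuse an acknowledged connection when one exists, open a new one only
        # when none is free, and destroy connections beyond the idle threshold.

        class LogicalConnection:
            """Hypothetical logical connection; the network side is simulated."""
            def __init__(self, conn_id):
                self.conn_id = conn_id
                self.awaiting_ack = False

            def send(self, packet):
                self.awaiting_ack = True     # busy until its ACK arrives

            def on_ack(self):
                self.awaiting_ack = False    # acknowledged, free for reuse

        class ConnectionPool:
            def __init__(self, max_idle=2):
                self.connections = []
                self.max_idle = max_idle
                self._next_id = 0

            def _free(self):
                return [c for c in self.connections if not c.awaiting_ack]

            def transmit(self, packet):
                free = self._free()
                if free:                     # reuse an acknowledged connection
                    conn = free[0]
                else:                        # none available: create a new one
                    conn = LogicalConnection(self._next_id)
                    self._next_id += 1
                    self.connections.append(conn)
                conn.send(packet)
                return conn

            def reap_excess(self):
                # Destroy idle connections beyond the number worth keeping.
                for conn in self._free()[self.max_idle:]:
                    self.connections.remove(conn)

        if __name__ == "__main__":
            pool = ConnectionPool(max_idle=1)
            first = pool.transmit(b"packet-0")   # creates connection 0
            pool.transmit(b"packet-1")           # nothing free: creates 1
            first.on_ack()                       # connection 0 becomes reusable
            pool.transmit(b"packet-2")           # reuses connection 0
            print(len(pool.connections))         # -> 2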
  • Patent number: 9954776
    Abstract: A system comprises: a first network node (3); a second network node (4); at least one communication path (702, 703) connecting the first network node to the second network node; a first cache (706) forming part of the first network node; and a second cache (705) forming part of the second network node, wherein: the first cache is configured to cache data received from a host; the first network node is configured to transmit the data from the first cache to the second network node over the at least one communication path; the second network node is configured to receive the data transmitted by the first network node and to store the received data in the second cache prior to providing the data from the second cache to a connected device; the second network node is configured to use operating parameters of the second cache to calculate a hunger parameter representative of the capability of the second cache to receive more data; the second network node is configured to communicate the hunger parameter to the …
    Type: Grant
    Filed: November 28, 2014
    Date of Patent: April 24, 2018
    Assignee: Bridgeworks Limited
    Inventors: Paul Burgess, David Trossell
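    Illustrative sketch: the "hunger parameter" in this entry is a receiver-computed value telling the sender how much more data the receiving cache can absorb. The Python sketch below is one plausible reading only; the hunger() formula, the drain_fraction input and the next_burst_size() sender policy are invented, and the patent does not specify this particular calculation.

        # Receiver side derives a hunger value in [0, 1] from its cache state;
        # the sender scales its next burst by that value.

        def hunger(cache_capacity: int, cache_used: int, drain_fraction: float) -> float:
            """Illustrative hunger metric: 1.0 = starved, 0.0 = full.

            drain_fraction is an assumed measure of how quickly the connected
            device is emptying the cache (fraction drained per interval).
            """
            free_fraction = (cache_capacity - cache_used) / cache_capacity
            return max(0.0, min(1.0, free_fraction * min(1.0, drain_fraction)))

        def next_burst_size(hunger_value: float, max_burst: int) -> int:
            """Sender-side policy: send more data when the receiver is hungrier."""
            return int(max_burst * hunger_value)

        if __name__ == "__main__":
            h = hunger(cache_capacity=64 * 2**20, cache_used=48 * 2**20,
                       drain_fraction=0.8)
            print(h, next_burst_size(h, max_burst=8 * 2**20))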
  • Patent number: 9729437
    Abstract: Apparatus comprises: first and second transmitter interfaces, each configured to transmit data over a respective communications path including one or more logical connections; first and second transmit buffers forming part of the first and second transmitter interfaces respectively, the first and second transmit buffers being configured to store packets of data for transmission over their respective communication path; one or more path capability determining modules configured to determine a measure of capability of each of the communications paths to transmit data; an input data buffer configured to store data for provision to the first and second transmit buffers for subsequent transmission; and a data handling module configured to respond to determining the presence of data in the input buffer for transmission by: using the measured capabilities of the communication paths and measures of the quantity of data stored in the transmit buffers to select one of the first and second transmitter interfaces for …
    Type: Grant
    Filed: November 28, 2014
    Date of Patent: August 8, 2017
    Assignee: Bridgeworks Limited
    Inventors: Paul Burgess, David Trossell
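    Illustrative sketch: the selection step in this abstract weighs each path's measured capability against the data already queued in its transmit buffer. The Python sketch below shows one way to express that choice; the Transmitter fields, the pending_bytes argument and the drain-time heuristic are assumptions, not the patented formula.

        # Pick the transmitter whose queue, plus the new data, should drain
        # soonest given the measured capability of its path.

        from dataclasses import dataclass

        @dataclass
        class Transmitter:
            name: str
            path_capability: float   # measured path capability (bytes/second)
            buffered_bytes: int      # data already queued in its transmit buffer

        def select_transmitter(transmitters, pending_bytes):
            return min(transmitters,
                       key=lambda t: (t.buffered_bytes + pending_bytes)
                                     / max(t.path_capability, 1e-9))

        if __name__ == "__main__":
            paths = [Transmitter("path-a", 120e6, 4_000_000),
                     Transmitter("path-b", 80e6, 500_000)]
            print(select_transmitter(paths, pending_bytes=256_000).name)  # path-b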
  • Patent number: 9712437
    Abstract: Apparatus has at least one processor and at least one memory having computer-readable code stored therein which when executed controls the at least one processor to perform a method comprising: causing each of first and second transmitter interfaces to transmit data over a respective communications path including one or more logical connections; and causing each of first and second transmission parameter calculating modules, associated respectively with the first and second transmitter interfaces, to perform: monitoring the transmission of data over its respective communications path, using results of monitoring the transmission of data over its respective communications path to calculate a path speed value for transmitting data over its respective communications path, and causing the path speed value to be used in the transmission of data over its respective communications path.
    Type: Grant
    Filed: November 28, 2014
    Date of Patent: July 18, 2017
    Assignee: Bridgeworks Limited
    Inventors: Paul Burgess, David Trossell
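    Illustrative sketch: the per-path calculation here boils down to turning monitored transfers into a running speed estimate for each communications path, which then steers how that path is used. The Python sketch below uses an exponentially weighted moving average; the smoothing factor and the estimator shape are assumptions made for illustration.

        # Each path keeps a smoothed bytes-per-second estimate built from the
        # transfers observed on it; that value then steers later transmissions.

        class PathSpeedEstimator:
            def __init__(self, alpha: float = 0.2):
                self.alpha = alpha   # weight given to each new observation
                self.speed = 0.0     # current path speed estimate (bytes/s)

            def observe(self, bytes_sent: int, seconds: float) -> None:
                sample = bytes_sent / seconds
                # First sample seeds the estimate; later samples are smoothed in.
                self.speed = sample if self.speed == 0.0 else (
                    self.alpha * sample + (1.0 - self.alpha) * self.speed)

        if __name__ == "__main__":
            est = PathSpeedEstimator()
            for sent, dt in [(1_000_000, 0.10), (900_000, 0.11), (1_200_000, 0.09)]:
                est.observe(sent, dt)
            print(f"{est.speed:,.0f} bytes/s")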
  • Publication number: 20170019333
    Abstract: Apparatus comprises: first and second transmitter interfaces (702, 711), each configured to transmit data over a respective communications path (702, 703) including one or more logical connections; first and second transmit buffers forming part of the first and second transmitter interfaces respectively, the first and second transmit buffers being configured to store packets of data for transmission over their respective communication path; one or more path capability determining modules (709, 713) configured to determine a measure of capability of each of the communications paths to transmit data; an input data buffer (706) configured to store data for provision to the first and second transmit buffers for subsequent transmission; and a data handling module (704) configured to respond to determining the presence of data in the input buffer for transmission by: using the measured capabilities of the communication paths and measures of the quantity of data stored in the transmit buffers to select one of the …
    Type: Application
    Filed: November 28, 2014
    Publication date: January 19, 2017
    Applicant: Bridgeworks Limited
    Inventors: Paul Burgess, David Trossell
  • Publication number: 20170019332
    Abstract: Apparatus has at least one processor and at least one memory having computer-readable code stored therein which when executed controls the at least one processor to perform a method comprising: maintaining plural logical connections on a communications path; transmitting data packets on different ones of the logical connections; monitoring acknowledgements received in respect of the data packets transmitted over the different ones of the logical connections; reusing a logical connection for which an acknowledgement for a transmitted data packet has been received; creating a new logical connection when there is a data packet to transmit over the path and there are no logical connections available for reuse; and destroying excess logical connections. This can result in the maintenance and use of a number of logical connections that is most appropriate for the link conditions and the data transmission requirements, thereby potentially maximizing transmission speed and minimizing system resource requirements.
    Type: Application
    Filed: November 28, 2014
    Publication date: January 19, 2017
    Applicant: Bridgeworks Limited
    Inventors: Paul Burgess, David Trossell
  • Publication number: 20160269238
    Abstract: A system comprises: a first network node (3); a second network node (4); at least one communication path (702, 703) connecting the first network node to the second network node; a first cache (706) forming part of the first network node; and a second cache (705) forming part of the second network node, wherein: the first cache is configured to cache data received from a host; the first network node is configured to transmit the data from the first cache to the second network node over the at least one communication path; the second network node is configured to receive the data transmitted by the first network node and to store the received data in the second cache prior to providing the data from the second cache to a connected device; the second network node is configured to use operating parameters of the second cache to calculate a hunger parameter representative of the capability of the second cache to receive more data; the second network node is configured to communicate the hunger parameter to the …
    Type: Application
    Filed: November 28, 2014
    Publication date: September 15, 2016
    Inventors: Paul Burgess, David Trossell
  • Publication number: 20160261503
    Abstract: Apparatus has at least one processor and at least one memory having computer-readable code stored therein which when executed controls the at least one processor to perform a method comprising: causing each of first and second transmitter interfaces to transmit data over a respective communications path including one or more logical connections; and causing each of first and second transmission parameter calculating modules, associated respectively with the first and second transmitter interfaces, to perform: monitoring the transmission of data over its respective communications path, using results of monitoring the transmission of data over its respective communications path to calculate a path speed value for transmitting data over its respective communications path, and causing the path speed value to be used in the transmission of data over its respective communications path.
    Type: Application
    Filed: November 28, 2014
    Publication date: September 8, 2016
    Inventors: Paul Burgess, David Trossell
  • Publication number: 20100111095
    Abstract: A bridging system, comprising bridges 3, 4 and network 5, is arranged to transfer data using TCP/IP or similar between a local Storage Area Network (SAN) 1 and a remote SAN 2. In one embodiment, the bridge 3 is arranged to transfer data from a plurality of ports 12-1 to 12-n in a periodic sequence. While an acknowledgement from SAN 2 for data transferred from one port 12-1 is awaited, further data can be transferred using one or more of the remaining ports 12-2 to 12-n. In other embodiments, one or more parameters, such as the number of ports, Receive Window Size, etc., can be optimised using artificial intelligence (AI) routines in order to control the data transfer rate between the bridges 3, 4. The bridging system may be configured to perform a self-learning routine on installation and, in some embodiments, to compile and consult a knowledge base storing optimum configurations for transferring data packets having different attributes by simulating data transfers.
    Type: Application
    Filed: November 3, 2008
    Publication date: May 6, 2010
    Applicant: Bridgeworks Limited
    Inventors: David Trossell, Lewis Hibell
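    Illustrative sketch: the first embodiment in this abstract rotates transfers through the bridge's ports so that a port waiting on an acknowledgement never blocks the others. The Python toy below models only that rotation; the Port class, the acknowledgement handling and the block names are invented, and the AI-driven parameter tuning and knowledge base are not modelled.

        # Cycle through the ports in a periodic sequence, skipping any port
        # still waiting for an acknowledgement from the remote SAN.

        from collections import deque

        class Port:
            def __init__(self, name):
                self.name = name
                self.awaiting_ack = False

            def transfer(self, block):
                self.awaiting_ack = True   # blocked until the remote SAN acks
                return f"{self.name} sent {block}"

        def run(ports, blocks):
            rotation = deque(ports)
            for block in blocks:
                for _ in range(len(rotation)):
                    port = rotation[0]
                    rotation.rotate(-1)    # keep the periodic sequence moving
                    if not port.awaiting_ack:
                        print(port.transfer(block))
                        break

        if __name__ == "__main__":
            run([Port("12-1"), Port("12-2"), Port("12-3")],
                ["block-0", "block-1", "block-2"])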
  • Patent number: 7653774
    Abstract: A bridge includes first and second network connections, and processor means and memory together operating to implement plural software modules. These allow data to be passed between the network connections and allow the data to be handled as it passes. Each software module has a priority, either pre-set in software or settable by the bridge in response to receiving a command from a device connected to the bridge. The bridge is operable to arrange the software modules sequentially between the network connections, such that data provided by a software module is received at the next software module in the sequence, according to their priorities. Software modules can be added to or removed from the sequence. This can be carried out dynamically, for instance by the bridge following a determination from monitoring of data flow that doing so would improve performance.
    Type: Grant
    Filed: December 12, 2006
    Date of Patent: January 26, 2010
    Assignee: Bridgeworks Limited
    Inventors: Paul Burgess, Steven Hayter, Darren Hayward, David Trossell
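    Illustrative sketch: the bridge in this entry chains its software modules between the two network connections in priority order, and the chain can be edited while the bridge runs. The Python sketch below assumes that a lower priority value means an earlier position in the sequence; the module names and toy transforms are invented for illustration.

        # Modules are kept sorted by priority; data produced by one module is
        # handed to the next, and modules can be added or removed on the fly.

        class Module:
            def __init__(self, name, priority, transform):
                self.name, self.priority, self.transform = name, priority, transform

        class Bridge:
            def __init__(self):
                self.modules = []   # kept ordered by priority

            def add(self, module):
                self.modules.append(module)
                self.modules.sort(key=lambda m: m.priority)

            def remove(self, name):
                self.modules = [m for m in self.modules if m.name != name]

            def pass_data(self, data):
                for module in self.modules:   # data flows module to module
                    data = module.transform(data)
                return data

        if __name__ == "__main__":
            bridge = Bridge()
            bridge.add(Module("trim", 20, lambda d: d[:8]))        # toy transform
            bridge.add(Module("frame", 10, lambda d: b"HDR" + d))  # runs first
            print(bridge.pass_data(b"payload-bytes"))              # b'HDRpaylo'
            bridge.remove("trim")                                  # dynamic removal
            print(bridge.pass_data(b"payload-bytes"))              # b'HDRpayload-bytes'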
  • Patent number: 7421312
    Abstract: A library partitioning module (LPM) is configured to connect multiple hosts to a tape library. The LPM comprises at least two host input/outputs, each for connection to a respective host; and a library input/output for connection to the library. The LPM is an interface between hosts and a tape library such that it allows several hosts to access the library as if they had sole use of the library's resources. The LPM modifies requests by intercepting SCSI messages between the robot and a host and translating/transforming them. At least one tape slot for each host is allocated as a virtual import/export location. When a host sends a command to move a tape between a drive or a tape slot and an import/export location, the request is modified by the LPM so that the movement occurs instead between a virtual import/export slot allocated to that host. This avoids conflicts involving the import/export location resources of the library.
    Type: Grant
    Filed: May 20, 2004
    Date of Patent: September 2, 2008
    Assignee: Bridgeworks Limited
    Inventor: David Trossell
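    Illustrative sketch: the conflict avoidance in this patent comes from rewriting each host's import/export moves to a tape slot privately allocated to that host. The Python sketch below shows only that rewrite; the slot names, host names and the "ie0" element identifier are hypothetical, and real SCSI medium-changer messages are not parsed.

        # A MOVE naming the shared import/export location is redirected to the
        # virtual slot allocated to the requesting host.

        REAL_IMPORT_EXPORT = "ie0"

        class LibraryPartitioningModule:
            def __init__(self, host_slots):
                self.host_slots = host_slots   # e.g. {"host-a": "slot17"}

            def rewrite_move(self, host, source, destination):
                virtual = self.host_slots[host]
                if source == REAL_IMPORT_EXPORT:
                    source = virtual
                if destination == REAL_IMPORT_EXPORT:
                    destination = virtual
                return source, destination

        if __name__ == "__main__":
            lpm = LibraryPartitioningModule({"host-a": "slot17", "host-b": "slot18"})
            print(lpm.rewrite_move("host-a", "drive1", "ie0"))  # ('drive1', 'slot17')
            print(lpm.rewrite_move("host-b", "ie0", "drive2"))  # ('slot18', 'drive2')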
  • Publication number: 20070174470
    Abstract: In a bridge, a cache module is operable on receipt of a message pertaining to a write or other cacheable command from an initiator device to process the command so as to cause a suitable command to be passed to the relevant target device and to respond immediately to the initiator device with a ‘response good’ response. The ‘response good’ response is sent to the initiator device before the corresponding response pertaining to the cacheable command is received from the target device. When the cache module receives an error response from a target device, the cache module converts the response into a ‘deferred error’ response and passes this onwards to the initiator device. Since the initiator device receives a positive response sooner, it can send a subsequent command sooner and thus performance is increased.
    Type: Application
    Filed: December 12, 2006
    Publication date: July 26, 2007
    Applicant: Bridgeworks Limited
    Inventors: Paul Burgess, Steven Hayter, Darren Hayward, David Trossell
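    Illustrative sketch: the speed-up in this entry comes from acknowledging a cacheable command before the target has executed it and reporting any later failure as a deferred error. The Python sketch below is a rough model of that ordering only; the GOOD and DEFERRED_ERROR strings, the submit/flush interface and the tape_target callable are stand-ins, not real SCSI status handling.

        # Answer the initiator with GOOD as soon as the command is queued; a
        # later failure from the target surfaces as a deferred error.

        import queue

        GOOD, DEFERRED_ERROR = "GOOD", "DEFERRED_ERROR"

        class WriteCache:
            def __init__(self, target):
                self.target = target        # callable that actually executes commands
                self.pending = queue.Queue()
                self.deferred = []          # errors waiting to be reported

            def submit(self, command):
                self.pending.put(command)
                return GOOD                 # respond before the target does

            def flush(self):
                while not self.pending.empty():
                    command = self.pending.get()
                    try:
                        self.target(command)
                    except Exception as err:
                        self.deferred.append((command, DEFERRED_ERROR, str(err)))

        if __name__ == "__main__":
            def tape_target(cmd):
                if cmd == "write bad-block":
                    raise IOError("medium error")

            cache = WriteCache(tape_target)
            print(cache.submit("write block-1"))    # immediate 'GOOD'
            print(cache.submit("write bad-block"))  # also 'GOOD'; error comes later
            cache.flush()
            print(cache.deferred)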
  • Publication number: 20070156926
    Abstract: A bridge includes first and second network connections, and processor means and memory together operating to implement plural software modules. These allow data to be passed between the network connections and allow the data to be handled as it passes. Each software module has a priority, either pre-set in software or settable by the bridge in response to receiving a command from a device connected to the bridge. The bridge is operable to arrange the software modules sequentially between the network connections, such that data provided by a software module is received at the next software module in the sequence, according to their priorities. Software modules can be added to or removed from the sequence. This can be carried out dynamically, for instance by the bridge following a determination from monitoring of data flow that doing so would improve performance.
    Type: Application
    Filed: December 12, 2006
    Publication date: July 5, 2007
    Applicant: Bridgeworks Limited
    Inventors: Paul Burgess, Steven Hayter, Darren Hayward, David Trossell
  • Publication number: 20050013149
    Abstract: A library partitioning module (LPM) is configured to connect multiple hosts to a tape library. The LPM comprises at least two host input/outputs, each for connection to a respective host; and a library input/output for connection to the library. The LPM is an interface between hosts and a tape library such that it allows several hosts to access the library as if they had sole use of the library's resources. The LPM modifies requests by intercepting SCSI messages between the robot and a host and translating/transforming them. At least one tape slot for each host is allocated as a virtual import/export location. When a host sends a command to move a tape between a drive or a tape slot and an import/export location, the request is modified by the LPM so that the movement occurs instead between a virtual import/export slot allocated to that host. This avoids conflicts involving the import/export location resources of the library.
    Type: Application
    Filed: May 20, 2004
    Publication date: January 20, 2005
    Inventor: David Trossell