Patents by Inventor Scott S. McDaniel

Scott S. McDaniel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11954379
    Abstract: The location of a printing device is determined. Environmental conditions in which the printing device operated when printing a print job are determined based on the determined location. An environmental adjustment factor for the printing device is determined based on the determined environmental conditions. A predicted print material usage of the printing device in printing the print job is adjusted based on the determined environmental adjustment factor.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: April 9, 2024
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Jeffrey H. Luke, Gabriel S. McDaniel, Scott K. Hymas
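
For illustration, a minimal sketch of the adjustment flow the abstract of patent 11954379 above describes, assuming hypothetical lookup tables, coefficients, and function names; none of these specifics come from the patent itself.

```python
# Illustrative sketch only: the lookup table, coefficients, and function names
# below are hypothetical, not taken from patent 11954379.

# Hypothetical mapping from a device's determined location to coarse
# environmental conditions (temperature in C, relative humidity in %).
LOCATION_TO_ENVIRONMENT = {
    "warehouse-a": {"temp_c": 30, "humidity_pct": 70},
    "office-3f": {"temp_c": 22, "humidity_pct": 45},
}

def environmental_adjustment_factor(env: dict) -> float:
    """Derive a multiplicative adjustment factor from environmental conditions.

    Hotter, more humid environments are assumed here to increase print
    material usage slightly; the coefficients are invented for illustration.
    """
    factor = 1.0
    factor += 0.005 * max(0, env["temp_c"] - 25)         # penalty above 25 C
    factor += 0.002 * max(0, env["humidity_pct"] - 50)   # penalty above 50% RH
    return factor

def adjusted_usage(predicted_usage_ml: float, location: str) -> float:
    """Adjust a predicted print-material usage figure for the device's location."""
    env = LOCATION_TO_ENVIRONMENT[location]
    return predicted_usage_ml * environmental_adjustment_factor(env)

if __name__ == "__main__":
    # ~12.78 = 12.0 * (1 + 0.025 + 0.04)
    print(round(adjusted_usage(12.0, "warehouse-a"), 3))
```
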
  • Patent number: 11927905
    Abstract: In response to detecting that a remaining supply of print material of a first cartridge has reached a first threshold, a printing device transmits a request. In response to detecting that a second cartridge has replaced the first cartridge, the printing device determines whether the remaining supply had reached a second threshold and whether a token permitting usage of the second cartridge was received responsive to the request. In response to determining that the remaining supply had reached the second threshold and that the token was received, the printing device prints with print material from the second cartridge.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: March 12, 2024
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Jeffrey H. Luke, Gabriel S. McDaniel, Scott K. Hymas
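
For illustration, a minimal sketch of the two-threshold, token-gated flow the abstract of patent 11927905 above describes; the class, attribute names, and threshold values are hypothetical, not drawn from the patent.

```python
# Illustrative sketch only: names and threshold values are invented; the intent
# is to show the two-threshold, token-gated replacement flow the abstract describes.

FIRST_THRESHOLD = 0.20   # request a token when supply drops to 20%
SECOND_THRESHOLD = 0.05  # replacement is honored only after supply fell to 5%

class Printer:
    def __init__(self):
        self.remaining_supply = 1.0   # fraction of print material left
        self.token = None             # token received in response to the request

    def on_supply_level(self, level: float, request_token):
        """Called as the first cartridge drains; transmits a request at the first threshold."""
        self.remaining_supply = level
        if level <= FIRST_THRESHOLD and self.token is None:
            self.token = request_token()  # e.g. ask a remote service for permission

    def on_cartridge_replaced(self) -> bool:
        """Return True if printing with the second cartridge is permitted."""
        reached_second = self.remaining_supply <= SECOND_THRESHOLD
        return reached_second and self.token is not None

if __name__ == "__main__":
    p = Printer()
    p.on_supply_level(0.15, request_token=lambda: "token-123")  # first threshold crossed
    p.on_supply_level(0.04, request_token=lambda: "token-123")  # second threshold crossed
    print(p.on_cartridge_replaced())  # True: both conditions satisfied
```
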
  • Publication number: 20240070421
    Abstract: The number of days until a consumable item of a printing device reaches end of life is predicted based on a usage scenario of the printing device. An expected fulfillment time period is subtracted from the predicted number of days to determine a number of days until a fulfillment event occurs. A threshold remaining life of the consumable item is correlated with the number of days until the fulfillment event occurs. That the remaining life of the consumable item has reached the threshold remaining life is detected, and fulfillment of a replacement consumable item to replace the consumable item is responsively initiated.
    Type: Application
    Filed: January 15, 2021
    Publication date: February 29, 2024
    Inventors: Scott K. Hymas, Jeffrey H. Luke, Gabriel S. McDaniel
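
For illustration, a minimal sketch mirroring the steps in the abstract of publication 20240070421 above; the constant daily-usage model and function names are hypothetical.

```python
# Illustrative sketch only: the usage model and names are invented; the steps
# simply follow the abstract of publication 20240070421.

def predict_days_to_end_of_life(remaining_life_pct: float, daily_use_pct: float) -> float:
    """Predict days until the consumable reaches end of life under a usage scenario."""
    return remaining_life_pct / daily_use_pct

def threshold_remaining_life(remaining_life_pct: float,
                             daily_use_pct: float,
                             fulfillment_days: float) -> float:
    """Remaining-life level at which fulfillment must start so the replacement arrives in time."""
    days_to_eol = predict_days_to_end_of_life(remaining_life_pct, daily_use_pct)
    days_until_fulfillment_event = max(0.0, days_to_eol - fulfillment_days)
    # Life expected to remain when the fulfillment event should fire.
    return remaining_life_pct - daily_use_pct * days_until_fulfillment_event

def should_initiate_fulfillment(current_life_pct: float, threshold_pct: float) -> bool:
    return current_life_pct <= threshold_pct

if __name__ == "__main__":
    # 40% life left, ~1% used per day, shipping takes ~10 days:
    thr = threshold_remaining_life(40.0, 1.0, 10.0)
    print(thr)                                    # 10.0 -> order when 10% life remains
    print(should_initiate_fulfillment(9.5, thr))  # True
```
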
  • Patent number: 9219683
    Abstract: Systems and methods that provide a unified infrastructure over layer-2 networks are provided. A first frame is generated by an end point. The first frame comprises a proxy payload, a proxy association header and a frame header relating to a control proxy element. The first frame is sent over a first network to the control proxy element. A second frame is generated by the control proxy element. The second frame comprises the proxy payload and a proxy header. The first and second frames correspond to different layer-2 protocols. The control proxy element sends the second frame over a second network employing the layer-2 protocol of the second frame.
    Type: Grant
    Filed: April 8, 2013
    Date of Patent: December 22, 2015
    Assignee: Broadcom Corporation
    Inventors: Uri El Zur, Kan Frankie Fan, Scott S. McDaniel, Murali Rajagopal
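
For illustration, a minimal sketch of the control-proxy re-framing step the abstract of patent 9219683 above describes; the dataclasses and the "translate" function are hypothetical stand-ins, and real layer-2 headers are of course not Python dictionaries.

```python
# Illustrative sketch only: names and structures are invented stand-ins for the
# end point / control proxy roles described in patent 9219683.

from dataclasses import dataclass

@dataclass
class ProxyFrame:
    l2_protocol: str         # layer-2 protocol this frame is carried over
    frame_header: dict       # header addressed to / produced by the control proxy
    proxy_association: dict  # identifies the end point's association
    payload: bytes           # proxy payload, carried through unchanged

def end_point_build_frame(payload: bytes, proxy_mac: str, assoc_id: int) -> ProxyFrame:
    """End point wraps the proxy payload for delivery to the control proxy (first network)."""
    return ProxyFrame("ethernet", {"dst": proxy_mac}, {"assoc": assoc_id}, payload)

def control_proxy_translate(frame: ProxyFrame, target_protocol: str) -> ProxyFrame:
    """Control proxy re-frames the same payload for the second layer-2 network."""
    new_header = {"assoc": frame.proxy_association["assoc"], "proto": target_protocol}
    return ProxyFrame(target_protocol, new_header, frame.proxy_association, frame.payload)

if __name__ == "__main__":
    first = end_point_build_frame(b"login-request", proxy_mac="aa:bb:cc:dd:ee:ff", assoc_id=7)
    second = control_proxy_translate(first, target_protocol="fibre-channel")
    print(second.l2_protocol, second.payload)  # fibre-channel b'login-request'
```
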
  • Patent number: 9015467
    Abstract: Methods and associated systems are disclosed for providing secured data transmission over a data network. Data to be encrypted and encryption information may be sent to a security processor via a packet network so that the security processor may extract the encryption information and use it to encrypt the data. The encryption information may include flow information, security association and/or other cryptographic information, and/or one or more addresses associated with such information. The encryption information may consist of a tag in a header that is appended to packets to be encrypted before the packets are sent to the security processor. The packet and tag header may be encapsulated into an Ethernet packet and routed via an Ethernet connection to the security processor.
    Type: Grant
    Filed: December 4, 2003
    Date of Patent: April 21, 2015
    Assignee: Broadcom Corporation
    Inventors: Mark L. Buer, Scott S. McDaniel
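
For illustration, a minimal sketch of the tag-then-encapsulate step the abstract of patent 9015467 above describes; the tag layout, field sizes, and function names are invented and are not the encoding defined in the patent.

```python
# Illustrative sketch only: the tag layout below is hypothetical, not the
# encoding defined in patent 9015467.

import struct

def build_tagged_packet(plaintext: bytes, security_assoc_id: int, flow_id: int) -> bytes:
    """Prepend a tag header carrying encryption information to the packet."""
    # Hypothetical 8-byte tag: 4-byte security-association index + 4-byte flow id.
    tag = struct.pack("!II", security_assoc_id, flow_id)
    return tag + plaintext

def encapsulate_in_ethernet(tagged: bytes, dst_mac: bytes, src_mac: bytes,
                            ethertype: int = 0x88B5) -> bytes:
    """Wrap the tagged packet in a minimal Ethernet frame bound for the security processor.

    0x88B5 is an EtherType reserved for local experimental use; a real product
    would use whatever framing the security processor expects.
    """
    return dst_mac + src_mac + struct.pack("!H", ethertype) + tagged

if __name__ == "__main__":
    pkt = build_tagged_packet(b"payload-to-encrypt", security_assoc_id=42, flow_id=7)
    frame = encapsulate_in_ethernet(pkt, dst_mac=b"\x02\x00\x00\x00\x00\x01",
                                    src_mac=b"\x02\x00\x00\x00\x00\x02")
    print(len(frame))  # 14-byte Ethernet header + 8-byte tag + 18-byte payload = 40
```
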
  • Publication number: 20140129664
    Abstract: Embodiments that provide one-shot remote direct memory access (RDMA) are provided. In one embodiment, a single command for a completion process of a remote direct memory access (RDMA) operation is received in a computing device. The computing device executes the completion process of the RDMA operation in response to the single command being received.
    Type: Application
    Filed: January 10, 2014
    Publication date: May 8, 2014
    Inventors: Scott S. McDaniel, Uri Elzur
  • Patent number: 8700724
    Abstract: Systems and methods that provide one-shot remote direct memory access (RDMA) are provided. In one embodiment, a system that transfers data over an RDMA network may include, for example, a host. The host may include, for example, a driver and a network interface card (NIC), the driver being coupled to the NIC. The driver and the NIC may perform a one-shot initiation process and/or a one-shot completion process of an RDMA operation.
    Type: Grant
    Filed: August 19, 2003
    Date of Patent: April 15, 2014
    Assignee: Broadcom Corporation
    Inventors: Scott S. McDaniel, Uri Elzur
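
For illustration, a minimal sketch contrasting a multi-step completion hand-off with the one-shot completion described in the two RDMA entries above (publication 20140129664 and patent 8700724); the queue objects and the shape of the "single command" are invented.

```python
# Illustrative sketch only: objects and method names are invented; the point is
# that one call both retires the work request and posts its completion entry.

from dataclasses import dataclass, field

@dataclass
class CompletionQueue:
    entries: list = field(default_factory=list)

    def post(self, work_id: int, status: str) -> None:
        self.entries.append((work_id, status))

@dataclass
class Nic:
    """Stand-in for the NIC side of the driver/NIC pair."""
    cq: CompletionQueue

    def complete_rdma(self, work_id: int) -> None:
        """One-shot completion: a single call finishes the RDMA operation and
        posts the completion entry, instead of separate driver round trips."""
        self.cq.post(work_id, "success")

if __name__ == "__main__":
    nic = Nic(cq=CompletionQueue())
    nic.complete_rdma(work_id=17)   # the single command
    print(nic.cq.entries)           # [(17, 'success')]
```
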
  • Patent number: 8677010
    Abstract: Aspects of the invention may comprise receiving an incoming TCP packet at a TEEC and processing at least a portion of the incoming packet once by the TEEC without having to do any reassembly and/or retransmission by the TEEC. At least a portion of the incoming TCP packet may be buffered in at least one internal elastic buffer of the TEEC. The internal elastic buffer may comprise a receive internal elastic buffer and/or a transmit internal elastic buffer. Accordingly, at least a portion of the incoming TCP packet may be buffered in the receive internal elastic buffer. At least a portion of the processed incoming packet may be placed in a portion of a host memory for processing by a host processor or CPU. Furthermore, at least a portion of the processed incoming TCP packet may be DMA transferred to a portion of the host memory.
    Type: Grant
    Filed: May 25, 2011
    Date of Patent: March 18, 2014
    Assignee: Broadcom Corporation
    Inventors: Uri Elzur, Frankie Fan, Steven B. Lindsay, Scott S. McDaniel
  • Patent number: 8631162
    Abstract: Systems and methods for network interfacing in a multiple-network environment are provided. In one embodiment, the system includes, for example, a network connector, a processor, a peripheral component interconnect (PCI) bridge and a unified driver. The processor may be coupled to the network connector and to the PCI bridge. The processor may be adapted, for example, to process a plurality of different types of network traffic. The unified driver may be coupled to the PCI bridge and may be adapted to provide drivers associated with the plurality of different types of network traffic.
    Type: Grant
    Filed: August 29, 2003
    Date of Patent: January 14, 2014
    Assignee: Broadcom Corporation
    Inventors: Uri Elzur, Frankie Fan, Steven B. Lindsay, Scott S. McDaniel
  • Patent number: 8549152
    Abstract: A network interface device may include an offload engine that receives control of state information while a particular connection is offloaded. Control of the state information for the particular connection may be split between the network interface device and a host. At least one connection variable may be updated and provided to the host.
    Type: Grant
    Filed: June 10, 2010
    Date of Patent: October 1, 2013
    Assignee: Broadcom Corporation
    Inventors: Uri Elzur, Frankie Fan, Steven B. Lindsay, Scott S. McDaniel
  • Patent number: 8417834
    Abstract: Systems and methods that provide a unified infrastructure over Ethernet are provided. In one embodiment, a method of communicating between an Ethernet-based system and a non-Ethernet-based network may include, for example, one or more of the following: generating an Ethernet frame that comprises a proxy payload, a proxy association header and an Ethernet header, the Ethernet header relating to a control proxy element; sending the Ethernet frame over an Ethernet-based network to the control proxy element; generating a non-Ethernet frame that comprises the proxy payload and a proxy header; and sending the non-Ethernet frame over a non-Ethernet-based network.
    Type: Grant
    Filed: December 8, 2004
    Date of Patent: April 9, 2013
    Assignee: Broadcom Corporation
    Inventors: Uri El Zur, Kan Frankie Fan, Scott S. McDaniel, Murali Rajagopal
  • Patent number: 8402142
    Abstract: A method for providing TCP/IP offload may include receiving control of at least a portion of Transmission Control Protocol (TCP) connection variables by a TCP/IP Offload Engine operatively coupled to a host. That portion of the TCP connection variables may be updated and provided to the host. The TCP/IP Offload Engine may receive control of segment-variant TCP connection variables, update them, and communicate the updated segment-variant variables to the host. A system for providing connection offload may include a TCP/IP Offload Engine that receives control of state information for a particular connection offloaded to a network interface card (NIC). Control of the state information for the particular connection may be split between the NIC and a host.
    Type: Grant
    Filed: December 21, 2007
    Date of Patent: March 19, 2013
    Assignee: Broadcom Corporation
    Inventors: Uri Elzur, Frankie Fan, Steven B. Lindsay, Scott S. McDaniel
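
For illustration, a minimal sketch of one plausible split between offload-engine-owned segment-variant variables and host-owned connection variables, in the spirit of patent 8402142 above; the specific field assignments are not the split defined in the patent.

```python
# Illustrative sketch only: which fields the offload engine owns versus the host
# is a plausible example, not the split defined in patent 8402142.

from dataclasses import dataclass

@dataclass
class SegmentVariantState:
    """Per-segment TCP variables the offload engine updates on every packet."""
    snd_nxt: int = 0   # next sequence number to send
    rcv_nxt: int = 0   # next sequence number expected
    cwnd: int = 10     # congestion window, in segments

@dataclass
class HostOwnedState:
    """Connection variables that stay under host control (rarely changing)."""
    local_port: int = 0
    remote_port: int = 0

class OffloadEngine:
    def __init__(self):
        self.state = SegmentVariantState()

    def on_segment_sent(self, length: int) -> None:
        self.state.snd_nxt += length

    def on_segment_received(self, length: int) -> None:
        self.state.rcv_nxt += length

    def report_to_host(self) -> SegmentVariantState:
        """Communicate the updated segment-variant variables back to the host."""
        return self.state

if __name__ == "__main__":
    toe = OffloadEngine()
    toe.on_segment_sent(1460)
    toe.on_segment_received(1460)
    print(toe.report_to_host())  # SegmentVariantState(snd_nxt=1460, rcv_nxt=1460, cwnd=10)
```
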
  • Patent number: 8230090
    Abstract: Systems and methods that provide transmission control protocol (TCP) offloading and uploading are provided. In one example, a multiple stack system may include a software stack and a hardware stack. The software stack may be adapted to process a first set of TCP packet streams. The hardware stack may be adapted to process a second set of TCP packet streams and may be coupled to the software stack. The software stack may be adapted to offload one or more TCP connections to the hardware stack. The hardware stack may be adapted to upload one or more TCP connections to the software stack. The software stack and the hardware stack may process one or more TCP connections concurrently.
    Type: Grant
    Filed: November 18, 2002
    Date of Patent: July 24, 2012
    Assignee: Broadcom Corporation
    Inventors: Kan Frankie Fan, Scott S. McDaniel
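
For illustration, a minimal sketch acting out the offload/upload hand-off between a software stack and a hardware stack, as described in the abstract of patent 8230090 above; the stack objects and the decision of which connection to move are hypothetical.

```python
# Illustrative sketch only: the stack objects and the example hand-off are
# invented; they simply mirror the offload/upload flow the abstract describes.

class Stack:
    def __init__(self, name: str):
        self.name = name
        self.connections: set[int] = set()

    def take(self, conn_id: int) -> None:
        self.connections.add(conn_id)

    def release(self, conn_id: int) -> None:
        self.connections.discard(conn_id)

def offload(conn_id: int, software: Stack, hardware: Stack) -> None:
    """Move a connection from the software stack to the hardware stack."""
    software.release(conn_id)
    hardware.take(conn_id)

def upload(conn_id: int, software: Stack, hardware: Stack) -> None:
    """Move a connection back from the hardware stack to the software stack."""
    hardware.release(conn_id)
    software.take(conn_id)

if __name__ == "__main__":
    sw, hw = Stack("software"), Stack("hardware")
    sw.take(1)
    sw.take(2)                              # both connections start on the software stack
    offload(1, sw, hw)                      # e.g. a long-lived, high-throughput flow
    print(sw.connections, hw.connections)   # {2} {1}  (processed concurrently)
    upload(1, sw, hw)
    print(sw.connections, hw.connections)   # {1, 2} set()
```
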
  • Patent number: 8098682
    Abstract: A network controller may split, via a pass-through driver, processing of transmit and/or receive network traffic handled by the network controller. Physical layer (PHY) processing and/or Medium Access Control (MAC) processing of the management traffic may be performed internally via the network controller. The pass-through driver may route at least a portion of management traffic carried via the transmit and/or receive network traffic externally to said network controller for processing. In this regard, the pass-through driver may enable routing of data and/or messages so that the external processing of management traffic can be performed. An application processor may be used to perform the external processing of management traffic.
    Type: Grant
    Filed: January 21, 2010
    Date of Patent: January 17, 2012
    Assignee: Broadcom Corporation
    Inventors: Scott S. McDaniel, Steven B. Lindsay
  • Publication number: 20110314171
    Abstract: A method for processing of packetized data is disclosed and includes allocating a plurality of partitions of a single context memory for handling data for a corresponding plurality of network protocol connections. Data for at least one of the plurality of network protocol connections may be processed utilizing a corresponding at least one of the plurality of partitions of the single context memory. The at least one of the plurality of partitions of the single context memory may be de-allocated, when the corresponding at least one of the plurality of network protocol connections is terminated. The data for the at least one of the plurality of network protocol connections may be received. The data may be associated with a single network protocol or with a plurality of network protocols. The data for the at least one of the plurality of network protocol connections includes context data.
    Type: Application
    Filed: December 14, 2010
    Publication date: December 22, 2011
    Inventors: Uri El Zur, Steven B. Lindsay, Kan Frankie Fan, Scott S. McDaniel
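
For illustration, a minimal sketch of a dict-backed allocator standing in for the single context memory and its per-connection partitions described in publication 20110314171 above; the sizes and names are invented.

```python
# Illustrative sketch only: partition sizes and names are invented; the
# allocate/deallocate lifecycle follows the abstract of publication 20110314171.

class ContextMemory:
    """Single context memory carved into fixed-size partitions, one per connection."""

    def __init__(self, total_bytes: int, partition_bytes: int):
        self.partition_bytes = partition_bytes
        self.free_partitions = list(range(total_bytes // partition_bytes))
        self.by_connection: dict[int, int] = {}   # connection id -> partition index

    def allocate(self, conn_id: int) -> int:
        """Allocate a partition to hold this connection's context data."""
        partition = self.free_partitions.pop()
        self.by_connection[conn_id] = partition
        return partition

    def deallocate(self, conn_id: int) -> None:
        """Return the partition to the free pool when the connection terminates."""
        self.free_partitions.append(self.by_connection.pop(conn_id))

if __name__ == "__main__":
    mem = ContextMemory(total_bytes=4096, partition_bytes=512)  # 8 partitions
    p = mem.allocate(conn_id=100)
    print(p, len(mem.free_partitions))   # 7 7
    mem.deallocate(conn_id=100)
    print(len(mem.free_partitions))      # 8
```
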
  • Patent number: 8055895
    Abstract: Methods and associated systems provide secured data transmission over a data network. A security device provides security processing in the data path of a packet network. The device may include at least one network interface to send packets to and receive packets from a data network and at least one cryptographic engine for performing encryption, decryption and/or authentication operations. The device may be configured as an in-line security processor that processes packets that pass through the device as the packets are routed to/from the data network.
    Type: Grant
    Filed: August 31, 2009
    Date of Patent: November 8, 2011
    Assignee: Broadcom Corporation
    Inventors: Mark Buer, Scott S. McDaniel, Uri Elzur, Joseph J. Tardo, Kan Fan
  • Publication number: 20110246662
    Abstract: Aspects of the invention may comprise receiving an incoming TCP packet at a TEEC and processing at least a portion of the incoming packet once by the TEEC without having to do any reassembly and/or retransmission by the TEEC. At least a portion of the incoming TCP packet may be buffered in at least one internal elastic buffer of the TEEC. The internal elastic buffer may comprise a receive internal elastic buffer and/or a transmit internal elastic buffer. Accordingly, at least a portion of the incoming TCP packet may be buffered in the receive internal elastic buffer. At least a portion of the processed incoming packet may be placed in a portion of a host memory for processing by a host processor or CPU. Furthermore, at least a portion of the processed incoming TCP packet may be DMA transferred to a portion of the host memory.
    Type: Application
    Filed: May 25, 2011
    Publication date: October 6, 2011
    Inventors: Uri Elzur, Frankie Fan, Steven B. Lindsay, Scott S. McDaniel
  • Patent number: 8010707
    Abstract: Systems and methods for network interfacing are provided. In one embodiment, a data center may be provided that may include, for example, a first tier, a second tier and a third tier. The first tier may include, for example, a first server. The second tier may include, for example, a second server. The third tier may include, for example, a third server. At least one of the first server, the second server and the third server may handle a plurality of different traffic types over a single fabric.
    Type: Grant
    Filed: August 29, 2003
    Date of Patent: August 30, 2011
    Inventors: Uri Elzur, Frankie Fan, Steven B. Lindsay, Scott S. McDaniel
  • Publication number: 20110185076
    Abstract: Systems and methods for network interfacing may include a communication data center with a first tier, a second tier and a third tier. The first tier may include a first server with a first single integrated convergent network controller chip. The second tier may include a second server with a second single integrated convergent network controller chip. The third tier may include a third server with a third single integrated convergent network controller chip. The second server may be coupled to the first server via a single fabric with a single connector. The third server may be coupled to the second server via the single fabric with the single connector. The respective first, second and third servers each process a plurality of different traffic types concurrently via the respective first, second and third single integrated convergent network controller chips over the single fabric that is coupled to the single connector.
    Type: Application
    Filed: April 5, 2011
    Publication date: July 28, 2011
    Inventors: Uri Elzur, Frankie Fan, Steven B. Lindsay, Scott S. McDaniel
  • Patent number: 7934021
    Abstract: Systems and methods for network interfacing may include a communication data center with a first tier, a second tier and a third tier. The first tier may include a first server with a first single integrated convergent network controller chip. The second tier may include a second server with a second single integrated convergent network controller chip. The third tier may include a third server with a third single integrated convergent network controller chip. The second server may be coupled to the first server via a single fabric with a single connector. The third server may be coupled to the second server via the single fabric with the single connector. The respective first, second and third servers each process a plurality of different traffic types concurrently via the respective first, second and third single integrated convergent network controller chips over the single fabric that is coupled to the single connector.
    Type: Grant
    Filed: June 8, 2009
    Date of Patent: April 26, 2011
    Assignee: Broadcom Corporation
    Inventors: Uri Elzur, Frankie Fan, Steven B. Lindsay, Scott S. McDaniel