Patents by Inventor Michael Zacharski
Michael Zacharski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220253906
Abstract: Dynamic header bidding configuration is disclosed. For example, ad slot entries associated with ad slots in web content, and further associated with ad identifiers and ad sizes, are received. Header bidding partners associated with an ad slot entry are received, each partner associated with a parameter. The partners, parameters, ad slot entries, ad identifiers, and ad sizes are recorded as a configuration associated with the web content. A script associated with the configuration, and further associated with a page of the web content that includes an ad slot associated with the ad slot entry, is generated. The configuration is sent to a client device that invokes the script by loading the page, and an ad from a partner is displayed in an ad impression of the ad slot on the client device based on a response to a notice sent to at least two partners.
Type: Application
Filed: February 14, 2022
Publication date: August 11, 2022
Applicant: Engine Media, LLC
Inventors: Michael Zacharski, Alex E. Cook
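The configuration step the abstract describes can be pictured as collecting slot entries and partner parameters into one record tied to the web content. The following is an illustrative sketch only, not the patented implementation; all names (`AdSlotEntry`, `build_config`, the partner parameters) are hypothetical.

```python
# Hypothetical sketch of assembling a header-bidding configuration from
# ad slot entries (ad identifiers and sizes) and per-partner parameters.
from dataclasses import dataclass, field

@dataclass
class AdSlotEntry:
    ad_id: str     # ad identifier for the slot
    sizes: list    # accepted ad sizes, e.g. [(728, 90)]
    partners: dict = field(default_factory=dict)  # partner -> bid parameters

def build_config(page_url, entries):
    """Record partners, parameters, slot entries, ad ids, and sizes as one
    configuration associated with the web content."""
    return {
        "page": page_url,
        "slots": [
            {"ad_id": e.ad_id, "sizes": e.sizes, "partners": e.partners}
            for e in entries
        ],
    }

slot = AdSlotEntry("div-leaderboard", [(728, 90)],
                   {"partnerA": {"placement": "123"},
                    "partnerB": {"site": "456"}})
config = build_config("https://example.com/article", [slot])
# A client-side script loading this configuration would send a bid notice
# to every partner attached to the slot (at least two here).
print(len(config["slots"][0]["partners"]))  # prints 2
```

The point of recording everything as one configuration is that the generated script only needs to fetch this single record at page load to know which partners to notify for each slot.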
-
Patent number: 11392995
Abstract: Efficient translation and load balancing of bid requests is disclosed. For example, a first network interface receives a notice from a publisher, triggering a first interrupt on a first processor. The first processor processes the first interrupt and provides the notice to a notice queue. A request translator executing on a distinct second processor translates the notice into a request. A request router sends the request to an advertiser through a selected network interface, which receives a first response triggering a second interrupt on a third processor. The third processor processes the second interrupt and provides the first response to a response queue. A response translator executing on the second processor translates the first response into an offer, which is sent to the publisher through the first network interface. Meanwhile, a second network interface triggers a third interrupt on a fourth processor after receiving a second response.
Type: Grant
Filed: July 22, 2019
Date of Patent: July 19, 2022
Assignee: Engine Media, LLC
Inventors: Louis Clayton Ashner, Michael Zacharski
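The pipeline in this abstract separates interrupt handling from translation work via queues: notices go into a notice queue, responses into a response queue, and a translator on a separate processor drains each. A minimal single-threaded sketch, with all names (`translate_notice`, `translate_response`, the message fields) assumed for illustration:

```python
# Sketch of the queue-decoupled translation pipeline: the receiving side
# enqueues raw messages, and translators convert them on another worker.
import queue

notice_queue = queue.Queue()    # filled by the interrupt-handling processor
response_queue = queue.Queue()  # filled when an advertiser responds

def translate_notice(notice):
    """Request translator: publisher notice -> advertiser bid request."""
    return {"slot": notice["slot"], "floor_cpm": notice.get("floor", 0.0)}

def translate_response(response):
    """Response translator: advertiser response -> publisher offer."""
    return {"slot": response["slot"], "cpm": response["bid"]}

# A publisher notice arrives and is queued (in the abstract, by the
# processor that handled the network interrupt).
notice_queue.put({"slot": "banner-1", "floor": 1.25})
request = translate_notice(notice_queue.get())

# The advertiser's response is queued, then translated into an offer
# that goes back to the publisher.
response_queue.put({"slot": request["slot"], "bid": 2.10})
offer = translate_response(response_queue.get())
print(offer)  # {'slot': 'banner-1', 'cpm': 2.1}
```

Decoupling through queues is what lets the interrupt-handling processors return quickly while translation proceeds on distinct processors.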
-
Patent number: 11250476
Abstract: Dynamic header bidding configuration is disclosed. For example, ad slot entries associated with ad slots in web content, and further associated with ad identifiers and ad sizes, are received. Header bidding partners associated with an ad slot entry are received, each partner associated with a parameter. The partners, parameters, ad slot entries, ad identifiers, and ad sizes are recorded as a configuration associated with the web content. A script associated with the configuration, and further associated with a page of the web content that includes an ad slot associated with the ad slot entry, is generated. The configuration is sent to a client device that invokes the script by loading the page, and an ad from a partner is displayed in an ad impression of the ad slot on the client device based on a response to a notice sent to at least two partners.
Type: Grant
Filed: August 4, 2017
Date of Patent: February 15, 2022
Assignee: Engine Media, LLC
Inventors: Michael Zacharski, Alex E. Cook
-
Patent number: 10999201
Abstract: Dynamic advertisement routing is disclosed. For example, a plurality of internet protocol ("IP") addresses associated with a respective plurality of target nodes is stored in a routing pool. Each IP address in the routing pool is pinged through each of first and second load balancer network interfaces. Network routes associated with the target nodes are updated based on a first plurality of ping responses. Communications sessions are established with the target nodes through the respective network routes. The IP addresses are pinged again, and respective latencies in a latency cache are updated based on a second plurality of ping responses. A first request directed to the plurality of target nodes is received, is determined based on the latency cache to be sent to a first target node, and is forwarded to the first target node via the first network route.
Type: Grant
Filed: June 1, 2018
Date of Patent: May 4, 2021
Assignee: Engine Media, LLC
Inventors: Louis Clayton Ashner, John Patrick Roach, Michael Zacharski
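The core of this abstract is latency-based routing: keep a cache of measured latencies per target node and send each request to the lowest-latency node. A hedged sketch under assumed names, with a stubbed ping in place of real ICMP round-trips:

```python
# Hypothetical sketch of latency-based routing: ping every node in the
# routing pool, cache the measured latencies, and route each request to
# the node with the lowest cached latency.
def refresh_latency_cache(routing_pool, ping):
    """Ping each IP in the pool and record its latency in milliseconds."""
    return {ip: ping(ip) for ip in routing_pool}

def route_request(latency_cache):
    """Pick the target node with the lowest cached latency."""
    return min(latency_cache, key=latency_cache.get)

# Simulated ping results stand in for real round-trip measurements.
measured = {"10.0.0.1": 42.0, "10.0.0.2": 17.5, "10.0.0.3": 63.1}
cache = refresh_latency_cache(measured.keys(), lambda ip: measured[ip])
print(route_request(cache))  # prints 10.0.0.2 (lowest latency)
```

In the patented system the cache is refreshed continuously from ping responses arriving over two load balancer interfaces, so routing decisions track current network conditions rather than a one-time measurement.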
-
Patent number: 10554739
Abstract: Individualized connectivity based request handling is disclosed. For example, a content source is accessed by a client device, and a load balancer executes on a processor to receive a first request based on the client device accessing the content source. A first session variable is set to a first value in a first session, and a first latency to the client device is measured. A first plurality of target nodes is selected based on the first session variable. A first plurality of messages is sent to the first plurality of target nodes. A second request is received from the client device after the first session expires, starting a second session. The first session variable is set to a different second value in the second session. A second plurality of messages is sent to a second plurality of target nodes different from the first plurality of target nodes.
Type: Grant
Filed: July 19, 2019
Date of Patent: February 4, 2020
Assignee: Engine Media, LLC
Inventors: Louis Clayton Ashner, Michael Adam Grosinger, John Patrick Roach, Mickey Alexander Schwab, Michael Zacharski
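The mechanism here is a per-session variable that changes when a session expires, so successive sessions fan out to different subsets of target nodes. A minimal sketch with hypothetical names (`new_session`, `select_targets`, the node list) and a simple rotating variable standing in for whatever policy the real system uses:

```python
# Hypothetical sketch: a session variable, reset on session expiry,
# selects which subset of target nodes receives the fan-out messages.
import itertools

NODES = ["node-a", "node-b", "node-c", "node-d"]
_session_counter = itertools.count()

def new_session():
    """Start a session; the session variable changes from session to session."""
    return {"variable": next(_session_counter) % 2}

def select_targets(session, fanout=2):
    """Choose a node subset deterministically from the session variable."""
    start = session["variable"] * fanout
    return NODES[start:start + fanout]

first = select_targets(new_session())   # targets for the first session
second = select_targets(new_session())  # a different subset after expiry
print(first, second)  # prints ['node-a', 'node-b'] ['node-c', 'node-d']
```

Because the variable differs between the two sessions, the second request reaches a disjoint set of nodes, which is the behavior the abstract's final sentence describes.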
-
Publication number: 20190340657
Abstract: Efficient translation and load balancing of bid requests is disclosed. For example, a first network interface receives a notice from a publisher, triggering a first interrupt on a first processor. The first processor processes the first interrupt and provides the notice to a notice queue. A request translator executing on a distinct second processor translates the notice into a request. A request router sends the request to an advertiser through a selected network interface, which receives a first response triggering a second interrupt on a third processor. The third processor processes the second interrupt and provides the first response to a response queue. A response translator executing on the second processor translates the first response into an offer, which is sent to the publisher through the first network interface. Meanwhile, a second network interface triggers a third interrupt on a fourth processor after receiving a second response.
Type: Application
Filed: July 22, 2019
Publication date: November 7, 2019
Applicant: Engine Media, LLC
Inventors: Louis Clayton Ashner, Michael Zacharski
-
Publication number: 20190342377
Abstract: Individualized connectivity based request handling is disclosed. For example, a content source is accessed by a client device, and a load balancer executes on a processor to receive a first request based on the client device accessing the content source. A first session variable is set to a first value in a first session, and a first latency to the client device is measured. A first plurality of target nodes is selected based on the first session variable. A first plurality of messages is sent to the first plurality of target nodes. A second request is received from the client device after the first session expires, starting a second session. The first session variable is set to a different second value in the second session. A second plurality of messages is sent to a second plurality of target nodes different from the first plurality of target nodes.
Type: Application
Filed: July 19, 2019
Publication date: November 7, 2019
Applicant: Engine Media, LLC
Inventors: Louis Clayton Ashner, Michael Adam Grosinger, John Patrick Roach, Mickey Alexander Schwab, Michael Zacharski
-
Patent number: 10455008
Abstract: Individualized connectivity based request handling is disclosed. For example, a content source is accessed by a client device, and a load balancer executes on a processor to receive a first request based on the client device accessing the content source. A first session variable is set to a first value in a first session, and a first latency to the client device is measured. A first plurality of target nodes is selected based on the first session variable. A first plurality of messages is sent to the first plurality of target nodes. A second request is received from the client device after the first session expires, starting a second session. The first session variable is set to a different second value in the second session. A second plurality of messages is sent to a second plurality of target nodes different from the first plurality of target nodes.
Type: Grant
Filed: August 13, 2018
Date of Patent: October 22, 2019
Assignee: Engine Media, LLC
Inventors: Louis Clayton Ashner, Michael Zacharski, Michael Adam Grosinger, Mickey Alexander Schwab, John Patrick Roach
-
Patent number: 10432737
Abstract: Geopartitioned data caching is disclosed. For example, a data source is connected over a network to a geographically remote data cache in communication with a load balancer service. A processor on the data cache executes to receive, from the data source, a plurality of data entries into the data cache, where the plurality of data entries is selected based on a geographical region of the data cache. A data request for a data entry of the plurality of data entries is received from the load balancer service, where a requestor of the data request is in a second geographical region proximately located with the data cache. The data entry is sent to the load balancer service, which forwards the data entry to a receiver.
Type: Grant
Filed: October 12, 2017
Date of Patent: October 1, 2019
Assignee: Engine Media, LLC
Inventors: Louis Clayton Ashner, John Patrick Roach, Mickey Alexander Schwab, Michael Zacharski
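The geopartitioning idea reduces to selecting, per regional cache, only the subset of the data source relevant to that region, so nearby requestors are served locally. A sketch with assumed names and made-up sample data:

```python
# Hypothetical sketch of geopartitioned caching: the data source pushes
# only the entries tagged for a cache's geographical region, and a
# requestor near that region is served from that cache alone.
def partition_by_region(source_entries, region):
    """Select the subset of the data source destined for one regional cache."""
    return {k: v for k, v in source_entries.items() if v["region"] == region}

source = {
    "user:1": {"region": "us-east", "segment": "sports"},
    "user:2": {"region": "eu-west", "segment": "news"},
    "user:3": {"region": "us-east", "segment": "autos"},
}

us_east_cache = partition_by_region(source, "us-east")
# A load balancer serving a us-east requestor reads from this cache only.
print(sorted(us_east_cache))  # prints ['user:1', 'user:3']
```

Keeping the cache geographically close to the requestor is what removes the long-haul round trip to the remote data source from the request path.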
-
Patent number: 10432706
Abstract: Low-latency high-throughput scalable data caching is disclosed. For example, a data source is connected over a network to a load balancer server with a data cache. A load balancer service and a data cache service execute on processors on the load balancer server to receive, by the load balancer service, a request from a client device over the network. The load balancer service requests a data entry associated with the request from the data cache service. The data cache service retrieves the data entry from the data cache, which stores a first plurality of data entries that is a subset of a second plurality of data entries stored in the data source. The load balancer service modifies the request with the data entry. The load balancer service sends the modified request to a plurality of receivers.
Type: Grant
Filed: August 24, 2018
Date of Patent: October 1, 2019
Assignee: Engine Media LLC
Inventors: Louis Clayton Ashner, Mickey Alexander Schwab, Michael Zacharski, John Patrick Roach
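The flow in this abstract is request enrichment: the load balancer looks up a cached entry (a subset of the full data source held locally), attaches it to the request, and fans the modified request out to receivers. A minimal sketch, all names (`DataCacheService`, `handle_request`, the receiver list) assumed:

```python
# Hypothetical sketch: a local cache service holds a subset of the data
# source; the load balancer enriches each request with the cached entry
# before fanning it out to all receivers.
class DataCacheService:
    def __init__(self, source, keys):
        # The cache stores only a subset of the full data source.
        self._cache = {k: source[k] for k in keys}

    def get(self, key):
        return self._cache.get(key)

def handle_request(request, cache, receivers):
    """Modify the request with the cached entry, then send it to each receiver."""
    entry = cache.get(request["user"])
    modified = {**request, "profile": entry}
    return [(r, modified) for r in receivers]

source = {"u1": {"segment": "sports"}, "u2": {"segment": "news"}}
cache = DataCacheService(source, keys=["u1"])  # subset of the source
sent = handle_request({"user": "u1", "slot": "banner"}, cache,
                      receivers=["bidder-a", "bidder-b"])
print(len(sent))  # prints 2
```

Serving the lookup from a cache co-resident with the load balancer, rather than querying the remote data source per request, is what keeps the enrichment step off the critical-path network.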
-
Patent number: 10360598
Abstract: Efficient translation and load balancing of bid requests is disclosed. For example, a first network interface receives a notice from a publisher, triggering a first interrupt on a first processor. The first processor processes the first interrupt and provides the notice to a notice queue. A request translator executing on a distinct second processor translates the notice into a request. A request router sends the request to an advertiser through a selected network interface, which receives a first response triggering a second interrupt on a third processor. The third processor processes the second interrupt and provides the first response to a response queue. A response translator executing on the second processor translates the first response into an offer, which is sent to the publisher through the first network interface. Meanwhile, a second network interface triggers a third interrupt on a fourth processor after receiving a second response.
Type: Grant
Filed: April 12, 2017
Date of Patent: July 23, 2019
Assignee: Engine Media, LLC
Inventors: Louis Clayton Ashner, Michael Zacharski
-
Publication number: 20190199784
Abstract: Low-latency high-throughput scalable data caching is disclosed. For example, a data source is connected over a network to a load balancer server with a data cache. A load balancer service and a data cache service execute on processors on the load balancer server to receive, by the load balancer service, a request from a client device over the network. The load balancer service requests a data entry associated with the request from the data cache service. The data cache service retrieves the data entry from the data cache, which stores a first plurality of data entries that is a subset of a second plurality of data entries stored in the data source. The load balancer service modifies the request with the data entry. The load balancer service sends the modified request to a plurality of receivers.
Type: Application
Filed: August 24, 2018
Publication date: June 27, 2019
Applicant: Engine Media, LLC
Inventors: Louis Clayton Ashner, Mickey Alexander Schwab, Michael Zacharski, John Patrick Roach
-
Publication number: 20190141117
Abstract: Individualized connectivity based request handling is disclosed. For example, a content source is accessed by a client device, and a load balancer executes on a processor to receive a first request based on the client device accessing the content source. A first session variable is set to a first value in a first session, and a first latency to the client device is measured. A first plurality of target nodes is selected based on the first session variable. A first plurality of messages is sent to the first plurality of target nodes. A second request is received from the client device after the first session expires, starting a second session. The first session variable is set to a different second value in the second session. A second plurality of messages is sent to a second plurality of target nodes different from the first plurality of target nodes.
Type: Application
Filed: August 13, 2018
Publication date: May 9, 2019
Applicant: Engine Media, LLC
Inventors: Louis Clayton Ashner, Michael Zacharski, Michael Adam Grosinger, Mickey Alexander Schwab, John Patrick Roach
-
Publication number: 20190116230
Abstract: Geopartitioned data caching is disclosed. For example, a data source is connected over a network to a geographically remote data cache in communication with a load balancer service. A processor on the data cache executes to receive, from the data source, a plurality of data entries into the data cache, where the plurality of data entries is selected based on a geographical region of the data cache. A data request for a data entry of the plurality of data entries is received from the load balancer service, where a requestor of the data request is in a second geographical region proximately located with the data cache. The data entry is sent to the load balancer service, which forwards the data entry to a receiver.
Type: Application
Filed: October 12, 2017
Publication date: April 18, 2019
Applicant: Engine Media, LLC
Inventors: Mickey Alexander Schwab, Louis Clayton Ashner, Michael Zacharski, John Patrick Roach
-
Publication number: 20190043092
Abstract: Dynamic header bidding configuration is disclosed. For example, ad slot entries associated with ad slots in web content, and further associated with ad identifiers and ad sizes, are received. Header bidding partners associated with an ad slot entry are received, each partner associated with a parameter. The partners, parameters, ad slot entries, ad identifiers, and ad sizes are recorded as a configuration associated with the web content. A script associated with the configuration, and further associated with a page of the web content that includes an ad slot associated with the ad slot entry, is generated. The configuration is sent to a client device that invokes the script by loading the page, and an ad from a partner is displayed in an ad impression of the ad slot on the client device based on a response to a notice sent to at least two partners.
Type: Application
Filed: August 4, 2017
Publication date: February 7, 2019
Inventors: Michael Zacharski, Alex E. Cook
-
Publication number: 20180300766
Abstract: Efficient translation and load balancing of bid requests is disclosed. For example, a first network interface receives a notice from a publisher, triggering a first interrupt on a first processor. The first processor processes the first interrupt and provides the notice to a notice queue. A request translator executing on a distinct second processor translates the notice into a request. A request router sends the request to an advertiser through a selected network interface, which receives a first response triggering a second interrupt on a third processor. The third processor processes the second interrupt and provides the first response to a response queue. A response translator executing on the second processor translates the first response into an offer, which is sent to the publisher through the first network interface. Meanwhile, a second network interface triggers a third interrupt on a fourth processor after receiving a second response.
Type: Application
Filed: April 12, 2017
Publication date: October 18, 2018
Inventors: Louis Clayton Ashner, Michael Zacharski
-
Publication number: 20180278532
Abstract: Dynamic advertisement routing is disclosed. For example, a plurality of internet protocol ("IP") addresses associated with a respective plurality of target nodes is stored in a routing pool. Each IP address in the routing pool is pinged through each of first and second load balancer network interfaces. Network routes associated with the target nodes are updated based on a first plurality of ping responses. Communications sessions are established with the target nodes through the respective network routes. The IP addresses are pinged again, and respective latencies in a latency cache are updated based on a second plurality of ping responses. A first request directed to the plurality of target nodes is received, is determined based on the latency cache to be sent to a first target node, and is forwarded to the first target node via the first network route.
Type: Application
Filed: June 1, 2018
Publication date: September 27, 2018
Applicant: Engine Media, LLC
Inventors: Louis Clayton Ashner, John Patrick Roach, Michael Zacharski
-
Patent number: 10063632
Abstract: Low-latency high-throughput scalable data caching is disclosed. For example, a data source is connected over a network to a load balancer server with a data cache. A load balancer service and a data cache service execute on processors on the load balancer server to receive, by the load balancer service, a request from a client device over the network. The load balancer service requests a data entry associated with the request from the data cache service. The data cache service retrieves the data entry from the data cache, which stores a first plurality of data entries that is a subset of a second plurality of data entries stored in the data source. The load balancer service modifies the request with the data entry. The load balancer service sends the modified request to a plurality of receivers.
Type: Grant
Filed: December 22, 2017
Date of Patent: August 28, 2018
Assignee: Engine Media, LLC
Inventors: Louis Clayton Ashner, Mickey Alexander Schwab, Michael Zacharski, John Patrick Roach
-
Patent number: 10051046
Abstract: Individualized connectivity based request handling is disclosed. For example, a content source is accessed by a client device, and a load balancer executes on a processor to receive a first request based on the client device accessing the content source. A first session variable is set to a first value in a first session, and a first latency to the client device is measured. A first plurality of target nodes is selected based on the first session variable. A first plurality of messages is sent to the first plurality of target nodes. A second request is received from the client device after the first session expires, starting a second session. The first session variable is set to a different second value in the second session. A second plurality of messages is sent to a second plurality of target nodes different from the first plurality of target nodes.
Type: Grant
Filed: November 8, 2017
Date of Patent: August 14, 2018
Assignee: Engine Media, LLC
Inventors: Michael Zacharski, Michael Adam Grosinger, Louis Clayton Ashner, Mickey Alexander Schwab, John Patrick Roach
-
Patent number: 9992121
Abstract: Dynamic advertisement routing is disclosed. For example, a plurality of internet protocol ("IP") addresses associated with a respective plurality of target nodes is stored in a routing pool. Each IP address in the routing pool is pinged through each of first and second load balancer network interfaces. Network routes associated with the target nodes are updated based on a first plurality of ping responses. Communications sessions are established with the target nodes through the respective network routes. The IP addresses are pinged again, and respective latencies in a latency cache are updated based on a second plurality of ping responses. A first request directed to the plurality of target nodes is received, is determined based on the latency cache to be sent to a first target node, and is forwarded to the first target node via the first network route.
Type: Grant
Filed: November 16, 2017
Date of Patent: June 5, 2018
Assignee: Engine Media, LLC
Inventors: Louis Clayton Ashner, John Patrick Roach, Michael Zacharski