Patents by Inventor Peter A. Walker

Peter A. Walker has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11952577
    Abstract: Provided herein are compositions and methods for producing isoprenoids, including squalene. In certain aspects and embodiments, provided are genetically converted yeast and uses thereof. In some aspects and embodiments, the genetically converted yeast produce isoprenoids, preferably squalene. Also provided are methods of producing squalene using a genetically converted yeast or a non-genetically converted yeast. The invention also provides squalene produced by genetically converted yeast or non-genetically converted yeast.
    Type: Grant
    Filed: January 9, 2017
    Date of Patent: April 9, 2024
    Assignee: NUCELIS LLC
    Inventors: Keith A. Walker, Mark E. Knuth, Noel M. Fong, Peter R. Beetham
  • Patent number: 11918255
    Abstract: A system includes a first pedicle screw, a second pedicle screw, and an adjustable rod having an outer housing coupled to one of the first pedicle screw and the second pedicle screw, the outer housing having a threaded shaft secured to one end thereof extending along an interior portion thereof. The system further includes a hollow magnetic assembly disposed within the outer housing and having a magnetic element disposed therein, the hollow magnetic assembly having an internal threaded surface engaged with the threaded shaft, the magnetic assembly being coupled to the other of the first pedicle screw and the second pedicle screw, wherein the hollow magnetic assembly rotates in response to an externally applied magnetic field to thereby lengthen or shorten the distance between the first pedicle screw and the second pedicle screw.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: March 5, 2024
    Assignee: NuVasive Specialized Orthopedics, Inc.
    Inventors: Brad Culbert, Scott Pool, Blair Walker, Arvin Chang, Peter P. Tran
  • Patent number: 11836642
    Abstract: A method, system, and computer program product for dynamically scheduling machine learning inference jobs receive or determine a plurality of performance profiles associated with a plurality of system resources, wherein each performance profile is associated with a machine learning model; receive a request for system resources for an inference job associated with the machine learning model; determine a system resource of the plurality of system resources for processing the inference job associated with the machine learning model based on the plurality of performance profiles and a quality of service requirement associated with the inference job; assign the system resource to the inference job for processing the inference job; receive result data associated with processing of the inference job with the system resource; and update, based on the result data, a performance profile of the plurality of performance profiles associated with the system resource and the machine learning model.
    Type: Grant
    Filed: December 23, 2022
    Date of Patent: December 5, 2023
    Assignee: Visa International Service Association
    Inventors: Yinhe Cheng, Yu Gu, Igor Karpenko, Peter Walker, Ranglin Lu, Subir Roy
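
The scheduling loop described in the abstract above (profile-driven resource selection under a quality-of-service requirement, followed by a profile update from observed results) can be pictured with a minimal sketch. This is an illustration only, not the patented implementation; the latency-based profile, the QoS threshold, and the moving-average update are assumptions chosen for clarity.

```python
# Illustrative sketch only: a QoS-aware scheduler that picks a system resource
# based on per-(resource, model) performance profiles and refreshes a profile
# from observed results. Field names and the moving-average update are
# assumptions, not the patented method.
from dataclasses import dataclass

@dataclass
class Profile:
    resource: str              # e.g. "cpu-0", "gpu-0"
    model: str                 # machine learning model the profile was measured for
    expected_latency_ms: float

class Scheduler:
    def __init__(self, profiles):
        # Index profiles by (resource, model) for quick lookup.
        self.profiles = {(p.resource, p.model): p for p in profiles}

    def assign(self, model, max_latency_ms):
        # Keep only resources whose profiled latency meets the QoS requirement,
        # then pick the fastest of those.
        candidates = [p for (_, m), p in self.profiles.items()
                      if m == model and p.expected_latency_ms <= max_latency_ms]
        if not candidates:
            raise RuntimeError("no resource satisfies the QoS requirement")
        return min(candidates, key=lambda p: p.expected_latency_ms).resource

    def update(self, resource, model, observed_latency_ms, alpha=0.2):
        # Fold the observed result back into the profile (exponential moving average).
        p = self.profiles[(resource, model)]
        p.expected_latency_ms = (1 - alpha) * p.expected_latency_ms + alpha * observed_latency_ms

if __name__ == "__main__":
    sched = Scheduler([Profile("cpu-0", "fraud-model", 40.0),
                       Profile("gpu-0", "fraud-model", 8.0)])
    chosen = sched.assign("fraud-model", max_latency_ms=20.0)   # -> "gpu-0"
    sched.update(chosen, "fraud-model", observed_latency_ms=9.5)
    print(chosen, sched.profiles[(chosen, "fraud-model")].expected_latency_ms)
```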
  • Publication number: 20230342203
    Abstract: A method for dynamically assigning an inference request is disclosed. A method for dynamically assigning an inference request may include determining at least one model to process an inference request on a plurality of computing platforms, the plurality of computing platforms including at least one Central Processing Unit (CPU) and at least one Graphics Processing Unit (GPU), obtaining, with at least one processor, profile information of the at least one model, the profile information including measured characteristics of the at least one model, dynamically determining a selected computing platform from between the at least one CPU and the at least one GPU for responding to the inference request based on an optimized objective associated with a status of the computing platform and the profile information, and routing, with at least one processor, the inference request to the selected computing platform. A system and computer program product are also disclosed.
    Type: Application
    Filed: June 29, 2023
    Publication date: October 26, 2023
    Inventors: Hao Yang, Biswajit Das, Yu Gu, Peter Walker, Igor Karpenko, Robert Brian Christensen
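
A minimal sketch of the CPU-versus-GPU selection described in the application above. The objective used here (profiled per-request latency plus the platform's current backlog) and the status fields are assumptions for illustration, not the claimed objective.

```python
# Illustrative sketch: route an inference request to the CPU or GPU whose
# estimated completion time (current backlog + profiled per-request latency)
# is smallest. The objective and field names are assumptions for illustration.

def pick_platform(request_model, platforms, profiles):
    """platforms: {"cpu": {"queue_ms": ...}, "gpu": {"queue_ms": ...}}
    profiles:  {("cpu", model): latency_ms, ("gpu", model): latency_ms}"""
    def objective(name):
        return platforms[name]["queue_ms"] + profiles[(name, request_model)]
    return min(platforms, key=objective)

if __name__ == "__main__":
    platforms = {"cpu": {"queue_ms": 2.0}, "gpu": {"queue_ms": 55.0}}
    profiles = {("cpu", "small-mlp"): 6.0, ("gpu", "small-mlp"): 3.0}
    # The GPU is faster per request, but its backlog makes the CPU the better
    # choice for this request under the assumed objective.
    print(pick_platform("small-mlp", platforms, profiles))   # -> "cpu"
```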
  • Patent number: 11714681
    Abstract: A method for dynamically assigning an inference request is disclosed. A method for dynamically assigning an inference request may include determining at least one model to process an inference request on a plurality of computing platforms, the plurality of computing platforms including at least one Central Processing Unit (CPU) and at least one Graphics Processing Unit (GPU), obtaining, with at least one processor, profile information of the at least one model, the profile information including measured characteristics of the at least one model, dynamically determining a selected computing platform from between the at least one CPU and the at least one GPU for responding to the inference request based on an optimized objective associated with a status of the computing platform and the profile information, and routing, with at least one processor, the inference request to the selected computing platform. A system and computer program product are also disclosed.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: August 1, 2023
    Assignee: Visa International Service Association
    Inventors: Hao Yang, Biswajit Das, Yu Gu, Peter Walker, Igor Karpenko, Robert Brian Christensen
  • Publication number: 20230130887
    Abstract: A method, system, and computer program product for dynamically scheduling machine learning inference jobs receive or determine a plurality of performance profiles associated with a plurality of system resources, wherein each performance profile is associated with a machine learning model; receive a request for system resources for an inference job associated with the machine learning model; determine a system resource of the plurality of system resources for processing the inference job associated with the machine learning model based on the plurality of performance profiles and a quality of service requirement associated with the inference job; assign the system resource to the inference job for processing the inference job; receive result data associated with processing of the inference job with the system resource; and update, based on the result data, a performance profile of the plurality of performance profiles associated with the system resource and the machine learning model.
    Type: Application
    Filed: December 23, 2022
    Publication date: April 27, 2023
    Inventors: Yinhe Cheng, Yu Gu, Igor Karpenko, Peter Walker, Ranglin Lu, Subir Roy
  • Patent number: 11633196
    Abstract: A minimally invasive hip arthroplasty technique involves intramedullary insertion of an elongate femoral broach into a femur. The broach has a superior lateromedial transverse bore. A reaming rod is then located through the transverse bore and the neck of the femur. A cutting head is coupled to a distal end of the reaming rod via an incision. An orthogonal drive arm of an arthroplasty jig may also be inserted behind the cutting head to press the cutting head to ream the acetabulum while the reaming rod rotates the cutting head.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: April 25, 2023
    Assignee: PMSW RESEARCH PTY LTD
    Inventor: Peter Walker
  • Patent number: 11562263
    Abstract: A method, system, and computer program product for dynamically scheduling machine learning inference jobs receive or determine a plurality of performance profiles associated with a plurality of system resources, wherein each performance profile is associated with a machine learning model; receive a request for system resources for an inference job associated with the machine learning model; determine a system resource of the plurality of system resources for processing the inference job associated with the machine learning model based on the plurality of performance profiles and a quality of service requirement associated with the inference job; assign the system resource to the inference job for processing the inference job; receive result data associated with processing of the inference job with the system resource; and update, based on the result data, a performance profile of the plurality of performance profiles associated with the system resource and the machine learning model.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: January 24, 2023
    Assignee: Visa International Service Association
    Inventors: Yinhe Cheng, Yu Gu, Igor Karpenko, Peter Walker, Ranglin Lu, Subir Roy
  • Publication number: 20220343883
    Abstract: Disclosed are a method and device for processing an audio signal, in which a pitch processed signal is mixed with a high pass filtered version of the input signal. This produces improvements in the latency and quality of the pitch processed signal, particularly for live performance.
    Type: Application
    Filed: September 25, 2019
    Publication date: October 27, 2022
    Inventor: Peter WALKER
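
The signal flow sketched below mirrors the mixing arrangement described in the application above: one path is pitch processed, the other is high-pass filtered, and the two are mixed. The first-order filter, cutoff frequency, mix gain, and the stubbed pitch processor are assumptions for illustration only.

```python
# Illustrative sketch of the described signal flow: the input is split, one
# path is pitch processed (stubbed here), the other is high-pass filtered,
# and the two paths are mixed. Filter design and mix gain are assumptions.
import numpy as np

def high_pass(x, sample_rate, cutoff_hz=1000.0):
    # Simple first-order high-pass filter (illustrative, not the patented filter).
    rc = 1.0 / (2 * np.pi * cutoff_hz)
    alpha = rc / (rc + 1.0 / sample_rate)
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y

def pitch_process(x):
    # Placeholder for a pitch shifter or corrector; a real one would operate
    # on the low-frequency content where pitch tracking is meaningful.
    return x

def process(x, sample_rate, mix=0.5):
    # Blend the pitch-processed path with the high-pass filtered path.
    return (1 - mix) * pitch_process(x) + mix * high_pass(x, sample_rate)

if __name__ == "__main__":
    sr = 48000
    t = np.arange(sr) / sr
    tone = np.sin(2 * np.pi * 220 * t)      # 1 s, 220 Hz test tone
    out = process(tone, sr)
    print(out.shape, float(np.max(np.abs(out))))
```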
  • Publication number: 20220293071
    Abstract: A vibrato control device for a guitar, including a body, a rotation mechanism within the body having an axis, and an arm connected to the body so as to allow rotation of the arm relative to the body about the axis. The arm is oriented so as to rotate generally parallel to the side of the body. The arm has particular application to a mechanically operated but electronically sensed vibrato control device.
    Type: Application
    Filed: July 13, 2020
    Publication date: September 15, 2022
    Inventor: Peter WALKER
  • Publication number: 20220273420
    Abstract: A surgical filament securement assembly has an anchor having a channel therethrough through which a surgical filament is passed. The surgical filament is tied to form a dilated knot having at least one throw around a dilater member at an entrance of the channel such that the surgical filament is able to run around the dilater member when a limb thereof is pulled in a first direction through the channel whilst the dilated knot remains in place. When the dilater member is pulled out from within the dilated knot, the dilated knot itself strangulates to form a tightened stopper knot. The channel has a diameter less than that of the tightened stopper knot such that the tightened stopper knot cannot pass through the entrance in the first direction.
    Type: Application
    Filed: August 21, 2020
    Publication date: September 1, 2022
    Applicant: PMSW RESEARCH PTY LTD
    Inventor: Peter Walker
  • Patent number: 11330885
    Abstract: The multi-functional hair drying net and turban towel for upright application may be used on its own or in combination with a hand-held hair dryer. It is useful for managing dry and wet hair, and for absorbing water from wet hair after washing, swimming, or anytime the hair is wet, thereby avoiding dripping of water on the face, neck, or clothes. It can be used over dry hair to keep hair off the face, back, and neck when applying make-up and treatments to the face. When required to dry hair by electrical means, the device can be connected to a commercially available hair dryer and used as a conduit to direct warm air into the net cavity and dry the hair. It can also be used to keep hair dry and off the face and back in the bath, shower, Jacuzzi spa, or steam bath.
    Type: Grant
    Filed: May 23, 2016
    Date of Patent: May 17, 2022
    Inventors: Aurora Walker, Andrew Peter Walker
  • Publication number: 20220061489
    Abstract: The multi-functional hair drying net and turban towel for upright application may be used on its own or in combination with a hand-held hair dryer. It is useful for managing dry and wet hair, and for absorbing water from wet hair after washing, swimming, or anytime the hair is wet, thereby avoiding dripping of water on the face, neck, or clothes. It can be used over dry hair to keep hair off the face, back, and neck when applying make-up and treatments to the face. When required to dry hair by electrical means, the device can be connected to a commercially available hair dryer and used as a conduit to direct warm air into the net cavity and dry the hair. It can also be used to keep hair dry and off the face and back in the bath, shower, Jacuzzi spa, or steam bath.
    Type: Application
    Filed: May 23, 2016
    Publication date: March 3, 2022
    Inventors: Aurora Walker, Andrew Peter Walker
  • Publication number: 20220051254
    Abstract: Embodiments of the invention are directed to systems and methods for utilizing a cache to store historical transaction data. A predictive model may be trained to identify particular identifiers associated with historical data that is likely to be utilized on a particular date and/or within a particular time period. The historical data corresponding to these identifiers may be stored in a cache of the processing computer. Subsequently, an authorization request message may be received that includes an identifier. The processing computer may utilize the identifier to retrieve historical transaction data from the cache. The retrieved data may be utilized to perform any suitable operation. By predicting the data that will be needed to perform these operations, and preemptively storing such data in a cache, the latency associated with subsequent processing may be reduced and the performance of the system as a whole improved.
    Type: Application
    Filed: October 26, 2021
    Publication date: February 17, 2022
    Inventors: Hongqin Song, Yu Gu, Dan Wang, Peter Walker
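
A minimal sketch of the caching idea described above: a predictor names the identifiers expected to be used soon, their historical records are preloaded into a cache, and lookups fall back to the slower store on a miss. The predictor interface and record shapes are assumptions, not the claimed system.

```python
# Illustrative sketch: warm a cache with the historical records a predictive
# model expects to be needed, so an incoming authorization request can often
# be served without a backing-store lookup. All names here are assumptions.

class PredictiveCache:
    def __init__(self, store, predict_hot_ids):
        self.store = store                    # identifier -> historical records
        self.predict_hot_ids = predict_hot_ids
        self.cache = {}

    def warm(self, date):
        # Preload records the model expects to be used on this date.
        for identifier in self.predict_hot_ids(date):
            if identifier in self.store:
                self.cache[identifier] = self.store[identifier]

    def lookup(self, identifier):
        # Serve from the cache when possible; fall back to the slower store.
        if identifier in self.cache:
            return self.cache[identifier]
        return self.store.get(identifier, [])

if __name__ == "__main__":
    store = {"acct-1": ["txn-a", "txn-b"], "acct-2": ["txn-c"]}
    cache = PredictiveCache(store, predict_hot_ids=lambda date: ["acct-1"])
    cache.warm("2021-11-16")
    print(cache.lookup("acct-1"))   # served from the warmed cache
```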
  • Patent number: 11176556
    Abstract: Embodiments of the invention are directed to systems and methods for utilizing a cache to store historical transaction data. A predictive model may be trained to identify particular identifiers associated with historical data that is likely to be utilized on a particular date and/or within a particular time period. The historical data corresponding to these identifiers may be stored in a cache of the processing computer. Subsequently, an authorization request message may be received that includes an identifier. The processing computer may utilize the identifier to retrieve historical transaction data from the cache. The retrieved data may be utilized to perform any suitable operation. By predicting the data that will be needed to perform these operations, and preemptively storing such data in a cache, the latency associated with subsequent processing may be reduced and the performance of the system as a whole improved.
    Type: Grant
    Filed: November 13, 2018
    Date of Patent: November 16, 2021
    Assignee: Visa International Service Association
    Inventors: Hongqin Song, Yu Gu, Dan Wang, Peter Walker
  • Patent number: 11151385
    Abstract: A method for detecting deception in an Audio-Video response of a user, using a server, in a distributed computing architecture, characterized in that the method includes: enabling an Audio-Video connection with a user device upon receiving a request from a user; obtaining, from the user device, an Audio-Video response of the user corresponding to a first set of questions that are provided to the user by the server; extracting audio signals and video signals from the Audio-Video response; detecting an activity of the user by determining a plurality of Natural Language Processing (NLP) features from the extracted audio signals by (i) performing a speech to text translation and (ii) extracting the plurality of NLP features from the translated text, and determining a plurality of speech features from the extracted audio signals by (i) splitting the extracted audio signals into a plurality of short interval audio signals and (ii) extracting the plurality of speech features from the plurality of short interval audio signals.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: October 19, 2021
    Assignee: RTScaleAI Inc
    Inventors: Vivek Iyer, Peter Walker
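
The feature-extraction steps named in the abstract above (speech-to-text followed by NLP features, and splitting the audio into short intervals for per-interval speech features) are illustrated in the sketch below. The transcriber stub, interval length, and the particular features are assumptions for illustration only.

```python
# Illustrative sketch of the named extraction steps: transcribe the audio,
# pull simple NLP features from the text, split the audio into short
# intervals, and compute a per-interval speech feature. All feature choices
# and the transcriber stub are assumptions, not the patented pipeline.
import numpy as np

def transcribe(audio):
    # Stand-in for a speech-to-text system.
    return "i was at home all evening"

def nlp_features(text):
    words = text.split()
    return {"word_count": len(words),
            "hedge_ratio": sum(w in {"maybe", "probably"} for w in words) / max(len(words), 1)}

def speech_features(audio, sample_rate, interval_s=0.5):
    # Split into short intervals and compute a per-interval energy feature.
    hop = int(interval_s * sample_rate)
    intervals = [audio[i:i + hop] for i in range(0, len(audio), hop) if len(audio[i:i + hop])]
    return [float(np.mean(seg ** 2)) for seg in intervals]

if __name__ == "__main__":
    sr = 16000
    audio = np.random.randn(sr * 2).astype(np.float32) * 0.1   # 2 s of placeholder audio
    print(nlp_features(transcribe(audio)))
    print(len(speech_features(audio, sr)))   # -> 4 intervals of 0.5 s
```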
  • Publication number: 20210232399
    Abstract: A method for dynamically assigning an inference request is disclosed. A method for dynamically assigning an inference request may include determining at least one model to process an inference request on a plurality of computing platforms, the plurality of computing platforms including at least one Central Processing Unit (CPU) and at least one Graphics Processing Unit (GPU), obtaining, with at least one processor, profile information of the at least one model, the profile information including measured characteristics of the at least one model, dynamically determining a selected computing platform from between the at least one CPU and the at least one GPU for responding to the inference request based on an optimized objective associated with a status of the computing platform and the profile information, and routing, with at least one processor, the inference request to the selected computing platform. A system and computer program product are also disclosed.
    Type: Application
    Filed: January 23, 2020
    Publication date: July 29, 2021
    Inventors: Hao Yang, Biswajit Das, Yu Gu, Peter Walker, Igor Karpenko, Robert Brian Christensen
  • Publication number: 20210224665
    Abstract: A method, system, and computer program product for dynamically scheduling machine learning inference jobs receive or determine a plurality of performance profiles associated with a plurality of system resources, wherein each performance profile is associated with a machine learning model; receive a request for system resources for an inference job associated with the machine learning model; determine a system resource of the plurality of system resources for processing the inference job associated with the machine learning model based on the plurality of performance profiles and a quality of service requirement associated with the inference job; assign the system resource to the inference job for processing the inference job; receive result data associated with processing of the inference job with the system resource; and update, based on the result data, a performance profile of the plurality of performance profiles associated with the system resource and the machine learning model.
    Type: Application
    Filed: January 17, 2020
    Publication date: July 22, 2021
    Inventors: Yinhe Cheng, Yu Gu, Igor Karpenko, Peter Walker, Ranglin Lu, Subir Roy
  • Publication number: 20210192221
    Abstract: A method for detecting deception in an Audio-Video response of a user, using a server, in a distributed computing architecture, characterized in that the method includes: enabling an Audio-Video connection with a user device upon receiving a request from a user; obtaining, from the user device, an Audio-Video response of the user corresponding to a first set of questions that are provided to the user by the server; extracting audio signals and video signals from the Audio-Video response; detecting an activity of the user by determining a plurality of Natural Language Processing (NLP) features from the extracted audio signals by (i) performing a speech to text translation and (ii) extracting the plurality of NLP features from the translated text, and determining a plurality of speech features from the extracted audio signals by (i) splitting the extracted audio signals into a plurality of short interval audio signals and (ii) extracting the plurality of speech features from the plurality of short interval audio signals.
    Type: Application
    Filed: December 20, 2019
    Publication date: June 24, 2021
    Inventors: Vivek Iyer, Peter Walker
  • Patent number: D957735
    Type: Grant
    Filed: December 8, 2015
    Date of Patent: July 12, 2022
    Inventors: Aurora Walker, Andrew Peter Walker