Patents by Inventor James A. Walsh
James A. Walsh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20200374362
Abstract: A global architecture (GLP), as disclosed herein, is based on the thin-server architectural pattern; it delivers all of its services as web services, and no user interface components execute on the GLP. Each web service exposed by the GLP is stateless, which allows the GLP to be highly scalable. The GLP is further decomposed into components, each of which is a microservice, making the overall architecture fully decoupled. Each microservice has fail-over nodes and can scale up on demand, so the GLP has no single point of failure, making the platform both highly scalable and highly available. The GLP architecture provides the capability to build and deploy a microservice instance for each course-recipient-user combination. Because each student interacts with their own microservice, the GLP scales up to the limit of available cloud resources, i.e. near infinity.
Type: Application
Filed: July 20, 2020
Publication date: November 26, 2020
Inventors: James Walsh, Suhail Khaki
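The GLP abstract above (repeated across several related filings below) centers on one idea: a stateless microservice instance is built and deployed per course-recipient-user combination, so each student talks to their own instance. The following is an illustrative sketch of that routing idea only, not code from the patent; all class and method names are hypothetical.

```python
# Hypothetical sketch of per-combination microservice routing,
# as described in the abstract. Not the patented implementation.

class Microservice:
    """A stateless service instance dedicated to one combination."""
    def __init__(self, course, recipient, user):
        self.key = (course, recipient, user)

    def handle(self, request):
        # Stateless: the response depends only on the request payload,
        # so any fail-over node could serve it identically.
        return {"served_by": self.key, "echo": request}

class GLP:
    """Thin-server platform: no UI components, only web services."""
    def __init__(self):
        self._instances = {}

    def route(self, course, recipient, user, request):
        key = (course, recipient, user)
        # Build and deploy an instance on first contact with this combination.
        if key not in self._instances:
            self._instances[key] = Microservice(*key)
        return self._instances[key].handle(request)

glp = GLP()
r1 = glp.route("math101", "school-a", "student-1", {"action": "submit"})
r2 = glp.route("math101", "school-a", "student-2", {"action": "submit"})
# Each student interacts with their own microservice instance.
assert r1["served_by"] != r2["served_by"]
```

Because each instance is stateless, scaling is a matter of creating more instances, which is what lets the abstract claim scaling "to the limit of available cloud resources".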
-
Patent number: 10841392
Abstract: A global architecture (GLP), as disclosed herein, is based on the thin-server architectural pattern; it delivers all of its services as web services, and no user interface components execute on the GLP. Each web service exposed by the GLP is stateless, which allows the GLP to be highly scalable. The GLP is further decomposed into components, each of which is a microservice, making the overall architecture fully decoupled. Each microservice has fail-over nodes and can scale up on demand, so the GLP has no single point of failure, making the platform both highly scalable and highly available. The GLP architecture provides the capability to build and deploy a microservice instance for each course-recipient-user combination. Because each student interacts with their own microservice, the GLP scales up to the limit of available cloud resources, i.e. near infinity.
Type: Grant
Filed: June 27, 2018
Date of Patent: November 17, 2020
Assignee: PEARSON MANAGEMENT SERVICES LIMITED
Inventors: James Walsh, Suhail Khaki
-
Publication number: 20200351369
Abstract: A global architecture (GLP), as disclosed herein, is based on the thin-server architectural pattern; it delivers all of its services as web services, and no user interface components execute on the GLP. Each web service exposed by the GLP is stateless, which allows the GLP to be highly scalable. The GLP is further decomposed into components, each of which is a microservice, making the overall architecture fully decoupled. Each microservice has fail-over nodes and can scale up on demand, so the GLP has no single point of failure, making the platform both highly scalable and highly available. The GLP architecture provides the capability to build and deploy a microservice instance for each course-recipient-user combination. Because each student interacts with their own microservice, the GLP scales up to the limit of available cloud resources, i.e. near infinity.
Type: Application
Filed: July 21, 2020
Publication date: November 5, 2020
Inventors: James Walsh, Suhail Khaki
-
Publication number: 20200351368
Abstract: A global architecture (GLP), as disclosed herein, is based on the thin-server architectural pattern; it delivers all of its services as web services, and no user interface components execute on the GLP. Each web service exposed by the GLP is stateless, which allows the GLP to be highly scalable. The GLP is further decomposed into components, each of which is a microservice, making the overall architecture fully decoupled. Each microservice has fail-over nodes and can scale up on demand, so the GLP has no single point of failure, making the platform both highly scalable and highly available. The GLP architecture provides the capability to build and deploy a microservice instance for each course-recipient-user combination. Because each student interacts with their own microservice, the GLP scales up to the limit of available cloud resources, i.e. near infinity.
Type: Application
Filed: July 20, 2020
Publication date: November 5, 2020
Inventors: James Walsh, Suhail Khaki
-
Publication number: 20200329115
Abstract: A global architecture (GLP), as disclosed herein, is based on the thin-server architectural pattern; it delivers all of its services as web services, and no user interface components execute on the GLP. Each web service exposed by the GLP is stateless, which allows the GLP to be highly scalable. The GLP is further decomposed into components, each of which is a microservice, making the overall architecture fully decoupled. Each microservice has fail-over nodes and can scale up on demand, so the GLP has no single point of failure, making the platform both highly scalable and highly available. The GLP architecture provides the capability to build and deploy a microservice instance for each course-recipient-user combination. Because each student interacts with their own microservice, the GLP scales up to the limit of available cloud resources, i.e. near infinity.
Type: Application
Filed: June 24, 2020
Publication date: October 15, 2020
Inventors: James Walsh, Suhail Khaki
-
Publication number: 20200314198
Abstract: A global architecture (GLP), as disclosed herein, is based on the thin-server architectural pattern; it delivers all of its services as web services, and no user interface components execute on the GLP. Each web service exposed by the GLP is stateless, which allows the GLP to be highly scalable. The GLP is further decomposed into components, each of which is a microservice, making the overall architecture fully decoupled. Each microservice has fail-over nodes and can scale up on demand, so the GLP has no single point of failure, making the platform both highly scalable and highly available. The GLP architecture provides the capability to build and deploy a microservice instance for each course-recipient-user combination. Because each student interacts with their own microservice, the GLP scales up to the limit of available cloud resources, i.e. near infinity.
Type: Application
Filed: June 12, 2020
Publication date: October 1, 2020
Inventors: James Walsh, Suhail Khaki
-
Publication number: 20200307019
Abstract: Kneading elements, extrusion apparatus, and methods of manufacturing honeycomb bodies are described herein. A kneading element (1802) has an inner surface (1804) defining an opening (1806) configured to couple the kneading element (1802) to a shaft (46, 48). The kneading element (1802) also has a continuous, closed-curve elliptical outer surface (1808). The opening (1806) has an axis (1814) that is off-center with respect to a geometric center (1816) of the kneading element (1802), as viewed in a transverse plane perpendicular to the axis.
Type: Application
Filed: October 3, 2018
Publication date: October 1, 2020
Inventors: Conor James Walsh, Stephanie Stoughton Wu
-
Publication number: 20200287984
Abstract: A global architecture (GLP), as disclosed herein, is based on the thin-server architectural pattern; it delivers all of its services as web services, and no user interface components execute on the GLP. Each web service exposed by the GLP is stateless, which allows the GLP to be highly scalable. The GLP is further decomposed into components, each of which is a microservice, making the overall architecture fully decoupled. Each microservice has fail-over nodes and can scale up on demand, so the GLP has no single point of failure, making the platform both highly scalable and highly available. The GLP architecture provides the capability to build and deploy a microservice instance for each course-recipient-user combination. Because each student interacts with their own microservice, the GLP scales up to the limit of available cloud resources, i.e. near infinity.
Type: Application
Filed: March 23, 2020
Publication date: September 10, 2020
Inventors: James Walsh, Suhail Khaki
-
Patent number: 10766712
Abstract: A method and apparatus are provided for sorting items to a plurality of sort destinations. The items are fed into the apparatus at an input station having a scanning station, which evaluates one or more characteristics of each item. The items are then loaded onto one of a plurality of independently controlled delivery vehicles, which are individually driven to sort destinations. Once at the appropriate sort destination, the delivery vehicle ejects the item and returns to receive another item to be delivered. A re-induction conveyor may be provided for receiving select items from the vehicles and conveying them back to the input station for re-processing. Additionally, a controller is provided to control the movement of the vehicles based on a characteristic of each item being delivered by each vehicle. The system may also include vehicles having an assembly for detecting items being loaded onto or discharged from the vehicles.
Type: Grant
Filed: January 20, 2020
Date of Patent: September 8, 2020
Assignee: OPEX Corporation
Inventors: Robert R. DeWitt, Alexander Stevens, Monty McVaugh, James Walsh, Gregory Wilson
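The sorting flow in the abstract above (scan each item, route it to a destination by its characteristics, and send unresolvable items back through a re-induction conveyor) can be summarized in a short sketch. This is a hypothetical illustration, not the patented apparatus; the function, keys, and routing table are invented for the example.

```python
# Hypothetical sketch of the scan/route/re-induct decision logic only;
# the real invention concerns physical vehicles and conveyors.

def sort_items(items, destination_for):
    """items: dicts with a 'barcode' key (None if the scan failed).
    destination_for: maps a barcode to a sort destination."""
    delivered = {}       # destination -> items delivered there
    re_induction = []    # items returned to the input station
    for item in items:
        code = item.get("barcode")
        if code is None or code not in destination_for:
            # Characteristic could not be resolved: re-process the item.
            re_induction.append(item)
            continue
        delivered.setdefault(destination_for[code], []).append(item)
    return delivered, re_induction

routes = {"1001": "door-3", "1002": "door-7"}
d, r = sort_items([{"barcode": "1001"}, {"barcode": None}], routes)
# d == {"door-3": [{"barcode": "1001"}]}; r holds the unreadable item
```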
-
Publication number: 20200276745
Abstract: Screw elements, extrusion apparatus, and methods of manufacturing honeycomb bodies are described herein. A segment for a ceramic batch screw extruder machine has at least one pump-and-mix screw element. The pump-and-mix screw element has a pitch, a diameter, and a pitch-to-diameter ratio of 0.8 to 2.6.
Type: Application
Filed: October 3, 2018
Publication date: September 3, 2020
Applicant: CORNING INCORPORATED
Inventors: David Robertson Treacy, Jr., Conor James Walsh, Stephanie Stoughton Wu
-
Publication number: 20200234863
Abstract: Theoretical and practical constraints disallow direct determination of the structure of the atomic nucleus. Contained herein is a magnet model of the atomic nucleus, derived from considerations of charge density, RMS charge radii, magnetic moments, and nucleon binding energy. These physical properties point to a sequential, alternating up-and-down quark structure, modeled in the present invention by an array of magnets alternating in polarity. The summation of the pull forces of the two magnet poles is unequal, and when two such magnet arrays are placed opposite one another in a magnetic potential energy barrier assembly, the two arrays repel at a distance and attract when near one another. In one embodiment, the ratio of the maximum attractive force to the maximum repulsive force very closely approximates the strong force constant 137. This invention serves as a demonstration of the Coulomb barrier for the student, and a potentially useful model for probing the forces and structure of the atomic nucleus.
Type: Application
Filed: March 23, 2020
Publication date: July 23, 2020
Inventor: Raymond James Walsh
-
Publication number: 20200225879
Abstract: The present disclosure generally relates to limiting bandwidth in storage devices. One or more bandwidth quality of service levels may be selected and associated with commands according to service level agreements, which may prioritize some commands over others. A storage device fetches and executes one or more of the commands, each of which is associated with a bandwidth quality of service level. After executing the commands and transferring the data to a host device, the storage device may delay writing a completion entry corresponding to the executed commands to a completion queue, based on the associated bandwidth quality of service level of the commands. The device may then delay revealing the completion entry by delaying the update of the completion queue head pointer. The device may further delay sending an interrupt signal to the host device, based on the associated bandwidth quality of service level of the commands.
Type: Application
Filed: March 27, 2020
Publication date: July 16, 2020
Inventors: Daniel L. Helmick, James Walsh
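The mechanism in the abstract above throttles bandwidth not by slowing the transfer itself but by delaying when the completion becomes visible to the host. A minimal simulation of that idea is sketched below, with time in abstract "ticks"; the class and its parameters are hypothetical, not the disclosed implementation.

```python
# Illustrative simulation: the moment a completion entry may be revealed
# is pushed out so that observed throughput matches the service level.

class BandwidthLimiter:
    def __init__(self, bytes_per_tick):
        self.bytes_per_tick = bytes_per_tick
        self.busy_until = 0.0  # earliest tick the next completion may post

    def completion_tick(self, now, nbytes):
        """Tick at which the completion entry for an nbytes transfer may
        be revealed (head pointer updated, interrupt sent)."""
        start = max(now, self.busy_until)
        self.busy_until = start + nbytes / self.bytes_per_tick
        return self.busy_until

high = BandwidthLimiter(bytes_per_tick=4096)   # premium service level
low = BandwidthLimiter(bytes_per_tick=1024)    # throttled service level

# The same 8 KiB transfer is revealed sooner at the higher service level,
# even though both transfers may have finished at the same time.
t_high = high.completion_tick(now=0, nbytes=8192)
t_low = low.completion_tick(now=0, nbytes=8192)
assert t_high < t_low
```

Delaying only the completion entry, head-pointer update, and interrupt leaves the data path untouched; the host simply perceives the lower bandwidth its service level paid for.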
-
Publication number: 20200151134
Abstract: The present disclosure generally relates to limiting bandwidth in storage devices. One or more bandwidth quality of service levels may be selected and associated with commands according to service level agreements, which may prioritize some commands over others. A storage device fetches and executes one or more of the commands, each of which is associated with a bandwidth quality of service level. After executing the commands and transferring the data to a host device, the storage device may delay writing a completion entry corresponding to the executed commands to a completion queue, based on the associated bandwidth quality of service level of the commands. The device may then delay revealing the completion entry by delaying the update of the completion queue head pointer. The device may further delay sending an interrupt signal to the host device, based on the associated bandwidth quality of service level of the commands.
Type: Application
Filed: November 13, 2018
Publication date: May 14, 2020
Inventors: Daniel Helmick, James Walsh
-
Publication number: 20200152259
Abstract: A method and apparatus are provided for sorting items to a plurality of sort destinations. The items are fed into the apparatus at an input station having a scanning station, which evaluates one or more characteristics of each item. The items are then loaded onto one of a plurality of independently controlled delivery vehicles, which are individually driven to sort destinations. Once at the appropriate sort destination, the delivery vehicle ejects the item and returns to receive another item to be delivered. A re-induction conveyor may be provided for receiving select items from the vehicles and conveying them back to the input station for re-processing. Additionally, a controller is provided to control the movement of the vehicles based on a characteristic of each item being delivered by each vehicle. The system may also include vehicles having an assembly for detecting items being loaded onto or discharged from the vehicles.
Type: Application
Filed: January 20, 2020
Publication date: May 14, 2020
Inventors: Robert R. DeWitt, Alexander Stevens, Monty McVaugh, James Walsh, Gregory Wilson
-
Patent number: 10635355
Abstract: The present disclosure generally relates to limiting bandwidth in storage devices. One or more bandwidth quality of service levels may be selected and associated with commands according to service level agreements, which may prioritize some commands over others. A storage device fetches and executes one or more of the commands, each of which is associated with a bandwidth quality of service level. After executing the commands and transferring the data to a host device, the storage device may delay writing a completion entry corresponding to the executed commands to a completion queue, based on the associated bandwidth quality of service level of the commands. The device may then delay revealing the completion entry by delaying the update of the completion queue head pointer. The device may further delay sending an interrupt signal to the host device, based on the associated bandwidth quality of service level of the commands.
Type: Grant
Filed: November 13, 2018
Date of Patent: April 28, 2020
Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
Inventors: Daniel Helmick, James Walsh
-
Patent number: 10596721
Abstract: An apparatus and method of manufacturing a porous ceramic segmented honeycomb body (340, 340′) comprising axial channels (216) extending from a first end face (220) to a second end face (224). A plurality of porous ceramic honeycomb segments (204) is moved axially past respective apertures (110) of an adhesive applying device (100). Adhesive (118) is applied through openings (126) in the adhesive applying device (100) onto peripheral axial surfaces of each porous ceramic honeycomb segment (204). The plurality of porous ceramic honeycomb segments (204) enters a wide opening (318) of a tapered chamber (314) and exits a narrow opening (322) of the tapered chamber (314); a tapered wall (326) from the wide opening (318) to the narrow opening (322) presses the plurality of porous ceramic honeycomb segments (204) together, forming the porous ceramic segmented honeycomb body (340, 340′).
Type: Grant
Filed: November 20, 2015
Date of Patent: March 24, 2020
Assignee: Corning Incorporated
Inventors: Keith Norman Bubb, Conor James Walsh
-
Patent number: 10601942
Abstract: A global architecture (GLP), as disclosed herein, is based on the thin-server architectural pattern; it delivers all of its services as web services, and no user interface components execute on the GLP. Each web service exposed by the GLP is stateless, which allows the GLP to be highly scalable. The GLP is further decomposed into components, each of which is a microservice, making the overall architecture fully decoupled. Each microservice has fail-over nodes and can scale up on demand, so the GLP has no single point of failure, making the platform both highly scalable and highly available. The GLP architecture provides the capability to build and deploy a microservice instance for each course-recipient-user combination. Because each student interacts with their own microservice, the GLP scales up to the limit of available cloud resources, i.e. near infinity.
Type: Grant
Filed: June 27, 2018
Date of Patent: March 24, 2020
Assignee: PEARSON MANAGEMENT SERVICES LIMITED
Inventors: James Walsh, Suhail Khaki
-
Patent number: 10581115
Abstract: Electrolyte for a solid-state battery includes a body having grains of inorganic material sintered to one another, where the grains include lithium. The body is thin, has little porosity by volume, and has high ionic conductivity.
Type: Grant
Filed: March 7, 2019
Date of Patent: March 3, 2020
Assignee: CORNING INCORPORATED
Inventors: Michael Edward Badding, Jacqueline Leslie Brown, Jennifer Anella Heine, Thomas Dale Ketcham, Gary Edward Merz, Eric Lee Miller, Zhen Song, Cameron Wayne Tanner, Conor James Walsh
-
Publication number: 20200062512
Abstract: A method and apparatus are provided for sorting items to a plurality of sort destinations. The items are fed into the apparatus at an input station having a scanning station, which evaluates one or more characteristics of each item. The items are then loaded onto one of a plurality of independently controlled delivery vehicles, which are individually driven to sort destinations. Once at the appropriate sort destination, the delivery vehicle ejects the item and returns to receive another item to be delivered. A re-induction conveyor may be provided for receiving select items from the vehicles and conveying them back to the input station for re-processing. Additionally, a controller is provided to control the movement of the vehicles based on a characteristic of each item being delivered by each vehicle. The system may also include vehicles having an assembly for detecting items being loaded onto or discharged from the vehicles.
Type: Application
Filed: October 30, 2019
Publication date: February 27, 2020
Inventors: Robert R. DeWitt, Alexander Stevens, Monty McVaugh, James Walsh, Gregory Wilson
-
Patent number: 10564857
Abstract: Systems and methods for quality of service (QoS) using adaptive command fetching are disclosed. NVM Express (NVMe) implements a paired submission queue and completion queue mechanism, with host software on the host device placing commands into the submission queue. The memory device processes the commands through various phases, including fetching, processing, posting a completion message, and sending an interrupt to the host. NVMe also includes an NVMe virtualization environment, which uses a subsystem with multiple controllers to provide virtual or physical hosts with direct I/O access. QoS may be used so that the NVMe processes in the virtualization environment receive sufficient resources. In particular, the bandwidth assigned to a submission queue may be considered when processing commands (such as when fetching them). In the event that the bandwidth assigned to the submission queue is exceeded, the processing of the commands (such as the fetching of the commands) may be delayed.
Type: Grant
Filed: November 13, 2017
Date of Patent: February 18, 2020
Assignee: Western Digital Technologies, Inc.
Inventors: Shay Benisty, James Walsh, Rajesh Koul
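The abstract above gates command fetching on the bandwidth assigned to each submission queue: once a queue exceeds its assignment, fetching from it is deferred. A minimal sketch of that accounting decision is shown below; the class, window model, and method names are assumptions for illustration, not the NVMe specification's API or the patented implementation.

```python
# Hypothetical sketch: per-submission-queue bandwidth accounting that
# defers fetching once the queue's assignment is exceeded.

class SubmissionQueue:
    def __init__(self, assigned_bytes_per_window):
        self.assigned = assigned_bytes_per_window
        self.consumed = 0

    def may_fetch(self):
        # Fetching is delayed once the queue exceeds its assignment.
        return self.consumed < self.assigned

    def account(self, nbytes):
        # Charge a fetched/executed command's transfer size to the queue.
        self.consumed += nbytes

    def new_window(self):
        # Budgets replenish at each accounting window boundary.
        self.consumed = 0

sq = SubmissionQueue(assigned_bytes_per_window=4096)
sq.account(4096)
assert not sq.may_fetch()   # over budget: fetch is deferred
sq.new_window()
assert sq.may_fetch()       # budget replenished: fetching resumes
```

Deferring at the fetch phase, rather than after execution, keeps an over-budget queue's commands from occupying device resources at all until its bandwidth window replenishes.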