Abstract: Aspects of the disclosure relate generally to computing devices and/or systems, and may be directed to devices, systems, methods, and/or applications for learning operation of an application or an object of an application in various visual surroundings, storing this knowledge in a knowledgebase (e.g., a neural network, graph, or sequences), and enabling autonomous operation of the application or the object of the application.
Abstract: Techniques related to temporary setpoint values are disclosed. The techniques may involve causing operation of a fluid delivery device in a closed-loop mode for automatically delivering fluid based on a difference between a first setpoint value and an analyte concentration value during operation of the fluid delivery device in the closed-loop mode. Additionally, the techniques may involve obtaining a second setpoint value. The second setpoint value may be a temporary setpoint value to be used for a period of time to regulate fluid delivery, and the second setpoint value may be greater than the first setpoint value. The techniques may further involve causing operation of the fluid delivery device for automatically reducing fluid delivery for the period of time based on the second setpoint value.
Type:
Grant
Filed:
April 17, 2024
Date of Patent:
April 8, 2025
Assignee:
MEDTRONIC MINIMED, INC.
Inventors:
Benyamin Grosman, Di Wu, Anirban Roy, Neha J. Parikh
Abstract: A system could include persistent storage containing application components. A plurality of software applications could be installed on the system. The software applications could be respectively associated with context records that include references to application components that provide some behavior or data for the software applications. The system could also include processors configured to perform operations. The operations could include receiving a request to generate a topology map for a software application and identifying, based on a context record for the software application, a subset of application components that provide some behavior or data for the software application. The operations could further include determining relationship types between pairs of application components and generating a topology map for the software application.
Type:
Grant
Filed:
July 14, 2023
Date of Patent:
April 8, 2025
Assignee:
ServiceNow, Inc.
Inventors:
Jacob Burman, Michel Abou Samah, Kylin Follenweider, Sharon Elizabeth Carmichael Ehlert
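The topology-map operations above reduce to two lookups: the context record selects the subset of components belonging to the application, and a relationship table supplies the typed edges between pairs of those components. A minimal sketch, with all record contents and relationship names assumed for illustration:

```python
# Context records map each application to the components that provide
# behavior or data for it (assumed example data).
context_records = {
    "crm_app": ["web_server", "app_server", "db"],
}
# Relationship types between pairs of components (assumed example data).
relationships = {
    ("web_server", "app_server"): "routes_to",
    ("app_server", "db"): "reads_from",
}

def topology_map(app: str) -> list[tuple[str, str, str]]:
    """Build a topology map as typed edges over the application's
    subset of components, per its context record."""
    components = context_records[app]
    edges = []
    for (a, b), rel in relationships.items():
        if a in components and b in components:
            edges.append((a, rel, b))
    return edges
```

A request for `topology_map("crm_app")` yields the two typed edges, which a front end could then render as a graph.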
Abstract: A chassis front-end is disclosed. The chassis front-end may include a switchboard including an Ethernet switch, a Baseboard Management Controller, and a mid-plane connector. The chassis front-end may also include a mid-plane including at least one storage device connector and a speed logic to inform at least one storage device of an Ethernet speed of the chassis front-end. The Ethernet speeds may vary.
Abstract: Drift is automatically detected in configuration of services running in a management appliance of a software-defined data center. A method of automatically detecting drift includes: in response to a notification of a change in a configuration of a first service enabled for proactive drift detection, transmitting a first request to compute drift in the configuration of the first service to a plug-in of the first service, the first request including the change in the configuration of the first service; periodically, at designated time intervals, transmitting a second request to compute drift in the configuration of a second service enabled for passive drift detection, to the plug-in of the second service, the second request including a current state of the configuration of the second service; and notifying a desired state management service of the computed drift in the configuration of the first and second services.
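The two drift paths described above differ only in how drift computation is triggered: proactive services push each configuration change as it happens, while passive services are polled at intervals with their full current state. A minimal sketch, with the service names, configuration keys, and drift representation all assumed:

```python
# Desired configuration per service (assumed example data).
desired = {"svc_a": {"port": 443}, "svc_b": {"tls": True, "port": 8080}}
notifications = []   # stands in for the desired state management service

def compute_drift(service: str, state: dict) -> dict:
    """Return {key: (desired, actual)} for every differing key."""
    want = desired[service]
    return {k: (want.get(k), v) for k, v in state.items() if want.get(k) != v}

def on_change(service: str, changed: dict):
    """Proactive path: triggered by a change notification."""
    drift = compute_drift(service, changed)
    if drift:
        notifications.append((service, drift))

def poll(service: str, current: dict):
    """Passive path: invoked at designated time intervals."""
    drift = compute_drift(service, current)
    if drift:
        notifications.append((service, drift))

on_change("svc_a", {"port": 80})             # drifted: 443 vs 80
poll("svc_b", {"tls": True, "port": 8080})   # matches desired state
```

Only the drifted service produces a notification; the polled service matches its desired state and stays silent.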
Abstract: A computing device receives data related to operation of a cloud computing environment having an application comprising several services. The data related to operation of the cloud computing environment can include time-based data related to computing resource use in the cloud computing environment, such as I/O rate, processor utilization, and others. In some implementations, the services that compose the application can be orchestrated through an orchestrator, and in those implementations data regarding the orchestration can also be provided to the computing device. The computing device can also request service-related information from the cloud computing environment, where the service-related information can include finance-related information for operations in the cloud.
Type:
Grant
Filed:
September 8, 2022
Date of Patent:
March 25, 2025
Assignee:
Red Hat, Inc.
Inventors:
Leigh Griffin, Andrea Cosentino, Paolo Antinori
Abstract: In an information handling system that includes one or more PCIe devices, responsive to enumerating a PCIe device and adding the PCIe device to a configuration space of the platform, a mapping entry is added to a device handler mapping table to associate a device handler for the PCIe device with information for accessing the PCIe device. If the PCIe device fails to enumerate in a boot path, a virtual pseudo PCIe (VPP) node corresponding to the PCIe device may be created and enumerated to enable the boot to complete. Upon subsequently detecting and enumerating the actual, physical PCIe device, the VPP node and the PCIe device may be connected to enable the full functionality of the PCIe device without re-booting the platform.
Type:
Grant
Filed:
August 3, 2023
Date of Patent:
March 18, 2025
Assignee:
Dell Products L.P.
Inventors:
Karunakar Poosapalli, Shekar Babu Suryanarayana, Harish Barigi, Alankritha T V
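The VPP-node mechanism above can be sketched as two steps: at boot, a device that fails to enumerate gets a virtual placeholder node so the boot path can complete, and when the physical device is later detected it is bound to that node and added to the device handler mapping table without a reboot. The table layout, node fields, and handler naming here are assumptions for illustration, not the disclosed data structures:

```python
handler_table = {}   # device handler mapping table: device id -> access info

def enumerate_device(dev_id: str, responded: bool) -> dict:
    """Add a mapping entry for a device that enumerates; otherwise
    create a virtual pseudo PCIe (VPP) placeholder so boot completes."""
    if responded:
        node = {"id": dev_id, "virtual": False}
        handler_table[dev_id] = {"handler": f"hdl_{dev_id}", "node": node}
    else:
        node = {"id": dev_id, "virtual": True}   # VPP placeholder node
    return node

def attach_physical(node: dict) -> dict:
    """Later detection of the actual device: connect it to the VPP
    node and register its handler, with no platform reboot."""
    node["virtual"] = False
    handler_table[node["id"]] = {"handler": f"hdl_{node['id']}", "node": node}
    return node

nvme = enumerate_device("nvme0", responded=False)   # boot completes via VPP
attach_physical(nvme)                               # full function, no reboot
```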
Abstract: Disclosed herein are systems, methods, and apparatuses where a controller can automatically manage a physical infrastructure of a computer system based on a plurality of system rules, a system state for the computer system, and a plurality of templates. Techniques for automatically adding resources such as computer, storage, and/or networking resources to the computer system are described. Also described are techniques for automatically deploying applications and services on such resources. These techniques provide a scalable computer system that can serve as a turnkey scalable private cloud.
Type:
Grant
Filed:
March 15, 2024
Date of Patent:
March 11, 2025
Assignee:
Net-Thunder, LLC
Inventors:
Parker John Schmitt, Sean Michael Richardson, Neil Benjamin Semmel, Cameron Tyler Spry
Abstract: A first command associated with a first memory die is communicated via a first portion of an interface of the memory sub-system. A second command associated with a second memory die is communicated via the first portion of the interface. A data burst corresponding to the first memory die is caused to be communicated via a second portion of the interface, where the second command is communicated via the first portion of the interface concurrently with the data burst communicated via the second portion of the interface.
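The concurrency above comes from treating the two interface portions as independently busy resources: the command portion can carry the second die's command while the data portion is still occupied by the first die's burst. A toy timeline model, with the portion names and durations assumed:

```python
free = {"cmd": 0, "data": 0}   # earliest free time per interface portion

def send(portion: str, die: str, op: str, dur: int, not_before: int = 0):
    """Occupy a portion sequentially; portions advance independently.
    Returns (die, op, start, end)."""
    start = max(free[portion], not_before)
    free[portion] = start + dur
    return (die, op, start, start + dur)

c0 = send("cmd", "die0", "read_cmd", 1)
b0 = send("data", "die0", "burst", 4, not_before=c0[3])  # burst follows cmd
c1 = send("cmd", "die1", "read_cmd", 1)                  # overlaps the burst

# The second command's window lies inside the first die's burst window.
overlap = c1[2] < b0[3] and c1[3] > b0[2]
```

With unit durations, die0's burst occupies the data portion over t=1..5 while die1's command goes out on the command portion over t=1..2, so the two portions are in use concurrently.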
Abstract: A tablet computer is provided, which includes a sensor section operable to detect positional input by a human operator and output a positional input signal; a display, laid over the sensor section, operable to receive and display a video signal; and a processor, coupled to a memory storing programs for running an operating system (OS) and executing software loaded to the memory, the processor being operable to receive and process the positional input signal from the sensor section and to output a video signal of the OS and the software to the display. The tablet computer further includes a sensor signal filter capable of selectively communicating the positional input signal from the sensor section to the processor, to a separate external processor, or to neither the processor nor the separate external processor; and a display switch capable of coupling the display to the processor or to the separate external processor.
Abstract: An information handling system may include a management controller configured to provide out-of-band management of the information handling system, and a network interface controller comprising a network interface controller storage resource. The management controller may be configured to: receive, from a centralized management platform, information regarding at least one signature associated with a network interface controller operating system (OS) configured to be executed by the network interface controller; and transmit the at least one signature to the network interface controller. The network interface controller may be configured to install the network interface OS to the network interface controller storage resource based on the at least one signature.
Abstract: A data moving method for a direct memory access apparatus is disclosed, and the data moving method for the direct memory access apparatus includes: receiving, by the direct memory access apparatus, an object data moving instruction and decoding the object data moving instruction, wherein the object data moving instruction includes a first field, and the first field of the object data moving instruction indicates a data moving operation for a neural-network processor; and executing, by the direct memory access apparatus, the object data moving instruction.
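The decode-and-execute flow above can be illustrated with a toy instruction word whose first field flags a data moving operation for the neural-network processor. The bit layout, field widths, and memory model here are entirely assumed; the disclosure does not specify them:

```python
NN_MOVE = 0b1   # assumed first-field value: NN-processor move operation

def decode(instruction: int) -> dict:
    """Split the instruction word into its fields (assumed layout)."""
    first_field = instruction & 0b1        # bit 0: operation class
    src = (instruction >> 1) & 0xFF        # bits 1-8: source address
    dst = (instruction >> 9) & 0xFF        # bits 9-16: destination address
    return {"nn_move": first_field == NN_MOVE, "src": src, "dst": dst}

def execute(instruction: int, memory: dict) -> dict:
    """DMA apparatus: receive, decode, then execute the move."""
    op = decode(instruction)
    if op["nn_move"]:
        memory[op["dst"]] = memory[op["src"]]   # perform the data move
    return op

mem = {0x10: "weights"}
op = execute((0x20 << 9) | (0x10 << 1) | NN_MOVE, mem)
```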
Abstract: Various systems and circuits are provided. One such system includes input interfaces to receive items of input data of different types; output interfaces, each of a different type; an interconnect coupled to the input interfaces and to the output interfaces; and a multichip hub that includes buffers respectively corresponding to the types of input data, context memory blocks, and a data movement engine with a context mapper to determine a context of each item of input data received and provide the item of input data to a corresponding context memory block. Multiple processing blocks within the multichip hub are each configured to perform a respective processing operation. The data movement engine receives context configuration data to determine, for each item of input data received, which of the multiple processing operations are to be applied to the item of input data.
Abstract: Provided are a Peripheral Component Interconnect Express (PCIe) device and a method of operating the same. The PCIe device may include a performance analyzer, a delay time information generator, and a command fetcher. The performance analyzer may measure throughputs of a plurality of functions, and generate throughput analysis information indicating a comparison result between the throughputs of the plurality of functions and throughput limits corresponding to the plurality of functions. The delay time information generator may generate a delay time for delaying a command fetch operation for each of the plurality of functions based on the throughput analysis information. The command fetcher may fetch a target command from a host based on a delay time of a function corresponding to the target command.
Type:
Grant
Filed:
January 3, 2022
Date of Patent:
February 4, 2025
Assignee:
SK hynix Inc.
Inventors:
Yong Tae Jeon, Ji Woon Yang, Sang Hyun Yoon, Se Hyeon Han
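The analyzer/generator/fetcher pipeline above can be sketched as three small steps: compare per-function throughput against its limit, turn any overshoot into a fetch delay, and look up that delay when fetching a command for the function. The function names, limits, and the linear overshoot-to-delay rule are assumptions for illustration:

```python
limits = {"fn0": 100.0, "fn1": 100.0}   # per-function throughput limits (assumed)

def analyze(throughputs: dict) -> dict:
    """Performance analyzer: positive result means over the limit."""
    return {fn: tp - limits[fn] for fn, tp in throughputs.items()}

def delay_times(analysis: dict, scale: float = 0.01) -> dict:
    """Delay time information generator: delay grows with overshoot;
    functions under their limit get no delay."""
    return {fn: max(0.0, over) * scale for fn, over in analysis.items()}

def fetch_delay(fn: str, delays: dict) -> float:
    """Command fetcher side: delay applied before fetching a command
    targeting function fn."""
    return delays.get(fn, 0.0)

delays = delay_times(analyze({"fn0": 150.0, "fn1": 80.0}))
```

Here `fn0` is 50 over its limit and its fetches are throttled, while `fn1` is under its limit and fetches immediately.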
Abstract: A medical system includes an input assembly for receiving one or more user inputs. The input assembly includes at least one slider assembly for providing an input signal. Processing logic receives the input signal from the input assembly and provides a first output signal and a second output signal. A display assembly is configured to receive, at least in part, the first output signal from the processing logic and render information viewable by the user. The second output signal is provided to one or more medical system components. The information rendered on the display assembly may be manipulatable by the user and at least a portion of the information rendered may be magnified.
Type:
Grant
Filed:
October 6, 2023
Date of Patent:
January 28, 2025
Assignee:
DEKA PRODUCTS LIMITED PARTNERSHIP
Inventors:
Kevin L. Grant, Douglas J. Young, Matthew C. Harris
Abstract: Example methods and network devices for order-preserving execution of a write request are disclosed. One example method is performed by a network device of a storage node. The example method includes receiving an order-preserving confirmation request, where the order-preserving confirmation request carries a first write address assigned for a first write request. It is confirmed that execution of a second write request for which a second write address has been assigned is not completed, where the second write address precedes the first write address. Sending feedback information for the order-preserving confirmation request is delayed.
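The order-preserving rule above can be reduced to one check: feedback for a write at address A is held back until every write at a preceding address has completed. A minimal sketch, with integer addresses and a simple completion set standing in for the storage node's state (both assumptions):

```python
completed = set()         # write addresses whose execution has finished
pending_confirms = []     # confirmations delayed behind earlier writes

def complete_write(addr: int):
    completed.add(addr)

def confirm(addr: int) -> bool:
    """Order-preserving confirmation: acknowledge only once all
    preceding write addresses have completed; otherwise delay."""
    if all(a in completed for a in range(addr)):
        return True
    pending_confirms.append(addr)   # delay sending feedback
    return False

complete_write(0)
ok_early = confirm(2)     # write at address 1 not done yet -> delayed
complete_write(1)
complete_write(2)
ok_late = confirm(2)      # all earlier writes done -> feedback sent
```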
Abstract: A heterogeneous processing system including a host processor, a first processor with a first memory and a first data transfer resource, a second processor with a second memory, and switch and bus circuitry that communicatively couples the processors and the data transfer resource. The host processor is programmed to map virtual addresses of the second memory to physical addresses of the switch and bus circuitry and to configure the first processor to perform one memory-to-memory transfer operation between the first and second memories using the data transfer resource. The first processor may be configured to program the first data transfer resource. A method including mapping virtual addresses of the second memory to physical addresses of the switch and bus circuitry, and configuring the first processor to perform one memory-to-memory transfer operation between the first and second memories using the first data transfer resource.
Abstract: Described is a method and apparatus for application migration between a dockable device and a docking station in a seamless manner. The dockable device includes a processor and the docking station includes a high-performance processor. The method includes executing at least one application in the dockable device using a first processor, and initiating an application migration for the at least one application from the first processor to a second processor in a docking station responsive to determining that the dockable device is in a docked state, wherein the at least one application continues to execute during the application migration from the first processor to the second processor.
Type:
Grant
Filed:
August 11, 2023
Date of Patent:
January 21, 2025
Assignee:
Advanced Micro Devices, Inc.
Inventors:
Jonathan Lawrence Campbell, Yuping Shen
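The seamless-migration idea above hinges on one property: the application keeps executing while responsibility for it moves from the dockable device's processor to the docking station's processor. A toy model, with the class, event handler, and processor labels all assumed:

```python
class App:
    """Stand-in for an application whose execution must not pause."""
    def __init__(self):
        self.processor = "mobile"   # starts on the dockable device's CPU
        self.steps = 0
    def step(self):
        self.steps += 1             # the app continues to execute

def on_dock_event(app: App, docked: bool):
    """Migrate to the docking station's processor when docked."""
    if docked and app.processor == "mobile":
        app.step()                  # still running during the migration
        app.processor = "dock"      # execution resumes on the dock's CPU

app = App()
app.step()                          # runs on the mobile processor
on_dock_event(app, docked=True)     # migrates without stopping
app.step()                          # runs on the docking station's processor
```

The step counter advances through the migration, illustrating that execution never halts while the processor changes underneath the application.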
Abstract: In various examples, a VPU and associated components may be optimized to improve VPU performance and throughput. For example, the VPU may include a min/max collector, automatic store predication functionality, a SIMD data path organization that allows for inter-lane sharing, a transposed load/store with stride parameter functionality, a load with permute and zero insertion functionality, hardware, logic, and memory layout functionality to allow for two-point and two-by-two-point lookups, and per memory bank load caching capabilities. In addition, decoupled accelerators may be used to offload VPU processing tasks to increase throughput and performance, and a hardware sequencer may be included in a DMA system to reduce programming complexity of the VPU and the DMA system. The DMA and VPU may execute a VPU configuration mode that allows the VPU and DMA to operate without a processing controller for performing dynamic region based data movement operations.
Type:
Grant
Filed:
December 9, 2022
Date of Patent:
January 21, 2025
Assignee:
NVIDIA Corporation
Inventors:
Ahmad Itani, Yen-Te Shih, Jagadeesh Sankaran, Ravi P Singh, Ching-Yu Hung
Abstract: Devices and methods for managing boot personalities in a network device are disclosed. The method includes, after powering on the network device, a programmable component of the network device outputting a first signal unique to a first boot personality. One or more switches are toggled based on the first signal. The toggling results in connecting at least one of one or more first components in the network device associated with the first boot personality and disconnecting at least one of one or more second components in the network device associated with a second boot personality.
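The boot-personality mechanism above can be sketched as a table lookup followed by switch toggling: the programmable component emits the signal unique to the selected personality, and only the components keyed to that signal end up connected. The personality names, signal values, and component sets are assumptions for illustration:

```python
# Assumed example personalities for a network device.
personalities = {
    "storage": {"signal": 0b01, "components": {"sas_ctrl"}},
    "network": {"signal": 0b10, "components": {"nic", "phy"}},
}

def power_on(selected: str) -> dict:
    """Emit the selected personality's unique signal and toggle
    switches: matching components connect, all others stay
    disconnected."""
    signal = personalities[selected]["signal"]
    connected = set()
    for p in personalities.values():
        if p["signal"] == signal:
            connected |= p["components"]   # toggle these switches closed
        # components of the other personality remain disconnected
    return {"signal": signal, "connected": connected}

state = power_on("network")
```

Powering on with the other personality selected would emit `0b01` instead, connecting the storage components and leaving the network components disconnected.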