Patents Issued on January 2, 2024
-
Patent number: 11860771
Abstract: A remote device infrastructure can be used to test and develop applications and websites. A user or developer can select a number of remote devices at a remote location and test a programming application from a local machine. The remote devices run the programming application. The user interacts with mirrored displays of the remote devices on the local machine of the user. User inputs are transmitted to a remote device. The user can also enable a multisession mode, where the user can test a programming application on multiple remote devices and observe a display output of each remote device on the local machine of the user. The user can interact with any mirrored display of a remote device in a multisession and observe a synced output in the other mirrored displays.
Type: Grant
Filed: September 26, 2022
Date of Patent: January 2, 2024
Assignee: BrowserStack Limited
Inventors: Ritik Jain, Abhinav Dube, Suyash Yogeshwar Sonawane
-
Patent number: 11860772
Abstract: Test cases written to test a software application can be dynamically distributed among different sets of test cases that can be executed simultaneously in different parallel threads, thereby speeding up testing relative to executing the test cases sequentially in a single thread. To avoid database conflicts that may occur when different test cases in different parallel threads attempt to access the same database simultaneously, testing of the software application can be performed in association with a record-locking database that locks database records individually instead of locking entire database tables or locking data structures that are larger than individual records. Locking individual database records can reduce and/or eliminate the chances that a test case in one parallel thread will be unable to access a record in the database because another test case in another parallel thread is simultaneously accessing the same database.
Type: Grant
Filed: November 28, 2022
Date of Patent: January 2, 2024
Assignee: State Farm Mutual Automobile Insurance Company
Inventors: Shaktiraj Chauhan, Nate Shepherd
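A minimal Python sketch of the record-level locking idea in this abstract, assuming a hypothetical RecordLockingStore class; the patent covers the broader test-distribution system, not this specific code.

```python
import threading
from collections import defaultdict

class RecordLockingStore:
    """Toy key-value store that locks individual records, not whole tables."""

    def __init__(self):
        self._data = {}
        self._record_locks = defaultdict(threading.Lock)  # one lock per record key

    def update(self, key, value):
        # Only the record being written is locked, so test cases running in
        # other parallel threads can still access different records.
        with self._record_locks[key]:
            self._data[key] = value

    def read(self, key):
        with self._record_locks[key]:
            return self._data.get(key)
```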
-
Patent number: 11860773
Abstract: Systems, apparatuses, and methods related to memory access statistics monitoring are described. A host is configured to map pages of memory for applications to a number of memory devices coupled thereto. A first memory device comprises a monitoring component configured to monitor access statistics of pages of memory mapped to the first memory device. A second memory device does not include a monitoring component capable of monitoring access statistics of pages of memory mapped thereto. The host is configured to map a portion of pages of memory for an application to the first memory device in order to obtain access statistics corresponding to the portion of pages of memory upon execution of the application despite there being space available on the second memory device and adjust mappings of the pages of memory for the application based on the obtained access statistics corresponding to the portion of pages.
Type: Grant
Filed: February 3, 2022
Date of Patent: January 2, 2024
Assignee: Micron Technology, Inc.
Inventor: David A. Roberts
-
Patent number: 11860774
Abstract: An access method of a nonvolatile memory device included in a user device includes receiving a write request to write data into the nonvolatile memory device; detecting an application issuing the write request, a user context, a queue size of a write buffer, an attribute of the write-requested data, or an operation mode of the user device; and deciding one of a plurality of write modes to use for writing the write-requested data into the nonvolatile memory device according to the detected information. The write modes have different program voltages and verify voltage sets.
Type: Grant
Filed: July 28, 2021
Date of Patent: January 2, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Sangkwon Moon, Kyung Ho Kim, Seunguk Shin, Sung Won Jung
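A small Python sketch of the mode-selection step described above, with hypothetical names for the detected inputs and the write modes; the actual program/verify voltage sets live in device firmware and are not part of this illustration.

```python
def choose_write_mode(app, user_context, write_buffer_depth, data_attr, op_mode):
    """Pick one of several write modes (each with its own program voltage and
    verify voltage set) from the detected context, as a rough illustration."""
    if op_mode == "low_power" or user_context == "background":
        return "slow_reliable"          # lower program voltage, more verify steps
    if data_attr == "hot" or write_buffer_depth > 32:
        return "fast"                   # higher program voltage, fewer verify steps
    if app in ("camera", "video_record"):
        return "sequential_optimized"
    return "default"
```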
-
Patent number: 11860775
Abstract: The invention relates to a method and an apparatus for programming data into flash memory. The method includes: driving, by the routing engine, a host interface (I/F) according to the front-end parameter set when determining that a front-end processing stage needs to be activated for the data-programming transaction; driving, by the accelerator, a Redundant Array of Independent Disks (RAID) engine according to the mid-end parameter set when receiving an activation message of the data-programming transaction from the routing engine and determining that a mid-end processing stage needs to be activated; and driving, by the accelerator, a data access engine according to the back-end parameter set when determining that the mid-end processing stage for the data-write transaction does not need to be activated or the mid-end processing stage for the data-write transaction has been completed, and a back-end processing stage for the data-write transaction needs to be activated.
Type: Grant
Filed: August 2, 2022
Date of Patent: January 2, 2024
Assignee: Silicon Motion, Inc.
Inventor: Shen-Ting Chiu
-
Patent number: 11860776
Abstract: The present memory restoration system enables a collection of computing systems to prepare inactive rewritable memory for reserve and future replacement of other memory while the other memory is active and available for access by a user of the computing system. The preparation of the reserved memory part is performed off-line in a manner that is isolated from the current user of the active memory part. Preparation of memory includes erasure of data, reconfiguration, etc. The memory restoration system allows for simple exchange of the reserved memory part, once the active memory part is returned. The previously active memory may be concurrently recycled for future reuse in this same manner to become a reserved memory. This enables the computing collection infrastructure to “swap” to what was previously the inactive memory part when a user vacates a server, speeding up the server wipe process.
Type: Grant
Filed: January 27, 2023
Date of Patent: January 2, 2024
Assignee: Oracle International Corporation
Inventors: Tyler Vrooman, Graham Schwinn, Greg Edvenson
-
Patent number: 11860777
Abstract: A memory management method of a storage device including: programming write-requested data in a memory block; counting an elapsed time from a time when a last page of the memory block was programmed with the write-requested data; triggering a garbage collection of the storage device when the elapsed time exceeds a threshold value; and programming valid data collected by the garbage collection at a first clean page of the memory block.
Type: Grant
Filed: July 16, 2020
Date of Patent: January 2, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Kyungduk Lee, Young-Seop Shim
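A minimal Python sketch of the elapsed-time garbage-collection trigger described in the abstract; the threshold, class, and function names are placeholders, not the patented implementation.

```python
import time

GC_IDLE_THRESHOLD_S = 5.0   # hypothetical threshold value

class Block:
    def __init__(self):
        self.pages = []                 # programmed pages
        self.last_program_time = None

    def program_page(self, data):
        self.pages.append(data)
        self.last_program_time = time.monotonic()

def maybe_trigger_gc(block, collect_valid_data):
    """Trigger garbage collection when the block has sat partially programmed
    for longer than the threshold, then program the collected valid data at
    the first clean page of the same block."""
    if block.last_program_time is None:
        return
    elapsed = time.monotonic() - block.last_program_time
    if elapsed > GC_IDLE_THRESHOLD_S:
        valid_data = collect_valid_data()
        block.program_page(valid_data)
```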
-
Patent number: 11860778
Abstract: One example method includes scanning, at a cloud storage site, metadata associated with an object stored at the cloud storage site, fetching, from the metadata, an object creation time for the object, and determining whether the object is out of a minimum storage duration. When the object is out of the minimum storage duration, it is copy-forwarded and then marked for deletion, and when the object is not out of the minimum storage duration, the object is deselected from a list of objects to be copied forward.
Type: Grant
Filed: October 26, 2021
Date of Patent: January 2, 2024
Assignee: EMC IP HOLDING COMPANY LLC
Inventors: Kalyan C. Gunda, Jagannathdas Rath
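A short Python sketch of the selection step described above, assuming a hypothetical metadata_store mapping and a placeholder minimum storage duration.

```python
from datetime import datetime, timedelta, timezone

MIN_STORAGE_DURATION = timedelta(days=30)   # hypothetical tier minimum

def select_for_copy_forward(objects, metadata_store, now=None):
    """Return the objects past the minimum storage duration; the rest are
    deselected from the copy-forward list."""
    now = now or datetime.now(timezone.utc)
    selected = []
    for obj in objects:
        created = metadata_store[obj]["creation_time"]   # fetched from metadata
        if now - created >= MIN_STORAGE_DURATION:
            selected.append(obj)   # copy forward, then mark for deletion
    return selected
```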
-
Patent number: 11860779
Abstract: An onboard relay device comprises memory and a processor connected to the memory. The processor is configured to receive a data transmission request from a request originator, determine whether received data that is data received from a transmission originator is variable data or fixed data, record the received data in a cache memory section temporarily in cases in which the received data is the fixed data, determine whether or not transmission request data that is data subject to a transmission request from the request originator is recorded in the cache memory section, and in cases in which the processor has determined the transmission request data to be the received data recorded in the cache memory section, transmit the received data recorded in the cache memory section to the request originator as the transmission request data.
Type: Grant
Filed: November 12, 2021
Date of Patent: January 2, 2024
Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
Inventors: Akiyoshi Yamada, Masatoshi Ishino
-
Patent number: 11860780
Abstract: A method of cache management, the method comprising: identifying, among a plurality of storage items, storage items having an access count above a first threshold to generate a set of storage items; identifying, among the set of storage items, storage items having an updated access count above a second threshold to generate a subset of storage items, wherein, for each storage item, the updated access count is dependent upon a number of accesses subsequent to generating the set of storage items; and adding the storage items of the subset of storage items to a cache.
Type: Grant
Filed: January 28, 2022
Date of Patent: January 2, 2024
Assignee: PURE STORAGE, INC.
Inventors: Ethan Miller, John Colgrove
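A compact Python sketch of the two-threshold admission described in the abstract; the dictionaries and threshold names are illustrative assumptions.

```python
def select_cache_candidates(access_counts, updated_access_counts,
                            first_threshold, second_threshold):
    """Two-pass admission: items that were hot overall AND stayed hot after
    the first pass are the ones added to the cache."""
    # Pass 1: items whose total access count exceeds the first threshold.
    hot_set = {item for item, count in access_counts.items()
               if count > first_threshold}
    # Pass 2: of those, keep items whose accesses *since pass 1* exceed
    # the second threshold.
    return {item for item in hot_set
            if updated_access_counts.get(item, 0) > second_threshold}
```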
-
Patent number: 11860781
Abstract: A write cleaner circuit can be used to implement write-through (WT) functionality by a write-back (WB) cache memory for updating the system memory. The write cleaner circuit can intercept memory write transactions issued to the WB cache memory and generate clean requests that can enable the WB cache memory to send update requests to corresponding memory locations in the system memory around the same time as the memory write transactions are performed by the WB cache memory, and clear dirty bits in the cache lines corresponding to those memory write transactions.
Type: Grant
Filed: May 4, 2022
Date of Patent: January 2, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Moshe Raz, Guy Nakibly, Gal Avisar
-
Patent number: 11860782
Abstract: In some embodiments, an integrated circuit may include a substrate and a memory array disposed on the substrate, where the memory array includes a plurality of discrete memory banks. The integrated circuit may also include a processing array disposed on the substrate, where the processing array includes a plurality of processor subunits, each one of the plurality of processor subunits being associated with one or more discrete memory banks among the plurality of discrete memory banks. The integrated circuit may also include a controller configured to implement at least one security measure with respect to an operation of the integrated circuit and take one or more remedial actions if the at least one security measure is triggered.
Type: Grant
Filed: February 9, 2022
Date of Patent: January 2, 2024
Assignee: NeuroBlade Ltd.
Inventors: Eliad Hillel, Elad Sity, David Shamir, Shany Braudo
-
Patent number: 11860783
Abstract: Systems and methods related to direct swap caching with noisy neighbor mitigation and dynamic address range assignment are described. A system includes a host operating system (OS), configured to support a first set of tenants associated with a compute node, where the host OS has access to: (1) a first swappable range of memory addresses associated with a near memory and (2) a second swappable range of memory addresses associated with a far memory. The host OS is configured to allocate memory in a granular fashion such that each allocation of memory to a tenant includes memory addresses corresponding to a conflict set having a conflict set size. The conflict set includes a first conflicting region associated with the first swappable range of memory addresses with the near memory and a second conflicting region associated with the second swappable range of memory addresses with the far memory.
Type: Grant
Filed: May 3, 2022
Date of Patent: January 2, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ishwar Agarwal, Yevgeniy Bak, Lisa Ru-feng Hsu
-
Patent number: 11860784
Abstract: A technique for operating a cache is disclosed. The technique includes recording access data for a first set of memory accesses of a first frame; identifying parameters for a second set of memory accesses of a second frame subsequent to the first frame, based on the access data; and applying the parameters to the second set of memory accesses.
Type: Grant
Filed: June 27, 2022
Date of Patent: January 2, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Christopher J. Brennan, Akshay Lahiry
-
Patent number: 11860785
Abstract: A method and system for efficiently executing a delegate of a program by a processor coupled to an external memory. A payload including state data or command data is bound with a program delegate. The payload is mapped with the delegate via a payload identifier. The payload is pushed to a repository buffer in the external memory. The payload is flushed by reading the payload identifier and loading the payload from the repository buffer. The delegate is executed using the loaded payload.
Type: Grant
Filed: October 19, 2022
Date of Patent: January 2, 2024
Assignee: Oxide Interactive, Inc.
Inventor: Timothy James Kipp
-
Patent number: 11860786
Abstract: A cache system, having: a first cache; a second cache; a configurable data bit; and a logic circuit coupled to a processor to control the caches based on the configurable bit. When the configurable bit is in a first state, the logic circuit is configured to: implement commands for accessing a memory system via the first cache, when an execution type is a first type; and implement commands for accessing the memory system via the second cache, when the execution type is a second type. When the configurable data bit is in a second state, the logic circuit is configured to: implement commands for accessing the memory system via the second cache, when the execution type is the first type; and implement commands for accessing the memory system via the first cache, when the execution type is the second type.
Type: Grant
Filed: December 13, 2021
Date of Patent: January 2, 2024
Assignee: Micron Technology, Inc.
Inventor: Steven Jeffrey Wallach
-
Patent number: 11860787
Abstract: Methods, devices, and systems for retrieving information based on cache miss prediction. A prediction that a cache lookup for the information will miss a cache is made based on a history table. The cache lookup for the information is performed based on the request. A main memory fetch for the information is begun before the cache lookup completes, based on the prediction that the cache lookup for the information will miss the cache. In some implementations, the prediction includes comparing a first set of bits stored in the history table with a second set of bits stored in the history table. In some implementations, the prediction includes comparing at least a portion of an address of the request for the information with a set of bits in the history table.
Type: Grant
Filed: September 30, 2021
Date of Patent: January 2, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Ciji Isen, Paul J. Moyer
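A Python sketch of the "start the memory fetch before the lookup finishes" idea; the history_table, cache, and fetch_from_dram objects are hypothetical stand-ins, not the patented hardware.

```python
import concurrent.futures

def read_with_miss_prediction(addr, cache, history_table, fetch_from_dram, executor):
    """Kick off the main-memory fetch in parallel when the history table
    predicts a miss, instead of waiting for the cache lookup to complete."""
    predicted_miss = history_table.predict_miss(addr)   # e.g., compares stored bit patterns
    dram_future = executor.submit(fetch_from_dram, addr) if predicted_miss else None

    hit, value = cache.lookup(addr)
    if hit:
        if dram_future:
            dram_future.cancel()        # prediction was wrong; drop the early fetch
        return value
    # Miss: use the early fetch if one was started, otherwise fetch now.
    return dram_future.result() if dram_future else fetch_from_dram(addr)
```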
-
Patent number: 11860788
Abstract: Data can be prefetched in a distributed storage system. For example, a computing device can receive a message with metadata associated with at least one request for an input/output (IO) operation from a message queue. The computing device can determine, based on the message from the message queue, an additional IO operation predicted to be requested by a client subsequent to the at least one request for the IO operation. The computing device can send a notification to a storage node of a plurality of storage nodes associated with the additional IO operation for prefetching data of the additional IO operation prior to the client requesting the additional IO operation.
Type: Grant
Filed: September 8, 2021
Date of Patent: January 2, 2024
Assignee: Red Hat, Inc.
Inventors: Gabriel Zvi BenHanokh, Yehoshua Salomon
-
Patent number: 11860789
Abstract: A cache purge simulation system includes a device under test with a cache skip switch. A first cache skip switch includes a configurable state register to indicate whether all of an associated cache is purged upon receipt of a cache purge instruction from a verification system or whether a physical partition that is smaller than the associated cache is purged upon receipt of the cache purge instruction from the verification system. A second cache skip switch includes a configurable start address register comprising a start address that indicates a beginning storage location of a physical partition of an associated cache and a configurable stop address register comprising a stop address that indicates an ending storage location of the physical partition of the associated cache.
Type: Grant
Filed: March 21, 2022
Date of Patent: January 2, 2024
Assignee: International Business Machines Corporation
Inventors: Yvo Thomas Bernard Mulder, Ralf Ludewig, Huiyuan Xing, Ulrich Mayer
-
Patent number: 11860790
Abstract: A streaming engine employed in a digital data processor specifies a fixed read only data stream defined by plural nested loops. An address generator produces addresses of data elements. A stream head register stores data elements next to be supplied to functional units for use as operands. An element duplication unit optionally duplicates a data element an instruction-specified number of times. A vector masking unit limits data elements received from the element duplication unit to least significant bits within an instruction specified vector length. If the vector length is less than a stream head register size, the vector masking unit stores all 0's in excess lanes of the stream head register (group duplication disabled) or stores duplicate copies of the least significant bits in excess lanes of the stream head register.
Type: Grant
Filed: August 31, 2021
Date of Patent: January 2, 2024
Assignee: Texas Instruments Incorporated
Inventor: Joseph Zbiciak
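A small Python sketch of the vector-masking behavior in the last sentence, modeling the stream head register as a list of lanes; names and lane granularity are illustrative assumptions.

```python
def mask_stream_head(lanes, vector_length, group_duplicate=False):
    """Limit the stream head register to `vector_length` lanes: excess lanes
    are either zeroed (group duplication disabled) or filled with duplicate
    copies of the low lanes (group duplication enabled)."""
    if vector_length >= len(lanes):
        return list(lanes)
    low = lanes[:vector_length]
    if not group_duplicate:
        return low + [0] * (len(lanes) - vector_length)   # store all 0's in excess lanes
    out = []
    while len(out) < len(lanes):                          # duplicate the low lanes
        out.extend(low)
    return out[:len(lanes)]
```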
-
Patent number: 11860791
Abstract: The disclosed technology relates to determining physical zone data within a zoned namespace solid state drive (SSD), associated with logical zone data included in a first received input-output operation based on a mapping data structure within a namespace of the zoned namespace SSD. A second input-output operation specific to the determined physical zone data is generated, wherein the second input-output operation and the received input-output operation are of a same type. The generated second input-output operation is completed using the determined physical zone data within the zoned namespace SSD.
Type: Grant
Filed: April 24, 2020
Date of Patent: January 2, 2024
Assignee: NETAPP, INC.
Inventors: Abhijeet Prakash Gole, Rohit Shankar Singh, Douglas P. Doucette, Ratnesh Gupta, Sourav Sen, Prathamesh Deshpande
-
Patent number: 11860792
Abstract: Systems and methods for memory management for virtual machines. An example method may include receiving, by a host computing system, a memory access request initiated by a peripheral component interconnect (PCI) device, wherein the memory access request comprises a memory address and an address translation flag specifying an address space associated with the memory address; and responsive to determining that the address translation flag is set to a first value indicating a host address space, causing a host system input/output memory management unit (IOMMU) to pass-through the memory access request.
Type: Grant
Filed: May 4, 2021
Date of Patent: January 2, 2024
Assignee: Red Hat, Inc.
Inventor: Michael Tsirkin
-
Patent number: 11860793
Abstract: A controller is provided. The controller creates a page table including page table entries including mapping information for translating a virtual address to a physical address. Each of the page table entries includes: a virtual page number, a physical frame number, valid information, and size information. The virtual page number is included in a virtual address, the physical frame number is included in a physical address, the valid information includes a first predetermined number of bits, and the size information includes a second predetermined number of bits. The first predetermined number of bits represents an address translation range in a page table entry or a number of page table entries to be grouped, and the size information represents a size indicated by each bit of the first predetermined number of bits.
Type: Grant
Filed: November 15, 2021
Date of Patent: January 2, 2024
Assignees: SAMSUNG ELECTRONICS CO., LTD., INDUSTRY-ACADEMIC COOPERATION FOUNDATION, YONSEI UNIVERSITY
Inventors: Seongil O, Won Woo Ro, William Jinho Song, Jiwon Lee
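A Python sketch of the page-table-entry layout described above; the field widths and the group size are placeholders, since the abstract only calls them "predetermined" numbers of bits.

```python
from dataclasses import dataclass

@dataclass
class PageTableEntry:
    """One entry of the page table described in the abstract."""
    virtual_page_number: int    # included in the virtual address
    physical_frame_number: int  # included in the physical address
    valid_bits: int             # first predetermined number of bits:
                                # translation range / number of grouped entries
    size_bits: int              # second predetermined number of bits:
                                # size represented by each valid bit

    def translates(self, vpn: int, group_size: int = 8) -> bool:
        """Check whether this (possibly grouped) entry covers `vpn`."""
        offset = vpn - self.virtual_page_number
        return 0 <= offset < group_size and bool((self.valid_bits >> offset) & 1)
```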
-
Patent number: 11860794
Abstract: Each PIPT L2 cache entry is uniquely identified by a set index and a way and holds a generational identifier (GENID). The L2 detects a miss of a physical memory line address (PMLA). An L2 set index is obtained from the PMLA. The L2 picks a way for replacement, increments the GENID held in the entry in the picked way of the selected set, and forms a physical address proxy (PAP) for the PMLA with the obtained set index and the picked way. The PAP uniquely identifies the picked L2 entry. The L2 forms a generational PAP (GPAP) for the PMLA with the PAP and the incremented GENID. A load/store unit makes available the GPAP as a proxy of the PMLA for comparisons with GPAPs of other PMLAs, rather than making comparisons of the PMLA itself with the other PMLAs, to determine whether the PMLA matches the other PMLAs.
Type: Grant
Filed: May 18, 2022
Date of Patent: January 2, 2024
Assignee: Ventana Micro Systems Inc.
Inventors: John G. Favor, Srivatsan Srinivasan, Robert Haskell Utley
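A Python bit-packing sketch of how a PAP and GPAP could be formed from the set index, way, and GENID; the bit widths are invented for illustration and are not taken from the patent.

```python
WAY_BITS = 2        # 4-way L2 (placeholder)
SET_BITS = 11       # 2048 sets (placeholder)
GENID_BITS = 2      # small generational counter (placeholder)

def form_gpap(set_index: int, way: int, genid: int) -> int:
    """Pack set index + way into a physical address proxy (PAP), then prepend
    the generational ID to form the generational PAP (GPAP)."""
    pap = (set_index << WAY_BITS) | way          # uniquely names one L2 entry
    return (genid << (SET_BITS + WAY_BITS)) | pap

def gpaps_match(gpap_a: int, gpap_b: int) -> bool:
    # The load/store unit compares small GPAPs instead of full physical
    # memory line addresses.
    return gpap_a == gpap_b
```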
-
Patent number: 11860795
Abstract: Device, system, and method of determining memory requirements and tracking memory usage. A method includes: dynamically modifying, in an iterative process including two or more iterations, a maximum size of Random Access Memory (RAM) that a Memory Protection Unit (MPU) authorizes an executable program code to access. In each iteration, the method includes running that executable program code while the MPU enforces a different maximum size of RAM, and monitoring whether the executable program code attempted to access a RAM memory address that is beyond that maximum size of RAM in that iteration. Based on such iterations, the method determines a minimum size of RAM that is required for that executable program code to run without causing a memory access fault.
Type: Grant
Filed: February 18, 2020
Date of Patent: January 2, 2024
Assignee: ARM LIMITED
Inventors: Itay Zacay, Adi Kachal, Roee Friedman, Dvir Shalom Marcovici, Uri Eliyahu
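A Python sketch of one possible iteration strategy, a simple linear downward search; the abstract only requires repeated runs under different MPU limits, so the search order, step size, and callback name here are assumptions.

```python
def find_min_ram(run_with_mpu_limit, upper_bound_kb, step_kb=4):
    """Shrink the RAM ceiling the MPU enforces until the program faults, then
    report the smallest limit that still ran cleanly.
    `run_with_mpu_limit(limit_kb)` (hypothetical) returns True if no memory
    access fault occurred while running the executable under that limit."""
    last_good = None
    limit = upper_bound_kb
    while limit > 0:
        if run_with_mpu_limit(limit):
            last_good = limit        # program stayed inside this ceiling
            limit -= step_kb         # try a tighter ceiling next iteration
        else:
            break                    # fault: the previous limit was the minimum
    return last_good
```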
-
Patent number: 11860796
Abstract: Embodiments described herein provide techniques to manage drivers in a user space in a data processing system. One embodiment provides a data processing system configured to perform operations, comprising discovering a hardware device communicatively coupled to the communication bus, launching a user space driver daemon, establishing an inter-process communication (IPC) link between a first proxy interface for the user space driver daemon and a second proxy interface for a server process in a kernel space, receiving, at the first proxy interface, an access right to enable access to a memory buffer in the kernel space, and relaying an access request for the memory buffer from the user space driver daemon via a third-party proxy interface to enable the user space driver daemon to access the memory buffer, the access request based on the access right.
Type: Grant
Filed: August 9, 2021
Date of Patent: January 2, 2024
Assignee: Apple Inc.
Inventors: Jeremy C. Andrus, Joseph R. Auricchio, Russell A. Blaine, Daniel A. Chimene, Simon M. Douglas, Landon J. Fuller, Yevgen Goryachok, John K. Kim-Biggs, Arnold S. Liu, James M. Magee, Daniel A. Steffen, Roberto G. Yepez
-
Patent number: 11860797
Abstract: Restricting peripheral device protocols in confidential compute architectures, the method including: receiving a first address translation request from a peripheral device supporting a first protocol, wherein the first protocol supports cache coherency between the peripheral device and a processor cache; determining that a confidential compute architecture is enabled; and providing, in response to the first address translation request, a response including an indication to the peripheral device to not use the first protocol.
Type: Grant
Filed: December 30, 2021
Date of Patent: January 2, 2024
Assignees: ADVANCED MICRO DEVICES, INC., ATI TECHNOLOGIES ULC
Inventors: Philip Ng, Nippon Raval, David A. Kaplan, Donald P. Matthews, Jr.
-
Patent number: 11860798
Abstract: Aspects disclosed herein relate to a method comprising: obtaining a list of data paths to at least one persistent storage device through a plurality of NUMA nodes; associating with each data path, access performance information; receiving a request to access one of the at least one persistent storage device; calculating a preferred data path to the one of the at least one persistent storage device using the access performance information; and accessing the one of the at least one persistent storage device using the preferred data path.
Type: Grant
Filed: January 21, 2022
Date of Patent: January 2, 2024
Assignee: Nyriad, Inc.
Inventors: Stuart John Inglis, Leon Wiremu Macrae Oud, Dominic Joseph Michael Houston Azaris, Jack Spencer Turpitt
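A tiny Python sketch of the "calculate a preferred data path" step; the performance metrics and weighting are invented placeholders, since the abstract does not say how the paths are scored.

```python
def preferred_path(paths, perf_info):
    """Pick the data path through the NUMA nodes with the best recorded
    access performance; perf_info[path] holds measured stats for that path."""
    def score(path):
        stats = perf_info[path]
        # Lower latency and higher bandwidth are both better; weights are placeholders.
        return stats["latency_us"] - 0.01 * stats["bandwidth_mb_s"]
    return min(paths, key=score)
```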
-
Patent number: 11860799
Abstract: Described apparatuses and methods enable a receiver of requests, such as a memory device, to modulate the arrival of future requests using a credit-based communication protocol. A transmitter of requests can be authorized to transmit a request responsive to possession of a credit corresponding to the communication request. In these situations, if the transmitter has exhausted a supply of credits, the transmitter waits until a credit is returned before transmitting another request. The receiver of the requests can manage credit returns based on whether a request queue has space to receive another request. Further, the receiver can delay a credit return based on how many requests are pending at the receiver, even if space is available in the request queue. This delay can prevent an oversupply of requests from developing downstream of the request queue. Latency, for instance, can be improved by managing the presence of requests that are downstream.
Type: Grant
Filed: December 20, 2021
Date of Patent: January 2, 2024
Assignee: Micron Technologies, Inc.
Inventors: Nikesh Agarwal, Chandana Manjula Linganna
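A Python sketch of the delayed-credit-return behavior described above, assuming hypothetical queue-depth and downstream limits; the real protocol is implemented in hardware, not Python objects.

```python
from collections import deque

class CreditedReceiver:
    """Receiver that returns a credit only when queue space exists AND the
    number of requests still pending downstream is below a limit."""

    def __init__(self, queue_depth, max_downstream):
        self.queue = deque()
        self.queue_depth = queue_depth
        self.max_downstream = max_downstream
        self.downstream_pending = 0
        self.credits_owed = 0          # returns deliberately being delayed

    def receive(self, request):
        assert len(self.queue) < self.queue_depth, "transmitter overspent its credits"
        self.queue.append(request)

    def dispatch_one(self):
        """Move one request downstream and decide whether to return a credit now."""
        self.queue.popleft()
        self.downstream_pending += 1
        if self.downstream_pending < self.max_downstream:
            return "credit_return"     # transmitter may send another request
        self.credits_owed += 1         # delay the return to avoid downstream oversupply
        return None

    def downstream_completed(self):
        self.downstream_pending -= 1
        if self.credits_owed and self.downstream_pending < self.max_downstream:
            self.credits_owed -= 1
            return "credit_return"
        return None
```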
-
Patent number: 11860800
Abstract: A reconfigurable compute fabric can include multiple nodes, and each node can include multiple tiles with respective processing and storage elements. Compute kernels can be parsed into directed graphs and mapped to particular node or tile resources for execution. In an example, a branch-and-bound search algorithm can be used to perform the mapping. The algorithm can use a cost function to evaluate the resources based on capability, occupancy, or power consumption of the various node or tile resources.
Type: Grant
Filed: August 20, 2021
Date of Patent: January 2, 2024
Assignee: Micron Technology, Inc.
Inventors: Gongyu Wang, Jason Eckhardt
-
Patent number: 11860801
Abstract: A method for implicit addressing includes providing within a first unit and a second unit respectively a counter unit, a comparison unit and a storing unit for the storage of an identifier, allocating a first identifier to the first unit, allocating a second identifier to the second unit, setting the same counter value in the counter units of both units, after setting the counter values comparing the counter value in the first unit to the first identifier and comparing the counter value in the second unit to the second identifier, based on equality of the comparison in the first unit sending of first data from the first unit or assigning of first data to the first unit, based on inequality of the comparison in the second unit no sending or assigning of data to the second unit, and counting up or down the counter value in both units.
Type: Grant
Filed: January 15, 2019
Date of Patent: January 2, 2024
Inventor: Christoph Heldeis
-
Patent number: 11860802
Abstract: In accordance with some aspects of the present disclosure, a non-transitory computer readable medium is disclosed. In some embodiments, the non-transitory computer readable medium includes instructions that, when executed by a processor, cause the processor to receive, from a workload hosted on a host of a cluster, first I/O traffic programmed according to a first I/O traffic protocol supported by a cluster-wide storage fabric exposed to the workload as being hosted on the same host. In some embodiments, the workload is recovered by a hypervisor hosted on the same host. In some embodiments, the non-transitory computer readable medium includes the instructions that, when executed by the processor, cause the processor to adapt the first I/O traffic to generate second I/O traffic programmed according to a second I/O traffic protocol supported by a repository external to the storage fabric and forward the second I/O traffic to the repository.
Type: Grant
Filed: February 18, 2022
Date of Patent: January 2, 2024
Assignee: Nutanix, Inc.
Inventors: Dezhou Jiang, Kiran Tatiparthi, Monil Devang Shah, Mukul Sharma, Prakash Narayanasamy, Praveen Kumar Padia, Sagi Sai Sruthi, Deepak Narayan
-
Patent number: 11860803
Abstract: A memory device includes a buffer die configured to receive a first broadcast command and a second broadcast command from an external device; and a plurality of core dies stacked on the buffer die. The plurality of core dies include: a first core die including a first processing circuit, a first memory cell array, a first command decoder configured to decode the first broadcast command, and a first data input/output circuit configured to output data of the first memory cell array to a common data input/output bus under control of the first command decoder; and a second core die including a second processing circuit, a second memory cell array, a second command decoder configured to decode the second broadcast command, and a second data input/output circuit configured to receive the data of the first memory cell array through the common data input/output bus under control of the second command decoder.
Type: Grant
Filed: March 3, 2022
Date of Patent: January 2, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Sang-Hyuk Kwon, Nam Sung Kim, Kyomin Sohn, Jaeyoun Youn
-
Patent number: 11860804
Abstract: A direct memory access (DMA) controller, an electronic device that uses the DMA controller, and a method of operating the DMA controller are provided. The DMA controller is configured to access a memory that contains a privilege area and a normal area. The method of operating the DMA controller includes the following steps: searching for a DMA channel that is in an idle state in the DMA controller; setting a register value of a mode register of the DMA channel such that the DMA channel operates in a privilege mode; setting a memory address register and a byte count register of the DMA channel; and controlling the DMA channel to transfer data based on the memory address register and the byte count register.
Type: Grant
Filed: July 1, 2021
Date of Patent: January 2, 2024
Assignee: REALTEK SEMICONDUCTOR CORPORATION
Inventors: Chen-Tung Lin, Yue-Feng Chen
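A Python sketch of the four steps listed in the abstract; the register and attribute names are placeholders and do not reflect the actual register map of the device.

```python
PRIVILEGE_MODE = 0x1   # placeholder encoding for the mode register

def start_privileged_transfer(dma, src_addr, num_bytes):
    """Find an idle channel, put it in privilege mode, program the address
    and byte-count registers, then start the transfer."""
    channel = next(ch for ch in dma.channels if ch.is_idle())
    channel.mode_register = PRIVILEGE_MODE       # channel may access the privilege area
    channel.memory_address_register = src_addr
    channel.byte_count_register = num_bytes
    channel.start()                              # transfer based on the two registers
    return channel
```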
-
Patent number: 11860805
Abstract: The present disclosure provides a terminal device including a first Type-C interface; the first Type-C interface includes a first group of pins and a second group of pins. In a case where the grounding impedance value of the first group of pins is within a preset range, a controller controls the first switch unit to connect the first group of pins to the application processor earphone interface, and controls the second switch unit to connect the second group of pins to the application processor fast charge interface; or in a case where the grounding impedance value of the second group of pins is within the preset range, the controller controls the second switch unit to connect the second group of pins to the application processor earphone interface, and controls the first switch unit to connect the first group of pins to the application processor fast charge interface.
Type: Grant
Filed: June 21, 2021
Date of Patent: January 2, 2024
Assignee: VIVO MOBILE COMMUNICATION CO., LTD.
Inventor: Yewei Huang
-
Patent number: 11860806
Abstract: A microcontroller system comprising a master microcontroller unit, a further module and a general purpose input/output. In a first state the general purpose input/output is controlled by the master microcontroller unit and in a second state the general purpose input/output is controlled by the further module. The master microcontroller unit is arranged to transmit a selection signal which changes the state of the general purpose input/output.
Type: Grant
Filed: June 19, 2020
Date of Patent: January 2, 2024
Assignee: Nordic Semiconductor ASA
Inventors: Anders Nore, Ronan Barzic, Fredrik Jacobsen Fagerheim
-
Patent number: 11860807
Abstract: Disclosed are a USB data communication method and device based on a hybrid USB Network. The USB data communication method based on a hybrid USB Network includes the following steps executed by the docking station terminal: obtaining a USB data monitoring command carrying an operation mode; when the operation mode is an automatic mode, monitoring a data communication status of a USB input and output interface; when the data communication status is a no input and output information status, monitoring a data of a network input data interface of a network module in the docking station terminal; when the network input data interface obtains a data sending request sent by a client terminal via the hybrid USB Network, in which the data sending request includes network data and a target transmission device, converting the network data into USB communication data via a soft switching module in the docking station terminal.
Type: Grant
Filed: May 26, 2023
Date of Patent: January 2, 2024
Assignee: Winstars Technology Ltd
Inventors: Chun Lee, Wei Nie
-
Patent number: 11860808
Abstract: A system includes a fabric switch including a motherboard, a baseboard management controller (BMC), a network switch configured to transport network signals, and a PCIe switch configured to transport PCIe signals; a midplane; and a plurality of device ports. Each of the plurality of device ports is configured to connect a storage device to the motherboard of the fabric switch over the midplane and carry the network signals and the PCIe signals over the midplane. The storage device is configurable in multiple modes based on a protocol established over a fabric connection between the system and the storage device.
Type: Grant
Filed: October 5, 2020
Date of Patent: January 2, 2024
Inventors: Sompong Paul Olarig, Fred Worley, Son Pham
-
Patent number: 11860809
Abstract: A computing device includes: a housing defining an exterior of the computing device; a controller supported within the housing; a first communication port disposed on the exterior; a second communication port disposed on the exterior; a port-sharing subsystem supported within the housing, having (i) a first state to connect the controller with the first communication port, exclusive of the second communication port, and (ii) a second state to connect the controller with the first communication port and the second communication port; the controller configured to: detect engagement of an external device with the first communication port; obtain connection parameters from the external device; based on the connection parameters, set the port-sharing subsystem in either the first state or the second state; and establish a connection to the external device via the port-sharing subsystem and the first communication port.
Type: Grant
Filed: December 3, 2021
Date of Patent: January 2, 2024
Assignee: Zebra Technologies Corporation
Inventor: Michael Robustelli
-
Patent number: 11860810
Abstract: The following description is directed to a configurable logic platform. In one example, a configurable logic platform includes host logic and a reconfigurable logic region. The reconfigurable logic region can include logic blocks that are configurable to implement application logic. The host logic can be used for encapsulating the reconfigurable logic region. The host logic can include a host interface for communicating with a processor. The host logic can include a management function accessible via the host interface. The management function can be adapted to cause the reconfigurable logic region to be configured with the application logic in response to an authorized request from the host interface. The host logic can include a data path function accessible via the host interface. The data path function can include a layer for formatting data transfers between the host interface and the application logic.
Type: Grant
Filed: September 23, 2022
Date of Patent: January 2, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Islam Atta, Christopher Joseph Pettey, Asif Khan, Robert Michael Johnson, Mark Bradley Davis, Erez Izenberg, Nafea Bshara, Kypros Constantinides
-
Patent number: 11860811
Abstract: The present disclosure provides a system and methods for transferring data across an interconnect. One method includes, at a request node, receiving, from a source high speed serial controller, a write request from a source, dividing the write request into sequences of smaller write requests each having a last identifier, and sending, to a home node, the sequences of smaller write requests; and, at the home node, sending, to a destination high speed serial controller, the sequences of smaller write requests for assembly into intermediate write requests that are transmitted to a destination. Each sequence of smaller write requests is assembled into an intermediate write request based on the last identifier.
Type: Grant
Filed: March 23, 2022
Date of Patent: January 2, 2024
Assignee: Arm Limited
Inventors: Arthur Brian Laughton, Tessil Thomas, Jacob Joseph
-
Patent number: 11860812
Abstract: Aspects of the embodiments are directed to systems and methods for performing link training using stored and retrieved equalization parameters obtained from a previous equalization procedure. As part of a link training sequence, links interconnecting an upstream port with a downstream port and with any intervening retimers, can undergo an equalization procedure. The equalization parameter values from each system component, including the upstream port, downstream port, and retimer(s) can be stored in a nonvolatile memory. During a subsequent link training process, the equalization parameter values stored in the nonvolatile memory can be written to registers associated with the upstream port, downstream port, and retimer(s) to be used to operate the interconnecting links. The equalization parameter values can be used instead of performing a new equalization procedure or can be used as a starting point to reduce latency associated with equalization procedures.
Type: Grant
Filed: November 14, 2017
Date of Patent: January 2, 2024
Inventor: Debendra Das Sharma
-
Patent number: 11860813
Abstract: A method of processing memory instructions including receiving a memory related command from a client system in communication with a memory appliance via a communication protocol, wherein the memory appliance comprises a processor, a memory unit controller and a plurality of memory devices coupled to said memory unit controller. The memory related command is translated by the processor into a plurality of commands that are formatted to perform prescribed data manipulation operations on data of the plurality of memory devices stored in data structures. The plurality of primitive commands is executed on data stored in the memory devices to produce a result, wherein the executing is performed by the memory unit controller. A direct memory transfer of the result is established over the communication protocol to a network.
Type: Grant
Filed: September 23, 2021
Date of Patent: January 2, 2024
Assignee: Rambus Inc.
Inventors: Keith Lowery, Vlad Fruchter
-
Patent number: 11860814
Abstract: A scalable multi-stage hypercube-based interconnection network with deterministic communication between two or more processing elements (“PEs”) or processing cores (“PCs”) arranged in a 2D-grid using vertical and horizontal buses (i.e., each bus is one or more wires) is disclosed. In one embodiment the buses are connected in pyramid network configuration. At each PE, the interconnection network comprises one or more switches (“interconnect”) with each switch concurrently capable to send and receive packets from one PE to another PE through the bus connected between them. Each packet comprises data token, routing information such as source and destination addresses of PEs and other information. Each PE, in addition to interconnect, comprises a processor and/or memory. In one embodiment the processor is a Central Processing Unit (“CPU”) comprises functional units that perform such as additions, multiplications, or logical operations, for executing computer programs.
Type: Grant
Filed: November 1, 2021
Date of Patent: January 2, 2024
Assignee: Konda Technologies Inc.
Inventor: Venkat Konda
-
Patent number: 11860815
Abstract: A reconfigurable computing platform includes a reconfigurable computing device, electro-optical transceiver, and first voltage converter disposed on a multilayer board. The electro-optical transceiver converts an optical signal at least one of to and from an electrical signal, and the electrical signal is operatively coupled to the reconfigurable computing device. The electro-optical transceiver is disposed in proximity to the reconfigurable computing device, and the first voltage converter is operatively coupled to a common voltage distributed around a periphery of the multilayer board. The first voltage converter converts the common voltage to a first operating voltage, and the first voltage converter is disposed in proximity to the reconfigurable computing device. The first operating voltage is provided to the reconfigurable computing device as a first power source.
Type: Grant
Filed: October 4, 2019
Date of Patent: January 2, 2024
Assignee: Brookhaven Science Associates, LLC
Inventors: Shaochun Tang, Michael Begel, Hucheng Chen, Helio Takai, Francesco Lanni
-
Patent number: 11860816
Abstract: Data items are archived by separating them into two or more data streams according to common characteristics or categories. Data item properties, including custodian and date properties, are defined for the items in each stream. A record manifest, including metadata corresponding to the data item properties for the stream, is created. The data items and the manifest are stored. The data items are indexed only on demand, and only to the extent necessary to satisfy the demand. When data is restored from archival storage, it is combined with the stub in a manner that treats the stub and stored data as complementary parts, thus preserving any changes to the stub that are not reflected in the archive copy.
Type: Grant
Filed: May 30, 2017
Date of Patent: January 2, 2024
Assignee: Archive360, LLC
Inventors: Tiberiu Popp, Nick Czeczulin, Robert Desteno
-
Patent number: 11860817
Abstract: In some examples, a data management system generates snapshots in a distributed file system based on a protocol or a user-triggered event. The data management system identifies a snappable file in a distributed file system and a first data block in the snappable file, the first data block including data and attribute data. The system scans an index file to access the attribute data of the first data block and initiates construction of a patch file based on the accessed attribute data. The system repeats the scanning of the index file to access attribute data of at least a further second data block, the second data block including data and attribute data, and completes construction of the patch file based on the accessed attribute data of the first and second data blocks. The system generates conversion simulation information by collecting attribute data for all the data blocks of the constructed patch file, and writes the simulation information to a patch file image.
Type: Grant
Filed: July 19, 2021
Date of Patent: January 2, 2024
Assignee: Rubrik, Inc.
Inventors: Abdullah Reza, Vijay Karthik, Nitin Rathor, Vaibhav Gosain, Anshul Gupta
-
Patent number: 11860818
Abstract: A system and method include receiving, by a database engine of a database system associated with a virtual computing system, a user request via a dashboard for provisioning a source database with the database system, receiving, by the database engine via the dashboard, selection of a database engine type, and receiving, by the database engine via the dashboard, selection of a Service Level Agreement (“SLA”) and a protection schedule. The system and method also include provisioning, by the database engine, the source database based upon the database engine type, creating, by the database engine, an instance of a database protection system based upon the SLA and the protection schedule, including associating the instance of the database protection system with the source database, and displaying, by the database engine, the source database within the dashboard.
Type: Grant
Filed: February 23, 2023
Date of Patent: January 2, 2024
Assignee: Nutanix, Inc.
Inventors: Balasubrahmanyam Kuchibhotla, Kamaldeep Khanuja, Jeremy Launier, Sujit Menon, Maneesh Rawat
-
Patent number: 11860819
Abstract: A distributed database may comprise a plurality of nodes maintaining a collection of data items indexed by key values. Upon receiving a request to store a data item, a node of the database may be selected based on the node's suitability for storing the data item. The distributed database may generate a key to identify the data item, such that the generated key identifies the data item and comprises information indicative of the selected node. The distributed database may provide the generated key to an application programming interface client in response to the request.
Type: Grant
Filed: June 29, 2017
Date of Patent: January 2, 2024
Assignee: Amazon Technologies, Inc.
Inventors: Andrew Christopher Chud, Richard Threlkeld
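A short Python sketch of a key that both identifies the item and encodes the selected node; the key format is illustrative only, not the actual service's scheme.

```python
import secrets

def generate_key(selected_node_id: str) -> str:
    """Generate a key that identifies the data item and carries information
    indicative of the node chosen to store it, so later reads can be routed
    without a separate lookup."""
    unique_part = secrets.token_hex(8)
    return f"{selected_node_id}:{unique_part}"

def node_for_key(key: str) -> str:
    """Recover the node hint embedded in the key."""
    return key.split(":", 1)[0]
```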
-
Patent number: 11860820
Abstract: Processing data through a storage system in a data pipeline including receiving, by the storage system, a dataset from a collector on a data producer, wherein the dataset is disaggregated from metadata for the dataset by the collector; storing the dataset on the storage system; receiving, by the storage system from a data indexer, a request for data from the dataset, wherein the request for the data comprises the metadata gathered by the collector on the data producer; servicing, by the storage system, the request for the data by locating the data using the metadata gathered by the collector on the data producer and received in the request for the data; and receiving, from the data indexer, indexed data indexed using the metadata gathered by the collector on the data producer.
Type: Grant
Filed: April 3, 2019
Date of Patent: January 2, 2024
Assignee: PURE STORAGE, INC.
Inventors: Ivan Jibaja, Curtis Pullen, Stefan Dorsett, Srinivas Chellappa, Prashant Jaikumar