Patents Issued on January 14, 2020
-
Patent number: 10534695
Abstract: Methods and systems for initializing test environments comprising receiving input defining a plurality of parameters which are used to identify template configuration information which comprises static configuration information and instances of environment variables. A copy of the template configuration information is created and updated based on one or more of the parameters. The updated information is saved as an environment configuration descriptor that defines one or more services required for the test environment. The descriptor is then used to initialize the test environment.
Type: Grant
Filed: March 30, 2018
Date of Patent: January 14, 2020
Assignee: Atlassian Pty Ltd
Inventor: Ilia Sadykov
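A minimal Python sketch of the templating flow this abstract describes: copy the template configuration, substitute environment-variable placeholders from the input parameters, and return the result as an environment descriptor. Names such as build_descriptor, template, and params are illustrative assumptions, not from the patent.

```python
import copy

def build_descriptor(template: dict, params: dict) -> dict:
    """Copy template configuration and resolve ${VAR} placeholders from params.

    Hypothetical illustration: static configuration is kept as-is, while
    environment-variable instances are replaced with parameter values.
    """
    descriptor = copy.deepcopy(template)   # work on a copy, never the template

    def resolve(value):
        if isinstance(value, str) and value.startswith("${") and value.endswith("}"):
            return params.get(value[2:-1], value)   # leave unknown variables untouched
        if isinstance(value, dict):
            return {k: resolve(v) for k, v in value.items()}
        if isinstance(value, list):
            return [resolve(v) for v in value]
        return value

    return resolve(descriptor)

# Usage: the descriptor then lists the services the test environment needs.
template = {"services": [{"name": "db", "image": "${DB_IMAGE}", "port": 5432}]}
print(build_descriptor(template, {"DB_IMAGE": "postgres:11"}))
```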
-
Patent number: 10534696
Abstract: A computer-implemented method for improving comparative performance test results of mobile applications may include (1) determining an optimum testing configuration for a mobile computing device, (2) directing the mobile computing device to (a) execute a comparative performance test, (b) operate in accordance with the determined optimum testing configuration during the execution of the comparative performance test, and (c) write data generated during the execution of the comparative performance test to a random-access memory (RAM) drive of the mobile computing device, (3) recording a network response directed to the mobile computing device, (4) detecting a subsequent network request sent by the mobile computing device, (5) sending the recorded network response to the mobile computing device in response to detecting the subsequent network request, and (6) tracking a control performance value and a modified performance value during the comparative performance test.
Type: Grant
Filed: October 23, 2018
Date of Patent: January 14, 2020
Assignee: Facebook, Inc.
Inventors: Joel F. Beales, Jeffrey Scott Dunn, Jia Li, Shai Duvdevani, Scott Kenneth Yost, Donghang Guo, Le Zhang
-
Patent number: 10534697
Abstract: Some embodiments provide a non-transitory machine-readable medium that stores a program executable by at least one processing unit of a device. The program receives a test configuration for performing a set of operations on an application. The test configuration includes a first configuration component having a first type and a second configuration component having a second type. The program also processes the first configuration component with a first configuration component processor. The program further processes the second configuration component with a second configuration component processor. The program also performs the set of operations on the application based on the processing of at least one of the first and second configuration components.
Type: Grant
Filed: October 27, 2015
Date of Patent: January 14, 2020
Assignee: SAP SE
Inventor: Wenli Zhang
-
Patent number: 10534698
Abstract: A web server, such as one operating with a test agent in a database system, receives a request for executing a test. The request is sent by a test master to an endpoint of the web server. In response to receiving the request by the web server, without exchanging information between the test agent and the test master, the test agent performs a series of operations as follows. A complete set of test steps is determined for the test. A complete set of test data used to execute the complete set of test steps is determined. The complete set of test steps for the test is executed with the complete set of test data. A final test execution status is generated for the test. The test agent can make the final test execution status for the test available for the test master to retrieve by way of the web server.
Type: Grant
Filed: August 24, 2017
Date of Patent: January 14, 2020
Assignee: salesforce.com, inc.
Inventors: Ashish Patel, Christopher Tammariello, Michael Bartoli, Tuhin Kanti Sharma, Vaishali Nandal
-
Patent number: 10534699
Abstract: Embodiments of the present disclosure relate to a method and device for executing test cases. The method comprises obtaining a set of test cases to be executed, and determining a test platform type and a test script associated with each test case in the set of test cases based on a knowledge base. The method further comprises dividing the set of test cases into a plurality of test subsets or test suites based on the test platform type, and executing test cases in each test subset using the respective test environment and test script. In embodiments of the present disclosure, the plurality of test suites are generated automatically based on the knowledge base, and the respective test environment and test script are used for executing each test suite. Accordingly, embodiments of the present disclosure can implement automatic generation and execution of the test suites, and can improve the operation efficiency for test cases.
Type: Grant
Filed: October 29, 2018
Date of Patent: January 14, 2020
Assignee: EMC IP Holding Company LLC
Inventors: Shuo Lv, Deric Wenjun Wang
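A rough Python sketch of the grouping step this abstract describes: look up each test case's platform type and script in a knowledge base, then divide the cases into suites keyed by platform. The KNOWLEDGE_BASE contents and function names are hypothetical.

```python
from collections import defaultdict

# Hypothetical knowledge base: test case -> (platform type, test script).
KNOWLEDGE_BASE = {
    "tc_login":   ("linux_x86",  "run_login.sh"),
    "tc_upload":  ("linux_x86",  "run_upload.sh"),
    "tc_restore": ("windows_vm", "run_restore.ps1"),
}

def build_test_suites(test_cases):
    """Divide test cases into suites keyed by the platform type they need."""
    suites = defaultdict(list)
    for case in test_cases:
        platform, script = KNOWLEDGE_BASE[case]
        suites[platform].append((case, script))
    return dict(suites)

def run_suites(suites):
    for platform, cases in suites.items():
        # In the patent, each suite runs in the test environment matching its
        # platform type; here we only print what would be executed.
        for case, script in cases:
            print(f"[{platform}] {case} -> {script}")

run_suites(build_test_suites(["tc_login", "tc_upload", "tc_restore"]))
```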
-
Patent number: 10534700
Abstract: Example implementations relate to separating verifications from test executions. Some implementations may include a data capture engine that captures data points during test executions of the application under test. The data points may include, for example, application data, test data, and environment data. Additionally, some implementations may include a data correlation engine that correlates each of the data points with a particular test execution state of the application under test based on a sequence of events that occurred during the particular test execution state. Furthermore, some implementations may also include a test verification engine that, based on the correlation of the data points, verifies an actual behavior of the application under test separately from the particular test execution state.
Type: Grant
Filed: December 9, 2014
Date of Patent: January 14, 2020
Assignee: MICRO FOCUS LLC
Inventors: Inbar Shani, Ilan Shufer, Amichai Nitsan
-
Patent number: 10534701
Abstract: A system for providing an API-driven continuous test platform is disclosed. The system may include one or more processors, a test engine, one or more test agents, and a database. The system may prepare (according to a configuration file) a first test configuration comprising a first selection of the one or more test agents, execute (using the test engine) the first test configuration to produce one or more first test results, and store (using the database) the one or more first test results. Finally, the system may process (using a continuous integration and continuous delivery (CI/CD) pipeline) the first test results by performing at least one of the following CI/CD processes: updating a central code base of an enterprise production environment, rejecting at least one code snippet processed by the test engine during execution of the first test configuration, and marking the first test results as inconclusive.
Type: Grant
Filed: June 17, 2019
Date of Patent: January 14, 2020
Assignee: CAPITAL ONE SERVICES, LLC
Inventors: Govind Pande, Pritam Sarkar, Agnibrata Nayak, Theodore Kayes, Sunil Palla, Mark Mikkelson, Pradosh Sivadoss
-
Patent number: 10534702
Abstract: An information processing method to be executed by a processor executing instructions in a memory, the information processing method includes allocating, in a first area of the storage area, an area having a predetermined size to an application, determining whether a processing area to be used when processing of the application is executed is able to be reserved in the first area, and upon condition that it is determined that the processing area is able to be reserved in the first area, reserving the processing area in the first area as the allocated area having the predetermined size to an application, and upon condition that it is determined that the processing area is not able to be reserved in the first area, trying to reserve the processing area in a second area in the storage area, and performing notification upon condition that the processing area is not able to be reserved in the second area.
Type: Grant
Filed: September 27, 2016
Date of Patent: January 14, 2020
Assignee: Canon Kabushiki Kaisha
Inventor: Takao Ikuno
-
Patent number: 10534703
Abstract: A memory system may include a nonvolatile memory device including a plurality of blocks each including a plurality of pages, and a controller that selects a mapping block from the plurality of blocks, stores address information corresponding to each of other blocks, except for the mapping block and a free block among the plurality of blocks, in each of the plurality of pages, searches for a block including no valid page among the other blocks, and invalidates a page of the mapping block storing the address information corresponding to the searched block.
Type: Grant
Filed: March 4, 2016
Date of Patent: January 14, 2020
Assignee: SK hynix Inc.
Inventor: Jong-Min Lee
-
Patent number: 10534704
Abstract: A controller includes a memory suitable for storing valid data of first data in a first data region and storing second data in a second data region, wherein the first data includes the valid data and dummy data; a translation unit suitable for performing a first translation operation of changing the first data to the valid data by eliminating the dummy data from the first data, performing a second translation operation of changing the valid data to the first data by adding the dummy data to the valid data, and exchanging the valid data with the memory; and a processor suitable for exchanging the first data with the translation unit, and exchanging the second data with the memory.
Type: Grant
Filed: June 29, 2017
Date of Patent: January 14, 2020
Assignee: SK hynix Inc.
Inventors: Byeong-Gyu Park, Kyu-Min Lee
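A toy Python sketch of the two translation operations in this abstract: the first strips dummy padding so only valid data is stored, the second re-adds padding when the data is exchanged back. The block size, function names, and the idea of tracking the valid length explicitly are assumptions for illustration only.

```python
BLOCK = 8  # hypothetical alignment that the "first data" format expects

def add_dummy(valid: bytes) -> bytes:
    """Second translation: pad valid data out to the block size with dummy zero bytes."""
    pad = (-len(valid)) % BLOCK
    return valid + bytes(pad)

def strip_dummy(first_data: bytes, valid_len: int) -> bytes:
    """First translation: drop the dummy tail, keeping only the valid prefix."""
    return first_data[:valid_len]

valid = b"payload"
first = add_dummy(valid)                          # what the processor exchanges with the translation unit
assert strip_dummy(first, len(valid)) == valid    # what is stored in the first data region
print(first, len(first))
```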
-
Patent number: 10534705
Abstract: A memory system includes: a memory device that includes a plurality of memory blocks each of which includes a plurality of pages for storing data; and a controller that includes a first memory, wherein the controller performs a foreground operation and a background operation onto the memory blocks, checks priorities and weights for the foreground operation and the background operation, schedules queues corresponding to the foreground operation and the background operation based on the priorities and the weights, allocates regions corresponding to the scheduled queues to the first memory, and performs the foreground operation and the background operation through the regions allocated to the first memory.
Type: Grant
Filed: March 5, 2018
Date of Patent: January 14, 2020
Assignee: SK hynix Inc.
Inventor: Jong-Min Lee
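A small Python sketch of scheduling queues by priority and weight, in the spirit of this abstract; the scoring rule (lower effective score served first, score raised by 1/weight after service) is an assumed illustrative policy, not the patented scheduling method.

```python
class WeightedQueueScheduler:
    """Toy weighted-priority scheduler for foreground/background operation queues."""
    def __init__(self):
        self._scores, self._weights, self._queues = {}, {}, {}

    def add_queue(self, name, priority, weight):
        self._scores[name] = float(priority)
        self._weights[name] = weight
        self._queues[name] = []

    def submit(self, name, op):
        self._queues[name].append(op)

    def next_op(self):
        ready = [n for n, q in self._queues.items() if q]
        if not ready:
            return None
        name = min(ready, key=lambda n: self._scores[n])  # lowest score wins
        self._scores[name] += 1.0 / self._weights[name]   # heavier queues served more often
        return name, self._queues[name].pop(0)

sched = WeightedQueueScheduler()
sched.add_queue("foreground", priority=0, weight=4)   # host reads/writes
sched.add_queue("background", priority=1, weight=1)   # e.g. garbage collection
for op in ("w1", "w2", "w3"):
    sched.submit("foreground", op)
sched.submit("background", "gc1")
while (item := sched.next_op()):
    print(item)
```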
-
Patent number: 10534706
Abstract: A system for data management includes a root object, zero, one, or more member objects, and a processor. The root object is associated with a garbage collection root metadata. Each of the one or more member objects is associated with a garbage collection member metadata. The root object has a direct relation or an indirect relation to each of the one or more member objects. The processor is to determine that an object is the root object; determine whether the root object and the one or more member objects are to be garbage collected; and garbage collect the root object and the one or more member objects in the event that the root object and the one or more member objects are to be garbage collected.
Type: Grant
Filed: December 15, 2015
Date of Patent: January 14, 2020
Assignee: Workday, Inc.
Inventors: Seamus Donohue, Sergio Mendiola Cruz, Ken Pugsley, John Levey, Gerald Green, Iacopo Pace
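A compact Python sketch of the root/member idea in this abstract: walk from a root object to its directly and indirectly related members, and collect the whole group only when all of them are eligible. The collectible flag stands in for the garbage collection metadata; all names are illustrative.

```python
class ManagedObject:
    """Toy object graph node with a stand-in for garbage collection metadata."""
    def __init__(self, name, collectible=True):
        self.name = name
        self.collectible = collectible
        self.members = []            # direct relations; indirect ones via chains

def reachable_members(root):
    seen, stack = [], [root]
    while stack:
        obj = stack.pop()
        if obj in seen:
            continue
        seen.append(obj)
        stack.extend(obj.members)
    return seen

def maybe_collect(root, heap):
    group = reachable_members(root)
    # Collect the root and all its members only if every one of them is eligible.
    if all(o.collectible for o in group):
        for o in group:
            heap.discard(o)
        return True
    return False

a, b, c = ManagedObject("root"), ManagedObject("m1"), ManagedObject("m2")
a.members, b.members = [b], [c]
heap = {a, b, c}
print(maybe_collect(a, heap), len(heap))   # True 0
```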
-
Patent number: 10534707
Abstract: The present disclosure provides a technique of suppressing competition of processes in a semiconductor device employing a multilayer bus configuration. A semiconductor device employing a multilayer bus configuration includes a control device controlling an access from each of bus masters to each memory, and a storage device for storing a corresponding relation between identification information identifying a storage region included in each memory and a group to which the storage region belongs.
Type: Grant
Filed: August 9, 2018
Date of Patent: January 14, 2020
Assignee: RENESAS ELECTRONICS CORPORATION
Inventor: Takashi Yamaguchi
-
Patent number: 10534708
Abstract: Embodiments relate to efficiently replicating data from a source storage space to a target storage space. The storage spaces share a common namespace of paths where content units are stored. A shallow cache is maintained for the target storage space. Each entry in the cache includes a hash of a content unit in the target storage space and associated hierarchy paths in the target storage space where the corresponding content unit is stored. When a set of content units in the source storage space is to be replicated at the target storage space, any content unit with a hash in the cache is replicated from one of the associated paths in the cache, thus avoiding having to replicate content from the source storage space.
Type: Grant
Filed: June 25, 2018
Date of Patent: January 14, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Ross Neal Barker, Adrian Sufaru
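A short Python sketch of the shallow cache described in this abstract: the target keeps a map from content hashes to the paths already holding that content, so a matching content unit is copied from a local path instead of being transferred from the source. Class and method names are assumed for illustration.

```python
import hashlib

def sha(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class TargetSpace:
    """Toy target storage space with a shallow cache: hash -> paths holding that content."""
    def __init__(self):
        self.files = {}          # path -> content
        self.shallow_cache = {}  # hash -> set of paths on the target

    def write(self, path, data):
        self.files[path] = data
        self.shallow_cache.setdefault(sha(data), set()).add(path)

def replicate(source_files, target: TargetSpace):
    for path, data in source_files.items():
        local_paths = target.shallow_cache.get(sha(data))
        if local_paths:
            # Content already exists on the target: copy it from a local path
            # instead of transferring the bytes from the source storage space.
            data = target.files[next(iter(local_paths))]
            print(f"{path}: local copy (transfer avoided)")
        else:
            print(f"{path}: transferred from source")
        target.write(path, data)

target = TargetSpace()
target.write("/a/file1", b"hello")
replicate({"/b/file1": b"hello", "/b/file2": b"world"}, target)
```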
-
Patent number: 10534709
Abstract: A data storage device includes a write cache, a non-volatile memory and a controller coupled to the write cache and to the non-volatile memory. The controller is configured to, responsive to receiving a plurality of flush commands, write all data from the write cache to the non-volatile memory while executing fewer than all of the plurality of flush commands.
Type: Grant
Filed: August 31, 2016
Date of Patent: January 14, 2020
Assignee: SanDisk Technologies LLC
Inventors: Hadas Oshinsky, Rotem Sela, Amir Shaharabany
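A toy Python model of the coalescing behavior this abstract describes: a burst of flush commands is satisfied by writing the whole write cache to non-volatile memory once, so fewer flushes are physically executed than were received. Structure and names are illustrative only.

```python
class WriteCacheController:
    """Toy controller that satisfies a burst of flush commands with one physical flush."""
    def __init__(self):
        self.write_cache = []
        self.nvm = []
        self.flushes_executed = 0

    def write(self, data):
        self.write_cache.append(data)

    def handle_flush_commands(self, count):
        # `count` flush commands arrive together; all cached data is written to
        # the non-volatile memory while executing fewer than `count` flushes.
        if self.write_cache:
            self.nvm.extend(self.write_cache)
            self.write_cache.clear()
            self.flushes_executed += 1   # one flush covers the whole burst

ctrl = WriteCacheController()
for d in ("a", "b", "c"):
    ctrl.write(d)
ctrl.handle_flush_commands(count=3)
print(ctrl.nvm, ctrl.flushes_executed)   # ['a', 'b', 'c'] 1
```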
-
Patent number: 10534710
Abstract: In embodiments, an apparatus may include a CC, and a LLC coupled to the CC, the CC to reserve a defined portion of the LLC where data objects whose home location is in a NVM are given placement priority. In embodiments, the apparatus may be further coupled to at least one lower level cache and a second LLC, wherein the CC may further identify modified data objects in the at least one lower level cache whose home location is in a second NVM, and in response to the identification, cause the modified data objects to be written from the lower level cache to the second LLC, the second LLC located in a same socket as the second NVM.
Type: Grant
Filed: June 22, 2018
Date of Patent: January 14, 2020
Assignee: INTEL CORPORATION
Inventors: Kshitij Doshi, Bhanu Shankar
-
Patent number: 10534711
Abstract: Replicating a primary application cache that serves a primary application on one network node into a secondary application cache that serves a secondary application on a second network node. Cache portions that are within the primary application cache are identified, and then identifiers (but not the cache portions) are transferred to the second network node. Once these identifiers are received, the cache portions that they identify may then be retrieved into the secondary application caches. This process may be repeatedly performed such that the secondary application cache moves towards the same state as the primary application cache though the state of the primary application cache also changes as the primary application operates by receiving read and write requests.
Type: Grant
Filed: November 29, 2018
Date of Patent: January 14, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Nikhil Teletia, Jae Young Do, Kwanghyun Park, Jignesh M. Patel
-
Patent number: 10534712
Abstract: A method for service level agreement (SLA) allocation of resources of a cache memory of a storage system, the method may include monitoring, by a control layer of the storage system, actual performances of the storage system that are related to multiple logical volumes; calculating actual-to-required relationships between the actual performances and SLA defined performances of the multiple logical volumes; assigning caching priorities, to different logical volumes of the multiple logical volumes; wherein the assigning is based on, at least, the actual-to-required relationships; and managing, based on at least the caching priorities, a pre-cache memory module that is upstream to the cache module and is configured to store write requests that (i) are associated with one or more logical volumes of the different logical volumes and (ii) are received by the pre-cache memory module at points in time when the cache memory is full; wherein the managing comprises transferring one or more write requests from the pre-ca
Type: Grant
Filed: August 29, 2016
Date of Patent: January 14, 2020
Assignee: INFINIDAT LTD.
Inventors: Qun Fan, Venu Nayar, Haim Helman
-
Patent number: 10534713
Abstract: Modifying prefetch request processing. A prefetch request is received by a local computer from a remote computer. The local computer responds to a determination that execution of the prefetch request is predicted to cause an address conflict during an execution of a transaction of the local processor by determining an evaluation of the prefetch request prior to execution of the program instructions included in the prefetch request. The evaluation is based, at least in part, on (i) a comparison of a priority of the prefetch request with a priority of the transaction and (ii) a condition that exists in one or both of the local processor and the remote processor. Based on the evaluation, the local computer modifies program instructions that govern execution of the program instructions included in the prefetch request.
Type: Grant
Filed: October 30, 2018
Date of Patent: January 14, 2020
Assignee: International Business Machines Corporation
Inventors: Michael Karl Gschwind, Valentina Salapura, Chung-Lung K. Shum
-
Patent number: 10534714
Abstract: Systems, methods, and software described herein allocate cache memory to job processes executing on a processing node. In one example, a method of allocating cache memory to a plurality of job processes includes identifying the plurality of job processes executing on a processing node, and identifying a data object to be accessed by the plurality of job processes. The method further provides allocating a portion of the cache memory to each job process in the plurality of job processes and, for each job process in the plurality of job processes, identifying a segment of data from the data object, wherein the segment of data comprises a requested portion of data and a predicted portion of data. The method also includes providing the segments of data to the allocated portions of the cache memory.
Type: Grant
Filed: December 18, 2014
Date of Patent: January 14, 2020
Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventors: Michael J. Moretti, Joel Baxter, Thomas Phelan
-
Patent number: 10534715
Abstract: Operation of a multi-slice processor that includes a plurality of execution slices, a plurality of load/store slices, and one or more page walk caches, where operation includes: receiving, at a load/store slice, an instruction to be issued; determining, at the load/store slice, a process type indicating a source of the instruction to be a host process or a guest process; and determining, in accordance with an allocation policy and in dependence upon the process type, an allocation of an entry of the page walk cache, wherein the page walk cache comprises one or more entries for both host processes and guest processes.
Type: Grant
Filed: April 22, 2016
Date of Patent: January 14, 2020
Assignee: International Business Machines Corporation
Inventors: Dwain A. Hicks, Jonathan H. Raymond, George W. Rohrbaugh, III, Shih-Hsiung S. Tung
-
Patent number: 10534716
Abstract: A hybrid data storage device disclosed herein includes a main data store, one or more data storage caches, and a data storage cache management sub-system. The hybrid data storage device is configured to limit write operations on the one or more data storage caches to less than an endurance value for the data storage cache. In one implementation, the data storage cache management sub-system limits or denies requests for promotion of data from the main data store to the one or more data storage caches. In another implementation, the data storage cache management sub-system limits garbage collection operations on the data storage cache.
Type: Grant
Filed: July 13, 2016
Date of Patent: January 14, 2020
Assignee: SEAGATE TECHNOLOGY LLC
Inventors: Sumanth Jannyavula Venkata, Mark A. Gaertner, Jonathan G. Backman
-
Patent number: 10534718
Abstract: An example apparatus for memory addressing can include an array of memory cells. The apparatus can include a memory cache configured to store at least a portion of an address mapping table. The address mapping table can include a number of regions corresponding to respective amounts of logical address space of the array. The address mapping table can map translation units (TUs) to physical locations in the array. Each one of the number of regions can include a first table. The first table can include entries corresponding to respective TU logical addresses of the respective amounts of logical address space, respective pointers, and respective offsets. Each one of the number of regions can include a second table. The second table can include entries corresponding to respective physical address ranges of the array. The entries of the second table can include respective physical address fields and corresponding respective count fields.
Type: Grant
Filed: July 31, 2017
Date of Patent: January 14, 2020
Assignee: Micron Technology, Inc.
Inventor: Jonathan M. Haswell
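A small Python sketch of the two-table region layout described in this abstract: the first table maps a translation-unit logical address to a pointer into the second table plus an offset, and the second table holds physical base addresses with count fields. Field names and the lookup arithmetic are assumptions for illustration, not the patented format.

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    """Toy region of an address mapping table, loosely following the abstract."""
    # First table: TU logical address -> (pointer into second table, offset within range).
    tu_table: dict = field(default_factory=dict)
    # Second table: index -> {"base": physical base address, "count": TUs in the range}.
    range_table: dict = field(default_factory=dict)

    def map_tu(self, logical, range_idx, offset):
        self.tu_table[logical] = (range_idx, offset)
        self.range_table[range_idx]["count"] += 1

    def physical_address(self, logical):
        range_idx, offset = self.tu_table[logical]
        return self.range_table[range_idx]["base"] + offset

region = Region()
region.range_table[0] = {"base": 0x1000, "count": 0}
region.map_tu(logical=42, range_idx=0, offset=3)
print(hex(region.physical_address(42)))   # 0x1003
```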
-
Patent number: 10534719
Abstract: A data processing network includes a network of devices addressable via a system address space, the network including a computing device configured to execute an application in a virtual address space. A virtual-to-system address translation circuit is configured to translate a virtual address to a system address. A memory node controller has a first interface to a data resource addressable via a physical address space, a second interface to the computing device, and a system-to-physical address translation circuit, configured to translate a system address in the system address space to a corresponding physical address in the physical address space of the data resource. The virtual-to-system mapping may be a range table buffer configured to retrieve a range table entry comprising an offset address of a range together with a virtual address base and an indicator of the extent of the range.
Type: Grant
Filed: November 21, 2017
Date of Patent: January 14, 2020
Assignee: Arm Limited
Inventors: Jonathan Curtis Beard, Roxana Rusitoru, Curtis Glenn Dunham
-
Patent number: 10534720
Abstract: Memory management in a computer system may include allocating memory pages from a physical memory of the computer system to applications executing on the computer system. The memory pages may be associated with memory management tags. One or more memory pages may be identified for processing from the physical memory based on the memory management tags that the memory pages are associated with. The processed memory pages may then be designated as un-allocated memory pages for subsequent allocation to applications executing on the computing system.
Type: Grant
Filed: May 26, 2016
Date of Patent: January 14, 2020
Assignee: VMware, Inc.
Inventors: Chiao-Chuan Shih, Samdeep Nayak
-
Patent number: 10534721
Abstract: A cache controller determines replacement priority for cache lines at a cache based on data stored at non-cache buffers. In response to determining that a cache line at the cache is to be replaced, the cache controller identifies a set of candidate cache lines for replacement. The cache controller probes the non-cache buffers to identify any entries that are assigned to the same memory address as a candidate cache line and adjusts the replacement priorities for the candidate cache lines based on the probe responses. The cache controller deprioritizes for replacement cache lines associated with entries of the non-cache buffers.
Type: Grant
Filed: October 23, 2017
Date of Patent: January 14, 2020
Assignee: Advanced Micro Devices, Inc.
Inventor: Paul Moyer
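A brief Python sketch of the victim-selection adjustment this abstract describes: candidate cache lines whose address also appears in a non-cache buffer are deprioritized for replacement. The scoring and penalty values are illustrative assumptions.

```python
def pick_victim(candidates, non_cache_buffers):
    """Choose a cache line to replace, deprioritizing lines that a non-cache
    buffer entry still refers to (same memory address).

    candidates: {line address: base replacement priority}; higher score is
    evicted first, and a matching buffer entry subtracts a penalty.
    """
    busy_addresses = {entry for buf in non_cache_buffers for entry in buf}

    def score(line):
        penalty = 10 if line in busy_addresses else 0
        return candidates[line] - penalty

    return max(candidates, key=score)

candidates = {0x100: 5, 0x200: 7, 0x300: 6}
non_cache_buffers = [{0x200}, set()]       # e.g. a pending-write buffer still holds 0x200
print(hex(pick_victim(candidates, non_cache_buffers)))   # 0x300 is evicted, not 0x200
```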
-
Patent number: 10534723
Abstract: A system, method and computer program product are provided for conditionally eliminating a memory read request. In use, a memory read request is identified. Additionally, it is determined whether the memory read request is an unnecessary memory read request. Further, the memory read request is conditionally eliminated, based on the determination.
Type: Grant
Filed: June 23, 2017
Date of Patent: January 14, 2020
Assignee: Mentor Graphics Corporation
Inventors: Nikhil Tripathi, Venky Ramachandran, Malay Haldar, Sumit Roy, Anmol Mathur, Abhishek Roy, Mohit Kumar
-
Patent number: 10534724
Abstract: Instructions and logic support suspending and resuming migration of enclaves in a secure enclave page cache (EPC). An EPC stores a secure domain control structure (SDCS) in storage accessible by an enclave for a management process, and by a domain of enclaves. A second processor checks if a corresponding version array (VA) page is bound to the SDCS, and if so: increments a version counter in the SDCS for the page, performs an authenticated encryption of the page from the EPC using the version counter in the SDCS, and writes the encrypted page to external memory. A second processor checks if a corresponding VA page is bound to a second SDCS of the second processor, and if so: performs an authenticated decryption of the page using a version counter in the second SDCS, and loads the decrypted page to the EPC in the second processor if authentication passes.
Type: Grant
Filed: December 24, 2015
Date of Patent: January 14, 2020
Assignee: Intel Corporation
Inventors: Carlos V. Rozas, Ilya Alexandrovich, Gilbert Neiger, Francis X. McKeen, Ittai Anati, Vedvyas Shanbhogue, Mona Vij, Rebekah Leslie-Hurd, Krystof C. Zmudzinski, Somnath Chakrabarti, Vincent R. Scarlata, Simon P. Johnson
-
Patent number: 10534725
Abstract: Technology for decrypting and using a security module in a processor cache in a secure mode such that dynamic address translation prevents access to portions of the volatile memory outside of a secret store in a volatile memory.
Type: Grant
Filed: July 25, 2017
Date of Patent: January 14, 2020
Assignee: International Business Machines Corporation
Inventors: Angel Nunez Mencias, Jakob C. Lang, Martin Recktenwald, Ulrich Mayer
-
Patent number: 10534726
Abstract: Systems and methods for providing object versioning in a storage system may support the logical deletion of stored objects. In response to a delete operation specifying both a user key and a version identifier, the storage system may permanently delete the specified version of an object having the specified key. In response to a delete operation specifying a user key, but not a version identifier, the storage system may create a delete marker object that does not contain object data, and may generate a new version identifier for the delete marker. The delete marker may be stored as the latest object version of the user key, and may be addressable in the storage system using a composite key comprising the user key and the new version identifier. Subsequent attempts to retrieve the user key without specifying a version identifier may return an error, although the object was not actually deleted.
Type: Grant
Filed: May 14, 2018
Date of Patent: January 14, 2020
Assignee: Amazon Technologies, Inc.
Inventors: Jason G. McHugh, Praveen Kumar Gattu, Michael A. Ten-Pow, Derek Ernest Denny-Brown, II
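A minimal Python sketch of the delete-marker behavior this abstract describes: a versioned delete removes that one version permanently, while an unversioned delete appends a data-less marker as the latest version, so later reads of the key fail even though older versions remain. Class and method names are illustrative only.

```python
import uuid

class VersionedStore:
    """Toy versioned object store with delete markers."""
    def __init__(self):
        self.versions = {}   # user key -> list of (version id, payload or None)

    def put(self, key, data):
        vid = uuid.uuid4().hex
        self.versions.setdefault(key, []).append((vid, data))
        return vid

    def delete(self, key, version_id=None):
        if version_id is not None:
            # Versioned delete: permanently remove that one version.
            self.versions[key] = [v for v in self.versions[key] if v[0] != version_id]
            return None
        # Plain delete: store a delete marker (no object data) as the latest version.
        return self.put(key, None)

    def get(self, key):
        vid, data = self.versions[key][-1]
        if data is None:
            raise KeyError(f"{key} (latest version {vid} is a delete marker)")
        return data

store = VersionedStore()
v1 = store.put("photo.jpg", b"bytes")
store.delete("photo.jpg")                 # creates a delete marker
try:
    store.get("photo.jpg")
except KeyError as e:
    print("error:", e)
print(store.versions["photo.jpg"][0] == (v1, b"bytes"))   # older version still present
```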
-
Patent number: 10534727
Abstract: Method and apparatus for handling page protection faults in combination particularly with the dynamic conversion of binary code executable by one computing platform into binary code executed instead by another computing platform. In one exemplary aspect, a page protection fault handling unit is used to detect memory accesses, to check page protection information relevant to the detected access by examining the contents of a page descriptor store, and to selectively allow the access or pass on page protection fault information in accordance with the page protection information.
Type: Grant
Filed: October 11, 2016
Date of Patent: January 14, 2020
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Simon Murray, Geraint M. North
-
Patent number: 10534728
Abstract: A method may include, in an information handling system comprising a processor and a management controller communicatively coupled to the processor and configured to provide management of the information handling system, executing by the management controller a management application for management of one or more storage resources of the information handling system, determining by the management controller whether one or more processor-attached storage resources are present in the information handling system, wherein the one or more processor-attached storage resources are coupled to the processor by other than a backplane of the information handling system, and responsive to determining that one or more processor-attached storage resources are present, executing by the management controller an adaptable virtual backplane that emulates a physical backplane to the management application as if the physical backplane were interfaced between the management application and the processor-attached storage resources.
Type: Grant
Filed: April 5, 2018
Date of Patent: January 14, 2020
Assignee: Dell Products L.P.
Inventors: Chandrasekhar Mugunda, Yogesh P. Kulkarni, Balaji Bapu Gururaja Rao, Shivabasava Karibasa Komaranalli, Robert R. Leyendecker
-
Patent number: 10534729
Abstract: An inter-die data transfer system includes a receiver circuit in a receiver die coupled to a sender circuit in a sender die through a bus. The receiver circuit includes a safe sample selection circuit and a latency adjustment circuit. The safe sample selection circuit receives from the sender circuit a plurality of training data signals, and determines a safe sample selection signal for a first bit of the bus. The latency adjustment circuit determines a latency adjustment selection signal for the first bit of the bus. A user data safe sample is selected using the safe sample selection signal from a plurality of user data samples associated with a first user data input signal associated with the first bit of the bus. Latency adjustment is performed to the user data safe sample to generate a first user data output signal using the latency adjustment selection signal.
Type: Grant
Filed: May 14, 2018
Date of Patent: January 14, 2020
Assignee: XILINX, INC.
Inventors: Pongstorn Maidee, Theepan Moorthy
-
Patent number: 10534730
Abstract: A first processor that has a trusted relationship with a trusted memory region (TMR) that includes a first region for storing microcode used to execute a microcontroller on a second processor and a second region for storing data associated with the microcontroller. The microcontroller supports a virtual function that is executed on the second processor. An access controller is configured by the first processor to selectively provide the microcontroller with access to the TMR based on whether the request is to write in the first region. The access controller grants read requests from the microcontroller to read from the first region and denies write requests from the microcontroller to write to the first region. The access controller grants requests from the microcontroller to read from the second region or write to the second region.
Type: Grant
Filed: December 20, 2018
Date of Patent: January 14, 2020
Assignee: ATI Technologies ULC
Inventors: Kathirkamanathan Nadarajah, Anthony Asaro
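A tiny Python sketch of the access rules stated in this abstract: the microcontroller may read but not write the microcode region, and may read or write the data region. The class and region names are illustrative, not taken from the patent.

```python
class TMRAccessController:
    """Toy access check mirroring the abstract's rules for the trusted memory region."""
    MICROCODE, DATA = "microcode", "data"

    def allow(self, requester, region, op):
        if requester != "microcontroller":
            return False                  # only the microcontroller is modeled here
        if region == self.MICROCODE:
            return op == "read"           # writes to the microcode region are denied
        if region == self.DATA:
            return op in ("read", "write")
        return False

ac = TMRAccessController()
for region, op in [("microcode", "read"), ("microcode", "write"),
                   ("data", "read"), ("data", "write")]:
    print(region, op, ac.allow("microcontroller", region, op))
# microcode read True / microcode write False / data read True / data write True
```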
-
Patent number: 10534731
Abstract: The present disclosure includes an interface for memory having a cache and multiple independent arrays. An embodiment includes a memory device having a cache and a plurality of independent memory arrays, a controller, and an interface configured to communicate a plurality of commands from the controller to the memory device, wherein the interface includes a pin configured to activate upon a first one of the plurality of commands being received by the memory device and deactivate once all of the plurality of commands have been executed by the memory device.
Type: Grant
Filed: March 19, 2018
Date of Patent: January 14, 2020
Assignee: Micron Technology, Inc.
Inventors: Dionisio Minopoli, Gianfranco Ferrante, Antonino Caprì, Emanuele Confalonieri, Daniele Balluchi
-
Exposing memory-mapped IO devices to drivers by emulating PCI bus and PCI device configuration space
Patent number: 10534732
Abstract: Devices are emulated as PCI devices so that existing PCI drivers can be used for the devices. This is accomplished by creating a shim PCI device with an emulated PCI configuration space, accessed via an emulated PCI Extended Configuration Access Mechanism (ECAM) space which is emulated by accesses to trapped unbacked memory addresses. When system software accesses the PCI ECAM space to probe for PCI configuration data or program base address registers of the PCI ECAM space, an exception is raised and the exception is handled by a secure monitor that is executing at a higher privilege level than the system software. The secure monitor in handling the exception emulates the PCI configuration space access of the emulated PCI device corresponding to the ECAM address accessed, such that system software may discover the device and bind and appropriately configure a PCI driver to it with the right IRQ and memory base ranges.
Type: Grant
Filed: June 29, 2015
Date of Patent: January 14, 2020
Assignee: VMware, Inc.
Inventors: Andrei Warkentin, Harvey Tuch, Alexander Fainkichen
-
Patent number: 10534733
Abstract: Techniques for configuring a system may include selecting one of a plurality of I/O slots to be allocated a number of lanes connected to a processor; and responsive to selecting the one I/O slot, sending a selection signal to a multiplexer that selects the one I/O slot from the plurality of I/O slots and configures the number of lanes for use by the one I/O slot where the number of lanes connect the one I/O slot to the processor. The system may be a data storage system and the lanes may be PCIe lanes used for data transmission. For each I/O slot, an I/O module may be inserted, removed or replaced (e.g., removed and then replaced with a new I/O card). A management controller may select the one I/O slot and send the selection signal in accordance with one or more policies. The system may support hot plug I/O modules.
Type: Grant
Filed: April 26, 2018
Date of Patent: January 14, 2020
Assignee: EMC IP Holding Company LLC
Inventors: Walter A. O'Brien, III, Matthew J. Borsini
-
Patent number: 10534734
Abstract: A processor/endpoint communication coupling configuration system includes a plurality of processing subsystems coupled to a multi-endpoint adapter device by a plurality of communication couplings included on at least one hardware subsystem. A communication coupling configuration engine identifies each at least one hardware subsystem, determines at least one communication coupling configuration capability of the plurality of communication couplings, and determines at least one first multi-endpoint adapter device capability of the multi-endpoint adapter device.
Type: Grant
Filed: April 26, 2019
Date of Patent: January 14, 2020
Assignee: Dell Products L.P.
Inventors: Timothy M. Lambert, Hendrich M. Hernandez, Yogesh Varma, Kurtis John Bowman, Shyamkumar T. Iyer, John Christopher Beckett
-
Patent number: 10534735
Abstract: In a virtualized computer system in which a guest operating system runs on a virtual machine of a virtualized computer system, a computer-implemented method of providing the guest operating system with direct access to a hardware device coupled to the virtualized computer system via a communication interface, the method including: (a) obtaining first configuration register information corresponding to the hardware device, the hardware device connected to the virtualized computer system via the communication interface; (b) creating a passthrough device by copying at least part of the first configuration register information to generate second configuration register information corresponding to the passthrough device; and (c) enabling the guest operating system to directly access the hardware device corresponding to the passthrough device by providing access to the second configuration register information of the passthrough device.
Type: Grant
Filed: April 23, 2018
Date of Patent: January 14, 2020
Assignee: VMware, Inc.
Inventors: Mallik Mahalingam, Michael Nelson
-
Patent number: 10534736
Abstract: A system includes a display subsystem. The display subsystem includes a shared buffer having allocated portions, each allocated to one of a plurality of display threads, each display thread associated with a display peripheral. The display subsystem also includes a direct memory access (DMA) engine configured to receive a request from a main processor to deallocate an amount of space from a first allocated portion associated with a first display thread. In response to receiving the request, the DMA engine deallocates the amount of space from the first allocated portion and shifts the allocated portions of at least some of other display threads to maintain contiguity of the allocated portions and concatenate free space at an end of the shared buffer.
Type: Grant
Filed: December 31, 2018
Date of Patent: January 14, 2020
Assignee: TEXAS INSTRUMENTS INCORPORATED
Inventors: Anish Reghunath, Brian Chae, Jay Scott Salinger, Chunheng Luo
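A small Python model of the deallocate-and-shift behavior in this abstract: one thread's portion shrinks, offsets of the remaining portions are recomputed so the allocations stay contiguous, and the freed space ends up at the end of the shared buffer. Data layout and names are illustrative assumptions.

```python
def deallocate_and_shift(allocations, thread, amount):
    """Shrink one display thread's allocation, then recompute offsets so the
    remaining allocations stay contiguous and freed space is concatenated at
    the end of the shared buffer.

    allocations: ordered list of dicts with "thread" and "size" keys.
    Returns (updated allocations, offset where free space begins).
    """
    offset = 0
    for alloc in allocations:
        if alloc["thread"] == thread:
            alloc["size"] -= amount          # deallocate from this thread's portion
        alloc["offset"] = offset             # shift later portions down
        offset += alloc["size"]
    return allocations, offset

allocs = [{"thread": "display0", "size": 4096},
          {"thread": "display1", "size": 8192},
          {"thread": "display2", "size": 2048}]
allocs, free_start = deallocate_and_shift(allocs, "display1", 4096)
for a in allocs:
    print(a)
print("free space begins at", free_start)
```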
-
Patent number: 10534737
Abstract: A method for accelerating distributed stream processing. The method includes allocating a hardware accelerator to a topology for distributed stream processing. The topology includes a spout and a plurality of bolts. The spout is configured to prepare a plurality of tuples. The plurality of bolts are configured to process the plurality of tuples and include at least one proxy bolt. The proxy bolt is configured to perform a proxy operation on an input tuple of the plurality of tuples. The method further includes obtaining a customized hardware accelerator by customizing the hardware accelerator based on the proxy operation, sending the input tuple from the proxy bolt to the customized hardware accelerator, generating an output tuple of the plurality of tuples by performing the proxy operation on the input tuple in the customized hardware accelerator, and sending the output tuple from the customized hardware accelerator to the proxy bolt.
Type: Grant
Filed: April 29, 2019
Date of Patent: January 14, 2020
Inventors: Nima Kavand, Armin Darjani, Hamid Nasiri Bezenjani, Maziar Goudarzi
-
Patent number: 10534738
Abstract: A system includes a host interface, a storage interface, and one or more control circuits coupled to the host interface and coupled to the storage interface. The one or more control circuits include a common set of registers configured to maintain first entries according to a first storage protocol for first storage devices connected to the storage interface and to maintain second entries according to a second storage protocol for second storage devices connected to the storage interface.
Type: Grant
Filed: January 17, 2018
Date of Patent: January 14, 2020
Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
Inventors: Kumar Ranjan, Sunny Koul
-
Patent number: 10534739
Abstract: A bus between a requester and a target component includes a portion dedicated to carry information indicating a privilege level, from among a plurality of privilege levels, of machine-readable instructions executed on the requester.
Type: Grant
Filed: October 31, 2014
Date of Patent: January 14, 2020
Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Inventors: Maugan Villatel, David Plaquin, Chris I. Dalton
-
Patent number: 10534740
Abstract: Provided is a multi-master collision prevention system including: a plurality of functional blocks including a plurality of external modules and a plurality of internal modules performing different functions; a plurality of interfaces respectively connected to the plurality of external modules, respectively; a plurality of dedicated registers including priority information of the plurality of functional blocks and connected to the plurality of functional blocks, respectively; a common block selectively connected to the plurality of functional blocks, and configured to function as a master for controlling the common blocks when the plurality of functional blocks are connected to the common block; and a priority determination unit configured to determine a connection between any one of the plurality of functional blocks and the common block.
Type: Grant
Filed: November 29, 2018
Date of Patent: January 14, 2020
Assignee: HYUNDAI AUTRON CO., LTD.
Inventors: Kee Beom Kim, Young Suk Kim
-
Patent number: 10534741
Abstract: Example implementations relate to transmitting signals via USB ports. For example, a system according to the present disclosure may include a host module including a plurality of USB ports, a first expansion module, and a second expansion module. The first expansion module may include a first USB port and a second USB port. The first expansion module may receive a signal from the host module at a first USB port, and direct the signal to a second USB port. The first expansion module may transmit the signal to a second expansion module via a second USB port.
Type: Grant
Filed: July 13, 2016
Date of Patent: January 14, 2020
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Chi So, Nam H Nguyen, Chien-Hao Lu, Roger D Benson
-
Patent number: 10534742
Abstract: A system and method for enabling hot-plugging of devices in virtualized systems. A hypervisor obtains respective values representing respective quantities of a resource for a plurality of virtual root buses of a virtual machine (VM). The hypervisor determines a first set of address ranges of the resource that are allocated for one or more virtual devices attached to at least one of the plurality of virtual root buses. The hypervisor determines, in view of the first set of allocated address ranges, a second set of address ranges of the resource available for attaching one or more additional virtual devices to at least one of the plurality of virtual root buses. The hypervisor assigns to the plurality of virtual root buses non-overlapping respective address ranges of the resource within the second set.
Type: Grant
Filed: October 23, 2018
Date of Patent: January 14, 2020
Assignee: RED HAT ISRAEL, LTD.
Inventors: Marcel Apfelbaum, Michael Tsirkin
-
Patent number: 10534743
Abstract: A device and method for providing performance information about a processing device. A stream of performance data is generated by one or more devices whose performance is reflected in the performance data. This performance data stream is then provided to a parallel port for outputting thereof.
Type: Grant
Filed: October 16, 2014
Date of Patent: January 14, 2020
Assignee: Advanced Micro Devices, Inc.
Inventor: Elizabeth Morrow Cooper
-
Patent number: 10534744
Abstract: A method, system and computer-usable medium are disclosed for performing a network traffic combination operation. With the network traffic combination operation, a plurality of input queues are defined by an operating system for an adapter based upon workload type (e.g., as determined by a transport layer). Additionally, the operating system defines each input queue to match a virtual memory architecture of the transport layer (e.g., one input queue is defined as 31 bit and other input queue is defined as 64 bit). When data is received off the wire as inbound data from a physical NIC, the network adapter associates the inbound data with the appropriate memory type. Thus, data copies are eliminated and memory consumption and associated storage management operations are reduced for the smaller bit architecture communications while allowing the operating system to continue executing in a larger bit architecture configuration.
Type: Grant
Filed: August 10, 2015
Date of Patent: January 14, 2020
Assignee: International Business Machines Corporation
Inventors: Patrick G. Brown, Michael J. Fox, Jeffrey D. Haggar, Jerry W. Stevens
-
Patent number: 10534746
Abstract: A system and method for issuing commands to remote devices comprising determining a criterion that forms a rule for a service, the service comprising a service property, a service method, and a service event, distributing the rule to a behavior engine on a programmable device, the behavior engine comprising a set of rules, and evaluating, at the behavior engine, if a trigger criterion for the rule is met. Upon determining that the trigger criterion is met, the method may further comprise performing an action comprising evaluating, at the behavior engine, if a condition is met, and upon determining that the condition is met, issuing a command to perform a first action comprising setting a service property and calling a service method for all devices including the service property within a scope of the action, defining an action scope.
Type: Grant
Filed: May 9, 2017
Date of Patent: January 14, 2020
Assignee: Droplit, Inc.
Inventors: Bryan Jenks, Nikolas Doukellis, Christopher Woodle
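A short Python sketch of the trigger/condition/action flow in this abstract: rules are distributed to a behavior engine, and when a trigger criterion is met and the condition holds, the action sets a service property on the devices in scope. The Rule/BehaviorEngine structure and the lamp example are assumptions for illustration.

```python
class Rule:
    """Toy trigger/condition/action rule for a behavior engine."""
    def __init__(self, trigger, condition, action):
        self.trigger, self.condition, self.action = trigger, condition, action

class BehaviorEngine:
    def __init__(self, devices):
        self.devices = devices       # device name -> dict of service properties
        self.rules = []

    def distribute(self, rule):
        self.rules.append(rule)

    def on_event(self, event):
        for rule in self.rules:
            if rule.trigger(event) and rule.condition(self.devices):
                rule.action(self.devices)   # e.g. set a property on devices in scope

devices = {"lamp1": {"power": "off"}, "lamp2": {"power": "off"}}
engine = BehaviorEngine(devices)
engine.distribute(Rule(
    trigger=lambda e: e == "motion_detected",
    condition=lambda d: all(v["power"] == "off" for v in d.values()),
    action=lambda d: [v.update(power="on") for v in d.values()],
))
engine.on_event("motion_detected")
print(devices)   # both lamps switched on
```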
-
Patent number: 10534747
Abstract: Technologies for providing a scalable architecture to efficiently perform compute operations in memory include a memory having media access circuitry coupled to a memory media. The media access circuitry is to access data from the memory media to perform a requested operation, perform, with each of multiple compute logic units included in the media access circuitry, the requested operation concurrently on the accessed data, and write, to the memory media, resultant data produced from execution of the requested operation.
Type: Grant
Filed: March 29, 2019
Date of Patent: January 14, 2020
Assignee: Intel Corporation
Inventors: Shigeki Tomishima, Srikanth Srinivasan, Chetan Chauhan, Rajesh Sundaram, Jawad B. Khan