Patent Applications Published on April 18, 2024
  • Publication number: 20240126685
    Abstract: Methods, systems, and devices for a dynamic voltage supply for a memory circuit are described. An apparatus may adjust a supply voltage based on a process corner and a temperature of the memory system. An apparatus may include a memory array and a controller. The controller may determine a first temperature of the apparatus is less than a first temperature threshold at a first time. The controller may transition a voltage supplied to the controller from a first voltage level to a second voltage level based on determining the first temperature is less than the first temperature threshold. The controller may determine a second temperature is greater than a second temperature threshold at a second time. The controller may transition the voltage supplied to the controller from the second voltage level to the first voltage level based on determining the second temperature is greater than the second temperature threshold.
    Type: Application
    Filed: April 27, 2021
    Publication date: April 18, 2024
    Inventors: Hua Tan, Junjun Wang, De Hua Guo
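The two-threshold transition described above is, in effect, a temperature hysteresis loop. A minimal sketch in Python — all class/method names, voltage levels, and threshold values are illustrative assumptions, not taken from the filing:

```python
class VoltageController:
    """Sketch of the abstract's temperature-driven supply-voltage transitions."""

    def __init__(self, v_high=1.2, v_low=1.0,
                 low_temp_threshold=25.0, high_temp_threshold=70.0):
        self.v_high = v_high            # first voltage level
        self.v_low = v_low              # second (reduced) voltage level
        self.low_temp = low_temp_threshold
        self.high_temp = high_temp_threshold
        self.supply = v_high            # start at the first voltage level

    def on_temperature_sample(self, temp_c):
        # Below the first threshold: drop to the second voltage level.
        if temp_c < self.low_temp:
            self.supply = self.v_low
        # Above the second threshold: return to the first voltage level.
        elif temp_c > self.high_temp:
            self.supply = self.v_high
        # Between the thresholds the supply holds its last value (hysteresis).
        return self.supply
```

Keeping the two thresholds apart prevents the supply from oscillating when the temperature hovers near a single cut-over point.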
  • Publication number: 20240126686
    Abstract: A system includes a host device, a hardware offload engine, and a non-volatile storage to store on-disk data. The hardware offload engine is represented to the host device as being a storage having a virtual storage capacity, and the host device transmits an offload command to the hardware offload engine as a data write command without requiring kernel changes or special drivers.
    Type: Application
    Filed: December 27, 2023
    Publication date: April 18, 2024
    Inventors: Ping Zhou, Kan Frankie Fan, Hui Zhang
  • Publication number: 20240126687
    Abstract: An apparatus comprises a processing device configured to initiate garbage collection for data pages stored in local storage of a storage node of a storage system. The processing device is also configured to determine, for a given data page stored in the local storage of the storage node, a validity score characterizing a size of changed data in the given data page, and to compare the validity score for the given data page to at least one designated threshold. The processing device is further configured to update a given page object for the given data page in an object store of persistent storage responsive to a first comparison result, and to generate, in the object store of the persistent storage, a page delta object for the given data page responsive to a second comparison result, the page delta object comprising the changed data in the given data page.
    Type: Application
    Filed: October 12, 2022
    Publication date: April 18, 2024
    Inventors: Doron Tal, Amitai Alkalay
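The abstract's threshold comparison can be sketched in a few lines. Which "comparison result" maps to which action is not spelled out, so this sketch assumes a large fraction of changed data triggers a full page-object update while a small fraction yields a compact delta object; all names are illustrative:

```python
def process_page(page_id, changed_bytes, page_size, threshold, object_store):
    """Score a page by its fraction of changed data, then either rewrite the
    whole page object or emit a smaller delta object holding only the change."""
    validity_score = changed_bytes / page_size
    if validity_score >= threshold:
        # First comparison result (assumed): update the page object itself.
        object_store[("page", page_id)] = changed_bytes
        return "page_object"
    # Second comparison result (assumed): store only the changed data.
    object_store[("delta", page_id)] = changed_bytes
    return "page_delta_object"
```

Writing a delta instead of the whole page keeps garbage collection from amplifying writes to the persistent object store when only a sliver of a page changed.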
  • Publication number: 20240126688
    Abstract: Techniques for lazy compaction are disclosed, including: selecting, by a garbage collector, multiple regions of a memory for inclusion in a relocation set; populating, by the garbage collector, a lazy free list (LFL) with the multiple regions selected for inclusion in the relocation set; subsequent to populating the LFL: determining, by an allocator, that an ordinary free list managed by the garbage collector is depleted; responsive to determining that the ordinary free list is depleted: selecting a region in the LFL; executing one or more load barriers associated respectively with one or more objects marked as live in the region, each respective load barrier being configured to relocate the associated object from the region if the associated object is still live; subsequent to executing the one or more load barriers: allocating the region.
    Type: Application
    Filed: October 17, 2022
    Publication date: April 18, 2024
    Applicant: Oracle International Corporation
    Inventors: Erik Österlund, Stefan Mats Rikard Karlsson
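The allocator-side flow — ordinary free list first, then the lazy free list with load barriers — can be sketched as follows. The region/object representation is an assumption for illustration; the real collector operates on heap regions, not dicts:

```python
from collections import deque

def allocate_region(ordinary_free_list, lazy_free_list, relocate):
    """Prefer the ordinary free list; when it is depleted, take a region off
    the lazy free list (LFL), run a load barrier for each object still marked
    live to relocate it out of the region, then hand the emptied region back
    to the allocator."""
    if ordinary_free_list:
        return ordinary_free_list.popleft()
    region = lazy_free_list.popleft()
    for obj in list(region["live"]):
        relocate(obj)               # load barrier: move the live object away
        region["live"].remove(obj)
    return region
```

The point of the laziness is visible here: relocation work is deferred until an allocation actually needs the region, rather than being paid eagerly at collection time.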
  • Publication number: 20240126689
    Abstract: A system can determine a first correlation between respective percentages of stored garbage and respective amounts of garbage of a block storage system based on determining the respective amounts of garbage among first blocks of the respective blocks that satisfy respective criteria of the respective percentages of stored garbage. The system can, based on the first correlation, determine a second correlation between an estimated throughput applicable to reclaiming garbage in the block storage system and the respective amounts of garbage of the block storage system. The system can, based on the first correlation and the second correlation and for a specified target reclamation throughput, determine a corresponding first percentage of stored garbage of the respective percentages of stored garbage. The system can perform copy-forward garbage collection on second blocks of the block storage system that satisfy a criterion defined with respect to the first percentage of stored garbage.
    Type: Application
    Filed: October 13, 2022
    Publication date: April 18, 2024
    Inventors: Yi Ye, Kalyan C. Gunda, Ao Sun
  • Publication number: 20240126690
    Abstract: A memory system includes a memory array having a plurality of memory cells; and a controller coupled to the memory array, the controller configured to: designate a storage mode for a target set of memory cells based on valid data in a source block, wherein the target set of memory cells are configured with a capacity to store up to a maximum number of bits per cell, and the storage mode is for dynamically configuring the target set of memory cells as cache memory that stores fewer bits per cell than the corresponding maximum capacity.
    Type: Application
    Filed: December 22, 2023
    Publication date: April 18, 2024
    Inventors: Kishore Kumar Muchherla, Peter Feeley, Ashutosh Malshe, Daniel J. Hubbard, Christopher S. Hale, Kevin R. Brandt, Sampath K. Ratnam, Yun Li, Marc S. Hamilton
  • Publication number: 20240126691
    Abstract: Technologies for cryptographic separation of MMIO operations with an accelerator device include a computing device having a processor and an accelerator. The processor establishes a trusted execution environment. The accelerator receives a memory-mapped I/O transaction that includes a first authentication tag, determines, based on a target memory address, a first memory address range associated with the transaction, and generates a second authentication tag using a first cryptographic key from a set of cryptographic keys, wherein the first key is uniquely associated with the first memory address range. An accelerator validator determines whether the first authentication tag matches the second authentication tag, and a memory mapper commits the memory-mapped I/O transaction in response to a determination that the first authentication tag matches the second authentication tag. Other embodiments are described and claimed.
    Type: Application
    Filed: September 7, 2023
    Publication date: April 18, 2024
    Applicant: Intel Corporation
    Inventors: Luis S. Kida, Reshma Lal, Soham Jayesh Desai
  • Publication number: 20240126692
    Abstract: Memory devices and systems with post-packaging master die selection, and associated methods, are disclosed herein. In one embodiment, a memory device includes a plurality of memory dies. Each memory die of the plurality includes a command/address decoder. The command/address decoders are configured to receive command and address signals from external contacts of the memory device. The command/address decoders are also configured, when enabled, to decode the command and address signals and transmit the decoded command and address signals to every other memory die of the plurality. Each memory die further includes circuitry configured to enable, or disable, or both individual command/address decoders of the plurality of memory dies. In some embodiments, the circuitry can enable a command/address decoder of a memory die of the plurality after the plurality of memory dies are packaged into a memory device.
    Type: Application
    Filed: December 26, 2023
    Publication date: April 18, 2024
    Inventors: Evan C. Pearson, John H. Gentry, Michael J. Scott, Greg S. Gatlin, Lael H. Matthews, Anthony M. Geidl, Michael Roth, Markus H. Geiger, Dale H. Hiscock
  • Publication number: 20240126693
    Abstract: Systems and methods of the present disclosure enable intelligent dynamic caching of data by accessing an activity history of historical electronic activity data entries associated with a user account, and utilizing a trained entity relevancy machine learning model to predict a degree of relevance of each entity associated with the historical electronic activity data entries in the activity history based at least in part on model parameters and activity attributes of each electronic activity data entry. A set of relevant entities are determined based at least in part on the degree of relevance of each entity. Pre-cached entities are identified based on pre-cached entity data records cached on the user device, and un-cached relevant entities from the set of relevant entities are identified based on the pre-cached entities. The cache on the user device is updated to cache the un-cached entity data records associated with the un-cached relevant entities.
    Type: Application
    Filed: October 14, 2022
    Publication date: April 18, 2024
    Inventors: Shabnam Kousha, Lin Ni Lisa Cheng, Asher Smith-Rose, Joshua Edwards, Tyler Maiman
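Once the relevancy model has scored each entity, the remaining cache-planning step reduces to set arithmetic: keep the relevant entities, subtract those already pre-cached, and fetch the rest. A minimal sketch with assumed names and an assumed score threshold (the model itself is out of scope here):

```python
def plan_cache_update(relevance_scores, pre_cached_ids, threshold=0.5):
    """From per-entity relevance scores (a stand-in for the trained model's
    output), keep entities at or above the threshold, drop entities already
    pre-cached on the user device, and return the entity ids left to cache."""
    relevant = {eid for eid, score in relevance_scores.items()
                if score >= threshold}
    return relevant - set(pre_cached_ids)
```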
  • Publication number: 20240126694
    Abstract: An out-of-order buffer includes an out-of-order queue and a controlling circuit. The out-of-order queue includes a request sequence table and a request storage device. The controlling circuit receives and temporarily stores the plural requests into the out-of-order queue. After the plural requests are transmitted to plural corresponding target devices, the controlling circuit retires the plural requests. The request sequence table contains m×n indicating units. The request sequence table contains m entry indicating rows. Each of the m entry indicating rows contains n indicating units. The request storage device includes m storage units corresponding to the m entry indicating rows in the request sequence table. The state indicating whether a request is stored in the corresponding storage unit of the m storage units is recorded in the request sequence table. The storage sequence of the plural requests is also recorded in the request sequence table.
    Type: Application
    Filed: November 18, 2022
    Publication date: April 18, 2024
    Inventors: Jyun-Yan LI, Po-Hsiang HUANG, Ya-Ting CHEN, Yao-An TSAI, Shu-Wei YI
  • Publication number: 20240126695
    Abstract: Various embodiments are generally directed to virtualized systems. A first guest memory page may be identified based at least in part on a number of accesses to a page table entry for the first guest memory page in a page table by an application executing in a virtual machine (VM) on the processor, the first guest memory page corresponding to a first byte-addressable memory. The execution of the VM and the application on the processor may be paused. The first guest memory page may be migrated to a target memory page in a second byte-addressable memory, the target memory page comprising one of a target host memory page and a target guest memory page, the second byte-addressable memory having an access speed faster than an access speed of the first byte-addressable memory.
    Type: Application
    Filed: December 21, 2023
    Publication date: April 18, 2024
    Applicant: Intel Corporation
    Inventors: Yao Zu DONG, Kun TIAN, Fengguang WU, Jingqi LIU
  • Publication number: 20240126696
    Abstract: A method of operating a memory system is provided. A logical-to-physical (L2P) address mapping table is obtained in response to a data request instruction. Corresponding data is read from a memory device based on the L2P address mapping table. The L2P address mapping table includes a base physical address of continuous first physical addresses corresponding to first logic addresses and a base physical address offset corresponding to the continuous first physical addresses.
    Type: Application
    Filed: December 22, 2023
    Publication date: April 18, 2024
    Inventor: Hua TAN
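The compressed L2P entry described above replaces one mapping per page with a single (base logical, base physical, offset, run length) record for a contiguous run. A hypothetical sketch of the lookup; the tuple layout and names are assumptions for illustration:

```python
def lookup_physical(logical_addr, entry):
    """Resolve a logical address against one compressed L2P entry covering a
    run of contiguous physical addresses:
        physical = base_physical + offset + (logical - base_logical)"""
    base_logical, base_physical, offset, run_length = entry
    if not (base_logical <= logical_addr < base_logical + run_length):
        raise KeyError("logical address not covered by this entry")
    return base_physical + offset + (logical_addr - base_logical)
```

Storing one base plus an offset for a whole run shrinks the mapping table, which matters when the L2P table must be loaded to answer a data request.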
  • Publication number: 20240126697
    Abstract: Prefetch circuitry generates, based on stream prefetch state information, prefetch requests for prefetching data to at least one cache. Cache control circuitry controls, based on cache policy information associated with cache entries in a given level of cache, at least one of cache entry replacement in the given level of cache, and allocation of data evicted from the given level of cache to a further level of cache. The stream prefetch state information specifies, for at least one stream of addresses, information representing an address access pattern for generating addresses to be specified by a corresponding series of prefetch requests. Cache policy information for at least one prefetched cache entry of the given level of cache (to which data is prefetched for a given stream of addresses) is set to a value dependent on at least one stream property associated with the given stream of addresses.
    Type: Application
    Filed: October 13, 2022
    Publication date: April 18, 2024
    Inventors: Alexander Alfred HORNUNG, Roberto GATTUSO
  • Publication number: 20240126698
    Abstract: Systems, methods, and other embodiments for supporting high availability by using in-memory cache as a database are disclosed. In one embodiment, a system includes an application server that is configured to select a sub-set of data from a remote database that is predicted to be accessed by an application server, wherein the application server includes an in-memory cache. The sub-set of data is reformatted to reduce the size. The in-memory cache is configured to act as a backup database by pre-populating the reformatted sub-set of data into the in-memory cache. In response to detecting the remote database is in an off-line state: the in-memory cache is assigned as a primary database to replace the remote database and subsequent data requests are re-directed from being processed using the remote database to being processed using the in-memory cache.
    Type: Application
    Filed: October 18, 2022
    Publication date: April 18, 2024
    Inventors: Anurag Anand SINHA, Prakhar RASTOGI, Harish Kumar DALMIA
  • Publication number: 20240126699
    Abstract: Provided herein may be a memory system and a host device. The memory system may include a first memory module communicating with a host through a first interface and a second memory module communicating with the host through a second interface. The second memory module may include a memory device configured to store data and a memory controller configured to update at least one of first metadata related to a space-locality and second metadata related to a time-locality based on a result of comparing the numbers of the pages respectively corresponding to a first trigger address and a second trigger address sequentially input from the host, and to prefetch, to the first memory module, the data determined based on the first metadata and the second metadata. The first and second trigger addresses are addresses corresponding to data for which access to the first memory module is missed.
    Type: Application
    Filed: March 27, 2023
    Publication date: April 18, 2024
    Inventor: Sung Woo HYUN
  • Publication number: 20240126700
    Abstract: Systems and methods for object-based data storage are provided. There may be a read/write cache configured to cache objects to be written to an object-based data storage. A document in the read/write cache may have a lock state set to unlocked, thereby allowing the document to be deleted. Alternatively, the document in the read/write cache may have a lock state set to locked, thereby preventing deletion of the document.
    Type: Application
    Filed: December 20, 2023
    Publication date: April 18, 2024
    Inventors: Jeffrey Hibser, Mohammad Amer Ghazal, Steven Engelhardt, Michael R. Gayeski, Brandon Michelsen, Ankit Khandelwal, Ranga Sankar, Robert A. Skinner
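The lock-state check described above is a simple guard on the delete path. A minimal sketch, with the cache modeled as a dict and the state values assumed to be the literal strings "locked"/"unlocked":

```python
def delete_document(cache, doc_id):
    """Delete a cached document only when its lock state permits it."""
    doc = cache[doc_id]
    if doc["lock_state"] == "locked":
        return False           # locked: deletion is prevented
    del cache[doc_id]          # unlocked: the document may be deleted
    return True
```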
  • Publication number: 20240126701
    Abstract: A memory device and methods for operating the same are provided. The memory device includes an array of memory cells, a non-volatile memory, and a controller. The controller is configured to receive a read command to read a data word from an address of the array and decode the address to generate a decoded address. The controller is further configured to retrieve response data from the decoded address of the array, retrieve a location indicia corresponding to the decoded address from the non-volatile memory, and verify that the location indicia corresponds to the address. The controller can optionally be further configured to indicate an error if the location indicia does not correspond to the address.
    Type: Application
    Filed: April 24, 2023
    Publication date: April 18, 2024
    Inventor: Alberto Troia
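The verify-on-read flow can be sketched as follows: decode the address, read the word, then check that the location indicia stored for that decoded location points back at the requested address. The data structures and the identity-style decoder in the test are assumptions for illustration:

```python
def read_with_verify(array, indicia_store, address, decode):
    """Decode the address, read the data word, fetch the stored location
    indicia for the decoded address, and flag an error on a mismatch
    (e.g. an address-decode fault that silently landed on the wrong row)."""
    decoded = decode(address)
    data = array[decoded]
    status = "ok" if indicia_store[decoded] == address else "error"
    return data, status
```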
  • Publication number: 20240126702
    Abstract: Techniques for slicing memory of a hardware processor core by linear address are described.
    Type: Application
    Filed: September 21, 2022
    Publication date: April 18, 2024
    Inventors: Mark Dechene, Ryan Carlson, Sudeepto Majumdar, Rafael Trapani Possignolo, Paula Petrica, Richard Klass, Meenakshi Marathe
  • Publication number: 20240126703
    Abstract: A method includes receiving, by a memory management unit (MMU) comprising a translation lookaside buffer (TLB) and a configuration register, a request from a processor core to directly modify an entry in the TLB. The method also includes, responsive to the configuration register having a first value, operating the MMU in a software-managed mode by modifying the entry in the TLB according to the request. The method further includes, responsive to the configuration register having a second value, operating the MMU in a hardware-managed mode by denying the request.
    Type: Application
    Filed: December 20, 2023
    Publication date: April 18, 2024
    Inventors: Timothy D. ANDERSON, Joseph Raymond Michael ZBICIAK, Kai CHIRCA, Daniel Brad WU
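The configuration-register dispatch above is small enough to sketch directly. The register values and names are illustrative assumptions; the abstract only says a "first value" selects the software-managed mode and a "second value" the hardware-managed mode:

```python
class MMU:
    """Sketch of the mode-switched handling of direct TLB-modification requests."""

    SOFTWARE_MANAGED = 1   # "first value": honor direct TLB modifications
    HARDWARE_MANAGED = 2   # "second value": deny them

    def __init__(self, mode):
        self.config_register = mode
        self.tlb = {}

    def modify_tlb(self, virtual, physical):
        if self.config_register == self.SOFTWARE_MANAGED:
            self.tlb[virtual] = physical   # apply the entry as requested
            return "applied"
        return "denied"                    # hardware manages the TLB itself
```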
  • Publication number: 20240126704
    Abstract: A data input device includes a first delay line, a second delay line, a detection circuit, and a processing circuit. The detection circuit is configured to detect whether a first output data output to a system circuit deviates from a first detection range, and to generate a first deviation signal when it detects that the first output data deviates from the first detection range. The processing circuit normally takes a first delayed data delayed by the first delay line as the first output data. When the processing circuit receives the first deviation signal, indicating that the first delayed data deviates from the first detection range, it takes a second delayed data delayed by the second delay line as the first output data after the second adjustable delay magnitude of the second delay line is adjusted.
    Type: Application
    Filed: December 21, 2022
    Publication date: April 18, 2024
    Applicant: Inpsytech, Inc.
    Inventors: Wei-Ren Shiue, Jian-Ying Chen
  • Publication number: 20240126705
    Abstract: Techniques for emulating a configuration space may include emulating a set of configuration registers in an integrated circuit device for a set of functions corresponding to a type of peripheral device. The type of peripheral device represented by the integrated circuit device can be modified by changing the set of configuration registers being emulated in the integrated circuit device. Multiple sets of configuration registers can also be emulated to support different virtual machines or different operating systems.
    Type: Application
    Filed: December 13, 2023
    Publication date: April 18, 2024
    Inventors: Nafea Bshara, Adi Habusha, Guy Nakibly, Georgy Machulsky
  • Publication number: 20240126706
    Abstract: Methods for local page writes via pre-staging buffers for resilient buffer pool extensions are performed by computing systems. Compute nodes in database systems insert, update, and query data pages maintained in storage nodes. Data pages cached locally by compute node buffer pools are provided to buffer pool extensions on local disks as pre-copies via staging buffers that store data pages prior to local disk storage. Encryption of data pages occurs at the staging buffers, which allows a less restrictive update latching during the copy process, with page metadata being updated in buffer pool extensions page tables with in-progress states indicating it is not yet written to local disk. When staging buffers are filled, data pages are written to buffer pool extensions and metadata is updated in page tables to indicate available/valid states. Data pages in staging buffers can be read and updated prior to writing to the local disk.
    Type: Application
    Filed: December 8, 2023
    Publication date: April 18, 2024
    Inventors: Rogério RAMOS, Kareem Aladdin GOLAUB, Chaitanya GOTTIPATI, Alejandro Hernandez SAENZ, Raj Kripal DANDAY
  • Publication number: 20240126707
    Abstract: Memory devices, memory systems, and methods of operating memory devices and systems are disclosed in which a single command can trigger a memory device to perform multiple operations, such as a single refresh command that triggers the memory device to both perform a refresh command and to perform a mode register read. One such memory device comprises a memory, a mode register, and circuitry configured, in response to receiving a command to perform a refresh operation at the memory, to perform the refresh operation at the memory, and to perform a read of the mode register. The memory can be a first memory portion, the memory device can comprise a second memory portion, and the circuitry can be further configured, in response to the command, to provide on-die termination at the second memory portion of the memory system during at least a portion of the read of the mode register.
    Type: Application
    Filed: September 27, 2023
    Publication date: April 18, 2024
    Inventors: Matthew A. Prather, Frank F. Ross, Randall J. Rooney
  • Publication number: 20240126708
    Abstract: Techniques in electronic systems, such as in systems comprising a CPU die and one or more external mixed-mode (analog) chips, may provide advantages in one or more of system design, performance, cost, efficiency, and programmability. In one embodiment, the CPU die comprises at least one microcontroller CPU and circuitry enabling the at least one CPU to have full and transparent connectivity to an analog chip as if they were designed as a single-chip microcontroller, while the interface between the two is extremely efficient, with a limited number of wires, and may provide improved performance without impact to functionality or the software model.
    Type: Application
    Filed: December 12, 2023
    Publication date: April 18, 2024
    Applicant: AyDeeKay LLC dba Indie Semiconductor
    Inventor: Scott David Kee
  • Publication number: 20240126709
    Abstract: The invention provides a direct memory access (DMA) controller. The DMA controller has an address register, a data register and transfer circuitry for transferring data over a bus of a computing system. The DMA controller is configured to use the transfer circuitry to read data over the bus from a memory location having a first memory address, wherein the data comprises a second memory address, and store the second memory address in the address register, and use the transfer circuitry to transfer data over the bus between a memory location having the second memory address, or having a memory address derived from the second memory address, and the data register.
    Type: Application
    Filed: October 11, 2023
    Publication date: April 18, 2024
    Applicant: Nordic Semiconductor ASA
    Inventor: Elvind FYLKESNES
  • Publication number: 20240126710
    Abstract: A semiconductor device includes a bus control circuit that controls access to a slave shared by a plurality of masters. The bus control circuit includes a plurality of priority determination circuits corresponding to the plurality of masters. The priority determination circuit is configured to, when receiving an urgent access from a corresponding master, change a priority level signal included in an access request from the corresponding master to allocate a high priority level for emergency and allocate a low priority level to a master other than the corresponding master.
    Type: Application
    Filed: September 15, 2023
    Publication date: April 18, 2024
    Inventor: Keisuke JO
  • Publication number: 20240126711
    Abstract: A method includes detecting, by a coexistence controller of a system on a chip (SoC), an occurrence of a coexistence event of an SoC component; providing, by the coexistence controller, an indication of the occurrence of the coexistence event to a coexistence coordinator; and changing, by the coexistence controller, an operating point of the SoC from a current operating point to a new operating point responsive to receiving an operating point change request from the coexistence coordinator.
    Type: Application
    Filed: December 12, 2023
    Publication date: April 18, 2024
    Inventors: Eli DEKEL, Yaron ALPERT
  • Publication number: 20240126712
    Abstract: A semiconductor package includes multiple dies that share the same package pin. An output enable register provided on each die is used to select the die that drives an output to the shared pin. A hardware arbitration circuit ensures that two or more dies do not drive an output to the shared pin at the same time.
    Type: Application
    Filed: December 21, 2023
    Publication date: April 18, 2024
    Inventors: YULEI SHEN, TYRONE TUNG HUANG, CHEN-KUAN HONG
  • Publication number: 20240126713
    Abstract: Systems and methods of communicating use device-level throttling. Some embodiments relate to a method of communicating in a network. The systems and methods can provide a first communication associated with a device for issuance, issue the first communication if an issued communication value for the device is less than a queue depth value for the device, and list the first communication on a pend list for the device if the issued communication value is not less than the queue depth value.
    Type: Application
    Filed: October 18, 2022
    Publication date: April 18, 2024
    Applicant: Avago Technologies International Sales Pte. Limited
    Inventor: Arun Prakash JANA
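A minimal sketch of device-level throttling under the conventional reading of the comparison: issue while the count of outstanding issued commands is below the device's queue depth, otherwise park the command on that device's pend list. The dict-based device record and field names are assumptions for illustration:

```python
def submit(command, device):
    """Issue a communication if the device has queue-depth headroom;
    otherwise place it on the device's pend list."""
    if device["issued"] < device["queue_depth"]:
        device["issued"] += 1
        return "issued"
    device["pend_list"].append(command)
    return "pended"
```

Pended commands would later be replayed as issued commands complete and `issued` is decremented, which the sketch leaves out.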
  • Publication number: 20240126714
    Abstract: Apparatus and methods are disclosed herein for remote, direct memory access (RDMA) technology that enables direct memory access from one host computer memory to another host computer memory over a physical or virtual computer network according to a number of different RDMA protocols. In one example, a method includes receiving remote direct memory access (RDMA) packets via a network adapter, deriving a protocol index identifying an RDMA protocol used to encode data for an RDMA transaction associated with the RDMA packets, applying the protocol index to generate RDMA commands from header information in at least one of the received RDMA packets, and performing an RDMA operation using the RDMA commands.
    Type: Application
    Filed: December 27, 2023
    Publication date: April 18, 2024
    Applicant: Amazon Technologies, Inc.
    Inventors: Erez Izenberg, Leah Shalev, Nafea Bshara, Guy Nakibly, Georgy Machulsky
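The protocol-index step amounts to a dispatch table keyed by what the packet header identifies. A hypothetical sketch — the signature matching, table contents, and builder callables are all illustrative assumptions, not the adapter's real interface:

```python
def derive_protocol_index(packet, protocol_table):
    """Match header fields against known protocol signatures to pick the
    index of the RDMA protocol used to encode this transaction."""
    for index, signature in protocol_table.items():
        if packet["header"].get("proto") == signature:
            return index
    raise ValueError("unknown RDMA protocol")

def handle_packet(packet, protocol_table, command_builders):
    """Apply the derived index to select the builder that turns header
    information into RDMA commands for the operation."""
    index = derive_protocol_index(packet, protocol_table)
    return command_builders[index](packet["header"])
```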
  • Publication number: 20240126715
    Abstract: A method for replacing at least one hardware assembly of a data processing apparatus includes managing, by a system management software, the data processing apparatus with a service processor and a plurality of hardware assemblies, obtaining, by the system management software, first system configuration data, first system vital product data (SVPD), and first server identity data of the data processing apparatus from the service processor or through the service processor, obtaining, by the system management software, second SVPD and second server identity data of the data processing apparatus from the service processor or through the service processor, comparing, by the system management software, the first server identity data with the second server identity data, and configuring, by the system management software, the data processing apparatus based on the first system configuration data and the first SVPD according to a comparison result.
    Type: Application
    Filed: October 11, 2023
    Publication date: April 18, 2024
    Inventors: Ming LEI, Fred Allison BOWER, III, Caihong ZHANG, Jihao ZHANG
  • Publication number: 20240126716
    Abstract: A systolic array includes a plurality of basic computation units arranged in a matrix. A basic computation unit includes a feature input register configured to store first feature data, a result buffer configured to store first temporary data, a comparator connected to the feature input register and the result buffer, and a control register connected to the feature input register, the result buffer, and the comparator. The comparator is configured to compare the first feature data input with the first temporary data successively. The control register is configured to control the first feature data of the feature input register and the first temporary data to be input to the comparator, output a comparison result to the result buffer and a feature input register of a next basic computation unit, and after sorting, output the first temporary data last stored in the result buffer as a first data result.
    Type: Application
    Filed: January 24, 2023
    Publication date: April 18, 2024
    Inventors: Yu WANG, Junyuan WU
  • Publication number: 20240126717
    Abstract: One embodiment includes a method for quantum-mechanically archiving data. The method includes receiving, by a quantum computing device (QD), a request to store the data. The data may be associated with a file identifier (ID). A set of bits encodes the data in a classical encoding and the set of bits has a first cardinality. In response to receiving the request to store the data, the QD may generate, based on a superdense coding protocol, a quantum-mechanical (QM) encoding of the data via a set of qubits that has a second cardinality that is less than the first cardinality. The QD may cause a generation of a data structure that encodes an association between the file ID and the set of qubits. The QD may further cause a storage of the data structure.
    Type: Application
    Filed: October 14, 2022
    Publication date: April 18, 2024
    Inventors: Leigh Griffin, Stephen Coady
  • Publication number: 20240126718
    Abstract: In some embodiments, apparatuses and methods are provided herein useful for validating migrated data. In some embodiments, there is provided a system for validating migrated data including a control circuit configured to migrate data from a first database platform to a second database platform and validate the migrated data. The control circuit is configured to transmit a message indicating a mismatch in response to a determination that a first single aggregated hash value does not match with a second single aggregated hash value.
    Type: Application
    Filed: October 12, 2022
    Publication date: April 18, 2024
    Inventors: Susarla Sitarama S Chakravarthy, Ankit Singh, Pranabh Kumar Thaduri, Kishore Tupili, James T. Motter
  • Publication number: 20240126719
    Abstract: In accordance with an embodiment, described herein is a system and method for use with a data analytics or other computing environment, for on-demand fetching of backend server logs into a frontend environment, such as for example a browser. Such on-demand log fetching can be specific to the working context, that is, the current session and the current request, and can be accomplished by appending a parameter or flag to the current request. For each step associated with an instruction being performed, the method can create a timestamp within one or more log files associated with the instruction, and fetch the one or more log files associated with the instruction. Performance logs are then included with a dashboard response and logged into the browser's console.
    Type: Application
    Filed: March 2, 2023
    Publication date: April 18, 2024
    Inventor: DEHONG MA
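The flag-gated flow described above can be sketched as follows; the request/response field names (`includeLogs`, `performanceLogs`) are assumptions for illustration:

```python
import time

def handle_dashboard_request(request):
    """Run dashboard steps, timing each; attach logs only when the
    request carries the on-demand log-fetching flag."""
    logs = []

    def step(name, fn):
        start = time.perf_counter()
        out = fn()
        logs.append(f"{name}: {(time.perf_counter() - start) * 1000:.2f} ms")
        return out

    data = step("query backend", lambda: [1, 2, 3])
    total = step("render", lambda: sum(data))
    response = {"dashboard": total}
    if request.get("includeLogs"):          # the appended parameter/flag
        response["performanceLogs"] = logs  # frontend logs these to the console
    return response
```

Requests without the flag get the same dashboard payload with no logging overhead in the response.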
  • Publication number: 20240126720
    Abstract: A computer-implemented method for saving, renaming, or moving a file includes receiving a request to save, rename or move a file, determining real-time context data and meta-data for the file in response to receiving the request to save, rename or move the file, generating a suggested pathname using the real-time context data and presenting the suggested pathname to a user. The suggested pathname may include a folder or directory name and a filename. The method may also include enabling the user to edit and approve the suggested pathname. Examples of context data include a password hint for the file, storage attributes for the file, collaboration data for the file, calendar data for the user, a file naming policy for an organization, real-time IoT data, and a topic determined from content within the file. A corresponding system and computer program product for executing the above method are also disclosed herein.
    Type: Application
    Filed: October 12, 2022
    Publication date: April 18, 2024
    Inventors: Raghuveer Prasad Nagar, Dinesh Kumar Bhudavaram, Jagadesh Ramaswamy Hulugundi, Megha Jain
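A suggestion built from the context data listed above might combine an organizational naming policy, a content-derived topic, and file meta-data; the specific fields and naming pattern below are assumptions, not the claimed format:

```python
import re
from datetime import date

def suggest_pathname(context):
    """Combine policy root, topic, and creation date into a suggested
    folder name plus filename for the user to edit and approve."""
    slug = re.sub(r"[^a-z0-9]+", "-", context["topic"].lower()).strip("-")
    folder = f"{context['policy_root']}/{slug}"
    filename = f"{context['created'].isoformat()}-{slug}.{context['ext']}"
    return f"{folder}/{filename}"

suggested = suggest_pathname({
    "policy_root": "projects",       # from the organization's naming policy
    "topic": "Q4 Budget Review",     # topic determined from file content
    "created": date(2024, 4, 18),    # from file meta-data
    "ext": "docx",
})
```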
  • Publication number: 20240126721
    Abstract: Aspects of the present disclosure relate to systems and methods for sorting one or more files hosted by a collaborative application. In one aspect, one or more activity signals associated with one or more files hosted by the collaborative application may be received from a substrate. An activity-based sort order may be determined using at least a combination of the one or more activity signals. The activity-based sort order may be applied to sort the one or more files hosted by the collaborative application for display in a user interface to an activity object of the collaborative application.
    Type: Application
    Filed: December 13, 2023
    Publication date: April 18, 2024
    Inventors: David Adam STEPHENS, Shane Michael CHISM, Nathan Darrel KILE, JR., Angela Kaye ALLISON, Dan ZARZAR, Douglas Lane MILVANEY, Manoj SHARMA
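One plausible way to combine activity signals into a sort order is to weight each signal by kind and decay it with age; the signal kinds, weights, and half-life below are assumptions for illustration:

```python
from datetime import datetime, timedelta, timezone

WEIGHTS = {"edit": 3.0, "comment": 2.0, "view": 1.0}  # assumed signal kinds

def activity_score(signals, now, half_life_days=7.0):
    """Sum kind-weighted signals, halving each one's contribution
    every half_life_days of age."""
    total = 0.0
    for kind, ts in signals:
        age_days = (now - ts).total_seconds() / 86400.0
        total += WEIGHTS.get(kind, 0.5) * 0.5 ** (age_days / half_life_days)
    return total

now = datetime(2024, 4, 18, tzinfo=timezone.utc)
files = {
    "plan.docx": [("edit", now - timedelta(days=1))],
    "old.xlsx": [("view", now - timedelta(days=30))],
}
order = sorted(files, key=lambda f: activity_score(files[f], now), reverse=True)
```

Recently edited files rise to the top of the displayed list, while files with only stale views sink.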
  • Publication number: 20240126722
    Abstract: A method and system for deduplication caching using an unreliable edge resource include acquiring a total storage capacity of all edge servers, searching for candidate cache files by a similarity-based hierarchical clustering (SHC) method, and acquiring file clusters of all the candidate cache files after clustering, where the candidate cache files each include a deduplicated data chunk, and based on the file clusters and a reliability of all of the edge servers, selecting, by a heuristic algorithm, a file cluster from the file clusters to cache to the edge servers until a size of cached content reaches the total storage capacity. The present disclosure makes a trade-off between file availability and space efficiency, thereby effectively improving the cache hit rate in the limited edge caching space.
    Type: Application
    Filed: April 24, 2023
    Publication date: April 18, 2024
    Applicant: NATIONAL UNIVERSITY OF DEFENSE TECHNOLOGY
    Inventors: Lailong LUO, Geyao CHENG, Deke GUO, Junxu XIA, Bowen SUN
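The heuristic selection step can be sketched as a greedy value-per-byte fill under the capacity budget; the ranking function (popularity discounted by reliability) is an assumption, not the patented heuristic:

```python
def select_clusters(clusters, reliability, capacity):
    """Greedy heuristic: rank file clusters by expected-hit value per byte,
    then fill edge storage until the total capacity is reached."""
    ranked = sorted(clusters,
                    key=lambda c: reliability * c["popularity"] / c["size"],
                    reverse=True)
    chosen, used = [], 0
    for c in ranked:
        if used + c["size"] <= capacity:
            chosen.append(c["name"])
            used += c["size"]
    return chosen, used

clusters = [
    {"name": "a", "size": 60, "popularity": 10},
    {"name": "b", "size": 30, "popularity": 9},
    {"name": "c", "size": 50, "popularity": 2},
]
chosen, used = select_clusters(clusters, reliability=0.9, capacity=100)
```

The reliability factor models the trade-off in the abstract: on an unreliable edge server, a popular cluster's expected benefit shrinks, so space shifts toward content more likely to remain available.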
  • Publication number: 20240126723
    Abstract: Systems and methods are provided to ingest data objects from a flat file server for use in one or more system operations including providing a renderable data object to a user and updating a data item database. As described, the ingestion system includes an ingestion module, a flat file module, a compliance module, and a deduplication module wherein the modules together ingest a flat file data object, parse and process a renderable data object from the flat file data object, and store the renderable data object in a renderable object database.
    Type: Application
    Filed: October 19, 2023
    Publication date: April 18, 2024
    Inventors: Ramya AMANCHARLA, Anthony CALIENDO, Brian David FIELDS, James J. SULLIVAN, Kyle OPPENHEIM, Rajat SHROFF
  • Publication number: 20240126724
    Abstract: The disclosure relates to the field of computers, and particularly to a method, apparatus, electronic device and storage medium for information processing. The method of information processing provided in the present disclosure includes: before the end of a service processing flow for a first file, determining a save strategy for the first file in response to a user operation, so that the first file is saved based on the save strategy after the end of the service processing flow.
    Type: Application
    Filed: December 27, 2023
    Publication date: April 18, 2024
    Inventors: Changming Wang, Fan Yang, Linna Zhang, Bingxi Lin, Changyu Guo, Fang Liu, Zisheng Liu, Tian Lan, Fabin Liu, Zhengzhe Zhang, Siyu Hou, Yao Wang
  • Publication number: 20240126725
    Abstract: Described herein is a system and method for providing an integrated function editor, for use with a data analytics environment. The function editor can be utilized to create and register functions available within a cloud infrastructure or cloud environment, for use within a data analytics environment. Such functions available for use within the cloud infrastructure or cloud environment can be displayed for the user, and used, for example, in data analytics workbooks, to create an interface or API that allows connection of the data analytics environment to a cloud infrastructure database.
    Type: Application
    Filed: March 2, 2023
    Publication date: April 18, 2024
    Inventors: LUIS RAMIREZ, MONISHA BALAJI, JORGE ZUNIGA, SHREYA SAWANT, RUTUJA JOSHI, KENNETH ENG
  • Publication number: 20240126726
    Abstract: JSON schemas are implemented efficiently within a DBMS. Through these techniques, the power and benefit of the schema-based paradigm are realized in a more cost-effective manner in terms of computer system performance. JSON schema-based techniques described herein improve execution efficiency of database statements that access JSON objects and improve software development productivity.
    Type: Application
    Filed: October 14, 2022
    Publication date: April 18, 2024
    Inventors: ZHEN HUA LIU, SRIKRISHNAN SURESH, BEDA CHRISTOPH HAMMERSCHMIDT, JOSHUA SPIEGEL, DOUGLAS JAMES MCMAHON
  • Publication number: 20240126727
    Abstract: JSON schemas are implemented efficiently within a DBMS. Through these techniques, the power and benefit of the schema-based paradigm are realized in a more cost-effective manner in terms of computer system performance. JSON schema-based techniques described herein improve execution efficiency of database statements that access JSON objects and improve software development productivity.
    Type: Application
    Filed: October 14, 2022
    Publication date: April 18, 2024
    Inventors: ZHEN HUA LIU, SRIKRISHNAN SURESH, BEDA CHRISTOPH HAMMERSCHMIDT, JOSHUA SPIEGEL, DOUGLAS JAMES MCMAHON
  • Publication number: 20240126728
    Abstract: JSON Duality Views are object views that return JDV objects. JDV objects are virtual because they are not stored in a database as JSON objects. Rather, JDV objects are stored in shredded form across tables and table attributes (e.g. columns) and returned by a DBMS in response to database commands that request a JDV object from a JSON Duality View. Through JSON Duality Views, changes to the state of a JDV object may be specified at the level of a JDV object. JDV objects are updated in a database using optimistic locking.
    Type: Application
    Filed: October 14, 2022
    Publication date: April 18, 2024
    Inventors: ZHEN HUA LIU, JUAN R. LOAIZA, SUNDEEP ABRAHAM, SHUBHA BOSE, HUI JOE CHANG, SHASHANK GUGNANI, BEDA CHRISTOPH HAMMERSCHMIDT, TIRTHANKAR LAHIRI, YING LU, DOUGLAS JAMES MCMAHON, AUROSISH MISHRA, AJIT MYLAVARAPU, SUKHADA PENDSE, ANANTH RAGHAVAN
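The two ideas in the abstract, reassembling a shredded relational row into an object and rejecting object-level updates against stale state, can be sketched outside any DBMS with an in-memory SQLite table and a content-hash etag (the etag scheme and table layout are assumptions for illustration):

```python
import hashlib
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
con.execute("INSERT INTO person VALUES (1, 'Ada', 'London')")

def get_duality_object(pk):
    """Reassemble the shredded relational row into a JSON-style object."""
    row = con.execute("SELECT id, name, city FROM person WHERE id = ?",
                      (pk,)).fetchone()
    obj = {"_id": row[0], "name": row[1], "city": row[2]}
    # etag enables optimistic locking: a hash of the object's current content
    obj["_etag"] = hashlib.sha256(
        json.dumps(obj, sort_keys=True).encode()).hexdigest()[:12]
    return obj

def put_duality_object(obj):
    """Object-level update: reject if the stored state changed since the read."""
    if get_duality_object(obj["_id"])["_etag"] != obj["_etag"]:
        raise RuntimeError("stale object: concurrent update detected")
    con.execute("UPDATE person SET name = ?, city = ? WHERE id = ?",
                (obj["name"], obj["city"], obj["_id"]))
```

A writer fetches the object, mutates it, and writes it back; a second write with the original etag fails because the stored content, and hence its hash, has changed.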
  • Publication number: 20240126729
    Abstract: JSON Duality Views are object views that return JDV objects. JDV objects are virtual because they are not stored in a database as JSON objects. Rather, JDV objects are stored in shredded form across tables and table attributes (e.g. columns) and returned by a DBMS in response to database commands that request a JDV object from a JSON Duality View. Through JSON Duality Views, changes to the state of a JDV object may be specified at the level of a JDV object. JDV objects are updated in a database using optimistic locking.
    Type: Application
    Filed: October 14, 2022
    Publication date: April 18, 2024
    Inventors: ZHEN HUA LIU, JUAN R. LOAIZA, SUNDEEP ABRAHAM, SHUBHA BOSE, HUI JOE CHANG, SHASHANK GUGNANI, BEDA CHRISTOPH HAMMERSCHMIDT, TIRTHANKAR LAHIRI, YING LU, DOUGLAS JAMES MCMAHON, AUROSISH MISHRA, AJIT MYLAVARAPU, SUKHADA PENDSE, ANANTH RAGHAVAN
  • Publication number: 20240126730
    Abstract: A system, method, and computer-readable medium for generating synthetic data are described. Improved data models for databases may be achieved by improving the quality of the synthetic data used to model those databases. According to some aspects, these and other benefits may be achieved by using numeric distribution information in a schema describing one or more numeric fields and, based on that schema, distribution-appropriate numerical data may be generated. The schema may be compared against actual data and the schema adjusted to more closely match the actual data. In implementation, this may be effected by storing a schema with distribution information and/or one or more parameters, generating synthetic numerical data based on the schema, and, based on a comparison with actual data, modifying the schema until the synthetic data is statistically similar to the actual data. A benefit may include improved database performance and indexing based on repeatable, statistically appropriate, synthetic data.
    Type: Application
    Filed: October 23, 2023
    Publication date: April 18, 2024
    Inventor: Steven Lott
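The generate/compare/adjust loop in the abstract can be sketched with a one-field schema; the schema shape, the mean-comparison test, and the re-fit rule are assumptions for illustration:

```python
import random
import statistics

def generate(schema, n, seed=42):
    """Draw synthetic values from the schema's declared distribution.
    A fixed seed keeps the synthetic data repeatable."""
    rng = random.Random(seed)
    if schema["dist"] == "normal":
        return [rng.gauss(schema["mean"], schema["stdev"]) for _ in range(n)]
    raise ValueError(f"unsupported distribution: {schema['dist']}")

def refine(schema, actual, tolerance=1.0):
    """Compare synthetic output against actual data; re-fit the schema's
    parameters when the distributions have drifted apart."""
    synthetic = generate(schema, len(actual))
    if abs(statistics.fmean(synthetic) - statistics.fmean(actual)) > tolerance:
        return dict(schema, mean=statistics.fmean(actual),
                    stdev=statistics.stdev(actual))
    return schema

schema = {"field": "amount", "dist": "normal", "mean": 100.0, "stdev": 15.0}
actual = [150.0 + x for x in range(-5, 6)]  # actual mean 150: schema has drifted
schema = refine(schema, actual)
```

After refinement the schema's parameters track the actual data, so subsequent synthetic batches are statistically similar to it while remaining repeatable.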
  • Publication number: 20240126731
    Abstract: One example described herein involves a system that can receive a set of data records and execute an automated entity resolution (AER) process configured to assign the set of data records to a set of entities. For each entity in the set of entities, the system can generate a respective consistency score for the entity, generate a respective confidence score for the entity based on the respective consistency score for the entity, and determine a respective visual indicator based on the respective confidence score for the entity. The respective visual indicator can indicate a risk of record misassignment to a user. The system can then generate a graphical user interface that includes the respective visual indicator for each of the entities.
    Type: Application
    Filed: June 22, 2023
    Publication date: April 18, 2024
    Applicant: SAS Institute Inc.
    Inventor: Nicholas Akbar Ablitt
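The consistency-to-confidence-to-indicator pipeline described above can be sketched as follows; the scoring formulas and the color thresholds are assumptions, not the claimed method:

```python
def consistency_score(records):
    """Fraction of fields on which every record assigned to the entity agrees."""
    fields = records[0].keys()
    agreeing = sum(1 for f in fields if len({r[f] for r in records}) == 1)
    return agreeing / len(fields)

def confidence_score(records):
    # assumed mapping: consistency, damped slightly as the cluster grows
    return consistency_score(records) * (1.0 - 0.02 * (len(records) - 1))

def visual_indicator(confidence):
    """Flag the record-misassignment risk for display in the UI."""
    if confidence >= 0.8:
        return "green"
    return "amber" if confidence >= 0.5 else "red"
```

An entity whose records disagree on half their fields gets a low confidence score and a warning indicator, drawing the user's eye to a likely misassignment.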
  • Publication number: 20240126732
    Abstract: One example described herein involves a system that can receive a set of data records and execute an automated entity resolution (AER) process configured to assign the set of data records to a set of entities. For each entity in the set of entities, the system can generate a respective consistency score for the entity, generate a respective confidence score for the entity based on the respective consistency score for the entity, and determine a respective visual indicator based on the respective confidence score for the entity. The respective visual indicator can indicate a risk of record misassignment to a user. The system can then generate a graphical user interface that includes the respective visual indicator for each of the entities.
    Type: Application
    Filed: April 13, 2023
    Publication date: April 18, 2024
    Applicant: SAS Institute Inc.
    Inventor: Nicholas Ablitt
  • Publication number: 20240126733
    Abstract: One example method includes, in a data buffer that includes one or more words and whitespaces, calculating a hash value of data in a window that is movable within the data buffer, comparing the hash value to a mask, and when the hash value matches the mask, identifying a position of the window in the data buffer as a chunk anchor position, searching for a whitespace nearest the chunk anchor position, and designating an offset of the whitespace as a segment boundary.
    Type: Application
    Filed: December 21, 2023
    Publication date: April 18, 2024
    Inventor: Philip N. Shilane
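The method above is a content-defined chunking variant that snaps hash-selected anchors to word boundaries; a sketch using CRC32 as a stand-in for the windowed hash (the window size, mask, and target values are arbitrary illustrations):

```python
import zlib

def nearest_whitespace(buf: bytes, pos: int) -> int:
    """Offset of the whitespace byte nearest pos (pos itself if none exists)."""
    for d in range(len(buf)):
        for cand in (pos - d, pos + d):
            if 0 <= cand < len(buf) and buf[cand:cand + 1].isspace():
                return cand
    return pos

def segment(buf: bytes, window: int = 16, mask: int = 0x0F, target: int = 0x07):
    """Slide a window over buf; where the windowed hash matches the mask,
    take that position as a chunk anchor and designate the offset of the
    nearest whitespace as the segment boundary."""
    boundaries = []
    for pos in range(len(buf) - window):
        # crc32 stands in for the rolling hash a production chunker would use
        if zlib.crc32(buf[pos:pos + window]) & mask == target:
            b = nearest_whitespace(buf, pos)
            if not boundaries or b > boundaries[-1]:
                boundaries.append(b)
    return boundaries

sample = b"pack my box with five dozen liquor jugs " * 8
bounds = segment(sample)
```

Because boundaries depend on content rather than fixed offsets, identical words and phrases tend to fall into identical segments even after insertions, which is what makes the segments useful for deduplication.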
  • Publication number: 20240126734
    Abstract: Methods and systems are configured to determine a semantic meaning for data and generate data processing rules based on the semantic meaning of the data. The semantic meaning includes syntactical or contextual meaning for the data that is determined, for example, by profiling, by the data processing system, values stored in a field included in data records of one or more datasets; applying, by the data processing system, one or more classifiers to the profiled values; identifying, based on applying the one or more classifiers, one or more attributes indicative of a logical or syntactical characteristic for the values of the field, with each of the one or more attributes having a respective confidence level that is based on an output of each of the one or more classifiers. The attributes are associated with the fields and are used for generating data processing rules and processing the data.
    Type: Application
    Filed: December 28, 2023
    Publication date: April 18, 2024
    Inventors: John Joyce, Marshall A. Isman, Sandrick Melbouci
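The profile-then-classify step described above can be sketched with simple pattern classifiers, where each attribute's confidence is the fraction of profiled values the classifier matched; the patterns and attribute names are illustrative assumptions, not the patented classifiers:

```python
import re

CLASSIFIERS = {  # illustrative pattern classifiers for field semantics
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "us_zip": re.compile(r"^\d{5}(-\d{4})?$"),
    "iso_date": re.compile(r"^\d{4}-\d{2}-\d{2}$"),
}

def profile_and_classify(values):
    """Profile a field's values and return attributes describing their
    syntactical meaning, each with a confidence level."""
    attributes = {}
    for name, pattern in CLASSIFIERS.items():
        hits = sum(1 for v in values if pattern.match(v))
        confidence = hits / len(values) if values else 0.0
        if confidence > 0.0:
            attributes[name] = confidence
    return attributes
```

Downstream, the attributes and their confidence levels can drive generated processing rules, for example validating or masking a field classified as an email address.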