Yen-Kuang Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Abstract: A computer-implemented method for allocating memory bandwidth of multiple CPU cores in a server includes: receiving an access request to a last level cache (LLC) shared by the multiple CPU cores in the server, the access request being sent from a core with a private cache holding copies of frequently accessed data from a memory; determining whether the access request is an LLC hit or an LLC miss; and controlling a memory bandwidth controller based on the determination. The memory bandwidth controller performs a memory bandwidth throttling to control a request rate between the private cache and the last level cache. The LLC hit of the access request causes the memory bandwidth throttling initiated by the memory bandwidth controller to be disabled and the LLC miss of the access request causes the memory bandwidth throttling initiated by the memory bandwidth controller to be enabled.
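The hit/miss-driven policy in the abstract above can be sketched in a few lines. This is a toy simulation under stated assumptions, not the patented implementation: the class names, the set-based cache model, and the fill-on-miss behavior are all illustrative.

```python
# Hypothetical sketch: an LLC hit disables throttling (the request never
# reaches DRAM), while an LLC miss enables it (the request consumes
# memory bandwidth). All names here are illustrative, not from the patent.

class MemoryBandwidthController:
    """Tracks whether request-rate throttling between a core's private
    cache and the shared LLC is currently active."""

    def __init__(self):
        self.throttling_enabled = False

    def on_llc_access(self, hit: bool) -> None:
        # Enable throttling exactly when the access missed in the LLC.
        self.throttling_enabled = not hit


class LLC:
    """Minimal last-level cache model: a set of resident addresses."""

    def __init__(self, controller: MemoryBandwidthController):
        self.lines = set()
        self.controller = controller

    def access(self, addr: int) -> bool:
        hit = addr in self.lines
        if not hit:
            self.lines.add(addr)  # fill the line on a miss
        self.controller.on_llc_access(hit)
        return hit
```

The point of the toggle is that only misses generate DRAM traffic, so throttling the private-cache-to-LLC request rate on hits would penalize cores that are not actually consuming memory bandwidth.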
Abstract: Exemplary methods of patterning a device layer are described, including operations of patterning a protector layer and forming a first opening in a first patterning layer to expose a first portion of the protector layer and a first portion of the hard mask layer, which are then exposed to a first etch to form a first opening in the first portion of the hard mask layer. A second opening is formed in a second patterning layer to expose a second portion of the protector layer and a second portion of the hard mask layer. The second portion of the protector layer and the second portion of the hard mask layer are exposed to an etch to form a second opening in the second portion of the hard mask layer. Exposed portions of the device layer are then etched through the first opening and the second opening.
Abstract: The present application discloses a computing system and an associated method. The computing system includes a memory, a master computing device and a slave computing device. The master computing device includes a memory controller and an input-output memory management unit (IOMMU). When the slave computing device accesses a first virtual address, and a first translation lookaside buffer (TLB) of the slave computing device does not store the first virtual address, the first TLB sends a translation request to the IOMMU. The IOMMU traverses page tables of the memory controller to obtain a first physical address corresponding to the first virtual address, and selects and clears a first virtual address entry from a second TLB of the computing system, according to a recent use time and a dependent workload of each virtual address entry, to store the first virtual address and the first physical address.
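The eviction step in the abstract above — choosing a victim TLB entry by recent use time and dependent workload — can be sketched as follows. The scoring rule (prefer fewest dependents, then oldest use) and all names are assumptions made for illustration; the patent does not fix a concrete formula here.

```python
# Illustrative sketch of a second-level TLB whose victim selection
# combines recency of use with a per-entry dependent-workload count.

class SecondLevelTLB:
    def __init__(self, capacity: int):
        self.capacity = capacity
        # virtual page -> (physical page, last_use_time, dependent_workloads)
        self.entries = {}
        self.clock = 0

    def _victim(self):
        # Lower tuple = better eviction candidate: few dependents, then old.
        return min(self.entries,
                   key=lambda vpn: (self.entries[vpn][2],
                                    self.entries[vpn][1]))

    def insert(self, vpn, ppn, dependents=0):
        """Store a translation obtained from the IOMMU page walk."""
        self.clock += 1
        if len(self.entries) >= self.capacity:
            del self.entries[self._victim()]  # select and clear an entry
        self.entries[vpn] = (ppn, self.clock, dependents)

    def lookup(self, vpn):
        entry = self.entries.get(vpn)
        if entry is None:
            return None  # miss: caller would send a request to the IOMMU
        self.clock += 1
        self.entries[vpn] = (entry[0], self.clock, entry[2])
        return entry[0]
```

Weighting by dependent workload keeps translations alive while other work still relies on them, rather than evicting purely on recency.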
Abstract: Core-aware caching systems and methods for non-inclusive non-exclusive shared caching based on core sharing behaviors of the data and/or instructions. In one implementation, the caching between a shared cache level and a core specific cache level can be based on physical page number (PPN) and core identifier sets for previous accesses to the respective physical page numbers. In another implementation, the caching between a shared cache level and a core specific cache level can be based on physical page number and core valid bit vector sets for previous accesses to the respective physical page numbers by each of the plurality of cores.
Abstract: A method includes forming a material layer over a substrate, forming a first hard mask (HM) layer over the material layer, forming a first trench, along a first direction, in the first HM layer. The method also includes forming first spacers along sidewalls of the first trench, forming a second trench in the first HM layer parallel to the first trench, by using the first spacers to guard the first trench. The method also includes etching the material layer through the first trench and the second trench, removing the first HM layer and the first spacers, forming a second HM layer over the material layer, forming a third trench in the second HM layer. The third trench extends along a second direction that is perpendicular to the first direction and overlaps with the first trench. The method also includes etching the material layer through the third trench.
Abstract: A mechanism is described for facilitating portable, reusable, and shareable Internet of Things-based services and resources according to one embodiment. A method of embodiments, described herein, includes selecting a recipe, wherein the recipe includes instructions associated with a first trigger to activate a first set of resources, the first set of resources and instructions corresponding to a first category, accessing a second set of resources associated with a second trigger to activate the second set of resources, the second set of resources and instructions corresponding to a second category of user preferences, the second category different from the first category, modifying the recipe to replace a first resource block of the first set of resources with a second resource block of the second set of resources, and deploying the recipe at a computing device, the computing device receiving at least one of the first trigger or the second trigger.
July 28, 2023
February 1, 2024
Shao-Wen Yang, Nyuk Kin Koo, Yen-Kuang Chen
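The recipe-modification step in the IoT abstract above — swapping a resource block from one category for a block from another — can be sketched minimally. The recipe structure (a trigger plus an ordered list of resource blocks) and all names are illustrative assumptions.

```python
# Hypothetical sketch: a recipe pairs a trigger with resource blocks;
# modification replaces one block while leaving the rest intact.

def modify_recipe(recipe: dict, old_block: str, new_block: str) -> dict:
    """Return a copy of the recipe with old_block replaced by new_block."""
    blocks = [new_block if b == old_block else b for b in recipe["blocks"]]
    return {**recipe, "blocks": blocks}
```

Returning a modified copy rather than mutating in place mirrors the reuse goal: the original recipe stays deployable elsewhere unchanged.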
Abstract: In one embodiment, an apparatus comprises a memory and a processor. The memory is to store sensor data captured by one or more sensors associated with a first device. Further, the processor comprises circuitry to: access the sensor data captured by the one or more sensors associated with the first device; determine that an incident occurred within a vicinity of the first device; identify a first collection of sensor data associated with the incident, wherein the first collection of sensor data is identified from the sensor data captured by the one or more sensors; preserve, on the memory, the first collection of sensor data associated with the incident; and notify one or more second devices of the incident, wherein the one or more second devices are located within the vicinity of the first device.
October 8, 2021
Date of Patent: January 30, 2024
Shao-Wen Yang, Eve M. Schooler, Maruti Gupta Hyde, Hassnaa Moustafa, Katalin Klara Bartfai-Walcott, Yen-Kuang Chen, Jessica McCarthy, Christina R. Strong, Arun Raghunath, Deepak S. Vembar
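The flow in the incident abstract above — preserve sensor data associated with the incident, then notify devices within the vicinity — can be sketched as two small steps. The time-window selection, the distance test, and all record layouts are assumptions for illustration only.

```python
# Minimal sketch: keep sensor samples near the incident time, and
# notify devices whose position falls within a radius of the incident.

def preserve_incident_data(samples, incident_time, window=5):
    """samples: list of (timestamp, reading). Return those within the window."""
    return [s for s in samples if abs(s[0] - incident_time) <= window]

def notify_nearby(devices, origin, radius):
    """devices: {device_id: (x, y)}. Return ids within radius of origin."""
    ox, oy = origin
    return [d_id for d_id, (x, y) in devices.items()
            if (x - ox) ** 2 + (y - oy) ** 2 <= radius ** 2]
```

Preserving only the incident-adjacent collection, rather than the full capture buffer, is what keeps the approach workable on memory-constrained edge devices.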
Abstract: A signal re-driving device, a data storage system and a mode control method are provided. The method includes the following steps. A first signal is received via a receiving circuit of the signal re-driving device. An analog signal feature is detected via the receiving circuit. A first mode is entered according to the analog signal feature. The first signal is modulated and a second signal is outputted in the first mode. The second signal is sent via a sending circuit of the signal re-driving device. A digital signal feature is detected via the receiving circuit. And, the first mode is switched to a second mode according to the digital signal feature.
Abstract: The present disclosure discloses a batch computing system and an associated method. The batch computing system includes a memory, a task manager and an inference computer. The memory stores a shared model parameter set, common to a plurality of tasks, that is generated by fine-tuning a shared model, and a task-specific parameter set of each task. The inference computer receives a plurality of task requests, derives a data length and a designated task of each task request, and enables the task manager to read a task-specific parameter set and a shared model parameter set corresponding to each task request. The inference computer further assigns task requests corresponding to the shared model to a plurality of computation batches, performs, in batch, the common computation of designated tasks in each batch computation, and performs task-specific computation operations of the designated tasks of each batch computation.
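The batching scheme above can be sketched with a stand-in numeric "model": requests sharing the same base model are grouped into batches, the shared computation runs once per batch, and the task-specific computation is applied per request. The scalar weights and grouping policy are illustrative assumptions, not the patented system.

```python
# Illustrative sketch: shared computation in batch, then per-task
# specialization, for requests of the form (task_id, data).

def batch_requests(requests, batch_size):
    """Group requests into fixed-size computation batches."""
    return [requests[i:i + batch_size]
            for i in range(0, len(requests), batch_size)]

def run_batch(batch, shared_weight, task_weights):
    # Shared-model computation applied to the whole batch at once...
    shared = [x * shared_weight for _, x in batch]
    # ...followed by the task-specific computation for each request.
    return [s + task_weights[task] for (task, _), s in zip(batch, shared)]
```

Amortizing the shared computation over a batch is what makes serving many fine-tuned variants of one base model cheaper than running each task's full model separately.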
Abstract: The present application discloses a processor and a data operation method thereof. The processor includes: a first set of registers, a second set of registers and an execution unit. The execution unit is electrically connected to the first set of registers and the second set of registers, and configured to: convert data stored in the first set of registers from a first bit unit to a second bit unit, wherein the number of bits of the first bit unit is less than the number of bits of the second bit unit and the number of bits of the second bit unit is not an integer multiple of the number of bits of the first bit unit; and store the data in the second set of registers, wherein the number of registers of the second set of registers is greater than the number of registers of the first set of registers.
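The conversion in the abstract above — from a smaller bit unit to a larger one that is not an integer multiple of it — can be illustrated by repacking a stream of 8-bit units into 12-bit units (12 is not a multiple of 8, and the destination side needs more registers than byte-aligned packing would). The unit widths and the big-endian packing order are assumptions for the sake of a concrete example.

```python
# Toy sketch: concatenate src_bits-wide units into one bitstream, then
# split it into dst_bits-wide units, zero-padding the tail if needed.

def repack(units, src_bits=8, dst_bits=12):
    stream = 0
    for u in units:                      # big-endian concatenation
        stream = (stream << src_bits) | (u & ((1 << src_bits) - 1))
    total = len(units) * src_bits
    pad = (-total) % dst_bits            # pad up to a whole dst unit
    stream <<= pad
    total += pad
    out = []
    for shift in range(total - dst_bits, -1, -dst_bits):
        out.append((stream >> shift) & ((1 << dst_bits) - 1))
    return out
```

For example, three 8-bit units carry 24 bits, which repack exactly into two 12-bit units, while a single 8-bit unit must be padded out to fill one 12-bit unit.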
Abstract: A character recognition method includes the following stages. An image is received, wherein the image is one of a plurality of consecutive images. A target object in the image is detected. Object information of the target object is defined according to the ratio of the area that the target object occupies in the image. Whether the target object in the image is the same as the target object in the previous image is determined according to the object information. Character recognition on the target object is performed to obtain a recognition result. The weighting score of the recognition result is calculated according to the object information and the recognition result. The weighting score of the recognition result of the target object in the consecutive images is accumulated until the weighting score is higher than a preset value, and the recognition result is output.
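The accumulation stage above can be sketched as follows. The weighting formula (per-frame confidence scaled by the object's area ratio) is an illustrative assumption standing in for the patent's "object information and recognition result".

```python
# Sketch: accumulate a weighted score per recognized text across
# consecutive frames; emit the text once its score crosses a threshold.

def accumulate(frames, threshold):
    """frames: iterable of (text, confidence, area_ratio).
    Returns the first text whose accumulated score exceeds threshold,
    or None if no result becomes confident enough."""
    scores = {}
    for text, conf, area in frames:
        scores[text] = scores.get(text, 0.0) + conf * area
        if scores[text] > threshold:
            return text
    return None
```

Accumulating over frames means a single noisy recognition never triggers output; the result is only emitted once repeated, consistent readings push its score past the preset value.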
Abstract: A class-D amplifying system includes: a first digital-to-analog converter (DAC), a class-D amplifier circuit and a second DAC. The first DAC generates an analog input signal according to a digital input signal. The class-D amplifier circuit generates an output signal according to the analog input signal in a pulse width modulation (PWM) manner. The second DAC generates a common mode (CM) adjustment current for adjusting a CM voltage of the analog input signal according to one or more of the following parameters: (1) the CM voltage of the analog input signal; and/or (2) a driving power. A power stage circuit of the class-D amplifier circuit is powered by the driving power. The second DAC determines which parameter the CM adjustment current is correlated to according to: (A) A level state of the output signal; and/or (B) A level state of a PWM signal of the class-D amplifier circuit.
Abstract: Methods, apparatuses, and embodiments related to analyzing the content of digital images. A computer extracts multiple sets of visual features, which can be keypoints, based on an image of a selected object. Each of the multiple sets of visual features is extracted by a different visual feature extractor. The computer further extracts a visual word count vector based on the image of the selected object. An image query is executed based on the extracted visual features and the extracted visual word count vector to identify one or more candidate template objects of which the selected object may be an instance. When multiple candidate template objects are identified, a matching algorithm compares the selected object with the candidate template objects to determine a particular candidate template of which the selected object is an instance.
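The two-stage query above — shortlist candidate templates by visual-word count similarity, then confirm with a feature-matching step — can be sketched with simple stand-ins. Cosine similarity over the word-count vectors and keypoint-set overlap are illustrative substitutes for the extractors and matching algorithm the abstract refers to.

```python
# Hypothetical sketch of the two-stage image query: bag-of-visual-words
# retrieval first, then a cheap keypoint-overlap match to disambiguate.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def query(word_vec, keypoints, templates, top_k=2):
    """templates: {name: (word_count_vec, keypoint_set)}.
    Shortlist by word-count similarity, then pick the candidate template
    sharing the most keypoints with the selected object."""
    shortlist = sorted(templates,
                       key=lambda t: cosine(word_vec, templates[t][0]),
                       reverse=True)[:top_k]
    return max(shortlist, key=lambda t: len(keypoints & templates[t][1]))
```

The cheap vector comparison prunes the template database; the costlier matching step only runs on the few surviving candidates.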
Abstract: A method is provided, including the following operations: arranging a first gate structure extending continuously above a first active region and a second active region of a substrate; arranging a first separation spacer disposed on the first gate structure to isolate an electronic signal transmitted through a first gate via and a second gate via that are disposed on the first gate structure, wherein the first gate via and the second gate via are arranged above the first active region and the second active region respectively; and arranging a first local interconnect between the first active region and the second active region, wherein the first local interconnect is electrically coupled to a first contact disposed on the first active region and a second contact disposed on the second active region.
Abstract: An optical fiber surface electric field screening apparatus comprises an extractor provided with an optical fiber. The optical fiber is prepared so as to form, in the middle thereof, a thinned part having a diameter of 15 μm or less. A surface of the thinned part is caused to generate a negative electric charge, which can be used to attract a cell having a surface carrying a positive electric charge, e.g., a cardiac cell, an endothelial cell or a bacterium, to surround the thinned part, achieving the effect of screening and extracting cells.
Abstract: A capacitor string structure, a memory device and a charge pump circuit thereof are provided. The capacitor string structure includes a plurality of conductive plates. The conductive plates are disposed in the memory device. The conductive plates are stacked on one another, and respectively form a plurality of word lines of the memory device, where every two neighboring conductive plates form a capacitor.
Abstract: A number of domain specific accelerators (DSA1-DSAn) are integrated into a conventional processing system (100) to operate on the same chip by adding additional instructions to a conventional instruction set architecture (ISA), and further adding an accelerator interface unit (130) to the processing system (100) to respond to the additional instructions and interact with the DSAs.
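The dispatch idea above — base-ISA instructions go through the normal pipeline, while the added instructions are routed through the accelerator interface unit to a registered DSA — can be sketched as a simple opcode table. The opcode names and handler signatures are illustrative assumptions.

```python
# Toy sketch: instructions outside the base ISA are offloaded to a
# registered domain-specific accelerator via the interface unit.

BASE_ISA = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

class AcceleratorInterfaceUnit:
    def __init__(self):
        self.dsas = {}

    def register(self, opcode, handler):
        """Attach a DSA's handler to one of the added instructions."""
        self.dsas[opcode] = handler

    def execute(self, opcode, *operands):
        if opcode in BASE_ISA:
            return BASE_ISA[opcode](*operands)   # normal pipeline
        if opcode in self.dsas:
            return self.dsas[opcode](*operands)  # offload to the DSA
        raise ValueError(f"illegal instruction: {opcode}")
```

Extending the ISA rather than exposing the DSAs as memory-mapped devices lets accelerated operations appear in the instruction stream like any other instruction.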