Adaptive out of order arbitration for numerous virtual queues
Granted: October 1, 2024
Patent Number:
12105646
A system includes a memory implementing one or more virtual queues and a processor coupled to the memory. In response to issuing one or more requests for data, the processor maps one or more of the requests for data to a return queue structure. The processor then allocates one or more virtual queues to the return queue structure based on the mapped requests. In response to allocating the virtual queues to the return queue, the processor writes the data indicated in the mapped requests to…
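A minimal sketch of the request-to-return-queue flow described in the abstract; the names (ReturnQueueStructure, map_request, the stream/tag fields) are hypothetical and not taken from the patent.

```python
from collections import defaultdict

class ReturnQueueStructure:
    def __init__(self):
        # virtual queues are created on demand, keyed by the mapping result
        self.virtual_queues = defaultdict(list)

    def map_request(self, request):
        # map a data request to this return queue structure (assumed: by stream id)
        return request["stream"]

    def allocate_and_write(self, request, data):
        # allocate (or reuse) a virtual queue for the mapped request, then
        # write the returned data into it, possibly out of order
        self.virtual_queues[self.map_request(request)].append((request["tag"], data))

rq = ReturnQueueStructure()
rq.allocate_and_write({"stream": 0, "tag": 7}, b"payload-7")
rq.allocate_and_write({"stream": 0, "tag": 3}, b"payload-3")  # data returns out of order
print(rq.virtual_queues[0])
```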
Secure testing mode
Granted: October 1, 2024
Patent Number:
12105139
A technique for operating a processing device is disclosed. The method includes irreversibly activating a testing mode switch of the processing device; in response to the activating, entering a testing mode in which normal operation of the processing device is disabled; receiving software for the processing device in the testing mode; based on whether the software is verified as testing mode-signed software, executing or not executing the software.
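A minimal sketch of the gating logic described above, assuming a one-way fuse and a boolean signature-verification result; the class and method names are hypothetical placeholders, not the patent's implementation.

```python
class ProcessingDevice:
    def __init__(self):
        self.test_mode_fuse_blown = False   # irreversible once set
        self.normal_operation = True

    def activate_testing_mode(self):
        self.test_mode_fuse_blown = True    # cannot be cleared again
        self.normal_operation = False       # normal operation disabled

    def load_software(self, image, is_test_mode_signed):
        if not self.test_mode_fuse_blown:
            raise RuntimeError("testing mode not active")
        if is_test_mode_signed:
            return f"executing test image {image!r}"
        return "rejected: image is not testing-mode-signed"

dev = ProcessingDevice()
dev.activate_testing_mode()
print(dev.load_software("diag.bin", is_test_mode_signed=True))
print(dev.load_software("retail.bin", is_test_mode_signed=False))
```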
Combination BIOS with A/B recovery
Granted: September 24, 2024
Patent Number:
12099609
A computing system may implement a basic input/output system (BIOS) update method. The method includes identifying an installed central processing unit (CPU) of a computer system coupled to a BIOS chipset, selecting CPU firmware corresponding to the installed CPU from a plurality of CPU platform firmware stored on a first memory, and loading the CPU firmware into a shared portion of a second memory coupled to the BIOS chipset, where the shared portion of the second memory is…
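A minimal sketch of the firmware-selection step described above; the firmware table contents, the function name, and the use of a bytearray to stand in for the shared memory region are all hypothetical.

```python
CPU_FIRMWARE_STORE = {            # "first memory": one firmware blob per supported CPU
    "cpu_family_a": b"\x01firmware-A",
    "cpu_family_b": b"\x02firmware-B",
}

def load_cpu_firmware(installed_cpu, shared_memory):
    # select the firmware matching the installed CPU and copy it into the
    # shared portion of the second memory coupled to the BIOS chipset
    blob = CPU_FIRMWARE_STORE[installed_cpu]
    shared_memory[:len(blob)] = blob
    return len(blob)

shared = bytearray(64)            # stands in for the shared memory region
n = load_cpu_firmware("cpu_family_b", shared)
print(shared[:n])
```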
Noise mitigation in single-ended links
Granted: September 24, 2024
Patent Number:
12101135
An integrated circuit includes a first terminal for receiving a data signal, a second terminal for receiving an external reference voltage, a receiver, and a reference voltage generation circuit. The receiver is powered by a power supply voltage with respect to ground and has a first input coupled to the first terminal, a second input for receiving a shared reference voltage, and an output for providing a data input signal. The reference voltage generation circuit is coupled to the…
Low congestion standard cells
Granted: September 24, 2024
Patent Number:
12100660
A system and method for creating a layout for standard cells are described. In various implementations, a semiconductor fabrication process (or process) forms a power signal route in a same metal zero track reserved for power rails. The process forms a contact layer with inserted spacing underneath the power signal route. Along the track, this contact layer has physical contact with the power signal route with a first distance greater than a width of any signal route in any metal layer…
Repairable latch array
Granted: September 24, 2024
Patent Number:
12100464
An integrated circuit includes a latch array including a plurality of latches logically configured in rows and columns, a plurality of repair latches operatively coupled to the plurality of latches, and latch array built-in self-test and repair logic (LABISTRL) coupled to the plurality of latches. In some implementations, the LABISTRL configures latches in the array as one or more column serial test shift registers, detects one or more defective latches of the plurality of latches based on…
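A minimal sketch of the detect-and-remap idea behind a repairable latch array; the shifted test pattern and the column-to-spare repair map are hypothetical simplifications of what the LABISTRL would do.

```python
def run_latch_bist(latch_columns, expected_pattern):
    # each column behaves as a serial test shift register; a column whose
    # shifted-out data differs from the expected pattern is marked defective
    return [col != expected_pattern for col in latch_columns]

def build_repair_map(defective, num_repair_columns):
    # map each defective column to a spare repair column, if any remain
    repair_map, spare = {}, 0
    for col, bad in enumerate(defective):
        if bad and spare < num_repair_columns:
            repair_map[col] = spare
            spare += 1
    return repair_map

columns = [[1, 0, 1, 0], [1, 1, 1, 0], [1, 0, 1, 0]]     # column 1 is defective
defects = run_latch_bist(columns, expected_pattern=[1, 0, 1, 0])
print(build_repair_map(defects, num_repair_columns=2))    # {1: 0}
```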
Multi-kernel wavefront scheduler
Granted: September 24, 2024
Patent Number:
12099867
Systems, apparatuses, and methods for implementing a multi-kernel wavefront scheduler are disclosed. A system includes at least a parallel processor coupled to one or more memories, wherein the parallel processor includes a command processor and a plurality of compute units. The command processor launches multiple kernels for execution on the compute units. Each compute unit includes a multi-level scheduler for scheduling wavefronts from multiple kernels for execution on its execution…
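A minimal sketch of a two-level pick: first choose among kernels, then choose a wavefront from the chosen kernel. The round-robin policy and the class name are assumptions; the patent's scheduler levels and arbitration policies may differ.

```python
from collections import deque

class ComputeUnitScheduler:
    def __init__(self):
        self.kernel_order = deque()        # round-robin order of kernel ids
        self.wavefronts = {}               # kernel id -> ready wavefronts

    def enqueue(self, kernel_id, wavefront):
        if kernel_id not in self.wavefronts:
            self.wavefronts[kernel_id] = deque()
            self.kernel_order.append(kernel_id)
        self.wavefronts[kernel_id].append(wavefront)

    def pick_next(self):
        # level 1: pick the next kernel with ready work (round-robin);
        # level 2: pick that kernel's oldest ready wavefront
        for _ in range(len(self.kernel_order)):
            kernel_id = self.kernel_order[0]
            self.kernel_order.rotate(-1)
            if self.wavefronts[kernel_id]:
                return kernel_id, self.wavefronts[kernel_id].popleft()
        return None

sched = ComputeUnitScheduler()
sched.enqueue("kernel0", "wf0"); sched.enqueue("kernel1", "wf1")
print(sched.pick_next())   # ('kernel0', 'wf0')
print(sched.pick_next())   # ('kernel1', 'wf1')
```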
Address mapping-aware tasking mechanism
Granted: September 24, 2024
Patent Number:
12099866
An Address Mapping-Aware Tasking (AMAT) mechanism manages compute task data and issues compute tasks on behalf of threads that created the compute task data. The AMAT mechanism stores compute task data generated by host threads in a set of partitions, where each partition is designated for a particular memory module. The AMAT mechanism maintains address mapping data that maps address information to partitions. Threads push compute task data to the AMAT mechanism instead of generating and…
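A minimal sketch of the partition-per-memory-module idea described above; the address-to-module mapping used here is a hypothetical modulo scheme standing in for the maintained address mapping data.

```python
class AMAT:
    def __init__(self, num_memory_modules):
        # one partition of pending compute-task data per memory module
        self.partitions = [[] for _ in range(num_memory_modules)]

    def module_for_address(self, address):
        # stand-in for the address mapping data kept by the mechanism
        return (address >> 12) % len(self.partitions)

    def push_task(self, address, task_data):
        # a host thread pushes task data instead of issuing the task itself
        self.partitions[self.module_for_address(address)].append(task_data)

amat = AMAT(num_memory_modules=4)
amat.push_task(0x0000_3000, "task-A")
amat.push_task(0x0001_6000, "task-B")
print([len(p) for p in amat.partitions])   # tasks land in module-specific partitions
```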
FPGA-based programmable data analysis and compression front end for GPU
Granted: September 24, 2024
Patent Number:
12099789
Methods, devices, and systems for information communication. Information transmitted from a host to a graphics processing unit (GPU) is received by information analysis circuitry of a field-programmable gate array (FPGA). A pattern in the information is determined by the information analysis circuitry. A predicted information pattern is determined, by the information analysis circuitry, based on the information. An indication of the predicted information pattern is transmitted to the…
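A minimal sketch of the analyze-then-predict flow; a constant-stride address pattern is used as one hypothetical example of "a pattern in the information", and both function names are invented for illustration.

```python
def detect_stride(addresses):
    # pattern detection: constant stride between successive host transfers
    strides = {b - a for a, b in zip(addresses, addresses[1:])}
    return strides.pop() if len(strides) == 1 else None

def predict_next(addresses):
    # predicted information pattern: the next address the host will send
    stride = detect_stride(addresses)
    return addresses[-1] + stride if stride is not None else None

history = [0x1000, 0x1040, 0x1080, 0x10C0]
print(hex(predict_next(history)))   # 0x1100, the indication forwarded toward the GPU
```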
Tag and data configuration for fine-grained cache memory
Granted: September 24, 2024
Patent Number:
12099723
A method is provided for operating a memory having a plurality of banks accessible in parallel, each bank including a plurality of grains accessible in parallel. The method includes: based on a memory access request that specifies a memory address, identifying a set that stores data for the memory access request, wherein the set is spread across multiple grains of the plurality of grains; and performing operations to satisfy the memory access request, using entries of the set stored…
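A minimal sketch of locating a set that is spread across grains; the bank/grain counts and the bit-field split of the address are hypothetical, chosen only to illustrate the idea.

```python
NUM_BANKS, GRAINS_PER_BANK, SETS_PER_GRAIN = 4, 8, 64

def locate(address):
    # derive the bank, the grains holding the set, and the set index
    set_index = (address >> 6) % SETS_PER_GRAIN
    bank = (address >> 12) % NUM_BANKS
    # the set's entries live in several grains that can be accessed in parallel
    grains = [(set_index + i) % GRAINS_PER_BANK for i in range(2)]
    return bank, grains, set_index

print(locate(0x0004_2FC0))   # (bank, [grain, grain], set index) for this address
```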
Re-reference interval prediction (RRIP) with pseudo-LRU supplemental age information
Granted: September 24, 2024
Patent Number:
12099451
Systems and methods for cache replacement are disclosed. Techniques are described that determine a re-reference interval prediction (RRIP) value of respective data blocks in a cache, where an RRIP value represents a likelihood that a respective data block will be re-used within a time interval. Upon an access, by a processor, to a data segment in a memory, if the data segment is not stored in the cache, a data block in the cache to be replaced by the data segment is selected, utilizing a…
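A minimal sketch of victim selection by RRIP value with a pseudo-LRU-style age as a tiebreaker; the exact rule for combining the two values is an assumption, not the patent's formula.

```python
def choose_victim(ways):
    # ways: list of (rrip, age) per way; higher rrip = less likely to be
    # re-referenced soon, higher age = older under the supplemental pLRU info
    return max(range(len(ways)), key=lambda w: (ways[w][0], ways[w][1]))

ways = [(1, 0), (3, 2), (3, 5), (0, 1)]   # ways 1 and 2 tie on RRIP value
print(choose_victim(ways))                # 2: the older of the two candidates
```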
Cost-saving scheme for scan testing of 3D stack die
Granted: September 24, 2024
Patent Number:
12099091
A system and method for efficiently routing scan data between two dies used in three-dimensional packaging are described. In various implementations, a computing system includes at least a first semiconductor die (or first die) and a second die connected to one another within a three-dimensional (3D) package. The first die and the second die have multiple non-scan input/output (I/O) data channels between them for data transfer. The non-scan I/O data channels are partitioned into groups.…
Semiconductor chip with redundant thru-silicon-vias
Granted: September 17, 2024
Patent Number:
12094853
A semiconductor chip with conductive vias and a method of manufacturing the same are disclosed. The method includes forming a first plurality of conductive vias in a layer of a first semiconductor chip. The first plurality of conductive vias includes first ends and second ends. A first conductor pad is formed in ohmic contact with the first ends of the first plurality of conductive vias.
Shared data fabric processing client reset system and method
Granted: September 17, 2024
Patent Number:
12093689
A processing system that includes a shared data fabric resets a first client processor while operating a second client processor. The first client processor is instructed to stop making requests to one or more devices of the shared data fabric. Status communications are blocked between the first client processor and a memory controller, the second client processor, or both, such that the first client processor enters a temporary offline state. The first client processor is indicated as…
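A minimal sketch of the reset sequence described above; the class and method names are hypothetical, and the real fabric handshake is far more involved than this ordering of steps.

```python
class DataFabric:
    def reset_client(self, client, other_clients):
        client.stop_new_requests()          # quiesce the client's outbound requests
        self.block_status_traffic(client)   # isolate it from the memory controller
        client.offline = True               # temporary offline state
        client.reset()                      # reset while the other clients keep running
        assert all(not other.offline for other in other_clients)
        self.unblock_status_traffic(client)
        client.offline = False

    def block_status_traffic(self, client):   print(f"blocking {client.name}")
    def unblock_status_traffic(self, client): print(f"unblocking {client.name}")

class Client:
    def __init__(self, name): self.name, self.offline = name, False
    def stop_new_requests(self):  print(f"{self.name}: draining requests")
    def reset(self):              print(f"{self.name}: reset")

DataFabric().reset_client(Client("gpu0"), other_clients=[Client("cpu0")])
```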
Allocation control for cache
Granted: September 17, 2024
Patent Number:
12093181
A technique for operating a cache is disclosed. The technique includes: based on a workload change, identifying a first allocation permissions policy; operating the cache according to the first allocation permissions policy; based on set sampling, identifying a second allocation permissions policy; and operating the cache according to the second allocation permissions policy.
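A minimal sketch of the two policy triggers described above; the policy names and the miss-rate heuristic used for set sampling are hypothetical.

```python
class Cache:
    def __init__(self):
        self.policy = "allocate_all"

    def on_workload_change(self, workload):
        # first trigger: a workload change selects an allocation permissions policy
        self.policy = "no_allocate_streaming" if workload == "streaming" else "allocate_all"

    def on_set_sample(self, sampled_miss_rate):
        # second trigger: set sampling re-evaluates the policy while running
        if sampled_miss_rate > 0.5:
            self.policy = "allocate_all"

cache = Cache()
cache.on_workload_change("streaming"); print(cache.policy)   # no_allocate_streaming
cache.on_set_sample(sampled_miss_rate=0.7); print(cache.policy)  # allocate_all
```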
Multi-level signal reception
Granted: September 17, 2024
Patent Number:
12093124
A method for receiving a multi-level error signal having more than two logic levels includes oversampling the multi-level error signal to provide sampled symbols, wherein a first level of the multi-level error signal indicates no error, and second and third levels of the multi-level error signal indicate first and second error conditions, respectively. The sampled symbols are de-serialized to provide sets of symbols. A start of a symbol period is determined in response to detecting that…
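A minimal sketch of interpreting an oversampled three-level error signal; the level encodings, the majority-vote grouping, and the framing rule are hypothetical simplifications of the described method.

```python
NO_ERROR, ERR_A, ERR_B = 0, 1, 2

def deserialize(samples, oversample):
    # group oversampled values into symbols, majority-voting each group
    symbols = []
    for i in range(0, len(samples), oversample):
        group = samples[i:i + oversample]
        symbols.append(max(set(group), key=group.count))
    return symbols

def find_symbol_start(symbols):
    # treat the first non-idle symbol as the start of a symbol period
    for i, s in enumerate(symbols):
        if s != NO_ERROR:
            return i
    return None

samples = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]
symbols = deserialize(samples, oversample=4)
print(symbols, find_symbol_start(symbols))   # [0, 1, 2] 1
```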
Using a hardware-based controller for power state management
Granted: September 10, 2024
Patent Number:
12086009
Methods and systems are disclosed for transitioning, by a hardware-based controller, a system on a chip (SoC) into different power states. Techniques disclosed include tracking, by the controller, metrics associated with the SoC and transitioning, by the controller, the SoC from a first power state to a second power state based on the tracked metrics. Where the total amount of power that is used by at least a portion of the transition between the first power state and the second power…
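A minimal sketch of a metric-driven transition decision, assuming idle-cycle tracking and a comparison of transition energy against expected savings; the metric names and thresholds are hypothetical.

```python
def should_enter_low_power(idle_cycles, transition_energy_mj, expected_savings_mj):
    # transition only if the SoC looks idle and the energy spent on the
    # transition itself is outweighed by the expected savings
    return idle_cycles > 10_000 and expected_savings_mj > transition_energy_mj

print(should_enter_low_power(idle_cycles=50_000,
                             transition_energy_mj=2.0,
                             expected_savings_mj=15.0))   # True
```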
Graphics processing unit with selective two-level binning
Granted: September 10, 2024
Patent Number:
12086899
Systems and methods related to run-time selection of a render mode in which to execute command buffers with a graphics processing unit (GPU) of a device based on performance data corresponding to the device are provided. A user mode driver (UMD) or kernel mode driver (KMD) executed at a central processing unit (CPU) selects a binning mode based on whether performance data that includes sensor data or performance counter data indicates that an associated binning condition or override…
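A minimal sketch of a driver-side binning-mode choice; the condition names, the thresholds, and the mode labels are hypothetical, not the driver's actual heuristics.

```python
def select_binning_mode(perf, override=None):
    # an override from the UMD/KMD wins; otherwise inspect sensors and counters
    if override is not None:
        return override
    if perf["temperature_c"] > 90 or perf["memory_bandwidth_util"] > 0.85:
        return "two_level_binning"     # reduce bandwidth and thermal pressure
    return "single_pass"

print(select_binning_mode({"temperature_c": 95, "memory_bandwidth_util": 0.4}))
print(select_binning_mode({"temperature_c": 60, "memory_bandwidth_util": 0.3}))
```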
Efficient memory-semantic networking using scoped memory models
Granted: September 10, 2024
Patent Number:
12086422
A framework disclosed herein extends a relaxed, scoped memory model to a system that includes nodes across a commodity network and maintains coherency across the system. A new scope, cluster scope, is defined, that allows for memory accesses at scopes less than cluster scope to operate on locally cached versions of remote data from across the commodity network without having to issue expensive network operations. Cluster scope operations generate network commands that are used to…
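A minimal sketch of the scope check described above; the scope ordering, the release() helper, and its writeback behavior are hypothetical illustrations of keeping sub-cluster accesses local.

```python
from enum import IntEnum

class Scope(IntEnum):
    WORKGROUP = 0
    DEVICE = 1
    CLUSTER = 2          # new scope spanning nodes on the commodity network

def release(scope, dirty_remote_lines, send_network_command):
    # sub-cluster scopes may operate on locally cached copies of remote data;
    # only a cluster-scope release generates network commands to push them back
    if scope >= Scope.CLUSTER:
        for line in dirty_remote_lines:
            send_network_command("writeback", line)

release(Scope.DEVICE, [0x100], lambda op, line: print(op, hex(line)))   # no network traffic
release(Scope.CLUSTER, [0x100], lambda op, line: print(op, hex(line)))  # writeback 0x100
```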
Memory sprinting
Granted: September 10, 2024
Patent Number:
12086418
A memory sprint controller, responsive to an indicator of an irregular memory access phase, causes a memory controller to enter a sprint mode in which it temporarily adjusts at least one timing parameter of a dynamic random access memory (DRAM) to reduce a time in which a designated number of activate (ACT) commands are allowed to be dispatched to the DRAM.
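A minimal sketch of the sprint-mode idea, using tFAW (the window in which four ACT commands may be dispatched) as the adjusted timing parameter; the parameter choice and the nanosecond values are assumptions, not the patent's figures.

```python
class MemoryController:
    def __init__(self):
        self.t_faw_ns = 30        # normal four-activate window
        self.sprinting = False

    def on_phase_indicator(self, irregular_access_phase):
        if irregular_access_phase and not self.sprinting:
            self.sprinting = True
            self.t_faw_ns = 20    # shorter window: the ACT budget refreshes sooner
        elif not irregular_access_phase and self.sprinting:
            self.sprinting = False
            self.t_faw_ns = 30    # restore the normal timing parameter

mc = MemoryController()
mc.on_phase_indicator(irregular_access_phase=True)
print(mc.sprinting, mc.t_faw_ns)   # True 20
```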