AMD Patent Applications

ERROR PIN TRAINING WITH GRAPHICS DDR MEMORY

Published: May 11, 2023
Application Number: 20230146703
A receiver is trained for receiving a signal over a data bus. A volatile memory is commanded over the data bus to place a selected pulse-amplitude modulation (PAM) driver in a mode with a designated steady output level. At a receiver circuit coupled to the selected PAM driver, a respective reference voltage associated with the designated steady output level is swept through a range of voltages and the respective reference voltage is compared to a voltage received from the PAM driver to…
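
To picture the sweep-and-compare step, the following is a minimal sketch assuming a hypothetical read_comparator callback that models the receiver comparing the steady PAM output level against a candidate reference voltage; the voltage range, step size, and centring rule are illustrative, not the patented procedure.

    def train_reference_voltage(read_comparator, v_min=0.0, v_max=1.2, step=0.01):
        """Sweep a candidate reference voltage and record where the comparator
        output flips, approximating the driver's steady output level."""
        last_state, crossings, v = None, [], v_min
        while v <= v_max:
            # read_comparator(v) models the receiver comparing the incoming
            # steady PAM level against the candidate reference voltage v.
            state = read_comparator(v)
            if last_state is not None and state != last_state:
                crossings.append(v)
            last_state = state
            v += step
        # Centre the trained reference across the observed crossings.
        return sum(crossings) / len(crossings) if crossings else None

    # Simulated driver holding a steady 0.45 V output level:
    trained = train_reference_voltage(lambda v: v > 0.45)
    print(f"trained Vref is about {trained:.2f} V")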

SECURE TESTING MODE

Published: May 11, 2023
Application Number: 20230146154
A technique for operating a processing device is disclosed. The method includes irreversibly activating a testing mode switch of the processing device; in response to the activating, entering a testing mode in which normal operation of the processing device is disabled; receiving software for the processing device in the testing mode; based on whether the software is verified as testing mode-signed software, executing or not executing the software.
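
A rough sketch of the gating logic the abstract describes, with the fuse, the signature check, and the load hook as hypothetical stand-ins rather than any real secure-boot interface:

    class TestingModeController:
        def __init__(self):
            self._test_fuse_blown = False   # models the irreversible switch
            self._normal_operation = True

        def activate_testing_mode(self):
            # Blowing the fuse cannot be undone; normal operation is disabled.
            self._test_fuse_blown = True
            self._normal_operation = False

        def load_software(self, image, is_testing_mode_signed):
            if not self._test_fuse_blown:
                raise RuntimeError("testing mode not active")
            # Only software verified as testing-mode-signed may execute.
            if is_testing_mode_signed(image):
                return f"executing {image!r}"
            return f"rejecting {image!r}"

    ctrl = TestingModeController()
    ctrl.activate_testing_mode()
    print(ctrl.load_software("diag.bin", lambda img: img.endswith(".bin")))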

FLEXIBLE CIRCUIT FOR DROOP DETECTION

Published: May 11, 2023
Application Number: 20230145626
A power supply monitor includes a delta-sigma modulator including an input receiving a binary number and an output providing a pulse-density modulated signal, the delta-sigma modulator operable to scale the pulse-density modulated signal based on the binary number. A fast droop detector circuit includes a level shifter providing the modulated signal referenced to a clean supply voltage. A lowpass filter is coupled between the level shifter and a comparator. The comparator produces a…
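
As a behavioural (not circuit-level) sketch, the following models the named pieces: a first-order delta-sigma modulator whose pulse density tracks a binary code, a single-pole low-pass filter standing in for the RC filter, and a comparator; the 8-bit scaling and the sample values are assumptions.

    def delta_sigma_pdm(code, n_bits=8, length=256):
        """First-order modulator: the pulse density of the output stream
        equals code / 2**n_bits."""
        acc, full = 0, 1 << n_bits
        for _ in range(length):
            acc += code
            if acc >= full:
                acc -= full
                yield 1
            else:
                yield 0

    def low_pass(bits, alpha=0.05):
        """Single-pole IIR low-pass, standing in for the analog filter."""
        y = 0.0
        for b in bits:
            y += alpha * (b - y)
        return y

    threshold_code = 160                        # binary number setting the trip point
    reference = low_pass(delta_sigma_pdm(threshold_code))
    supply_sample = 0.55                        # normalised monitored supply level
    droop_detected = supply_sample < reference  # comparator output
    print(round(reference, 3), droop_detected)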

REDUCING LATENCY IN HIGHLY SCALABLE HPC APPLICATIONS VIA ACCELERATOR-RESIDENT RUNTIME MANAGEMENT

Published: May 11, 2023
Application Number: 20230145253
Methods and systems for runtime management by an accelerator-resident manager. Techniques include receiving, by the manager, a representation of a processing flow of an application, including a plurality of kernels and respective dependencies. The manager then assigns the plurality of kernels to one or more APUs managed by it and launches the plurality of kernels on their assigned APUs to run in an iteration according to the respective dependencies.
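
One way to sketch the manager's assign-and-launch step, assuming kernels are plain callables, dependencies form a DAG, and APUs are identified by index; the round-robin assignment and the launch callback are illustrative choices, not the claimed method:

    from graphlib import TopologicalSorter

    def run_iteration(kernels, deps, num_apus, launch):
        """kernels: name -> callable; deps: name -> set of prerequisite names;
        launch(apu, name, kernel) dispatches one kernel to one APU."""
        # Assign kernels to APUs (round-robin stands in for the manager's policy).
        assignment = {name: i % num_apus for i, name in enumerate(kernels)}
        # Launch in an order that respects the declared dependencies.
        for name in TopologicalSorter(deps).static_order():
            launch(assignment[name], name, kernels[name])

    run_iteration(
        kernels={"a": lambda: None, "b": lambda: None, "c": lambda: None},
        deps={"b": {"a"}, "c": {"a", "b"}},
        num_apus=2,
        launch=lambda apu, name, k: print(f"APU{apu} runs kernel {name}"),
    )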

PERFORMANCE MANAGEMENT DURING POWER SUPPLY VOLTAGE DROOP

Published: May 11, 2023
Application Number: 20230144770
A method for controlling a data processing system includes detecting a droop in a power supply voltage of a functional circuit of the data processing system greater than a programmable droop threshold. An operation of the data processing system is throttled according to a programmable step size, a programmable assertion time, and a programmable de-assertion time in response to detecting the droop.
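
The interaction of the programmable threshold, step size, assertion time, and de-assertion time can be sketched as a simple per-tick throttle model; the sample droop values and the tick granularity below are invented for illustration.

    def throttle_trace(droop_samples, droop_threshold, step_size,
                       assert_ticks, deassert_ticks):
        """Return per-tick throttle levels for a stream of droop magnitudes."""
        level, assert_timer, relax_timer, trace = 0, 0, 0, []
        for droop in droop_samples:
            if droop > droop_threshold:
                level += step_size             # step the throttle up on a droop
                assert_timer = assert_ticks    # hold it for the assertion time
            elif assert_timer > 0:
                assert_timer -= 1
            elif level > 0:
                relax_timer += 1
                if relax_timer >= deassert_ticks:
                    level -= step_size         # step down after the de-assertion time
                    relax_timer = 0
            trace.append(level)
        return trace

    print(throttle_trace([0, 5, 9, 2, 0, 0, 0, 0], droop_threshold=4,
                         step_size=2, assert_ticks=2, deassert_ticks=1))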

GAMING SUPER RESOLUTION

Published: May 4, 2023
Application Number: 20230140100
A processing device is provided which includes memory and a processor. The processor is configured to receive an input image having a first resolution, generate at least one linear down-sampled version of the input image via a linear upscaling network, generate at least one non-linear down-sampled version of the input image via a non-linear upscaling network, extract a first feature map from the at least one linear down-sampled version of the input image, and extract a second feature map…
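
A structural sketch of the two-branch idea, using NumPy stand-ins for the networks; the average-pooling down-sample, the ReLU-style activation, and the feature-map shapes are assumptions, not the patented architecture.

    import numpy as np

    def linear_branch(image, scale=2):
        """Linear path: average-pooling down-sample, no non-linearity."""
        h = image.shape[0] // scale * scale
        w = image.shape[1] // scale * scale
        x = image[:h, :w]
        return x.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

    def nonlinear_branch(image, scale=2):
        """Non-linear path: the same down-sample followed by a ReLU-style activation."""
        return np.maximum(linear_branch(image, scale) - 0.5, 0.0)

    image = np.random.rand(64, 64).astype(np.float32)  # input at the first resolution
    feat_linear = linear_branch(image)                 # first feature map (linear branch)
    feat_nonlinear = nonlinear_branch(image)           # second feature map (non-linear branch)
    print(feat_linear.shape, feat_nonlinear.shape)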

CACHE LINE COHERENCE STATE DOWNGRADE

Published: May 4, 2023
Application Number: 20230138518
Techniques for performing cache operations are provided. The techniques include, for a memory access class, detecting a threshold number of instances in which cache lines in an exclusive state in a cache are changed to an invalid state or a shared state without being in a modified state; in response to the detecting, treating first coherence state agnostic requests for cache lines for the memory access class as requests for cache lines in a shared state; detecting a reset event for the…
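
The downgrade heuristic can be sketched as a per-class counter; the class names, the threshold value, and the method names below are illustrative assumptions.

    class DowngradeTracker:
        def __init__(self, threshold=8):
            self.threshold = threshold
            self.unused_exclusive = {}        # per memory-access-class counters

        def on_line_dropped(self, access_class, was_modified):
            """Call when an Exclusive line becomes Invalid or Shared."""
            if not was_modified:
                self.unused_exclusive[access_class] = \
                    self.unused_exclusive.get(access_class, 0) + 1

        def request_state(self, access_class):
            """State to request for a coherence-state-agnostic access."""
            if self.unused_exclusive.get(access_class, 0) >= self.threshold:
                return "SHARED"               # exclusivity went unused; downgrade
            return "EXCLUSIVE"

        def on_reset_event(self, access_class):
            self.unused_exclusive[access_class] = 0

    tracker = DowngradeTracker(threshold=2)
    tracker.on_line_dropped("streaming-read", was_modified=False)
    tracker.on_line_dropped("streaming-read", was_modified=False)
    print(tracker.request_state("streaming-read"))    # -> SHARED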

HISTORY-BASED SELECTIVE CACHE LINE INVALIDATION REQUESTS

Published: May 4, 2023
Application Number: 20230137467
Techniques for performing cache operations are provided. The techniques include recording an indication that providing exclusive access of a first cache line to a first processor is deemed problematic; detecting speculative execution of a store instruction by the first processor to the first cache line; and in response to the detecting, refusing to provide exclusive access of the first cache line to the first processor, based on the indication.
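
A small sketch of the history check, with the set of problematic lines and the speculative-store hook as hypothetical names.

    class ExclusiveAccessFilter:
        def __init__(self):
            self.problematic_lines = set()    # addresses flagged by prior history

        def mark_problematic(self, line_addr):
            """Record that granting this line exclusively has caused trouble."""
            self.problematic_lines.add(line_addr)

        def on_speculative_store(self, cpu, line_addr):
            """Return whether the requesting CPU may take the line exclusive."""
            return line_addr not in self.problematic_lines

    f = ExclusiveAccessFilter()
    f.mark_problematic(0x4000)
    print(f.on_speculative_store(cpu=0, line_addr=0x4000))   # -> False (refused)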

CACHE LINE COHERENCE STATE UPGRADE

Published: May 4, 2023
Application Number: 20230136114
Techniques for performing cache operations are provided. The techniques include recording an entry indicating that a cache line is exclusive-upgradeable; removing the cache line from a cache; and converting a request to insert the cache line into the cache into a request to insert the cache line in the cache in an exclusive state.
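
The upgrade path reads roughly like the following sketch, with invented helper names; a real implementation would sit in the cache controller rather than in software.

    class UpgradePredictor:
        def __init__(self):
            self.exclusive_upgradeable = set()

        def on_evict(self, line_addr, was_upgraded_to_exclusive):
            """Remember lines that previously ended up needing exclusive access."""
            if was_upgraded_to_exclusive:
                self.exclusive_upgradeable.add(line_addr)

        def insert_state(self, line_addr, requested_state="SHARED"):
            """Convert a plain insert into an exclusive insert when predicted."""
            if line_addr in self.exclusive_upgradeable:
                return "EXCLUSIVE"
            return requested_state

    p = UpgradePredictor()
    p.on_evict(0x8000, was_upgraded_to_exclusive=True)
    print(p.insert_state(0x8000))   # -> EXCLUSIVE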

DYNAMIC RANDOM-ACCESS MEMORY (DRAM) TRAINING ACCELERATION

Published: April 27, 2023
Application Number: 20230132306
A method for performing read training of a memory channel includes writing a data pattern to a memory using a data bus having a predetermined number of bit lanes. An edge of a read data eye is determined individually for each bit lane by reading the data pattern over the data bus using a read burst cycle having a predetermined length, grouping data received on each bit lane over the read burst cycle to form a bit lane data group, and comparing the bit lane data group to corresponding…
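
The per-lane grouping and comparison can be sketched as below, assuming a hypothetical read_burst callback, an 8-lane bus, and a 16-beat burst; the delay sweep stands in for locating the edge of the read data eye.

    def find_eye_edges(read_burst, expected, lanes=8, delays=range(64)):
        """For each bit lane, return the first delay setting at which the data
        read over a whole burst matches the expected pattern for that lane."""
        edges = [None] * lanes
        for delay in delays:
            beats = read_burst(delay)    # one burst: a list of beats, each `lanes` bits wide
            for lane in range(lanes):
                if edges[lane] is not None:
                    continue
                # Group the bits received on this lane across the burst...
                lane_group = [(beat >> lane) & 1 for beat in beats]
                # ...and compare the group against the expected lane data.
                if lane_group == [(beat >> lane) & 1 for beat in expected]:
                    edges[lane] = delay
        return edges

    expected = [0xAA, 0x55] * 8                              # 16-beat training pattern
    read_burst = lambda d: expected if d >= 12 else [0] * 16
    print(find_eye_edges(read_burst, expected))              # -> [12, 12, ..., 12]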

ERROR RECOVERY FOR NON-VOLATILE MEMORY MODULES

Published: April 27, 2023
Application Number: 20230125792
A memory controller includes a command queue, a memory interface queue, at least one storage queue, and a replay control circuit. The command queue has a first input for receiving memory access commands. The memory interface queue receives commands selected from the command queue and couples to a heterogeneous memory channel which is coupled to at least one non-volatile storage class memory (SCM) module. The at least one storage queue stores memory access commands that are placed in the…
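
A conceptual sketch only: the queue names below mirror the abstract, but the error signal and replay trigger are invented for this example.

    from collections import deque

    class ReplayController:
        def __init__(self):
            self.command_queue = deque()      # incoming memory access commands
            self.interface_queue = deque()    # commands issued to the SCM channel
            self.storage_queue = deque()      # copies kept for possible replay

        def issue(self):
            """Move one command to the memory interface, keeping a replay copy."""
            cmd = self.command_queue.popleft()
            self.interface_queue.append(cmd)
            self.storage_queue.append(cmd)

        def complete(self, error=False):
            """Retire the oldest issued command, or requeue it on an error."""
            cmd = self.interface_queue.popleft()
            if error:
                self.interface_queue.appendleft(cmd)   # replay the command
            else:
                self.storage_queue.remove(cmd)

    ctl = ReplayController()
    ctl.command_queue.append("READ 0x100")
    ctl.issue()
    ctl.complete(error=True)                  # the command stays queued for replay
    print(list(ctl.interface_queue))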

MULTI-ADAPTIVE CACHE REPLACEMENT POLICY

Published: April 6, 2023
Application Number: 20230109344
Techniques for performing cache operations are provided. The techniques include tracking performance events for a plurality of test sets of a cache, detecting a replacement policy change trigger event associated with a test set of the plurality of test sets, and in response to the replacement policy change trigger event, operating non-test sets of the cache according to a replacement policy associated with the test set.
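
The test-set mechanism resembles set dueling and can be sketched as follows; the policy names, miss counters, and trigger condition are assumptions rather than the patented design.

    class AdaptiveReplacement:
        def __init__(self, trigger=100):
            self.test_sets = {"LRU": 0, "RRIP": 0}    # misses seen per test set
            self.active_policy = "LRU"                # policy used by non-test sets
            self.trigger = trigger

        def record_miss(self, policy):
            self.test_sets[policy] += 1
            # Replacement-policy change trigger: one test set clearly loses.
            if self.test_sets[policy] >= self.trigger:
                self.active_policy = min(self.test_sets, key=self.test_sets.get)
                self.test_sets = {p: 0 for p in self.test_sets}

    adaptive = AdaptiveReplacement(trigger=3)
    for _ in range(3):
        adaptive.record_miss("LRU")
    print(adaptive.active_policy)   # -> RRIP (the test set with fewer misses)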

CACHE MISS PREDICTOR

Published: April 6, 2023
Application Number: 20230108964
Methods, devices, and systems for retrieving information based on cache miss prediction. A prediction that a cache lookup for the information will miss a cache is made based on a history table. The cache lookup for the information is performed based on the request. A main memory fetch for the information is begun before the cache lookup completes, based on the prediction that the cache lookup for the information will miss the cache. In some implementations, the prediction includes…
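
A sketch of the prediction flow, with the history-table indexing and the fetch and lookup callbacks as placeholders, not any particular microarchitecture's design.

    def access(addr, history_table, cache_lookup, fetch_from_memory):
        """Start the main-memory fetch early when history predicts a cache miss."""
        predicted_miss = history_table.get(addr, 0) > 0
        early_fetch = fetch_from_memory(addr) if predicted_miss else None

        hit, data = cache_lookup(addr)        # the cache lookup proceeds regardless
        if hit:
            return data
        history_table[addr] = history_table.get(addr, 0) + 1
        # On a real miss, use the early fetch if one was started, else start it now.
        return early_fetch if early_fetch is not None else fetch_from_memory(addr)

    history = {}
    lookup = lambda a: (False, None)                 # always misses in this toy run
    fetch = lambda a: f"data@{a:#x}"
    print(access(0x40, history, lookup, fetch))      # miss recorded in the history table
    print(access(0x40, history, lookup, fetch))      # fetch begun before the lookup completes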

CACHE ALLOCATION POLICY

Published: April 6, 2023
Application Number: 20230105709
A cache includes an upstream port, a downstream port, a cache memory, and a control circuit. The control circuit temporarily stores memory access requests received from the upstream port, and checks for dependencies for a new memory access request with older memory access requests temporarily stored therein. If one of the older memory access requests creates a false dependency with the new memory access request, the control circuit drops an allocation of a cache line to the cache memory…
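
The dependency check on incoming requests might be sketched as below; modelling a false dependency as a same-address, different-stream match is an assumption made only to keep the example concrete.

    class AllocationFilter:
        def __init__(self):
            self.pending = []                 # temporarily stored older requests

        def submit(self, addr, stream_id):
            # Check the new request against the older pending requests.
            false_dep = any(a == addr and s != stream_id for a, s in self.pending)
            self.pending.append((addr, stream_id))
            # On a false dependency, skip allocating a cache line for this request.
            return "bypass-allocation" if false_dep else "allocate-line"

    f = AllocationFilter()
    print(f.submit(0x200, stream_id=1))   # -> allocate-line
    print(f.submit(0x200, stream_id=2))   # -> bypass-allocation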

ACCELERATION STRUCTURES WITH DELTA INSTANCES

Published: March 30, 2023
Application Number: 20230097562
Described herein is a technique for performing ray tracing operations. The technique includes encountering, at a non-leaf node, a pointer to a bottom-level acceleration structure having one or more delta instances; identifying an index associated with the pointer, wherein the index identifies an instance within the bottom-level acceleration structure; and obtaining data for the instance based on the pointer and the index.
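
The pointer-plus-index lookup can be sketched as a data-layout example; treating a delta instance as storing only the fields that differ from a base instance is an assumption used to keep the sketch self-contained.

    BASE_INSTANCES = {
        0x10: {"transform": (1.0, 0.0, 0.0), "material": "steel"},
    }

    DELTA_INSTANCES = {
        # BLAS pointer -> list of per-instance deltas relative to the base
        0x10: [{"transform": (1.0, 2.0, 0.0)}, {"material": "glass"}],
    }

    def fetch_instance(blas_pointer, index):
        """Resolve an instance inside a bottom-level acceleration structure by
        applying the indexed delta on top of the base instance data."""
        data = dict(BASE_INSTANCES[blas_pointer])
        data.update(DELTA_INSTANCES[blas_pointer][index])
        return data

    print(fetch_instance(0x10, 1))   # the base instance with its material overridden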

STORING AN INDICATION OF A SPECIFIC DATA PATTERN IN SPARE DIRECTORY ENTRIES

Published: March 30, 2023
Application Number: 20230099256
A system and method for omission of probes when requesting data stored in memory where the omission includes creating a coherence directory entry, determining whether cache line data for the coherence directory entry is a trackable pattern, and setting an indication indicating that one or more reads for the cache line data can be serviced without sending probes. A system and method for providing extra data storage capacity in a coherence directory where the extra data storage capacity…
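
A toy sketch of the probe-omission check, using an all-zeros line as the trackable pattern; the entry fields and helper names are examples, not the full mechanism.

    class DirectoryEntry:
        def __init__(self, line_data):
            # A trackable pattern (all zeros here) can be reproduced without
            # probing any cache, so remember that fact in the entry.
            self.pattern_tracked = all(b == 0 for b in line_data)

        def service_read(self, send_probes):
            if self.pattern_tracked:
                return bytes(64)              # serve the known pattern, no probes sent
            return send_probes()

    entry = DirectoryEntry(bytes(64))
    data = entry.service_read(send_probes=lambda: b"<probed data>")
    print(len(data), "bytes served without sending probes")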