AMD Patent Applications

Leveraging Processing in Memory Registers as Victim Buffers

Published: June 27, 2024
Publication Number: 20240211393
In accordance with the described techniques for leveraging processing in memory registers as victim buffers, a computing device includes a memory, a processing in memory component having registers for data storage, and a memory controller having a victim address table that includes at least one address of a row of the memory that is stored in the registers. The memory controller receives a request to access the row of the memory and accesses data of the row from the registers based on…
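The lookup described above can be sketched in a few lines of Python. This is a minimal illustration, not the patented design: the class, field names, and eviction policy are all assumptions made for clarity.

```python
# Hypothetical sketch: a memory controller keeps a victim address table
# mapping evicted DRAM row addresses to processing-in-memory (PIM)
# registers, and serves hits from those registers instead of reopening
# the row in memory.

class PIMVictimController:
    def __init__(self, dram):
        self.dram = dram                 # row address -> row data
        self.pim_registers = {}          # register id -> row data
        self.victim_table = {}           # row address -> register id
        self.next_reg = 0

    def evict_row_to_pim(self, row_addr):
        """Stash a victim row in a PIM register and record it in the table."""
        reg = self.next_reg
        self.next_reg += 1
        self.pim_registers[reg] = self.dram[row_addr]
        self.victim_table[row_addr] = reg

    def read(self, row_addr):
        """On a victim-table hit, access the row data from the registers."""
        if row_addr in self.victim_table:
            return self.pim_registers[self.victim_table[row_addr]]
        return self.dram[row_addr]
```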

DEVICES, SYSTEMS, AND METHODS FOR INJECTING FABRICATED ERRORS INTO MACHINE CHECK ARCHITECTURES

Published: June 27, 2024
Publication Number: 20240211362
An exemplary system includes and/or represents an agent and a machine check architecture. In one example, the machine check architecture includes and/or represents at least one circuit configured to report errors via at least one reporting register. In this example, the machine check architecture also includes and/or represents at least one error-injection register configured to cause the circuit to inject at least one fabricated error report into the reporting register in response to a…
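The injection mechanism can be modeled with a small sketch. The register names and error format here are illustrative assumptions, not the actual machine check architecture layout.

```python
# Hedged model: a machine-check bank exposes a reporting register plus an
# error-injection register; writing a fabricated error descriptor to the
# injection register causes the bank to post it to the reporting register
# as if hardware had detected a real error.

class MachineCheckBank:
    def __init__(self):
        self.reporting_register = None   # last reported error, if any
        self.injection_register = None

    def inject(self, fabricated_error):
        """An agent writes the injection register to fabricate an error."""
        self.injection_register = fabricated_error
        # The bank reacts by reporting the fabricated error report.
        self.reporting_register = fabricated_error

    def read_report(self):
        return self.reporting_register
```

This kind of model is what makes error-handling firmware testable without physically provoking hardware faults.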

PERFORMANCE OF BANK REFRESH

Published: June 27, 2024
Publication Number: 20240211173
A memory controller includes an arbiter. The arbiter is configured to elevate a priority of memory access requests that generate row activate commands in response to receiving a same-bank refresh request, and to send a same-bank refresh command in response to receiving the same-bank refresh request.
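The arbitration rule can be sketched as follows; the request format and the `REFsb` mnemonic are assumptions for illustration, and real arbiters weigh many more factors.

```python
# Sketch: when a same-bank refresh request is pending, requests that will
# generate row activate (ACT) commands are elevated ahead of other
# requests, and the same-bank refresh command is sent afterwards.

def arbitrate(requests, same_bank_refresh_pending):
    """requests: list of dicts with 'cmd' and 'needs_activate' keys.
    Returns the issue order chosen by the arbiter."""
    if same_bank_refresh_pending:
        # Stable sort: activate-generating requests first, others after.
        ordered = sorted(requests, key=lambda r: not r["needs_activate"])
        return ordered + [{"cmd": "REFsb"}]   # same-bank refresh command
    return list(requests)
```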

System Memory Training with Chipset Attached Memory

Published: June 27, 2024
Publication Number: 20240211160
System memory training with chipset attached memory is described. In accordance with the described techniques, a request is received to train a system memory of a device. Responsive to the request, contents of the system memory are transferred to a chipset attached memory. The device is operated using the contents from the chipset attached memory. While the device is being operated using the contents from the chipset attached memory, the system memory is dynamically trained. After the…
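The described flow can be outlined in a short sketch. The function names and dictionary-as-memory model are hypothetical; the point is the transfer-out, train, transfer-back sequence.

```python
# Illustrative flow: transfer system memory contents to chipset-attached
# memory, operate from there while the system memory is dynamically
# trained, then restore the contents afterwards.

def retrain_with_chipset_memory(system_mem, chipset_mem, train):
    """system_mem, chipset_mem: dicts modeling memory contents.
    train: callable invoked while contents live in chipset memory."""
    chipset_mem.clear()
    chipset_mem.update(system_mem)      # transfer contents out
    system_mem.clear()
    train(system_mem)                   # dynamic training; the device
                                        # runs from chipset memory
    system_mem.update(chipset_mem)      # restore contents afterwards
```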

Extended Training for Memory

Published: June 27, 2024
Publication Number: 20240211142
Extended training for memory is described. In accordance with the described techniques, a training request to train a memory with extended training is received. The extended training corresponds to a longer amount of time than a default training. The extended training of the memory is performed using a set of target memory settings. In one or more implementations, the extended training is performed during a boot-up phase of a computing device.

ACCELERATING RELAXED REMOTE ATOMICS ON MULTIPLE WRITER OPERATIONS

Published: June 27, 2024
Publication Number: 20240211134
A memory controller includes an arbiter, a vector arithmetic logic unit (VALU), a read buffer and a write buffer both coupled to the VALU, and an atomic memory operation scheduler. The VALU performs scattered atomic memory operations on arrays of data elements responsive to selected memory access commands. The atomic memory operation scheduler schedules atomic memory operations at the VALU, identifying a plurality of scattered atomic memory operations with commutative and…
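One benefit of identifying commutative scattered atomics is that multiple writers to the same element can be combined. This sketch shows only that coalescing idea, with a hypothetical request format; the actual scheduler logic is more involved.

```python
# Sketch: group pending scattered atomic-add requests (a commutative
# operation) by target element, so that multiple writers to the same
# element collapse into a single combined update at the VALU.

def coalesce_atomic_adds(ops):
    """ops: list of (element_index, value) atomic-add requests.
    Returns one combined add per distinct index, in first-seen order."""
    combined = {}
    for idx, val in ops:
        combined[idx] = combined.get(idx, 0) + val
    return list(combined.items())
```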

BOUNDING VOLUME HIERARCHY LEAF NODE COMPRESSION

Published: June 20, 2024
Publication Number: 20240203032
A technique for performing ray tracing operations is provided. The technique includes identifying triangles to include in a compressed triangle block; storing data common to the identified triangles as common data of the compressed triangle block; and storing data unique to the identified triangles as unique data of the compressed triangle block.
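The common/unique split can be illustrated with shared vertices: triangles in a block often share edges, so deduplicated vertex data is a natural candidate for the common portion. This is one plausible reading, sketched with hypothetical structures, not the patented encoding.

```python
# Illustrative compression: store each distinct vertex once as common
# data of the block, and store per-triangle vertex indices as the
# unique data.

def compress_triangle_block(triangles):
    """triangles: list of 3-tuples of vertex coordinates.
    Returns (common_vertices, per_triangle_index_triples)."""
    common = []          # deduplicated vertices shared across the block
    index_of = {}
    unique = []          # per-triangle indices into the common data
    for tri in triangles:
        idx = []
        for v in tri:
            if v not in index_of:
                index_of[v] = len(common)
                common.append(v)
            idx.append(index_of[v])
        unique.append(tuple(idx))
    return common, unique
```

Two triangles sharing an edge store four vertices instead of six, and the saving grows with block size.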

NETWORK COLLECTIVE OFFLOAD MESSAGE CHUNKING MANAGEMENT

Published: June 20, 2024
Publication Number: 20240205133
The disclosed device can perform a collective operation on received datasets, and split the result into chunks in accordance with a chunking scheme. The device can also forward the chunks in accordance with a routing scheme that can direct chunks to appropriate nodes of a collective network. Various other methods, systems, and computer-readable media are also disclosed.
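The reduce-then-chunk-then-route pipeline can be sketched as below. The elementwise-sum collective, fixed chunk size, and round-robin routing are all assumptions chosen for illustration; the patent covers chunking and routing schemes generally.

```python
# Sketch: perform a collective reduction over received datasets, split
# the result into chunks per a chunking scheme, and assign each chunk a
# destination node per a routing scheme (round-robin here).

def reduce_and_chunk(datasets, chunk_size, nodes):
    # Elementwise sum as the collective operation.
    result = [sum(vals) for vals in zip(*datasets)]
    chunks = [result[i:i + chunk_size]
              for i in range(0, len(result), chunk_size)]
    # Round-robin routing: chunk i is forwarded to node i mod len(nodes).
    return [(nodes[i % len(nodes)], chunk) for i, chunk in enumerate(chunks)]
```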

NETWORK COLLECTIVE OFFLOADING COST MANAGEMENT

Published: June 20, 2024
Publication Number: 20240205093
The disclosed device includes a collective engine that can select a communication cost model from multiple communication cost models for a collective operation and configure a topology of a collective network for performing the collective operation using the selected communication cost model. Various other methods, systems, and computer-readable media are also disclosed.

NETWORK COLLECTIVE OFFLOADING ROUTING MANAGEMENT

Published: June 20, 2024
Publication Number: 20240205092
The disclosed device includes a collective engine that can receive state information from nodes of a collective network. The collective engine can use the state information to initialize a topology of appropriate data routes between the nodes for the collective operation. Various other methods, systems, and computer-readable media are also disclosed.

DYNAMIC CONFIGURATION OF PROCESSOR SUB-COMPONENTS

Published: June 20, 2024
Publication Number: 20240201777
The disclosed method includes observing a utilization of a target sub-component of a functional unit of a processor using a control circuit coupled to the target sub-component. The method also includes detecting that the utilization is outside a desired utilization range and throttling one or more sub-components of the functional unit to reduce a power consumption of the functional unit. Various other methods, systems, and computer-readable media are also disclosed.
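The observe-and-throttle decision reduces to a simple range check per observation. The thresholds below are placeholder values, and a real control circuit would act on hardware counters rather than a float.

```python
# Sketch: throttle a sub-component when its observed utilization falls
# outside the desired range, to reduce the functional unit's power
# consumption; otherwise leave it unthrottled.

def should_throttle(utilization, low=0.2, high=0.8):
    """Return True when utilization is outside [low, high]."""
    return utilization < low or utilization > high
```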

GRAPHICS AND COMPUTE API EXTENSION FOR CACHE AUTO TILING

Published: June 20, 2024
Publication Number: 20240202862
A processing device and a method of auto-tiled workload processing are provided. The processing device includes memory and a processor. The processor is configured to store instructions for operations to be executed on an image to be divided into a plurality of tiles, store information associated with the operations, select one of the operations for execution, and execute an auto-tiling plan for the operation based on the information associated with the operations. The auto-tiling plan…

COHERENT BLOCK READ FULFILLMENT

Published: June 20, 2024
Publication Number: 20240202144
A coherent memory fabric includes a plurality of coherent master controllers and a coherent slave controller. The plurality of coherent master controllers each include a response data buffer. The coherent slave controller is coupled to the plurality of coherent master controllers. The coherent slave controller, responsive to determining a selected coherent block read command is guaranteed to have only one data response, sends a target request globally ordered message to the selected…

Programmable Data Storage Memory Hierarchy

Published: June 20, 2024
Publication Number: 20240202121
Programmable data storage memory hierarchy techniques are described. In one example, a data storage system includes a memory hierarchy and a data movement controller. The memory hierarchy includes a hierarchical arrangement of a plurality of memory buffers. The data movement controller is configured to receive a data movement command and control data movement between the plurality of memory buffers based on the data movement command.
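A minimal sketch of the command-driven movement follows; the `(src, dst, key)` command format and dict-backed buffers are hypothetical stand-ins for whatever command encoding and buffer hardware the design actually uses.

```python
# Sketch: a data movement controller interprets data movement commands
# and copies entries between named buffers in the memory hierarchy.

class DataMovementController:
    def __init__(self, buffers):
        self.buffers = buffers           # buffer name -> dict of contents

    def execute(self, command):
        """command: (source buffer, destination buffer, entry key)."""
        src, dst, key = command
        self.buffers[dst][key] = self.buffers[src][key]
```

Making the hierarchy programmable in this way lets software stage data through the buffers explicitly instead of relying on fixed caching policy.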

Method and Apparatus for Increasing Memory Level Parallelism by Reducing Miss Status Holding Register Allocation in Caches

Published: June 20, 2024
Publication Number: 20240202116
An entry of a last level cache shadow tag array is used to track pending last level cache misses to private data in a previous level cache (e.g., an L2 cache) that are also misses to an exclusive last level cache (e.g., an L3 cache) and to the last level cache shadow tag array. Accordingly, last level cache miss status holding registers need not be expended to track cache misses to private data that are already being tracked by a previous level cache miss status holding register. Additionally…
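The allocation decision can be sketched as a predicate; the set-based MSHR model is a simplification of the real structures.

```python
# Sketch: before allocating a last-level-cache (L3) miss status holding
# register (MSHR), check whether a previous-level (L2) MSHR is already
# tracking the same private line; if so, skip the L3 allocation and free
# that MSHR for other misses, increasing memory-level parallelism.

def needs_l3_mshr(line_addr, l2_mshr_lines, l3_mshr_lines):
    """Allocate an L3 MSHR only when no L2 MSHR tracks the line and no
    L3 MSHR has already been allocated for it."""
    if line_addr in l2_mshr_lines:
        return False                     # already tracked privately at L2
    return line_addr not in l3_mshr_lines
```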

SYSTEMS AND METHODS FOR CHIPLET SYNCHRONIZATION

Published: June 20, 2024
Publication Number: 20240202047
The disclosed computer-implemented method can include reaching, by a chiplet involved in carrying out an operation for a process, a synchronization barrier. The method can additionally include receiving, by the chiplet, dedicated control messages pushed to the chiplet by other chiplets involved in carrying out the operation for the process, wherein the dedicated control messages are pushed over a control network by the other chiplets. The method can also include advancing, by the…
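The barrier condition itself is simple to state: a chiplet may advance once it holds a pushed control message from every other participant. This sketch models only that condition, not the control network that delivers the messages.

```python
# Sketch: a chiplet at a synchronization barrier advances only after
# control messages have been pushed to it by all other chiplets involved
# in the operation.

def may_advance(received_from, participants, self_id):
    """received_from: set of chiplet ids whose messages have arrived."""
    peers = set(participants) - {self_id}
    return peers.issubset(received_from)
```

Pushing messages over a dedicated control network avoids having chiplets poll shared memory to discover barrier arrival.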

Data Evaluation Using Processing-in-Memory

Published: June 20, 2024
Publication Number: 20240201993
Data evaluation using processing-in-memory is described. In accordance with the described techniques, data evaluation logic is loaded into a processing-in-memory component. The processing-in-memory component executes the data evaluation logic to evaluate a minimum number of bits required to retrieve data from, or store data to, at least one memory location. A result is output indicating the number of bits required to represent data at the at least one memory location based on the…
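The evaluation step itself amounts to a maximum bit-width scan over the stored values. This sketch shows that computation on unsigned integers, without modeling the processing-in-memory execution described in the abstract.

```python
# Sketch: compute the minimum number of bits required to represent every
# (unsigned) value stored at a set of memory locations; the result could
# drive a narrower retrieval or storage format.

def min_bits_required(values):
    """Smallest width that can hold all values; at least 1 bit."""
    return max(max(v.bit_length() for v in values), 1)
```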

Fused Data Generation and Associated Communication

Published: June 20, 2024
Publication Number: 20240201990
Fused data generation and associated communication techniques are described. In an implementation, a system includes a processing system having a plurality of processors. A data generation and communication tracking module is configured to track programmatically defined data generation and associated communication as performed by the plurality of processors. A targeted communication module is configured to trigger targeted communication of data between the plurality of processors based on…

COMBINED SPARSE AND BLOCK FLOATING ARITHMETIC

Published: June 20, 2024
Publication Number: 20240201948
A processing device for encoding floating point numbers comprises memory, configured to store data including the floating point numbers, and circuitry. The circuitry is configured to, for a set of the floating point numbers, identify which of the floating point numbers represent a zero value and which of the floating point numbers represent a non-zero value, convert the floating point numbers which represent a non-zero value into a block floating point format value and generate an…
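The combination of a sparsity mask with block floating point can be sketched as below. The shared-exponent scaling scheme and mantissa width are assumptions for illustration; the actual encoding in the application may differ.

```python
# Sketch: flag zeros in a sparsity mask, then encode the non-zero values
# in a block floating point format, i.e. one shared exponent plus
# per-value integer mantissas.
import math

def encode_block(values, mantissa_bits=8):
    mask = [v != 0.0 for v in values]
    nonzero = [v for v in values if v != 0.0]
    if not nonzero:
        return mask, 0, []
    # Shared exponent chosen from the largest magnitude in the block.
    shared_exp = max(math.frexp(v)[1] for v in nonzero)
    scale = 2.0 ** (mantissa_bits - shared_exp)
    mantissas = [round(v * scale) for v in nonzero]
    return mask, shared_exp, mantissas

def decode_block(mask, shared_exp, mantissas, mantissa_bits=8):
    scale = 2.0 ** (shared_exp - mantissa_bits)
    it = iter(mantissas)
    return [next(it) * scale if m else 0.0 for m in mask]
```

Skipping mantissas for zero values is what lets sparsity compound the savings from the shared exponent.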

METHOD AND APPARATUS FOR MANAGING MEMORY

Published: June 20, 2024
Publication Number: 20240201876
A method and apparatus of managing memory includes storing a first memory page at a shared memory location in response to the first memory page including data shared between a first virtual machine and a second virtual machine. A second memory page is stored at a memory location unique to the first virtual machine in response to the second memory page including data unique to the first virtual machine. The first memory page is accessed by the first virtual machine and the second virtual…
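The shared-versus-unique placement can be sketched with a small page store; the class, regions, and page-id scheme are hypothetical, and real implementations work through page tables and hypervisor-managed mappings.

```python
# Sketch: pages whose data is shared between virtual machines are stored
# once in a shared region; VM-private pages go to a per-VM region. Both
# VMs can then access the single shared copy.

class PageStore:
    def __init__(self):
        self.shared = {}                 # page id -> data shared by VMs
        self.private = {}                # (vm, page id) -> private data

    def store(self, page_id, data, vms):
        if len(vms) > 1:                 # data shared between VMs
            self.shared[page_id] = data
        else:
            self.private[(vms[0], page_id)] = data

    def load(self, vm, page_id):
        if page_id in self.shared:
            return self.shared[page_id]
        return self.private[(vm, page_id)]
```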