AMD Patent Applications

METHOD AND SYSTEM FOR DISTRIBUTING KEYS

Published: March 28, 2024
Application Number: 20240106813
A method and system for distributing keys in a key distribution system includes receiving a connection for communication from a first component. A determination is made whether the first component requires that a key be generated and distributed. Based upon a security mode for the communication, the key is generated and distributed to the first component.
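
The flow described above lends itself to a small behavioral sketch. The following Python is illustrative only; the class, the security modes, and the 256-bit key size are assumptions, not details taken from the application.

    import os
    from enum import Enum, auto

    class SecurityMode(Enum):
        NONE = auto()            # no key required for this communication
        SHARED = auto()          # reuse a shared session key
        PER_COMPONENT = auto()   # generate a fresh key for this component

    class KeyDistributor:
        def __init__(self):
            self._keys = {}           # component_id -> distributed key
            self._shared_key = None

        def on_connection(self, component_id, mode):
            """Receive a connection, decide whether a key is needed, and distribute it."""
            if mode is SecurityMode.NONE:
                return None                         # no key generated or distributed
            if mode is SecurityMode.SHARED:
                if self._shared_key is None:
                    self._shared_key = os.urandom(32)
                key = self._shared_key
            else:                                   # PER_COMPONENT
                key = os.urandom(32)
            self._keys[component_id] = key          # "distribution" modeled as a return value
            return key

    distributor = KeyDistributor()
    key = distributor.on_connection("display_engine", SecurityMode.PER_COMPONENT)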

Filtered Responses of Memory Operation Messages

Published: March 28, 2024
Application Number: 20240106782
In accordance with described techniques for filtered responses to memory operation messages, a computing system or computing device includes a memory system that receives memory operation messages. A filter component in the memory system receives responses to the memory operation messages and filters one or more of the responses based on a filterable condition. A tracking logic component tracks the one or more responses as filtered responses for communication completion.
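
As a rough illustration of the filter-and-track split, here is a minimal Python sketch; the class name, the response format, and the completion rule are assumptions rather than details from the abstract.

    class ResponseFilter:
        def __init__(self, expected, condition):
            self.expected = expected      # number of responses the operation expects
            self.condition = condition    # predicate selecting responses to filter out
            self.seen = 0                 # tracking logic: counts filtered and unfiltered alike
            self.forwarded = []

        def on_response(self, response):
            self.seen += 1
            if not self.condition(response):
                self.forwarded.append(response)
            return self.seen == self.expected   # True once the communication is complete

    # Example: filter out routine per-element acknowledgements, keep error responses.
    f = ResponseFilter(expected=4, condition=lambda r: r["status"] == "ok")
    for _ in range(3):
        f.on_response({"status": "ok"})
    done = f.on_response({"status": "error", "addr": 0x100})
    print(done, f.forwarded)    # True [{'status': 'error', 'addr': 256}]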

DROOP DETECTION AND CONTROL OF DIGITAL FREQUENCY-LOCKED LOOP

Published: March 28, 2024
Application Number: 20240106438
An integrated circuit includes a power supply monitor, a clock generator, and a divider. The power supply monitor is operable to provide a trigger signal in response to a power supply voltage dropping below a threshold voltage. The clock generator is operable to provide a first clock signal having a frequency dependent on a value of a frequency control word, and to change the frequency of the first clock signal over time using a native slope in response to a change in the frequency…
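
A behavioral sketch of that droop response is below; the threshold, slope, and step size are made-up values chosen only to show the shape of the control loop, not figures from the application.

    V_THRESHOLD = 0.85      # volts; trigger level for the power supply monitor
    NATIVE_SLOPE = 2        # control-word steps applied per evaluation interval
    STEP_HZ = 25e6          # assumed frequency change per control-word step

    def droop_control(fcw, fcw_nominal, voltage):
        """Return the next frequency control word for one evaluation interval."""
        if voltage < V_THRESHOLD:
            target = fcw_nominal // 2          # droop detected: head toward a reduced frequency
        else:
            target = fcw_nominal               # recovery: ramp back to the nominal word
        if fcw < target:
            return min(fcw + NATIVE_SLOPE, target)
        return max(fcw - NATIVE_SLOPE, target)

    fcw = 80
    for v in (0.90, 0.82, 0.82, 0.88, 0.90):
        fcw = droop_control(fcw, fcw_nominal=80, voltage=v)
        print(f"V={v:.2f}  fcw={fcw}  f={fcw * STEP_HZ / 1e9:.2f} GHz")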

DEVICE AND METHOD OF IMPLEMENTING SUBPASS INTERLEAVING OF TILED IMAGE RENDERING

Published: March 28, 2024
Application Number: 20240104685
Devices and methods of tiled rendering are provided which comprise dividing a frame to be rendered into a plurality of tiles, receiving commands to execute a plurality of subpasses of the tiles, interleaving execution of the same subpasses of multiple tiles of the frame by executing one or more subpasses as skip operations, storing visibility data, for subsequently ordered subpasses of the tiles, at memory addresses allocated for data of corresponding adjacent tiles in a first…
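
The interleaving order itself is easy to sketch. The snippet below shows only that ordering, with skip operations standing in for subpasses a tile does not need; the function names and visibility layout are illustrative assumptions.

    def render_frame(tiles, subpass_cmds, visibility):
        """visibility[tile][s] is True if subpass s produces work for that tile."""
        for s, cmds in enumerate(subpass_cmds):
            for tile in tiles:                      # same subpass across all tiles first
                if visibility[tile][s]:
                    execute_subpass(tile, s, cmds)
                else:
                    execute_skip(tile, s)           # cheap no-op keeps command order intact

    def execute_subpass(tile, s, cmds):
        print(f"tile {tile}: run subpass {s} ({len(cmds)} commands)")

    def execute_skip(tile, s):
        print(f"tile {tile}: skip subpass {s}")

    visibility = {0: [True, True], 1: [True, False]}
    render_frame(tiles=[0, 1],
                 subpass_cmds=[["draw_a"], ["draw_b", "draw_c"]],
                 visibility=visibility)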

Block Data Load with Transpose into Memory

Published: March 28, 2024
Application Number: 20240103879
Block data load with transpose techniques are described. In one example, an input is received, at a control unit, specifying an instruction to load a block of data to at least one memory module using a transpose operation. Responsive to receiving the input at the control unit, the block of data is caused to be loaded to the at least one memory module by transposing the block of data to form a transposed block of data and storing the transposed block of data in the at least one…
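
Functionally, the transpose-on-load can be modeled in a few lines; the sketch below uses plain Python lists as the "memory module" and is only meant to show the data movement, not the hardware path.

    def load_block_transposed(block, memory, base):
        """Write the transpose of `block` (a list of equal-length rows) at memory[base:]."""
        rows, cols = len(block), len(block[0])
        for c in range(cols):
            for r in range(rows):
                memory[base + c * rows + r] = block[r][c]

    memory = [0] * 16
    load_block_transposed([[1, 2, 3, 4],
                           [5, 6, 7, 8]], memory, base=0)
    print(memory[:8])    # [1, 5, 2, 6, 3, 7, 4, 8]: columns are now contiguous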

Bank-Level Parallelism for Processing in Memory

Published: March 28, 2024
Application Number: 20240103763
In accordance with the described techniques for bank-level parallelism for processing in memory, a plurality of commands are received for execution by a processing in memory component embedded in a memory. The memory includes a first bank and a second bank. The plurality of commands include a first stream of commands which cause the processing in memory component to perform operations that access the first bank and a second stream of commands which cause the processing in memory…
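
A minimal sketch of the scheduling idea follows; the command strings and the simple alternation policy are assumptions used only to show how two per-bank streams can keep both banks busy.

    from itertools import zip_longest

    def interleave(stream_bank0, stream_bank1):
        """Yield commands alternating between the two per-bank streams."""
        for a, b in zip_longest(stream_bank0, stream_bank1):
            if a is not None:
                yield ("bank 0", a)
            if b is not None:
                yield ("bank 1", b)

    s0 = ["pim_add r0, [0x00]", "pim_store [0x08]"]
    s1 = ["pim_add r1, [0x40]", "pim_store [0x48]", "pim_add r1, [0x50]"]
    for bank, cmd in interleave(s0, s1):
        print(bank, cmd)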

Scheduling Processing-in-Memory Requests and Memory Requests

Published: March 28, 2024
Application Number: 20240103745
A memory controller coupled to a memory module receives both processing-in-memory (PIM) requests and memory requests from a host (e.g., a host processor). The memory controller issues PIM requests to one group of memory banks and concurrently issues memory requests to one or more other groups of memory banks. Accordingly, memory requests are performed on groups of memory banks that would otherwise be idle while PIM requests are performed on the one group of memory banks. Optionally, the…
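
The partitioning can be sketched as below; the bank-group count, the round-robin placement of memory requests, and the request strings are all illustrative assumptions.

    def schedule(pim_requests, mem_requests, pim_group=0, num_groups=4):
        """Issue PIM requests to one bank group and memory requests to the others."""
        issued = [("PIM", pim_group, r) for r in pim_requests]
        other_groups = [g for g in range(num_groups) if g != pim_group]
        for i, r in enumerate(mem_requests):
            issued.append(("MEM", other_groups[i % len(other_groups)], r))
        return issued

    plan = schedule(["pim_op A", "pim_op B"],
                    ["rd 0x100", "wr 0x200", "rd 0x300"])
    for kind, group, req in plan:
        print(f"{kind} -> bank group {group}: {req}")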

Memory Control for Data Processing Pipeline Optimization

Published: March 28, 2024
Application Number: 20240103719
Generating optimization instructions for data processing pipelines is described. A pipeline optimization system computes resource usage information that describes memory and compute usage metrics during execution of each stage of the data processing pipeline. The system additionally generates data storage information that describes how data output by each pipeline stage is utilized by other stages of the pipeline. The pipeline optimization system then generates the optimization…
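
The two inputs the optimizer works from can be pictured as small tables, as in the sketch below; the field names, units, and the simple "keep an output resident while a consumer remains" policy are assumptions, not details from the application.

    usage = {                    # stage -> (peak memory in MB, compute time in ms)
        "decode": (128, 4.0),
        "resize": (256, 2.5),
        "infer":  (1024, 18.0),
    }
    consumers = {                # stage -> later stages that read its output
        "decode": ["resize"],
        "resize": ["infer"],
        "infer":  [],
    }

    def optimization_hints(usage, consumers):
        hints = []
        for stage, readers in consumers.items():
            if readers:
                hints.append(f"keep '{stage}' output resident until {readers} complete")
            else:
                hints.append(f"spill '{stage}' output to backing store; no later consumer")
        heaviest = max(usage, key=lambda s: usage[s][0])
        hints.append(f"schedule '{heaviest}' when peak memory is available")
        return hints

    print("\n".join(optimization_hints(usage, consumers)))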

Address Translation Service Management

Published: March 21, 2024
Application Number: 20240095184
Address translation service management techniques are described. These techniques are based on metadata that is usable to provide a hint giving insight into memory access. Based on this hint, use of a translation lookaside buffer is optimized to control which entries are maintained in the queue and to manage address translation requests.
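
One plausible reading of the hint-driven retention is sketched below; the eviction policy (drop no-reuse entries first, then least recently used) and the entry format are assumptions for illustration.

    from collections import OrderedDict

    class HintedTLB:
        def __init__(self, capacity=4):
            self.capacity = capacity
            self.entries = OrderedDict()      # virtual page -> (physical page, reuse hint)

        def insert(self, vpn, ppn, reuse_hint):
            if len(self.entries) >= self.capacity:
                # Prefer evicting an entry hinted as no-reuse; otherwise evict LRU.
                victim = next((v for v, (_, h) in self.entries.items() if not h), None)
                self.entries.pop(victim if victim is not None else next(iter(self.entries)))
            self.entries[vpn] = (ppn, reuse_hint)

        def lookup(self, vpn):
            hit = self.entries.get(vpn)
            if hit:
                self.entries.move_to_end(vpn)     # refresh recency on a hit
            return hit

    tlb = HintedTLB(capacity=2)
    tlb.insert(0x10, 0xA0, reuse_hint=True)
    tlb.insert(0x20, 0xB0, reuse_hint=False)
    tlb.insert(0x30, 0xC0, reuse_hint=True)       # evicts 0x20, the no-reuse entry
    print(tlb.lookup(0x10))                       # (160, True): still resident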

FRAMEWORK FOR COMPRESSION-AWARE TRAINING OF NEURAL NETWORKS

Published: March 21, 2024
Application Number: 20240095517
Methods and devices are provided for processing data using a neural network. Activations from a previous layer of the neural network are received by a layer of the neural network. Weighted values, to be applied to values of elements of the activations, are determined based on a spatial correlation of the elements and a task error output by the layer. The weighted values are applied to the values of the elements and a combined error is determined based on the task error and the spatial…
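
The excerpt is cut off before the weighting is fully stated, so the sketch below is only a loose illustration of combining a task error with a spatial-correlation term over weighted activations; both the weighting scheme and the correlation metric are invented for the example.

    import numpy as np

    def spatial_term(act):
        """Toy spatial-correlation metric: mean absolute difference to the right neighbour."""
        return float(np.mean(np.abs(act[:, 1:] - act[:, :-1])))

    def combined_error(act, task_error, alpha=0.1):
        weights = 1.0 / (1.0 + np.abs(act))   # assumed element weighting
        weighted = weights * act
        return task_error + alpha * spatial_term(weighted)

    act = np.array([[0.20, 0.21, 0.80],
                    [0.50, 0.52, 0.10]])
    print(combined_error(act, task_error=0.37))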

SYSTEMS AND METHODS FOR INTERPOLATING REGISTER-BASED LOOKUP TABLES

Published: March 21, 2024
Application Number: 20240095180
The disclosed computer-implemented method for interpolating register-based lookup tables can include identifying, within a set of registers, a lookup table that has been encoded for storage within the set of registers. The method can also include receiving a request to look up a value in the lookup table and responding to the request by interpolating, from the encoded lookup table stored in the set of registers, a representation of the requested value. Various other methods, systems, and…
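
The interpolation step itself is straightforward to sketch; the snippet below assumes equally spaced samples packed into a small register file, which is one possible encoding rather than the one claimed.

    def lut_lookup(registers, x, x_min, x_step):
        """Interpolate f(x) linearly from equally spaced samples stored in `registers`."""
        pos = (x - x_min) / x_step
        i = max(0, min(int(pos), len(registers) - 2))
        frac = pos - i
        return registers[i] + frac * (registers[i + 1] - registers[i])

    # Example: a 5-entry table sampling f(x) = x*x over [0, 4].
    regs = [0.0, 1.0, 4.0, 9.0, 16.0]
    print(lut_lookup(regs, 2.5, x_min=0.0, x_step=1.0))   # 6.5 (exact value is 6.25)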

Error Correction for Stacked Memory

Published: March 14, 2024
Application Number: 20240087667
Error correction for stacked memory is described. In accordance with the described techniques, a system includes a plurality of error correction code engines to detect vulnerabilities in a stacked memory and coordinate at least one vulnerability detected for a portion of the stacked memory to at least one other portion of the stacked memory.

Dynamic Memory Operations

Published: March 14, 2024
Application Number: 20240087636
Dynamic memory operations are described. In accordance with the described techniques, a system includes a stacked memory and one or more memory monitors configured to monitor conditions of the stacked memory. A system manager is configured to receive the monitored conditions of the stacked memory from the one or more memory monitors, and dynamically adjust operation of the stacked memory based on the monitored conditions. In one or more implementations, a system includes a memory and at…
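
The monitor/manager split can be sketched as a simple decision function; the monitored fields, thresholds, and the knobs being adjusted are assumptions chosen only to show the shape of the control path.

    def system_manager(conditions):
        """Map monitored conditions of the stacked memory to an operating adjustment."""
        temp = conditions.get("temperature_c", 0)
        utilization = conditions.get("bandwidth_util", 0.0)
        if temp > 95:
            return {"refresh_interval": "shorten", "throttle": True}
        if utilization < 0.2:
            return {"low_power_state": True}
        return {"nominal": True}

    print(system_manager({"temperature_c": 97, "bandwidth_util": 0.6}))
    print(system_manager({"temperature_c": 60, "bandwidth_util": 0.1}))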

FERROELECTRIC RANDOM-ACCESS MEMORY WITH ENHANCED LIFETIME, DENSITY, AND PERFORMANCE

Published: March 14, 2024
Application Number: 20240087632
A memory device includes memory cells. A memory cell of the memory cells includes gate circuitry, a first capacitor, and a second capacitor. The gate circuitry is connected to a wordline and a bitline. The first capacitor is connected to the gate circuitry and a first drive line. The second capacitor is connected to the gate circuitry and a second drive line.

OVERLAY TREES FOR RAY TRACING

Published: March 14, 2024
Application Number: 20240087223
A method and a processing device for performing rendering are disclosed. The method comprises generating a base hierarchy tree comprising data representing a first object, and generating a second hierarchy tree, representing a second object, which comprises shared data of the base hierarchy tree and the second hierarchy tree as well as difference data. The method further comprises storing the difference data in the memory without storing the shared data, and generating an overlay hierarchy tree…
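
The storage idea resembles a sparse overlay over a shared base structure, as in the sketch below; the dictionary layout and node payloads are stand-ins, not the hierarchy encoding used by the application.

    class OverlayTree:
        """Resolve nodes from the difference data first, falling back to the shared base tree."""
        def __init__(self, base_nodes, diff_nodes):
            self.base = base_nodes    # node_id -> node data, stored once for both objects
            self.diff = diff_nodes    # only the nodes that differ for the second object

        def node(self, node_id):
            return self.diff.get(node_id, self.base[node_id])

    base = {0: "root bbox", 1: "left child bbox", 2: "right child bbox"}
    overlay = OverlayTree(base, diff_nodes={2: "right child bbox (moved)"})
    print(overlay.node(1))    # shared with the base tree
    print(overlay.node(2))    # overridden by the difference data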

TWO-LEVEL PRIMITIVE BATCH BINNING WITH HARDWARE STATE COMPRESSION

Published: March 14, 2024
Application Number: 20240087078
Methods, devices, and systems for rendering primitives in a frame. During a visibility pass, state packets are processed to determine a register state, and the register state is stored in a memory device. During a rendering pass, the state packets are discarded and the register state is read from the memory device. In some implementations, a graphics pipeline is configured during the visibility pass based on the register state determined by processing the state packets, and the graphics…
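
The two passes can be sketched as follows; the packet format and the way register state is saved are simplifications made up for the example.

    def visibility_pass(packets, state_store):
        regs = {}
        for p in packets:
            if p["type"] == "state":
                regs[p["reg"]] = p["value"]     # resolve state packets into register state
            # draws would also be binned per tile here (omitted)
        state_store["regs"] = dict(regs)        # persist the resolved state

    def rendering_pass(packets, state_store):
        regs = state_store["regs"]              # restore state without reprocessing packets
        for p in packets:
            if p["type"] == "state":
                continue                        # state packets are discarded in this pass
            print(f"draw {p['id']} with blend={regs.get('blend')}")

    packets = [{"type": "state", "reg": "blend", "value": "alpha"},
               {"type": "draw", "id": 0}]
    store = {}
    visibility_pass(packets, store)
    rendering_pass(packets, store)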

LOCALITY-BASED DATA PROCESSING

Published: March 7, 2024
Application Number: 20240078197
A data processing node includes a processor element and a data fabric circuit. The data fabric circuit is coupled to the processor element and to a local memory element and includes a crossbar switch. The data fabric circuit is operable to bypass the crossbar switch for memory access requests between the processor element and the local memory element.
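
The routing decision can be pictured with a small sketch; the request format and class names are illustrative, and only the bypass-versus-crossbar choice mirrors the abstract.

    class LocalMemory:
        def access(self, req):
            return f"local access to {hex(req['addr'])}"

    class Crossbar:
        def forward(self, req):
            return f"crossbar -> node {req['target']}, addr {hex(req['addr'])}"

    def route(request, local_memory, crossbar):
        """Bypass the crossbar for requests targeting the local memory element."""
        if request["target"] == "local":
            return local_memory.access(request)
        return crossbar.forward(request)

    print(route({"target": "local", "addr": 0x1000}, LocalMemory(), Crossbar()))
    print(route({"target": 3, "addr": 0x2000}, LocalMemory(), Crossbar()))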

SYSTEMS, METHODS, AND DEVICES FOR ADVANCED MEMORY TECHNOLOGY

Published: March 7, 2024
Application Number: 20240078195
An electronic device includes a processor having processor circuitry and a leader memory controller, a controller coupled to the processor and having a follower memory controller, and a memory having at least one memory die. The processor circuitry is operable to access the memory by issuing memory access requests to the leader memory controller. The leader memory controller is operable to complete the memory access requests using the follower memory controller to issue memory commands to the at least one memory die.

MEMORY CONTROLLER AND NEAR-MEMORY SUPPORT FOR SPARSE ACCESSES

Published: March 7, 2024
Application Number: 20240078017
A data processing system includes a data processor and a memory controller receiving memory access requests from the data processor and generating at least one memory access cycle to a memory system in response to the receiving. The memory controller includes a command queue and a sparse element processor. The command queue is for receiving and storing the memory access requests, including a first memory access request that includes a small element request. The sparse element processor is for…

EFFICIENT RANK SWITCHING IN MULTI-RANK MEMORY CONTROLLER

Published: February 29, 2024
Application Number: 20240069811
A data processing system includes a memory accessing agent for generating first memory access requests, a first memory system, and a first memory controller. The first memory system includes a first three-dimensional memory stack comprising a first plurality of stacked memory dice, wherein each memory die of the first three-dimensional memory stack includes a different logical rank of a first memory channel. The first memory controller picks second memory access requests from among the…
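
The abstract is truncated before it states the picking policy, so the sketch below only illustrates one common goal of multi-rank arbitration, namely staying on the currently open rank while work for it remains; it should not be read as the claimed algorithm.

    def pick_next(pending, current_rank):
        """Prefer a pending request on the current rank; switch ranks only when forced."""
        for i, req in enumerate(pending):
            if req["rank"] == current_rank:
                return pending.pop(i)
        return pending.pop(0) if pending else None

    pending = [{"rank": 1, "addr": 0x10},
               {"rank": 0, "addr": 0x20},
               {"rank": 0, "addr": 0x30}]
    rank = 0
    while pending:
        req = pick_next(pending, rank)
        rank = req["rank"]
        print(f"issue {hex(req['addr'])} on rank {rank}")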