AMD Patent Applications

TAG AND DATA CONFIGURATION FOR FINE-GRAINED CACHE MEMORY

Granted: April 4, 2024
Application Number: 20240111425
A method for operating a memory having a plurality of banks accessible in parallel, each bank including a plurality of grains accessible in parallel, is provided. The method includes: based on a memory access request that specifies a memory address, identifying a set that stores data for the memory access request, wherein the set is spread across multiple grains of the plurality of grains; and performing operations to satisfy the memory access request, using entries of the set stored…
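
As a rough illustration of what "a set spread across multiple grains" could look like, the Python sketch below places the ways of a set in different grains so the grain probes could proceed in parallel; the geometry, field names, and mapping function are assumptions, not details from the filing.

```python
# Illustrative model only: a set whose ways live in different grains.
NUM_GRAINS, WAYS_PER_SET = 8, 4            # assumed geometry

def grains_for_set(set_idx):
    # Assumed mapping: each way of a set is placed in a different grain.
    return [(set_idx + way) % NUM_GRAINS for way in range(WAYS_PER_SET)]

def lookup(grains, set_idx, tag):
    # Probe every grain holding a way of this set; hardware would do so in parallel.
    for g in grains_for_set(set_idx):
        entry = grains[g].get(set_idx)
        if entry is not None and entry["tag"] == tag:
            return entry["data"]
    return None                            # miss
```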

Connection Modification based on Traffic Pattern

Granted: April 4, 2024
Application Number: 20240111421
Connection modification based on traffic pattern is described. In accordance with the described techniques, a traffic pattern of memory operations across a set of connections between at least one device and at least one memory is monitored. The traffic pattern is then compared to a threshold traffic pattern condition, such as an amount of data traffic in different directions across the connections. A traffic direction of at least one connection of the set of connections is modified based…
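
A minimal sketch of the monitoring-and-reconfiguration loop the abstract outlines, assuming a simple bytes-per-direction counter and a 2:1 imbalance threshold; the `Link` structure, direction labels, and threshold value are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Link:
    direction: str          # "to_memory" or "to_device" (assumed labels)
    bytes_moved: int = 0

def rebalance(links, to_mem_bytes, to_dev_bytes, ratio_threshold=2.0):
    # Hypothetical policy: if memory-bound traffic dominates past the threshold,
    # repurpose one device-bound link to carry memory-bound traffic instead.
    if to_dev_bytes and to_mem_bytes / to_dev_bytes > ratio_threshold:
        for link in links:
            if link.direction == "to_device":
                link.direction = "to_memory"
                break
```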

SPECULATIVE DRAM REQUEST ENABLING AND DISABLING

Granted: April 4, 2024
Application Number: 20240111420
Methods, devices, and systems for retrieving information based on cache miss prediction are provided. It is predicted, based on a history of cache misses at a private cache, that a cache lookup for the information will miss a shared victim cache. A speculative memory request is enabled based on the prediction that the cache lookup for the information will miss the shared victim cache. The information is fetched based on the enabled speculative memory request.
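
One plausible way to model the miss-history prediction is a small saturating counter that enables the speculative DRAM request when victim-cache misses dominate; the counter width and threshold here are assumptions, not the mechanism from the filing.

```python
class SpeculativeRequestGate:
    # Illustrative predictor only.
    def __init__(self, threshold=2, max_count=3):
        self.counter, self.threshold, self.max_count = 0, threshold, max_count

    def train(self, victim_cache_hit):
        # Count up on victim-cache misses, down on hits (saturating).
        self.counter = (max(0, self.counter - 1) if victim_cache_hit
                        else min(self.max_count, self.counter + 1))

    def speculative_request_enabled(self):
        # Issue the DRAM request alongside the victim-cache lookup when misses dominate.
        return self.counter >= self.threshold
```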

INCREASING SYSTEM POWER EFFICIENCY BY OPTICAL COMPUTING

Granted: April 4, 2024
Application Number: 20240111355
Methods and systems are disclosed for reducing power consumption by a system including a digital unit and an optical unit. Techniques disclosed comprise generating a workload signature of an incoming workload to be executed by the system and, based on the generated workload signature, matching the incoming workload with a profile of stored workload profiles. The workload profiles are generated by a trace capture unit. Based on the associated profile, a task…
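
As an illustration of matching a workload signature against stored profiles, the sketch below uses a small feature vector and a nearest-profile search; the features, distance metric, and routing labels are invented for the example.

```python
def match_profile(signature, profiles):
    # Nearest stored profile by squared distance (assumed metric).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(profiles, key=lambda p: dist(signature, p["signature"]))

profiles = [
    {"name": "fft_heavy",    "signature": [0.8, 0.1, 0.1], "route": "optical"},
    {"name": "branch_heavy", "signature": [0.1, 0.7, 0.2], "route": "digital"},
]
best = match_profile([0.75, 0.15, 0.10], profiles)   # matched, then routed per its profile
```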

A/D Bit Storage, Processing, and Modes

Granted: March 28, 2024
Application Number: 20240104023
A/D bit storage, processing, and mode management techniques through use of a dense A/D bit representation are described. In one example, a memory management unit employs an A/D bit representation generation module to generate the dense A/D bit representation. In an implementation, the A/D bit representation is stored adjacent to existing page table structures of the multilevel page table hierarchy. In another example, the memory management unit supports use of modes as part of A/D bit…
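
The idea of a dense A/D representation can be illustrated by packing two bits per page (accessed, dirty) into one contiguous bitmap instead of scattering them through page-table entries; the 2-bit layout is an assumption made for the sketch.

```python
class DenseADBits:
    def __init__(self, num_pages):
        # Assumed layout: two bits per page, bit 0 = accessed, bit 1 = dirty.
        self.bits = bytearray((num_pages * 2 + 7) // 8)

    def mark(self, vpn, accessed=False, dirty=False):
        byte, shift = (vpn * 2) // 8, (vpn * 2) % 8
        self.bits[byte] |= (int(accessed) | (int(dirty) << 1)) << shift

    def query(self, vpn):
        byte, shift = (vpn * 2) // 8, (vpn * 2) % 8
        val = (self.bits[byte] >> shift) & 0b11
        return bool(val & 1), bool(val >> 1)   # (accessed, dirty)
```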

METHOD AND SYSTEM FOR DISTRIBUTING KEYS

Granted: March 28, 2024
Application Number: 20240106813
A method and system for distributing keys in a key distribution system includes receiving a connection for communication from a first component. A determination is made whether the first component requires a key be generated and distributed. Based upon a security mode for the communication, the key is generated and distributed to the first component.
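
The flow reads as: accept the connection, check whether the component's security mode calls for a key, and if so generate and hand one back. A toy version follows; the mode label and key generator merely stand in for whatever the system actually uses.

```python
import secrets

def handle_connection(component):
    # "encrypted" is an assumed mode label; token_bytes is a placeholder generator.
    if component.get("security_mode") == "encrypted":
        key = secrets.token_bytes(32)
        component["key"] = key       # distribute the key to the first component
        return key
    return None                      # no key required for this security mode
```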

Filtered Responses of Memory Operation Messages

Granted: March 28, 2024
Application Number: 20240106782
In accordance with described techniques for filtered responses to memory operation messages, a computing system or computing device includes a memory system that receives memory operation messages. A filter component in the memory system receives responses to the memory operation messages, and filters one or more of the responses based on a filterable condition. A tracking logic component tracks the one or more responses as filtered responses for communication completion.
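
A small model of the filter plus tracking logic: responses that satisfy a filterable condition are dropped from the reply path but still counted toward completion. The example condition (ordinary write acknowledgements) is an assumption.

```python
class ResponseFilter:
    def __init__(self, is_filterable):
        self.is_filterable = is_filterable
        self.filtered_count = 0            # tracking logic for completion

    def handle(self, response):
        if self.is_filterable(response):
            self.filtered_count += 1       # tracked as filtered, not forwarded
            return None
        return response                    # forwarded to the requester

filt = ResponseFilter(lambda r: r.get("type") == "write_ack")   # assumed condition
```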

DROOP DETECTION AND CONTROL OF DIGITAL FREQUENCY-LOCKED LOOP

Granted: March 28, 2024
Application Number: 20240106438
An integrated circuit includes a power supply monitor, a clock generator, and a divider. The power supply monitor is operable to provide a trigger signal in response to a power supply voltage dropping below a threshold voltage. The clock generator is operable to provide a first clock signal having a frequency dependent on a value of a frequency control word, and to change the frequency of the first clock signal over time using a native slope in response to a change in the frequency…
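
A very coarse behavioral model of the droop response: when the monitored voltage falls below the threshold the frequency control word is clamped down, and afterwards it recovers toward its target along a fixed slope. All voltages, word values, and the slope are invented for the illustration.

```python
def dfll_step(voltage, fcw, v_threshold=0.70, droop_fcw=40, target_fcw=64, slope=2):
    # Power-supply-monitor trigger: clamp the frequency control word during a droop.
    if voltage < v_threshold:
        return min(fcw, droop_fcw)
    # Otherwise ramp back toward the target at a fixed ("native") slope per step.
    return min(target_fcw, fcw + slope)
```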

MULTI-RESOLUTION GEOMETRIC REPRESENTATION USING BOUNDING VOLUME HIERARCHY FOR RAY TRACING

Granted: March 28, 2024
Application Number: 20240104844
Devices and methods for multi-resolution geometric representation for ray tracing are described which include casting a ray in a space comprising objects represented by geometric shapes and approximating a volume of the geometric shapes using an accelerated hierarchy structure. The accelerated hierarchy structure comprises first nodes each representing a volume of one of the geometric shapes in the space and second nodes each representing an approximate volume of a group of the geometric…
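
To picture the multi-resolution idea, the sketch below lets traversal stop at a node's approximate group volume once a level-of-detail cutoff is reached instead of descending to exact leaf geometry; the node fields, the cutoff rule, and the placeholder box test are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class BVHNode:
    approx_box: tuple                      # approximate volume of the node's group
    children: list = field(default_factory=list)
    primitive: object = None               # exact leaf geometry, if any

def ray_hits_box(ray, box):
    return True                            # placeholder; a real test would use slab intersection

def traverse(node, ray, hits, lod_cutoff, depth=0):
    if not ray_hits_box(ray, node.approx_box):
        return
    if node.primitive is not None or depth >= lod_cutoff:
        hits.append(node)                  # report a coarse (approximate) or exact hit
        return
    for child in node.children:
        traverse(child, ray, hits, lod_cutoff, depth + 1)
```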

DEVICE AND METHOD OF IMPLEMENTING SUBPASS INTERLEAVING OF TILED IMAGE RENDERING

Granted: March 28, 2024
Application Number: 20240104685
Devices and methods of tiled rendering are provided which comprise dividing a frame to be rendered into a plurality of tiles, receiving commands to execute a plurality of subpasses of the tiles, interleaving execution of the same subpasses of multiple tiles of the frame by executing one or more subpasses as skip operations, storing visibility data, for subsequently ordered subpasses of the tiles, at memory addresses allocated for data of corresponding adjacent tiles in a first…
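
Interleaving "the same subpasses of multiple tiles" can be pictured as looping subpass-major rather than tile-major, with a skip when visibility data says a tile contributes nothing to that subpass; the loop order and visibility lookup below are assumptions for the sketch.

```python
def render_frame(tiles, num_subpasses, visibility):
    # Subpass-major order: subpass 0 for every tile, then subpass 1, and so on.
    for subpass in range(num_subpasses):
        for tile in tiles:
            if not visibility.get((tile, subpass), True):
                continue                          # executed as a skip operation
            execute_subpass(tile, subpass)        # stand-in for real command submission

def execute_subpass(tile, subpass):
    print(f"tile {tile}: subpass {subpass}")
```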

Data Compression and Decompression for Processing in Memory

Granted: March 28, 2024
Application Number: 20240104015
In accordance with the described techniques for data compression and decompression for processing in memory, a page address is received by a processing in memory component that maps to a first location in memory where data of a page is maintained. The data of the page is compressed by the processing in memory component. Further, compressed data of the page is written by the processing in memory component to a compressed block device responsive to the compressed data satisfying one or…
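
A host-level analogy of the PIM compression path: compress the page and commit it to the compressed block device only if it shrinks past a threshold. `zlib` and the 50% cutoff merely stand in for the in-memory compressor and the condition described.

```python
import zlib

def compress_page(page, compressed_device, page_addr, max_ratio=0.5):
    compressed = zlib.compress(page)
    if len(compressed) <= len(page) * max_ratio:      # assumed threshold condition
        compressed_device[page_addr] = compressed     # written to the compressed block device
        return True
    return False                                      # leave the page uncompressed
```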

DIVERSIFIED VIRTUAL MEMORY

Granted: March 28, 2024
Application Number: 20240103897
Systems and methods are disclosed for managing diversified virtual memory by an engine. Techniques disclosed include receiving one or more request messages, each request message including a job descriptor that specifies an operation to be performed on a respective virtual memory space, processing the job descriptors by generating one or more commands for transmission to one or more virtual memory managers, and transmitting the one or more commands to the one or more virtual memory…
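
The engine's job can be pictured as translating each job descriptor into a command for the virtual memory manager that owns the targeted space; the message fields and manager interface below are invented for the sketch.

```python
class VMManagerStub:
    def submit(self, command):
        print("command:", command)         # stand-in for a real manager interface

def process_requests(messages, vm_managers):
    for msg in messages:
        job = msg["job_descriptor"]                       # assumed field names
        manager = vm_managers[job["vm_space_id"]]
        manager.submit({"op": job["operation"], "range": job["range"]})
```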

Block Data Load with Transpose into Memory

Granted: March 28, 2024
Application Number: 20240103879
Block data load with transpose techniques are described. In one example, an input is received, at a control unit, specifying an instruction to load a block of data to at least one memory module using a transpose operation. Responsive to receiving the input at the control unit, the block of data is caused to be loaded to the at least one memory module by transposing the block of data to form a transposed block of data and storing the transposed block of data in the at least one…
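
The transpose-on-load can be shown with a flat list standing in for the memory module: element (r, c) of the row-major source block lands where element (c, r) of the stored block belongs. The software loop is only an analogy for the hardware-assisted load the abstract describes.

```python
def load_block_transposed(block, memory, base, rows, cols):
    # Illustrative software model of a transpose applied during the load.
    for r in range(rows):
        for c in range(cols):
            memory[base + c * rows + r] = block[r * cols + c]

mem = [0] * 6
load_block_transposed([1, 2, 3, 4, 5, 6], mem, base=0, rows=2, cols=3)
# mem == [1, 4, 2, 5, 3, 6]  (the 2x3 block stored column-first)
```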

Predicates for Processing-in-Memory

Granted: March 28, 2024
Application Number: 20240103860
Predicates for processing in memory are described. In accordance with the described techniques, a predicate instruction to compute a conditional value based on data stored in a memory is provided to a processing-in-memory component. A response that includes the conditional value computed by the processing-in-memory component is received, and the conditional value is stored in a predicate register. One or more conditional instructions are provided to the processing-in-memory component…
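
The host-side flow reads as: issue a predicate instruction, latch the returned conditional value in a predicate register, then issue conditional instructions gated on it. The PIM interface below is a stub invented for the example.

```python
class PIMStub:
    def __init__(self, data):
        self.data = data
    def compute_predicate(self, index, threshold):
        return self.data[index] > threshold          # conditional value returned to the host
    def conditional_add(self, predicate, index, value):
        if predicate:                                # executes only if the predicate holds
            self.data[index] += value

pim = PIMStub([3, 9, 1])
predicate_reg = pim.compute_predicate(index=1, threshold=5)   # stored in a predicate register
pim.conditional_add(predicate_reg, index=1, value=10)         # conditional instruction
```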

Bank-Level Parallelism for Processing in Memory

Granted: March 28, 2024
Application Number: 20240103763
In accordance with the described techniques for bank-level parallelism for processing in memory, a plurality of commands are received for execution by a processing in memory component embedded in a memory. The memory includes a first bank and a second bank. The plurality of commands include a first stream of commands which cause the processing in memory component to perform operations that access the first bank and a second stream of commands which cause the processing in memory…
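
The two streams can be pictured as being issued round-robin so work aimed at one bank overlaps the other bank's access latency; the issue policy below is an assumption, not the scheduler from the filing.

```python
from itertools import zip_longest

def interleave_streams(stream_bank0, stream_bank1):
    issued = []
    for cmd0, cmd1 in zip_longest(stream_bank0, stream_bank1):
        if cmd0 is not None:
            issued.append(("bank0", cmd0))
        if cmd1 is not None:
            issued.append(("bank1", cmd1))
    return issued
```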

Scheduling Processing-in-Memory Requests and Memory Requests

Granted: March 28, 2024
Application Number: 20240103745
A memory controller coupled to a memory module receives both processing-in-memory (PIM) requests and memory requests from a host (e.g., a host processor). The memory controller issues PIM requests to one group of memory banks and concurrently issues memory requests to one or more other groups of memory banks. Accordingly, memory requests are performed on groups of memory banks that would otherwise be idle while PIM requests are performed on the one group of memory banks. Optionally, the…
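
A toy scheduling step that captures the described overlap: send the next PIM request to its bank group while draining ordinary memory requests destined for the other, otherwise-idle groups. The queue and request shapes are assumptions.

```python
def schedule_step(pim_queue, mem_queue, pim_bank_group):
    issued = []
    if pim_queue:
        issued.append((pim_bank_group, pim_queue.pop(0)))      # PIM request to its group
    for req in list(mem_queue):
        if req["bank_group"] != pim_bank_group:                # other groups stay busy
            mem_queue.remove(req)
            issued.append((req["bank_group"], req))
    return issued
```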

Multi-Level Cell Memory Management

Granted: March 28, 2024
Application Number: 20240103739
Multi-level cell memory management techniques are described. In one example, the memory controller is configured to control, using different mapping schemes, whether a single-level cell operation or a multi-level cell operation is to be used. The single-level cell operation, for instance, is usable to store a data word using two states whereas the multi-level cell operation is usable to store the data word by also using an intermediate state. In order to store the data word using two states,…
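
The two mapping schemes can be contrasted as base-2 digits (two states per cell) versus base-3 digits (an added intermediate state per cell); the digit encodings are illustrative only.

```python
def map_slc(word, num_cells):
    # Single-level scheme: one bit per cell, two states.
    return [(word >> i) & 1 for i in range(num_cells)]

def map_mlc(word, num_cells):
    # Multi-level scheme: three states per cell, including an intermediate level.
    cells = []
    for _ in range(num_cells):
        cells.append(word % 3)
        word //= 3
    return cells
```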

Reduction of Parallel Memory Operation Messages

Granted: March 28, 2024
Application Number: 20240103730
In accordance with described techniques for reduction of parallel memory operation messages, a computing system or computing device includes a memory system that receives memory operation messages. A shared response component in the memory system receives responses to the memory operation messages, and identifies a set of the responses that are coalesceable. The shared response component then coalesces the set of the responses into a combined message for communication completion through…
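
Coalescing can be sketched as grouping responses by a shared completion target and returning one combined message per group; the grouping key and combined-message fields are assumptions.

```python
from collections import defaultdict

def coalesce_responses(responses):
    groups = defaultdict(list)
    for resp in responses:
        groups[resp["request_group"]].append(resp)    # assumed grouping key
    return [{"request_group": group, "count": len(members), "status": "complete"}
            for group, members in groups.items()]
```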

Memory Control for Data Processing Pipeline Optimization

Granted: March 28, 2024
Application Number: 20240103719
Generating optimization instructions for data processing pipelines is described. A pipeline optimization system computes resource usage information that describes memory and compute usage metrics during execution of each stage of the data processing pipeline. The system additionally generates data storage information that describes how data output by each pipeline stage is utilized by other stages of the pipeline. The pipeline optimization system then generates the optimization…
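
One way to picture the output is a list of placement hints derived from each stage's measured usage and from how its output is consumed downstream; the stage records and the two heuristics below are invented for the sketch.

```python
def generate_hints(stages):
    hints = []
    for stage in stages:
        if stage["peak_mem_mb"] > stage["mem_budget_mb"]:
            hints.append((stage["name"], "spill_output_to_dram"))
        elif stage["consumers"] == ["next_stage_only"]:
            hints.append((stage["name"], "keep_output_in_cache"))
    return hints
```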

FRAMEWORK FOR COMPRESSION-AWARE TRAINING OF NEURAL NETWORKS

Granted: March 21, 2024
Application Number: 20240095517
Methods and devices are provided for processing data using a neural network. Activations from a previous layer of the neural network are received by a layer of the neural network. Weighted values, to be applied to values of elements of the activations, are determined based on a spatial correlation of the elements and a task error output by the layer. The weighted values are applied to the values of the elements and a combined error is determined based on the task error and the spatial…
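
A rough NumPy rendering of the described combination: weight activation elements by a spatial-correlation score and fold a correlation-based penalty into the task error. The correlation measure and the mixing coefficient are assumptions, not the patented formulation.

```python
import numpy as np

def combined_error(activations, task_error, alpha=0.1):
    acts = np.asarray(activations, dtype=float)
    # Crude spatial correlation: similarity of each element to its right-hand neighbor.
    corr = 1.0 - np.abs(np.diff(acts, axis=-1, append=acts[..., -1:]))
    weighted = acts * corr                          # weighted element values
    spatial_penalty = float(np.mean(1.0 - corr))    # penalty for low spatial correlation
    return weighted, task_error + alpha * spatial_penalty
```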