DYNAMIC RANDOM-ACCESS MEMORY (DRAM) TRAINING ACCELERATION
Granted: April 27, 2023
Application Number:
20230132306
A method for performing read training of a memory channel includes writing a data pattern to a memory using a data bus having a predetermined number of bit lanes. An edge of a read data eye is determined individually for each bit lane by reading the data pattern over the data bus using a read burst cycle having a predetermined length, grouping data received on each bit lane over the read burst cycle to form a bit lane data group, and comparing the bit lane data group to corresponding…
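The per-lane eye-edge search described above can be sketched in software. This is a minimal model, not the patented circuit: the delay taps, bus width, burst length, and the `read_burst` callable are all assumptions standing in for hardware.

```python
# Hedged sketch of per-bit-lane read-eye edge detection: sweep a delay tap,
# capture a full read burst, group the bits seen on each lane across the
# burst, and record the first tap at which the whole group matches the
# written pattern. NUM_LANES and BURST_LENGTH are illustrative values.

NUM_LANES = 8        # width of the data bus (assumption)
BURST_LENGTH = 16    # beats per read burst cycle (assumption)

def find_eye_edges(read_burst, expected, delay_taps):
    """For each bit lane, find the first delay tap whose full burst matches.

    read_burst(tap) -> list of BURST_LENGTH bus words captured at that tap.
    expected        -> the data pattern previously written, same shape.
    Returns a dict: lane -> first passing tap (a read-eye edge), or None.
    """
    edges = {lane: None for lane in range(NUM_LANES)}
    for tap in delay_taps:
        captured = read_burst(tap)
        for lane in range(NUM_LANES):
            if edges[lane] is not None:
                continue
            # Group the data received on this lane over the read burst cycle.
            got = [(word >> lane) & 1 for word in captured]
            want = [(word >> lane) & 1 for word in expected]
            if got == want:
                edges[lane] = tap
    return edges
```

A real trainer would also find the trailing edge and center the strobe between the two; this sketch stops at the first passing tap per lane.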
ERROR RECOVERY FOR NON-VOLATILE MEMORY MODULES
Granted: April 27, 2023
Application Number:
20230125792
A memory controller includes a command queue, a memory interface queue, at least one storage queue, and a replay control circuit. The command queue has a first input for receiving memory access commands. The memory interface queue receives commands selected from the command queue and couples to a heterogeneous memory channel which is coupled to at least one non-volatile storage class memory (SCM) module. The at least one storage queue stores memory access commands that are placed in the…
MULTI-ADAPTIVE CACHE REPLACEMENT POLICY
Granted: April 6, 2023
Application Number:
20230109344
Techniques for performing cache operations are provided. The techniques include tracking performance events for a plurality of test sets of a cache, detecting a replacement policy change trigger event associated with a test set of the plurality of test sets, and in response to the replacement policy change trigger event, operating non-test sets of the cache according to a replacement policy associated with the test set.
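The test-set mechanism above resembles set dueling and can be modeled compactly. The policy names, the miss-counter trigger, and the "adopt the least-missing policy" rule below are assumptions used for illustration, not details taken from the patent.

```python
# Illustrative sketch of multi-adaptive replacement: a few "test sets" each
# run a candidate policy and track performance events (misses here); when a
# test set's counter trips the change trigger, the non-test sets switch to
# the best-performing candidate policy.

class AdaptiveCache:
    def __init__(self, policies, trigger=100):
        self.misses = {p: 0 for p in policies}   # per-test-set miss counters
        self.active = policies[0]                # policy used by non-test sets
        self.trigger = trigger

    def record_test_miss(self, policy):
        self.misses[policy] += 1
        if self.misses[policy] >= self.trigger:  # replacement-policy change trigger event
            # Adopt the policy whose test set missed least, then reset tracking.
            self.active = min(self.misses, key=self.misses.get)
            self.misses = {p: 0 for p in self.misses}
```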
CACHE MISS PREDICTOR
Granted: April 6, 2023
Application Number:
20230108964
Methods, devices, and systems for retrieving information based on cache miss prediction. A prediction that a cache lookup for the information will miss a cache is made based on a history table. The cache lookup for the information is performed based on the request. A main memory fetch for the information is begun before the cache lookup completes, based on the prediction that the cache lookup for the information will miss the cache. In some implementations, the prediction includes…
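The early-fetch idea can be sketched as follows. The 2-bit saturating confidence counter and the table indexing are assumptions; the point is only that the memory fetch is issued before the cache lookup completes whenever the history table predicts a miss.

```python
# Hedged sketch of history-table miss prediction with an early memory fetch.

class MissPredictor:
    def __init__(self):
        self.history = {}   # address-derived index -> 2-bit confidence counter

    def predict_miss(self, addr):
        return self.history.get(addr % 1024, 0) >= 2

    def update(self, addr, missed):
        idx = addr % 1024
        ctr = self.history.get(idx, 0)
        self.history[idx] = min(3, ctr + 1) if missed else max(0, ctr - 1)

def load(addr, predictor, cache_lookup, memory_fetch):
    early = predictor.predict_miss(addr)
    if early:
        memory_fetch(addr)          # begin the fetch before the lookup completes
    hit = cache_lookup(addr)
    predictor.update(addr, missed=not hit)
    if not hit and not early:
        memory_fetch(addr)          # mispredicted hit: fall back to a late fetch
```

A mispredicted miss costs one wasted memory fetch; a correctly predicted miss hides the entire lookup latency.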
CACHE ALLOCATION POLICY
Granted: April 6, 2023
Application Number:
20230105709
A cache includes an upstream port, a downstream port, a cache memory, and a control circuit. The control circuit temporarily stores memory access requests received from the upstream port, and checks for dependencies for a new memory access request with older memory access requests temporarily stored therein. If one of the older memory access requests creates a false dependency with the new memory access request, the control circuit drops an allocation of a cache line to the cache memory…
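The allocation-drop decision can be sketched as a check of the new request against the temporarily stored older ones. Treating a read-after-read overlap as the "false dependency" is an assumption for illustration; the patent's precise definition may differ.

```python
# Sketch of the dependency check: if an older pending request targets the
# same cache line but creates only a false dependency with the new request
# (modeled here as read-vs-read), the control circuit drops the allocation
# of a cache line for the new request.

LINE = 64  # cache line size in bytes (assumption)

def should_allocate(new_req, pending):
    """new_req and pending entries are (op, addr) tuples, op in {'R', 'W'}."""
    op_new, addr_new = new_req
    for op, addr in pending:
        same_line = addr // LINE == addr_new // LINE
        if same_line and op == 'R' and op_new == 'R':
            return False   # false dependency: drop the cache-line allocation
    return True
```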
METHOD AND APPARATUS FOR MANAGING A CONTROLLER IN A POWER DOWN STATE
Granted: March 30, 2023
Application Number:
20230099399
A method and apparatus for managing a controller includes indicating, by a processor of a first device, to the controller of a second device to enter a second power state from a first power state. The controller of the second device responds to the processor of the first device with a confirmation. The processor of the first device transmits a signal to the controller of the second device to enter the second power state. Upon receiving a wake event, the controller of the second device…
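The request/confirm/enter handshake can be modeled as a small state machine. The state names (`D0`, `D3`) and return values below are illustrative conventions, not taken from the patent.

```python
# Hedged sketch of the two-device power handshake: the first device's
# processor indicates the target state, waits for the controller's
# confirmation, then transmits the signal that actually enters the state;
# a later wake event returns the controller to the active state.

class Controller:
    def __init__(self):
        self.state = "D0"          # active power state (assumed naming)

    def request_entry(self, target):
        self.pending = target      # remember the indicated target state
        return "ACK"               # confirmation back to the processor

    def enter(self):
        self.state = self.pending  # transition on the explicit entry signal

    def wake(self):
        self.state = "D0"          # wake event restores the active state

def processor_power_down(ctrl, target="D3"):
    if ctrl.request_entry(target) == "ACK":
        ctrl.enter()               # transmit the enter-state signal
    return ctrl.state
```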
POWER SAVING THROUGH DELAYED MESSAGE PROCESSING
Granted: March 30, 2023
Application Number:
20230103054
Systems and methods are disclosed for reducing the power consumption of a system. Techniques are described that queue a message, sent by a source engine of the system, in a queue of a destination engine of the system that is in a sleep mode. Then, a priority level associated with the queued message is determined. If the priority level is at a maximum level, the destination engine is brought into an active mode. If the priority level is at an intermediate level, the destination engine is…
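The three-level policy above can be sketched directly. The wake-timer behavior for intermediate priority is stated only loosely in the truncated abstract, so its modeling here (arm a timer, keep queuing) is an assumption.

```python
# Hedged model of delayed message processing: messages land in the sleeping
# destination engine's queue; a maximum-priority message wakes it at once,
# an intermediate-priority message arms a deferred wake, and a low-priority
# message simply waits in the queue.

MAX, MID, LOW = 2, 1, 0

class DestinationEngine:
    def __init__(self):
        self.queue = []
        self.awake = False
        self.wake_timer_armed = False

    def enqueue(self, msg, priority):
        self.queue.append(msg)
        if priority == MAX:
            self.awake = True            # bring the engine into active mode now
        elif priority == MID and not self.awake:
            self.wake_timer_armed = True # wake later, batching queued work
        # LOW: leave the engine in sleep mode; the message waits in the queue
```

Deferring the wake lets several low-priority messages accumulate and be processed in one active period, which is where the power saving comes from.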
RE-REFERENCE INTERVAL PREDICTION (RRIP) WITH PSEUDO-LRU SUPPLEMENTAL AGE INFORMATION
Granted: March 30, 2023
Application Number:
20230102891
Systems and methods for cache replacement are disclosed. Techniques are described that determine a re-reference interval prediction (RRIP) value of respective data blocks in a cache, where an RRIP value represents a likelihood that a respective data block will be re-used within a time interval. Upon an access, by a processor, to a data segment in a memory, if the data segment is not stored in the cache, a data block in the cache to be replaced by the data segment is selected, utilizing a…
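A sketch of RRIP victim selection supplemented by pseudo-LRU age information follows. The 2-bit RRIP width and the single `plru_old` age bit per block are assumptions; the idea is that the age bit breaks ties among blocks sharing the maximal RRIP value.

```python
# Illustrative RRIP victim selection: find blocks at the distant-future
# RRIP value (least likely to be re-used within the interval); if several
# tie, prefer one the pseudo-LRU supplemental age bits mark as older.

RRIP_MAX = 3  # 2-bit re-reference interval prediction values (assumption)

def select_victim(blocks):
    """blocks: list of dicts with 'rrip' (0..3) and 'plru_old' (bool) keys.
    Returns the index of the block to replace. Mutates 'rrip' when aging."""
    while True:
        worst = max(b['rrip'] for b in blocks)
        candidates = [i for i, b in enumerate(blocks) if b['rrip'] == worst]
        if worst == RRIP_MAX:
            for i in candidates:
                if blocks[i]['plru_old']:
                    return i       # pseudo-LRU age breaks the RRIP tie
            return candidates[0]
        for b in blocks:           # no block at the maximum yet: age and retry
            b['rrip'] += 1
```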
SYSTEM AND METHODS FOR EFFICIENT EXECUTION OF A COLLABORATIVE TASK IN A SHADER SYSTEM
Granted: March 30, 2023
Application Number:
20230102767
Methods and systems are disclosed for executing a collaborative task in a shader system. Techniques disclosed include receiving, by the system, input data and computing instructions associated with the collaborative task, as well as a configuration setting that causes the system to operate in a takeover mode. The system then launches, exclusively in one workgroup processor, a workgroup including wavefronts configured to execute the collaborative task.
STACKED COMMAND QUEUE
Granted: March 30, 2023
Application Number:
20230102680
A memory controller includes a command queue with multiple entry stacks, each with a plurality of entries holding memory access commands, one or more parameter indicators each holding a respective characteristic common to the plurality of entries, and a head indicator designating a current entry for arbitration. An arbiter has a single command input for each entry stack. A command queue loader circuit receives incoming memory access commands and loads entries of respective entry stacks…
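The entry-stack structure can be modeled as follows. Using a DRAM row as the shared parameter indicator is an assumption chosen because commands to one row naturally group; the abstract does not name the characteristic.

```python
# Illustrative stacked command queue: each entry stack holds commands that
# share a characteristic (a DRAM row here, as an assumption), stores that
# parameter once for the whole stack, and exposes only its head entry as
# the single command input the arbiter sees for that stack.

class EntryStack:
    def __init__(self, row):
        self.row = row        # parameter indicator common to all entries
        self.entries = []
        self.head = 0         # head indicator: current entry for arbitration

    def current(self):
        return self.entries[self.head] if self.head < len(self.entries) else None

class StackedCommandQueue:
    def __init__(self):
        self.stacks = {}

    def load(self, cmd, row):
        # Loader circuit groups incoming commands into the matching stack.
        self.stacks.setdefault(row, EntryStack(row)).entries.append(cmd)

    def arbiter_inputs(self):
        # One command input per entry stack, not per entry.
        return [s.current() for s in self.stacks.values() if s.current()]
```

Storing the common characteristic once per stack is what lets the queue hold more commands than a flat design with the same comparator and storage budget.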
LOW LATENCY AUGMENTED REALITY ARCHITECTURE FOR CAMERA ENABLED DEVICES
Granted: March 30, 2023
Application Number:
20230093933
Systems and methods are disclosed that provide low latency augmented reality architecture for camera enabled devices. Systems and methods of communication between system components are presented that use a hybrid communication protocol. Techniques include communications between system components that involve one-way transactions. A hardware message controller is disclosed that controls out-buffers and in-buffers to facilitate the hybrid communication protocol.
STORING AN INDICATION OF A SPECIFIC DATA PATTERN IN SPARE DIRECTORY ENTRIES
Granted: March 30, 2023
Application Number:
20230099256
A system and method for omission of probes when requesting data stored in memory where the omission includes creating a coherence directory entry, determining whether cache line data for the coherence directory entry is a trackable pattern, and setting an indication indicating that one or more reads for the cache line data can be serviced without sending probes. A system and method for providing extra data storage capacity in a coherence directory where the extra data storage capacity…
HIGH TO LOW LEVEL SHIFTER ARCHITECTURE USING LOWER VOLTAGE DEVICES
Granted: March 30, 2023
Application Number:
20230098336
A voltage level-shifting circuit for an integrated circuit includes an input terminal receiving a voltage signal referenced to an input/output (I/O) voltage level. A transistor overvoltage protection circuit includes a first p-type metal oxide semiconductor (PMOS) transistor having a source coupled to the second voltage supply, a gate receiving an enable signal, and a drain connected to a central node. A first n-type metal oxide semiconductor (NMOS) transistor includes a drain connected…
MEMORY MANAGEMENT IN GRAPHICS AND COMPUTE APPLICATION PROGRAMMING INTERFACES
Granted: March 30, 2023
Application Number:
20230097620
Methods are provided for creating objects in a way that permits an API client to explicitly participate in memory management for an object created using the API. Methods for managing data object memory include requesting memory requirements for an object using an API and expressly allocating a memory location for the object based on the memory requirements. Methods are also provided for cloning objects such that a state of the object remains unchanged from the original object to the…
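The query-then-allocate flow the abstract describes can be sketched against a hypothetical API. Every name below (`get_memory_requirements`, `create_object`, the size and alignment fields) is illustrative, not the actual API surface.

```python
# Hedged sketch of client-participated memory management: the API client
# first queries an object's memory requirements, explicitly allocates a
# memory location that satisfies them, then creates the object in that
# client-owned allocation.

class Api:
    REQUIRED = 256  # bytes an object of this kind needs (assumption)

    def get_memory_requirements(self, kind):
        return {"size": self.REQUIRED, "alignment": 64}

    def create_object(self, kind, memory):
        assert len(memory) >= self.REQUIRED  # validate the client's allocation
        return {"kind": kind, "memory": memory}

api = Api()
req = api.get_memory_requirements("pipeline")
backing = bytearray(req["size"])             # client allocates explicitly
obj = api.create_object("pipeline", backing)
```

This mirrors the pattern in explicit graphics APIs, where the client queries requirements and binds its own memory rather than letting the driver allocate opaquely.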
ACCELERATION STRUCTURES WITH DELTA INSTANCES
Granted: March 30, 2023
Application Number:
20230097562
Described herein is a technique for performing ray tracing operations. The technique includes encountering, at a non-leaf node, a pointer to a bottom-level acceleration structure having one or more delta instances; identifying an index associated with the pointer, wherein the index identifies an instance within the bottom-level acceleration structure; and obtaining data for the instance based on the pointer and the index.
CONVOLUTIONAL NEURAL NETWORK OPERATIONS
Granted: March 30, 2023
Application Number:
20230097279
Methods and systems are disclosed for executing operations on single-instruction-multiple-data (SIMD) units. Techniques disclosed perform a dot product operation on input data during one computer cycle, including convolving the input data, generating intermediate data, and applying one or more transitional operations to the intermediate data to generate output data. Aspects described, wherein the input data is an input to a layer of a convolutional neural network and the generated output…
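At scalar scale, the fused step reads as a dot product followed by a transitional operation on the intermediate result. Using ReLU as that operation is an assumption; the abstract does not name it.

```python
# Minimal sketch of the fused convolution step: a dot product over one
# input window generates intermediate data, and a transitional operation
# (ReLU here, as an assumption) turns it into output data. On the SIMD
# unit this whole sequence is described as completing in one cycle.

def conv_dot_relu(window, weights):
    intermediate = sum(x * w for x, w in zip(window, weights))  # dot product
    return max(0, intermediate)   # transitional op on the intermediate data
```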
RE-REFERENCE INDICATOR FOR RE-REFERENCE INTERVAL PREDICTION CACHE REPLACEMENT POLICY
Granted: March 30, 2023
Application Number:
20230096814
Techniques for performing cache operations are provided. The techniques include tracking re-references for cache lines of a cache, detecting that eviction is to occur, and selecting a cache line for eviction from the cache based on a re-reference indication.
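The selection rule can be sketched with a single re-reference bit per line; the one-bit indicator is an assumption about the scheme, which the short abstract leaves open.

```python
# Sketch of re-reference tracking for eviction: each line records whether
# it has ever been re-referenced, and eviction prefers a line that never
# was, since it is least likely to be needed again.

def touch(lines, addr):
    if addr in lines:
        lines[addr] = True          # re-reference observed
    else:
        lines[addr] = False         # first reference: insert, not yet re-used

def pick_eviction(lines):
    for addr, rereferenced in lines.items():
        if not rereferenced:
            return addr             # evict a never-re-referenced line first
    return next(iter(lines))        # all re-referenced: fall back to any line
```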
SUPPRESSING CACHE LINE MODIFICATION
Granted: March 30, 2023
Application Number:
20230096563
Disclosed is a system and method for use in a cache for suppressing modification of a cache line. The system and method include a processor and a memory operating cooperatively with a cache controller. The memory includes a coherence directory, stored within a cache, created to track at least one cache line in the cache via the cache controller. The processor instructs the cache controller to store first data in a cache line in the cache. The cache controller tags the cache line based on…
ENCODED ENABLE CLOCK GATERS
Granted: March 30, 2023
Application Number:
20230096138
A processing device is provided which includes a processor and a data storage structure. The data storage structure comprises a data storage array comprising a plurality of lines. Each line comprises at least one A latch configured to store a data bit, and a clock gater. The data storage structure also comprises a write data B latch configured to store, over different clock cycles, a different data bit, each to be written to the at least one A latch of one of the plurality of lines. The…
METHOD AND APPARATUS FOR ISOLATING AND LATCHING GPIO OUTPUT PADS
Granted: March 30, 2023
Application Number:
20230095622
A method and apparatus for isolating and restoring general-purpose input/output (GPIO) pads in a computer system includes, responsive to an entry into a power-down state of a region of a circuit, identifying GPIO pads associated with the region. The GPIO pads are isolated from one or more external circuits. Upon exit from the power-down state, each associated GPIO pad is restored to a current value.
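The isolate/restore sequence can be sketched as a latch-and-replay model. The pad representation below is illustrative; real isolation happens in pad cells, not software.

```python
# Hedged sketch of GPIO isolation and restore: on power-down entry, each
# pad associated with the region latches its current output value and is
# isolated from external circuits; on exit, the latched value is restored.

class GpioPad:
    def __init__(self, value):
        self.value = value
        self.isolated = False

    def isolate(self):
        self.latched = self.value   # latch the current output value
        self.isolated = True        # disconnect from external circuits

    def restore(self):
        self.value = self.latched   # drive the pre-power-down value again
        self.isolated = False

def power_down_region(pads):
    for p in pads:
        p.isolate()

def power_up_region(pads):
    for p in pads:
        p.restore()
```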