DROOP MITIGATION FOR AN INTER-CHIPLET INTERFACE
Granted: January 4, 2024
Application Number:
20240004821
Systems and methods are disclosed for voltage droop mitigation associated with a voltage rail that supplies power to circuitry of a chiplet. Techniques disclosed include detecting an upcoming transmission of data packets that are to be transmitted through a physical layer of the chiplet. Then, before transmitting the data packets through the physical layer, throttling a rate of bandwidth utilization in the physical layer and transmitting, by the controller, the data packets through the…
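The idea in the abstract can be sketched in a few lines: before a large burst of packets reaches the physical layer, the controller pre-emptively lowers the allowed bandwidth utilization, then steps it back up while transmitting. This is an illustrative model only; the class, rates, and ramp schedule are assumptions, not the patented implementation.

```python
# Hypothetical sketch of pre-transmission throttling: the voltage rail
# sees a gradual rather than abrupt load increase. Names are invented.

class PhyController:
    def __init__(self, max_rate):
        self.max_rate = max_rate      # packets per cycle at full speed
        self.rate = max_rate          # currently allowed rate
        self.log = []                 # records (cycle, packets_sent)

    def throttle(self, fraction):
        """Clamp the allowed rate to a fraction of full bandwidth."""
        self.rate = max(1, int(self.max_rate * fraction))

    def ramp_and_send(self, packets, ramp_cycles=4):
        """Throttle before the burst, then restore bandwidth gradually."""
        self.throttle(0.25)           # pre-emptive throttle before the burst
        cycle = 0
        sent = 0
        while sent < len(packets):
            burst = packets[sent:sent + self.rate]
            self.log.append((cycle, len(burst)))
            sent += len(burst)
            cycle += 1
            # gradually restore full bandwidth as the rail stabilizes
            if cycle <= ramp_cycles:
                self.throttle(0.25 + 0.75 * cycle / ramp_cycles)
        return cycle
```

In this toy model the first bursts are small and the rate only returns to `max_rate` after the ramp window, which is the droop-mitigation behavior the abstract describes.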
SCHEDULING TRAINING OF AN INTER-CHIPLET INTERFACE
Granted: January 4, 2024
Application Number:
20240004815
Systems and methods are disclosed for scheduling a data link training by a controller. The system and method include receiving an indication that a physical layer of a data link is not transferring data and initiating a training process of the physical layer of the data link in response to the indication that the physical layer of the data link is not transferring data. In one aspect, the indication that the physical layer of a data link is not transferring data is an indication that the…
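A minimal sketch of the scheduling idea, assuming a controller that receives idle/busy indications from the link: a training request is deferred while the link is transferring data and runs as soon as the idle indication arrives. Event and method names are invented for illustration.

```python
# Illustrative link controller that defers PHY retraining until the
# link reports it is not transferring data.

class LinkController:
    def __init__(self):
        self.transferring = True
        self.training_pending = False
        self.trained_count = 0

    def request_training(self):
        """Record that the PHY needs retraining; run now if already idle."""
        self.training_pending = True
        if not self.transferring:
            self._train()

    def on_link_idle(self):
        """Indication that the physical layer is not transferring data."""
        self.transferring = False
        if self.training_pending:
            self._train()

    def on_link_busy(self):
        self.transferring = True

    def _train(self):
        """Stand-in for the actual physical-layer training process."""
        self.trained_count += 1
        self.training_pending = False
```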
ACCELERATING PREDICATED INSTRUCTION EXECUTION IN VECTOR PROCESSORS
Granted: January 4, 2024
Application Number:
20240004656
Methods and systems are disclosed for processing a vector by a vector processor. Techniques disclosed include receiving predicated instructions by a scheduler, each of which is associated with an opcode, a vector of elements, and a predicate. The techniques further include executing the predicated instructions. Executing a predicated instruction includes compressing, based on an index derived from a predicate of the instruction, elements in a vector of the instruction, where the elements…
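The compression step the abstract describes can be modeled directly: derive an index list from the predicate, gather only the active lanes into a dense vector, execute on it, and scatter results back. This is a software sketch of the concept, not the vector processor's actual datapath.

```python
# Sketch of predicate-driven compression: pack only active elements so
# the execution units operate on a dense vector.

def compress_by_predicate(vector, predicate):
    """Gather elements whose predicate bit is set, preserving order,
    and return them with their original lane indices."""
    indices = [i for i, p in enumerate(predicate) if p]
    packed = [vector[i] for i in indices]
    return packed, indices

def predicated_add(vec_a, vec_b, predicate):
    """Execute an add only on active lanes; inactive lanes keep vec_a."""
    packed_a, idx = compress_by_predicate(vec_a, predicate)
    packed_b = [vec_b[i] for i in idx]
    result = list(vec_a)                      # inactive lanes unchanged
    for lane, a, b in zip(idx, packed_a, packed_b):
        result[lane] = a + b                  # expand results back
    return result
```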
TECHNIQUE TO ENABLE SIMULTANEOUS USE OF ON-DIE SRAM AS CACHE AND MEMORY
Granted: December 28, 2023
Application Number:
20230418745
A technique for operating a cache is disclosed. The technique includes utilizing a first portion of a cache in a directly accessed manner; and utilizing a second portion of the cache as a cache.
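One way to picture the split is a single SRAM whose low region is addressed directly as scratchpad memory while the remainder is managed as a small cache. The sizes, direct-mapped policy, and backing-store model below are assumptions made for the sketch.

```python
# Toy model: one on-die SRAM, part scratchpad (directly accessed),
# part direct-mapped cache over a backing store.

class SplitSram:
    def __init__(self, total_words, scratch_words, backing):
        self.scratch = [0] * scratch_words           # directly accessed portion
        self.cache_lines = total_words - scratch_words
        self.tags = {}                               # line -> cached address
        self.data = {}                               # line -> cached value
        self.backing = backing                       # models off-chip memory
        self.misses = 0

    def scratch_write(self, addr, value):
        self.scratch[addr] = value                   # no tags, no misses

    def scratch_read(self, addr):
        return self.scratch[addr]

    def cached_read(self, addr):
        line = addr % self.cache_lines               # direct-mapped index
        if self.tags.get(line) != addr:
            self.misses += 1                         # fetch from backing store
            self.tags[line] = addr
            self.data[line] = self.backing[addr]
        return self.data[line]
```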
NOISE MITIGATION IN SINGLE-ENDED LINKS
Granted: December 28, 2023
Application Number:
20230421203
An integrated circuit includes a first terminal for receiving a data signal, a second terminal for receiving an external reference voltage, a receiver, and a reference voltage generation circuit. The receiver is powered by a power supply voltage with respect to ground and has a first input coupled to the first terminal, a second input for receiving a shared reference voltage, and an output for providing a data input signal. The reference voltage generation circuit is coupled to the…
METHOD AND APPARATUS FOR RECOVERING REGULAR ACCESS PERFORMANCE IN FINE-GRAINED DRAM
Granted: December 28, 2023
Application Number:
20230420036
A fine-grained dynamic random-access memory (DRAM) includes a first memory bank, a second memory bank, and a dual-mode I/O circuit. The first memory bank includes a memory array divided into a plurality of grains, each grain including a row buffer and input/output (I/O) circuitry. The dual-mode I/O circuit is coupled to the I/O circuitry of each grain in the first memory bank, and operates in a first mode in which commands having a first data width are routed to and fulfilled…
CHANNEL ROUTING FOR SIMULTANEOUS SWITCHING OUTPUTS
Granted: December 28, 2023
Application Number:
20230420018
A data processor accesses a memory having a first pseudo channel and a second pseudo channel. The data processor includes at least one memory accessing agent, a memory controller, and a data fabric. The at least one memory accessing agent generates memory access requests, including first memory access requests that access the memory. The memory controller provides memory commands to the memory in response to the first memory access requests. The data fabric routes the…
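The routing step can be sketched as a data fabric that steers each request to one of the two pseudo channels by an address bit, spreading simultaneously switching outputs across both channels. The bit position and the queue model are assumptions for illustration.

```python
# Illustrative pseudo-channel steering by address bit.

PSEUDO_CHANNEL_BIT = 6   # assumed: this address bit selects the channel

def route_requests(requests):
    """Split a list of request addresses across two pseudo-channel queues."""
    channels = {0: [], 1: []}
    for addr in requests:
        pc = (addr >> PSEUDO_CHANNEL_BIT) & 1
        channels[pc].append(addr)
    return channels
```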
ENABLING ACCELERATED PROCESSING UNITS TO PERFORM DATAFLOW EXECUTION
Granted: December 28, 2023
Application Number:
20230418782
Methods and systems are disclosed for performing dataflow execution by an accelerated processing unit (APU). Techniques disclosed include decoding information from one or more dataflow instructions. The decoded information is associated with dataflow execution of a computational task. Techniques disclosed further include configuring, based on the decoded information, dataflow circuitry, and, then, executing the dataflow execution of the computational task using the dataflow circuitry.
MEMORY CONTROLLER WITH PSEUDO-CHANNEL SUPPORT
Granted: December 28, 2023
Application Number:
20230418772
A data processor accesses a memory having a first pseudo channel and a second pseudo channel. The data processor includes at least one memory accessing agent for generating a memory access request, a memory controller for providing a memory command to the memory in response to a normalized request selectively using a first pseudo channel pipeline circuit and a second pseudo channel pipeline circuit, and a data fabric for converting the memory access request into the normalized request…
ALLOCATION CONTROL FOR CACHE
Granted: December 28, 2023
Application Number:
20230418753
A technique for operating a cache is disclosed. The technique includes based on a workload change, identifying a first allocation permissions policy; operating the cache according to the first allocation permissions policy; based on set sampling, identifying a second allocation permissions policy; and operating the cache according to the second allocation permissions policy.
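The two triggers in the abstract can be modeled as a small state machine: a workload change selects an initial allocation-permissions policy, and later set sampling can switch to a different one. The policy names, workload classes, and hit-rate threshold are invented for the sketch.

```python
# Toy cache whose allocation permissions change on workload changes and
# on set-sampling results. All policies and thresholds are assumptions.

class Cache:
    def __init__(self):
        self.policy = "allocate_all"

    def on_workload_change(self, workload):
        # assumed mapping from workload type to allocation permissions
        self.policy = ("no_allocate_streaming"
                       if workload == "streaming" else "allocate_all")

    def on_set_sample(self, sampled_hit_rate):
        # set sampling: if sampled sets show poor reuse, stop allocating
        self.policy = ("no_allocate_streaming"
                       if sampled_hit_rate < 0.1 else "allocate_all")

    def may_allocate(self, is_streaming_access):
        if self.policy == "no_allocate_streaming" and is_streaming_access:
            return False
        return True
```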
LIVE PROFILE-DRIVEN CACHE AGING POLICIES
Granted: December 28, 2023
Application Number:
20230418744
A technique for operating a cache is disclosed. The technique includes recording access data for a first set of memory accesses of a first frame; identifying parameters for a second set of memory accesses of a second frame subsequent to the first frame, based on the access data; and applying the parameters to the second set of memory accesses.
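The frame-to-frame loop above can be sketched in three functions: record which addresses frame N touched, derive per-address aging parameters from that profile, and apply them while servicing frame N+1. The aging heuristic here (reused lines age slowly, single-use lines age fast) is an assumption, not the patented policy.

```python
# Profile one frame's accesses, then tag the next frame's accesses with
# aging parameters derived from the profile.

from collections import Counter

def profile_frame(accesses):
    """Record access data for one frame: how often each address is touched."""
    return Counter(accesses)

def derive_aging_params(profile):
    """Map each address to an aging rate for the next frame."""
    return {addr: ("slow" if count > 1 else "fast")
            for addr, count in profile.items()}

def apply_params(accesses, params):
    """Tag each access of the next frame with its aging rate."""
    return [(addr, params.get(addr, "fast")) for addr in accesses]
```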
ARTIFICIAL NEURAL NETWORK EMULATION OF HOTSPOTS
Granted: December 21, 2023
Application Number:
20230409982
Methods, devices, and systems for emulating a compute kernel with an ANN. The compute kernel is executed on a processor, and it is determined whether the compute kernel is a hotspot kernel. If the compute kernel is a hotspot kernel, the compute kernel is emulated with an ANN, and the ANN is substituted for the compute kernel.
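The flow reduces to: run kernels, decide from a runtime profile which one is a hotspot, and substitute a cheaper surrogate for it. In this sketch the hotspot threshold is invented and a plain function stands in for the trained ANN.

```python
# Replace any kernel dominating runtime with a surrogate "ANN" model.
# Threshold and surrogate are illustrative assumptions.

HOTSPOT_THRESHOLD = 0.5   # assumed: fraction of total runtime

def maybe_substitute(kernels, runtimes, surrogates):
    """Return a kernel table with hotspots swapped for their surrogates."""
    total = sum(runtimes.values())
    result = {}
    for name, fn in kernels.items():
        if runtimes[name] / total >= HOTSPOT_THRESHOLD and name in surrogates:
            result[name] = surrogates[name]   # emulate hotspot with ANN
        else:
            result[name] = fn                 # keep original kernel
    return result
```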
Neural Network Activation Scaled Clipping Layer
Granted: December 21, 2023
Application Number:
20230409868
Activation scaled clipping layers for neural networks are described. An activation scaled clipping layer processes an output of a neuron in a neural network using a scaling parameter and a clipping parameter. The scaling parameter defines how numerical values are amplified relative to zero. The clipping parameter specifies a numerical threshold that causes the neuron output to be expressed as a value defined by the numerical threshold if the neuron output satisfies the numerical…
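Read literally, the layer applies two parameters in sequence: scale the neuron output relative to zero, then clamp it to the clipping threshold. The symmetric clamp below is an assumption about the exact functional form; the parameter names follow the abstract.

```python
# One neuron output through an activation scaled clipping layer.

def scaled_clip(x, scale, clip):
    """Amplify x by `scale` relative to zero, then clamp to [-clip, clip]."""
    y = x * scale
    if y > clip:
        return clip
    if y < -clip:
        return -clip
    return y
```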
PARTIAL SORTING FOR COHERENCY RECOVERY
Granted: December 21, 2023
Application Number:
20230409337
Devices and methods for partial sorting for coherence recovery are provided. The partial sorting is efficiently executed by utilizing existing hardware along the memory path (e.g., memory local to the compute unit). The devices include an accelerated processing device which comprises memory and a processor. The processor is, for example, a compute unit of a GPU which comprises a plurality of SIMD units and is configured to determine, for data entries each comprising a plurality of bits,…
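A partial sort of this kind can be sketched as bucketing entries by a few high-order key bits, so entries likely to touch the same memory end up adjacent without paying for a full sort. The bit widths below are assumptions.

```python
# Bucket entries by their top bits; order inside a bucket is left as-is,
# which is what makes the sort "partial".

def partial_sort(entries, key_bits=2, total_bits=8):
    """Group integer entries by their top `key_bits` bits, in bucket order."""
    shift = total_bits - key_bits
    buckets = [[] for _ in range(1 << key_bits)]
    for e in entries:
        buckets[e >> shift].append(e)
    return [e for bucket in buckets for e in bucket]
```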
VLIW Dynamic Communication
Granted: December 21, 2023
Application Number:
20230409336
In accordance with described techniques for VLIW Dynamic Communication, an instruction that causes dynamic communication of data to at least one processing element of a very long instruction word (VLIW) machine is dispatched to a plurality of processing elements of the VLIW machine. A first count of data communications issued by the plurality of processing elements and a second count of data communications served by the plurality of processing elements are maintained. At least one…
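The bookkeeping described above can be modeled with two counters: processing elements increment one when they issue a data communication and another when they serve one, and the machine stalls retirement until the counts match. The class structure is illustrative only.

```python
# Issued/served communication counters for a toy VLIW machine.

class VliwMachine:
    def __init__(self, num_pes):
        self.num_pes = num_pes
        self.issued = 0    # first count: communications issued
        self.served = 0    # second count: communications served

    def issue(self, n=1):
        self.issued += n

    def serve(self, n=1):
        self.served += n

    def communications_pending(self):
        return self.issued - self.served

    def can_retire_instruction(self):
        """Only retire once every issued communication has been served."""
        return self.issued == self.served
```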
METHOD AND APPARATUS FOR TRAINING MEMORY
Granted: December 21, 2023
Application Number:
20230409232
A method and apparatus for training memory in a computer system includes reading data stored at a first memory address in a memory and writing it to a buffer. Training data is generated for transmission to the first memory address and transmitted to that address. Information relating to the training data is read back from the first memory address; the stored data is then read from the buffer and written back to the memory area where the training data was transmitted.
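The save/train/restore sequence maps directly to four steps, modeled here on a plain dict standing in for memory. The training pattern value is an assumption.

```python
# Train one memory address without losing the data stored there:
# save -> write training pattern -> read back -> restore.

TRAINING_PATTERN = 0xA5   # assumed pattern value

def train_address(memory, addr):
    """Return True if the training pattern read back correctly."""
    buffer = memory[addr]              # 1. save live data to a buffer
    memory[addr] = TRAINING_PATTERN    # 2. transmit training data
    observed = memory[addr]            # 3. read back training result
    memory[addr] = buffer              # 4. restore original contents
    return observed == TRAINING_PATTERN
```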
MEMORY POOLS IN A MEMORY MODEL FOR A UNIFIED COMPUTING SYSTEM
Granted: December 14, 2023
Application Number:
20230401159
A method and system for providing memory in a computer system. The method includes receiving a memory access request for a shared memory address from a processor, mapping the received memory access request to at least one virtual memory pool to produce a mapping result, and providing the mapping result to the processor.
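The mapping step can be sketched as a lookup of a shared memory address against a set of virtual memory pool ranges, returning the matching pool and offset as the mapping result. The pool layout below is invented for illustration.

```python
# Map a shared-memory access to (pool name, offset within pool).

POOLS = [
    ("gpu_local", 0x0000, 0x4000),    # (name, base, limit) - assumed layout
    ("shared",    0x4000, 0x8000),
]

def map_request(addr):
    """Produce the mapping result for one memory access request."""
    for name, base, limit in POOLS:
        if base <= addr < limit:
            return name, addr - base
    raise ValueError(f"address {addr:#x} not backed by any pool")
```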
TECHNIQUES FOR POWER SAVINGS, IMPROVED SECURITY, AND ENHANCED USER PERCEPTUAL AUDIO
Granted: December 14, 2023
Application Number:
20230400905
A technique for operating a device is disclosed. The technique includes attempting to detect presence of a user based on emitted and reflected audio signals; and controlling power state of the device based on the attempting.
SYSTEM AND METHOD FOR APPLICATION MIGRATION FOR A DOCKABLE DEVICE
Granted: December 7, 2023
Application Number:
20230393995
Described is a method and apparatus for application migration between a dockable device and a docking station in a seamless manner. The dockable device includes a processor and the docking station includes a high-performance processor. The method includes executing at least one application in the dockable device using a first processor, and initiating an application migration for the at least one application from the first processor to a second processor in a docking station responsive…
REDUCING SYSTEM POWER CONSUMPTION WHEN CAPTURING DATA FROM A USB DEVICE
Granted: November 30, 2023
Application Number:
20230384855
Systems and methods are disclosed for reducing power consumed by capturing data from an I/O device. Techniques disclosed include receiving descriptors, by a controller of an I/O host of a system, including information associated with respective data chunks to be captured from an I/O device buffer of the I/O device. Techniques disclosed further include capturing, based on the descriptors, the data chunks. The capturing comprises pulling the data chunks from the I/O device buffer at a…
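The capture step reduces to descriptor-driven pulls: the host controller receives descriptors saying where each data chunk lives in the device buffer and how large it is, then fetches exactly those chunks in one pass, letting the host batch the work and idle in between. The descriptor fields are assumptions for the sketch.

```python
# Pull each described chunk from a device buffer in one pass.

def capture(descriptors, device_buffer):
    """Return the data chunks named by the descriptors (offset, length)."""
    chunks = []
    for desc in descriptors:
        start = desc["offset"]
        chunks.append(device_buffer[start:start + desc["length"]])
    return chunks
```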