Cavium Patent Applications

WORK MIGRATION IN A PROCESSOR

Published: July 31, 2014
Application Number: 20140215478
A packet processor provides for rule matching of packets in a network architecture. The packet processor includes a lookup cluster complex having a number of lookup engines and respective on-chip memory units. The on-chip memory stores rules for matching against packet data. Each of the lookup engines receives a key request associated with a packet and determines a subset of the rules to match against the packet data. A work product may be migrated between lookup engines to complete the…
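
As a rough illustration of the lookup-cluster idea that recurs in several of the applications below, the following sketch shows a key derived from packet data being matched against a small subset of masked rules held in memory. All names and structures here are hypothetical, not taken from the application.

```c
/* Hypothetical sketch of the lookup-cluster idea: a key extracted from a
 * packet selects a subset (bucket) of rules, and a lookup engine scans that
 * subset for a match.  Illustrative only, not Cavium's design. */
#include <stdint.h>
#include <stdio.h>

struct rule {
    uint32_t value;   /* value the key must match after masking */
    uint32_t mask;    /* which key bits are significant          */
    int      action;  /* e.g. accept / drop / forward-to-port    */
};

/* One "bucket" of rules, as a lookup engine might see it. */
struct rule_subset {
    const struct rule *rules;
    int count;
};

/* Return the action of the first matching rule, or -1 if none match. */
int lookup_engine_match(const struct rule_subset *s, uint32_t key)
{
    for (int i = 0; i < s->count; i++)
        if ((key & s->rules[i].mask) == s->rules[i].value)
            return s->rules[i].action;
    return -1;
}

int main(void)
{
    static const struct rule tcp_rules[] = {
        { 0x00500000, 0xffff0000, 1 },  /* illustrative: port 80 in the key's high bits  */
        { 0x01bb0000, 0xffff0000, 2 },  /* illustrative: port 443 in the key's high bits */
    };
    struct rule_subset subset = { tcp_rules, 2 };
    uint32_t key = 0x01bb1234;          /* key built from packet header fields */
    printf("action = %d\n", lookup_engine_match(&subset, key));
    return 0;
}
```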

LOOKUP FRONT END PACKET OUTPUT PROCESSOR

Published: July 3, 2014
Application Number: 20140188973
A packet processor provides for rule matching of packets in a network architecture. The packet processor includes a lookup cluster complex having a number of lookup engines and respective on-chip memory units. The on-chip memory stores rules for matching against packet data. A lookup front-end receives lookup requests from a host, and processes these lookup requests to generate key requests for forwarding to the lookup engines. As a result of the rule matching, the lookup engine returns…

System and Method for Optimizing Use of Channel State Information

Published: July 3, 2014
Application Number: 20140185722
The present invention relates to a combiner, channel identifier, Orthogonal Frequency Division Multiplexing (OFDM) receiver, and method for optimizing use of channel state information of a received signal. The method comprises analyzing a received signal in a time domain and extracting from the received signal characteristics of a communication channel. The method furthermore comprises determining a dynamic indicator of channel state information accuracy based on the characteristics of…
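
Purely as an illustration of what a dynamic accuracy indicator could look like, the sketch below scores a time-domain channel impulse response by how much of its energy sits in significant taps versus residual, noise-like taps. The function, threshold, and scoring rule are assumptions for the example, not the patented method.

```c
/* Illustrative only: one plausible shape for a dynamic CSI accuracy
 * indicator derived from time-domain channel characteristics. */
#include <stdio.h>

/* Returns a value in [0,1]; higher means the channel estimate looks cleaner. */
double csi_accuracy_indicator(const double *taps, int n, double threshold)
{
    double signal = 0.0, noise = 0.0;
    for (int i = 0; i < n; i++) {
        double p = taps[i] * taps[i];
        if (p >= threshold)
            signal += p;       /* energy in taps treated as real channel paths */
        else
            noise += p;        /* energy treated as estimation noise */
    }
    if (signal + noise == 0.0)
        return 0.0;
    return signal / (signal + noise);
}

int main(void)
{
    double h[] = { 0.9, 0.4, 0.02, 0.01, 0.015, 0.01 };  /* example impulse response */
    printf("indicator = %.3f\n", csi_accuracy_indicator(h, 6, 0.01));
    return 0;
}
```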

LOOKUP FRONT END PACKET INPUT PROCESSOR

Published: May 1, 2014
Application Number: 20140119378
A packet processor provides for rule matching of packets in a network architecture. The packet processor includes a lookup cluster complex having a number of lookup engines and respective on-chip memory units. The on-chip memory stores rules for matching against packet data. A lookup front-end receives lookup requests from a host, and processes these lookup requests to generate key requests for forwarding to the lookup engines. As a result of the rule matching, the lookup engine returns…

LEVEL-UP SHIFTER CIRCUIT

Published: March 27, 2014
Application Number: 20140084985
A level-up shifter circuit is suitable for high speed and low power applications. The circuit dissipates almost no static power, or leakage current, compared to conventional designs and can preserve the signal's duty cycle even at high data rates. This circuit can be used with a wide range of power supplies while maintaining operational integrity.

MESSAGING WITH FLEXIBLE TRANSMIT ORDERING

Published: March 20, 2014
Application Number: 20140079071
In one embodiment, a system includes a packet reception unit. The packet reception unit is configured to receive a packet, create a header indicating scheduling of the packet in a plurality of cores, and concatenate the header and the packet. The header is based on the content of the packet. In one embodiment, a system includes a transmit silo configured to store multiple fragments of a packet, the fragments having been sent to a destination and the transmit silo having not received an…
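
A minimal sketch of the reception-unit step described above, under the assumption that the scheduling header simply names a target core derived from the packet's own content; the header layout and selection rule are illustrative only.

```c
/* Hypothetical sketch: build a small scheduling header from the packet
 * content and prepend it to the packet (concatenation).  Not the
 * application's actual header format. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct sched_header {
    uint8_t  target_core;   /* core chosen from the packet's own content */
    uint8_t  flags;
    uint16_t pkt_len;
};

/* Build the header and return header+packet as one contiguous buffer. */
uint8_t *prepend_sched_header(const uint8_t *pkt, uint16_t len, unsigned ncores)
{
    unsigned sum = 0;
    for (uint16_t i = 0; i < len; i++)
        sum += pkt[i];                         /* trivial content-based selector */

    uint8_t *buf = malloc(sizeof(struct sched_header) + len);
    if (!buf)
        return NULL;
    struct sched_header hdr = { (uint8_t)(sum % ncores), 0, len };
    memcpy(buf, &hdr, sizeof hdr);             /* concatenation: header first... */
    memcpy(buf + sizeof hdr, pkt, len);        /* ...then the original packet    */
    return buf;
}

int main(void)
{
    const uint8_t pkt[] = "example packet payload";
    uint8_t *msg = prepend_sched_header(pkt, sizeof pkt, 8);
    if (msg) {
        printf("scheduled to core %u\n", msg[0]);
        free(msg);
    }
    return 0;
}
```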

Multiple Core Session Initiation Protocol (SIP)

Published: February 27, 2014
Application Number: 20140059241
A Session Initiation Protocol (SIP) proxy server including a multi-core central processing unit (CPU) is presented. The multi-core CPU includes a receiving core dedicated to pre-SIP message processing. The pre-SIP message processing may include message retrieval, header and payload parsing, and Call-ID hashing. The Call-ID hashing is used to determine a post-SIP processing core designated to process messages between a particular user pair. The pre-SIP and post-SIP configuration allows for…
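
The Call-ID hashing step lends itself to a short sketch: hashing the dialog's Call-ID and reducing it modulo the number of cores keeps every message of one dialog on the same post-SIP core. The hash choice (FNV-1a) is an assumption for illustration, not taken from the application.

```c
/* Illustrative sketch of Call-ID hashing for core affinity. */
#include <stdint.h>
#include <stdio.h>

/* Simple FNV-1a hash of the Call-ID string; the modulus picks a core. */
unsigned pick_post_sip_core(const char *call_id, unsigned ncores)
{
    uint32_t h = 2166136261u;
    for (const char *p = call_id; *p; p++)
        h = (h ^ (uint8_t)*p) * 16777619u;
    return h % ncores;
}

int main(void)
{
    const char *call_id = "a84b4c76e66710@pc33.example.com";  /* example dialog Call-ID */
    printf("Call-ID %s -> core %u\n", call_id, pick_post_sip_core(call_id, 16));
    return 0;
}
```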

Content Search Mechanism That Uses A Deterministic Finite Automata (DFA) Graph, A DFA State Machine, And A Walker Process

Published: January 30, 2014
Application Number: 20140032607
An improved content search mechanism uses a graph that includes intelligent nodes, avoiding the overhead of post-processing and improving the overall performance of a content processing application. An intelligent node is similar to a node in a DFA graph but includes a command. The command in the intelligent node allows additional state for the node to be generated and checked. This additional state allows the content search mechanism to traverse the same node with two different…
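
A toy walker illustrating the intelligent-node idea: each node carries a command that is checked against extra walker state, so the same node can yield different outcomes on different visits. The command set and graph below are invented for the example.

```c
/* Illustrative walker over a tiny graph of "intelligent" nodes. */
#include <stdio.h>

enum cmd { CMD_NONE, CMD_COUNT_GE };     /* tiny example command set */

struct node {
    int next[256];      /* transition per input byte (0 = back to start node) */
    enum cmd command;   /* command carried by this (intelligent) node */
    int arg;            /* command argument, e.g. a required count */
    int match_id;       /* pattern id reported when the command is satisfied */
};

/* Walk the input; report a match only when a node's command is satisfied by
 * the walker's extra state (here, the number of bytes consumed so far). */
int walk(const struct node *g, const char *in)
{
    int state = 0, count = 0;
    for (const char *p = in; *p; p++) {
        state = g[state].next[(unsigned char)*p];
        count++;
        if (g[state].command == CMD_COUNT_GE && count >= g[state].arg)
            return g[state].match_id;
    }
    return -1;
}

int main(void)
{
    static struct node g[3];              /* zeroed: all transitions go to node 0 */
    g[0].next['a'] = 1;
    g[1].next['b'] = 2;
    g[2].command = CMD_COUNT_GE;          /* only report "ab" ending at offset >= 4 */
    g[2].arg = 4;
    g[2].match_id = 7;

    printf("%d\n", walk(g, "ab"));        /* -1: same node, command not satisfied */
    printf("%d\n", walk(g, "xxab"));      /*  7: same node, different outcome     */
    return 0;
}
```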

High Speed Variable Bandwidth Ring-Based System

Published: November 28, 2013
Application Number: 20130315236
In one embodiment, a system includes a station circuit. The station circuit includes a data layer and a transport layer. The station circuit is capable of a source mode and a destination mode. The data layer of the station circuit in source mode disassembles a source packet into one or more source parcels and sends the one or more source parcels to the transport layer. The station circuit in destination mode receives the one or more destination parcels over a ring at its transport layer,…
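
As a sketch of the source-mode data-layer step, the code below disassembles a packet into fixed-size, sequence-tagged parcels that a destination could reassemble after they arrive over the ring; the parcel format is an assumption, not from the application.

```c
/* Illustrative disassembly of a source packet into parcels. */
#include <stdio.h>
#include <string.h>

#define PARCEL_PAYLOAD 8

struct parcel {
    int  seq;                       /* position of this piece in the packet */
    int  len;                       /* bytes of payload actually used       */
    char payload[PARCEL_PAYLOAD];
};

/* Split 'pkt' into parcels; returns how many parcels were produced. */
int disassemble(const char *pkt, int len, struct parcel *out, int max_parcels)
{
    int n = 0;
    for (int off = 0; off < len && n < max_parcels; off += PARCEL_PAYLOAD, n++) {
        out[n].seq = n;
        out[n].len = (len - off < PARCEL_PAYLOAD) ? len - off : PARCEL_PAYLOAD;
        memcpy(out[n].payload, pkt + off, out[n].len);
    }
    return n;
}

int main(void)
{
    struct parcel parcels[4];
    int n = disassemble("twenty byte payload!", 20, parcels, 4);
    printf("%d parcels, last carries %d bytes\n", n, parcels[n - 1].len);
    return 0;
}
```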

Hardware and Software Association and Authentication

Published: September 26, 2013
Application Number: 20130254906
Authentication and association of hardware and software is accomplished by loading a secure code from an external memory at startup time and authenticating the program code using an authentication key. Access to full hardware and software functionality may be obtained upon authentication of the secure code. However, if the authentication of the secure code fails, an unsecure code that provides limited functionality to hardware and software resources is executed.
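
A sketch of the boot-time decision described above, with a trivial keyed checksum standing in for a real authentication algorithm (for example an HMAC or signature check): if the loaded code authenticates, full functionality runs; otherwise a restricted path runs. Everything here is a placeholder for illustration.

```c
/* Illustrative boot-time authenticate-or-restrict decision. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

uint32_t toy_mac(const uint8_t *code, size_t len, uint32_t key)
{
    uint32_t m = key;
    for (size_t i = 0; i < len; i++)
        m = (m << 5) + m + code[i];      /* placeholder mixing, NOT cryptographic */
    return m;
}

void run_full(void)       { puts("running secure code: full functionality"); }
void run_restricted(void) { puts("authentication failed: limited functionality"); }

int main(void)
{
    const uint8_t image[] = { 0xde, 0xad, 0xbe, 0xef };  /* code loaded from external memory */
    uint32_t auth_key = 0x12345678;                      /* device authentication key */
    /* In a real device the expected value would be provisioned with the image. */
    uint32_t expected = toy_mac(image, sizeof image, auth_key);

    if (toy_mac(image, sizeof image, auth_key) == expected)
        run_full();
    else
        run_restricted();
    return 0;
}
```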

LOOKUP CLUSTER COMPLEX

Published: September 26, 2013
Application Number: 20130250948
A packet processor provides for rule matching of packets in a network architecture. The packet processor includes a lookup cluster complex having a number of lookup engines and respective on-chip memory units. The on-chip memory stores rules for matching against packet data. Each of the lookup engines receives a key request associated with a packet and determines a subset of the rules to match against the packet data. As a result of the rule matching, the lookup engine returns a response…

System And Method Of Compression And Decompression

Published: September 26, 2013
Application Number: 20130249716
The disclosure relates to a system and a method for hardware encoding and decoding according to the Lempel-Ziv-Stac (LZS) and Deflate protocols based upon a configuration bit.
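
The configuration-bit selection can be sketched as choosing between two codec entry points; the codec bodies below are placeholders, since the actual LZS and Deflate engines are hardware in the disclosure, and the bit convention is assumed.

```c
/* Illustrative selection of a codec from a configuration bit. */
#include <stddef.h>
#include <stdio.h>

typedef size_t (*codec_fn)(const unsigned char *in, size_t in_len,
                           unsigned char *out, size_t out_cap);

/* Placeholder codecs: a real device would drive its LZS / Deflate hardware here. */
static size_t encode_lzs(const unsigned char *in, size_t n, unsigned char *out, size_t cap)
{ (void)in; (void)out; (void)cap; printf("LZS encode of %zu bytes\n", n); return n; }
static size_t encode_deflate(const unsigned char *in, size_t n, unsigned char *out, size_t cap)
{ (void)in; (void)out; (void)cap; printf("Deflate encode of %zu bytes\n", n); return n; }

int main(void)
{
    unsigned config_bit = 1;   /* 0 = LZS, 1 = Deflate (illustrative convention) */
    codec_fn encode = config_bit ? encode_deflate : encode_lzs;

    unsigned char buf[64];
    encode((const unsigned char *)"hello", 5, buf, sizeof buf);
    return 0;
}
```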

PHASED BUCKET PRE-FETCH IN A NETWORK PROCESSOR

Published: September 12, 2013
Application Number: 20130239193
A packet processor provides for rule matching of packets in a network architecture. The packet processor includes a lookup cluster complex having a number of lookup engines and respective on-chip memory units. The on-chip memory stores rules for matching against packet data. Each of the lookup engines receives a key request associated with a packet and determines a subset of the rules to match against the packet data. Based on a prefetch status, a selection of the subset of rules are…

DUPLICATION IN DECISION TREES

Published: September 5, 2013
Application Number: 20130232104
A packet classification system, method, and corresponding apparatus are provided for enabling packet classification. A processor of a security appliance coupled to a network uses a classifier table having a plurality of rules, the plurality of rules having at least one field, to build a decision tree structure for packet classification. Duplication in the decision tree may be identified, producing a wider, shallower decision tree that may result in shorter search times with reduced…
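
One way to picture the duplication idea: when a tree node cuts a field into bins, bins whose rule subsets turn out identical can share a single child rather than duplicate it, keeping the tree wide but shallow. The structures below are illustrative assumptions, not the patented builder.

```c
/* Illustrative child deduplication while building one tree node. */
#include <stdint.h>
#include <stdio.h>

#define NBINS 8

struct child { uint32_t rule_set; };   /* bitmask of rules that fall in this bin */

struct tree_node {
    struct child *bin[NBINS];          /* several bins may point at one shared child */
};

/* Build children for a node, reusing a child whenever a bin's rule subset
 * duplicates one already built. */
int build_children(struct tree_node *n, const uint32_t rule_set_per_bin[NBINS],
                   struct child pool[NBINS])
{
    int nchildren = 0;
    for (int b = 0; b < NBINS; b++) {
        int found = -1;
        for (int c = 0; c < nchildren; c++)
            if (pool[c].rule_set == rule_set_per_bin[b])
                found = c;
        if (found < 0) {
            pool[nchildren].rule_set = rule_set_per_bin[b];
            found = nchildren++;
        }
        n->bin[b] = &pool[found];
    }
    return nchildren;                  /* distinct children actually created */
}

int main(void)
{
    uint32_t per_bin[NBINS] = { 0x3, 0x3, 0x5, 0x3, 0x5, 0x1, 0x1, 0x3 };
    struct child pool[NBINS];
    struct tree_node node;
    printf("8 bins, %d distinct children\n", build_children(&node, per_bin, pool));
    return 0;
}
```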

Rule Modification in Decision Trees

Published: August 22, 2013
Application Number: 20130218853
A system, apparatus, and method are provided for modifying rules in-place atomically from the perspective of an active search process using the rules for packet classification. A rule may be modified in-place by updating a rule's definition to be an intersection of an original and new definition. The rule's definition may be further updated to the rule's new definition, and a decision tree may be updated based on the rule's new definition. While a search processor searches for one or…
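
The two-step, in-place update described above can be sketched with a simple range rule: the rule first shrinks to the intersection of its old and new definitions, then widens to the new definition, so a concurrent search never sees a definition broader than either the old or the new one. The range representation is an assumption for illustration.

```c
/* Illustrative two-step in-place rule modification. */
#include <stdint.h>
#include <stdio.h>

struct range_rule { uint32_t lo, hi; };     /* rule: match if lo <= key <= hi */

static uint32_t max_u32(uint32_t a, uint32_t b) { return a > b ? a : b; }
static uint32_t min_u32(uint32_t a, uint32_t b) { return a < b ? a : b; }

void modify_rule_in_place(struct range_rule *r, struct range_rule new_def)
{
    /* Step 1: shrink to the intersection of the old and new definitions. */
    r->lo = max_u32(r->lo, new_def.lo);
    r->hi = min_u32(r->hi, new_def.hi);
    /* (a real design would make each step visible atomically to searchers) */
    /* Step 2: widen out to the new definition. */
    *r = new_def;
}

int main(void)
{
    struct range_rule r = { 100, 200 };
    modify_rule_in_place(&r, (struct range_rule){ 150, 300 });
    printf("rule now matches [%u, %u]\n", r.lo, r.hi);
    return 0;
}
```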

REVERSE NFA GENERATION AND PROCESSING

Published: May 23, 2013
Application Number: 20130133064
In a processor of a security appliance, an input of a sequence of characters is walked through a finite automata graph generated for at least one given pattern. At a marked node of the finite automata graph, if a specific type of the at least one given pattern is matched at the marked node, the input sequence of characters is processed through a reverse non-deterministic finite automata (rNFA) graph generated for the specific type of the at least one given pattern by walking the input…
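
A much-simplified sketch of the reverse-matching idea: a forward pass finds the fixed tail of a pattern (the "marked" position), and the input is then walked backwards from there to confirm the variable-length front, here for the pattern a[0-9]*z. The real disclosure walks a reverse NFA graph; this example only mimics the direction of processing.

```c
/* Illustrative forward hit followed by a backward confirmation walk. */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Backward check: scan digits right-to-left, then require an 'a'. */
int reverse_confirm(const char *s, int tail_pos)
{
    int i = tail_pos - 1;            /* position just before the matched 'z' */
    while (i >= 0 && isdigit((unsigned char)s[i]))
        i--;
    return i >= 0 && s[i] == 'a';
}

int main(void)
{
    const char *input = "xxa2017zyy";
    const char *z = strchr(input, 'z');          /* forward pass: marked position hit */
    if (z && reverse_confirm(input, (int)(z - input)))
        printf("pattern a[0-9]*z matched ending at offset %ld\n", (long)(z - input));
    return 0;
}
```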

PACKET TRAFFIC CONTROL IN A NETWORK PROCESSOR

Published: May 2, 2013
Application Number: 20130107711
A network processor controls packet traffic in a network by maintaining a count of pending packets. In the network processor, a pipe identifier (ID) is assigned to each of a number of paths connecting a packet output to respective network interfaces receiving those packets. A corresponding pipe ID is attached to each packet as it is transmitted. A counter employs the pipe ID to maintain a count of packets to be transmitted by a network interface. As a result, the network processor…
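
The per-pipe accounting can be sketched as a counter array indexed by pipe ID, incremented when a packet carrying that ID is sent and decremented when the interface drains it; the limit check shows how such a count could gate further traffic. Names and the back-pressure rule are assumptions.

```c
/* Illustrative per-pipe pending-packet counters. */
#include <stdio.h>

#define NPIPES 4

static int pending[NPIPES];             /* packets sent but not yet drained */

void on_packet_sent(int pipe_id)     { pending[pipe_id]++; }
void on_packet_drained(int pipe_id)  { pending[pipe_id]--; }

/* Back-pressure decision: stop feeding a pipe whose count is too high. */
int pipe_has_room(int pipe_id, int limit) { return pending[pipe_id] < limit; }

int main(void)
{
    on_packet_sent(2);
    on_packet_sent(2);
    on_packet_drained(2);
    printf("pipe 2: %d pending, has room: %d\n", pending[2], pipe_has_room(2, 8));
    return 0;
}
```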

MULTI-CORE INTERCONNECT IN A NETWORK PROCESSOR

Published: May 2, 2013
Application Number: 20130111141
A network processor includes multiple processor cores for processing packet data. In order to provide the processor cores with access to a memory subsystem, an interconnect circuit directs communications between the processor cores and the L2 Cache and other memory devices. The processor cores are divided into several groups, each group sharing an individual bus, and the L2 Cache is divided into a number of banks, each bank having access to a separate bus. The interconnect circuit…

NETWORK PROCESSOR WITH DISTRIBUTED TRACE BUFFERS

Published: May 2, 2013
Application Number: 20130111073
A network processor includes a cache and several groups of processors for accessing the cache. A memory interconnect provides for connecting the processors to the cache via a plurality of memory buses. A number of trace buffers are also connected to the bus and operate to store information regarding commands and data transmitted across the bus. The trace buffers share a common address space, thereby enabling access to the trace buffers as a single entity.

WORK REQUEST PROCESSOR

Published: May 2, 2013
Application Number: 20130111000
A network processor includes a schedule, sync and order (SSO) module for scheduling and assigning work to multiple processors. The SSO includes an on-deck unit (ODU) that provides a table having several entries, each entry storing a respective work queue entry, and a number of lists. Each of the lists may be associated with a respective processor configured to execute the work, and includes pointers to entries in the table. A pointer is added to the list based on an indication of whether…
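
A sketch of the described structure under illustrative assumptions: a table of simplified work-queue entries plus one short list per processor holding indices ("pointers") into that table for work staged to that processor.

```c
/* Illustrative table-plus-per-processor-lists structure. */
#include <stdio.h>

#define TABLE_SIZE  8
#define NCPU        2
#define LIST_DEPTH  4

struct wqe { int tag; int payload; };          /* simplified work-queue entry */

static struct wqe table[TABLE_SIZE];
static int list[NCPU][LIST_DEPTH];             /* indices into 'table' */
static int list_len[NCPU];

/* Stage entry 'idx' for a processor if its list has room. */
int stage_work(int cpu, int idx)
{
    if (list_len[cpu] >= LIST_DEPTH)
        return -1;                              /* list full: leave work queued */
    list[cpu][list_len[cpu]++] = idx;
    return 0;
}

int main(void)
{
    table[3] = (struct wqe){ .tag = 42, .payload = 7 };
    if (stage_work(1, 3) == 0)
        printf("cpu 1 next work: tag %d\n", table[list[1][0]].tag);
    return 0;
}
```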