This section helps you learn more about the built-in Extension Providers for Jikkou.
Extension Providers
- 1: Core
- 1.1: Resource Repositories
- 1.1.1: GitHub Resource Repository
- 1.1.2: Local Resource Repository
- 1.2: Resources
- 1.2.1: ConfigMap
- 1.2.2: ValidatingResourcePolicy
- 2: Apache Kafka
- 2.1: Configuration
- 2.2: Resources
- 2.2.1: Kafka Brokers
- 2.2.2: Kafka Consumer Groups
- 2.2.3: Kafka Topics
- 2.2.4: Kafka Users
- 2.2.5: Kafka Authorizations
- 2.2.6: Kafka Quotas
- 2.2.7: Kafka Table Records
- 2.3: Transformations
- 2.3.1: KafkaTopicMaxNumPartitions
- 2.3.2: KafkaTopicMaxRetentionMs
- 2.3.3: KafkaTopicMinInSyncReplicas
- 2.3.4: KafkaTopicMinReplicas
- 2.3.5: KafkaTopicMinRetentionMs
- 2.4: Validations
- 2.5: Annotations
- 2.6: Actions
- 2.7: Compatibility
- 3: Apache Kafka Connect
- 3.1: Configuration
- 3.2: Resources
- 3.2.1: KafkaConnectors
- 3.3: Validations
- 3.4: Annotations
- 3.5: Labels
- 3.6: Actions
- 4: Schema Registry
- 4.1: Configuration
- 4.2: Resources
- 4.2.1: Schema Registry Subjects
- 4.3: Validations
- 4.4: Annotations
- 5: Aiven
- 5.1: Configuration
- 5.2: Resources
- 5.2.1: ACL for Aiven Apache Kafka®
- 5.2.2: Quotas for Aiven Apache Kafka®
- 5.2.3: ACL for Aiven Schema Registry
- 5.2.4: Subject for Aiven Schema Registry
- 5.3: Validations
- 5.4: Annotations
- 6: AWS
- 6.1: Configuration
- 6.2: Resources
- 6.3: Validations
- 6.4: Annotations
1 - Core
Here, you will find information to use the Core extensions.
1.1 - Resource Repositories
Here, you will find the list of core Resource Repositories supported by Jikkou.
Core Repositories
1.1.1 - GitHub Resource Repository
Overview
The GithubResourceRepository can be used to load resources from a public or private GitHub repository.
Configuration
jikkou {
  repositories = [
    {
      # Name of the repository
      name = "<string>"
      # The fully qualified class name (FQCN) of the repository
      type = io.streamthoughts.jikkou.core.repository.GithubResourceRepository
      config {
        # Specify the GitHub repository in the format 'owner/repo'
        repository = "<string>"
  
        # Specify the branch or ref to load resources from
        branch = main
  
        # Specify the paths/directories in the repository containing the resource definitions
        paths = []
  
        # Specify the pattern used to match YAML file paths.
        # Pattern should be passed in the form of 'syntax:pattern'. The "glob" and "regex" syntaxes are supported (e.g.: **/*.{yaml,yml}).
        # If no syntax is specified the 'glob' syntax is used.
        file-pattern = "**/*.{yaml,yml}"
        
        # Specify the locations of the values-files containing the variables to pass into the template engine built-in object 'Values'.
        values-files = []
        
        # Specify the pattern used to match YAML file paths when one or multiple directories are given through the `values-files` property.
        # Pattern should be passed in the form of 'syntax:pattern'. The "glob" and "regex" syntaxes are supported (e.g.: **/*.{yaml,yml}).
        # If no syntax is specified the 'glob' syntax is used.
        values-file-name = "<string>" # Default: **/*.{yaml,yml}
        
        # The labels to be added to all resources loaded from the repository
        labels {
          <label_key> = <label_value>
        }
      }
    }
  ]
}
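For example, here is a minimal sketch of a repository configuration pointing at a hypothetical GitHub repository my-org/kafka-resources:
jikkou {
  repositories = [
    {
      name = "github"
      type = io.streamthoughts.jikkou.core.repository.GithubResourceRepository
      config {
        repository = "my-org/kafka-resources"  # hypothetical owner/repo
        branch = "main"
        paths = [ "resources" ]                # only load files under the 'resources' directory
        file-pattern = "**/*.{yaml,yml}"       # match all YAML files (the default)
      }
    }
  ]
}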
1.1.2 - Local Resource Repository
Overview
The LocalResourceRepository can be used to load resources from local files or directories.
Configuration
jikkou {
  repositories = [
    {
      # Name of your local repository
      name = "<string>"
      # The fully qualified class name (FQCN) of the repository
      type = io.streamthoughts.jikkou.core.repository.LocalResourceRepository
      config {
        # Specify the locations of the YAML files or directories containing the resource definitions.
        files = []
        
        # Specify the pattern used to match YAML file paths when one or multiple directories are given through the `files` property.
        # Pattern should be passed in the form of 'syntax:pattern'. The "glob" and "regex" syntaxes are supported (e.g.: **/*.{yaml,yml}).
        # If no syntax is specified the 'glob' syntax is used.
        file-name = "<string>"    # Default: **/*.{yaml,yml}
        
        # Specify the locations of the values-files containing the variables to pass into the template engine built-in object 'Values'.
        values-files = []
        
        # Specify the pattern used to match YAML file paths when one or multiple directories are given through the `values-files` property.
        # Pattern should be passed in the form of 'syntax:pattern'. The "glob" and "regex" syntaxes are supported (e.g.: **/*.{yaml,yml}).
        # If no syntax is specified the 'glob' syntax is used.
        values-file-name = "<string>" # Default: **/*.{yaml,yml}
        
        # The labels to add to all resources loaded from the repository
        labels {
          <label_key> = <label_value>
        }
      }
    }
  ]
}
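Similarly, here is a minimal sketch of a local repository loading every YAML file under a hypothetical ./resources directory:
jikkou {
  repositories = [
    {
      name = "local"
      type = io.streamthoughts.jikkou.core.repository.LocalResourceRepository
      config {
        files = [ "./resources" ]       # a directory containing resource definitions
        file-name = "**/*.{yaml,yml}"   # match all YAML files (the default)
      }
    }
  ]
}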
1.2 - Resources
Here, you will find the list of core resources supported by Jikkou.
Core Resources
1.2.1 - ConfigMap
You can use a ConfigMap to define reusable data in the form of key/value pairs that can then be referenced and used by
other resources.
Specification
---
apiVersion: "core.jikkou.io/v1beta2"
kind: ConfigMap
metadata:
  name: '<CONFIG-MAP-NAME>'   # Name of the ConfigMap (required)
data:                         # Map of key-value pairs (required)
  <KEY_1>: "<VALUE_1>" 
Example
For example, the ConfigMap below shows how to define default config properties named KafkaTopicConfig that can then
be referenced and used to define multiple KafkaTopic resources.
---
apiVersion: "core.jikkou.io/v1beta2"
kind: ConfigMap
metadata:
  name: 'KafkaTopicConfig'
data:
  cleanup.policy: 'delete'
  min.insync.replicas: 2
  retention.ms: 86400000 # (1 day)
1.2.2 - ValidatingResourcePolicy
The ValidatingResourcePolicy resource is used to define validation rules applied to resources or resource changes before they are applied by Jikkou.
It allows enforcing organizational policies, validating constraints, or filtering out undesired operations.
Each policy can select one or more resource kinds and define rules expressed in Google CEL (Common Expression Language).
Rules can either fail the execution or filter the invalid resources, depending on the configured failurePolicy.
Specification
apiVersion: core.jikkou.io/v1
kind: ValidatingResourcePolicy
metadata:
  name: <string> # Required. Unique policy name.
spec:
  failurePolicy: <string> # Required. One of: FAIL | FILTER
  selector:
    matchingStrategy: <string> # Optional. One of: ALL | ANY (default: ALL)
    matchResources:
      - apiVersion: <string> # Optional. API version to match (e.g., core.jikkou.io/v1)
        kind: <string>       # Required. Resource kind (e.g., KafkaTopic)
    matchLabels:
      - key: <string>       # Label key to match
        operator: <string>  # One of: In | NotIn | Exists | DoesNotExist
        values: [<string>]  # Optional list of values
    matchExpressions:
      - <string> # CEL expression
  rules:
    - name: <string> # Required. Rule identifier.
      expression: <string> # Required. A CEL expression evaluated against the resource.
      message: <string> # Optional. Static message returned when the rule fails.
      messageExpression: <string> # Optional. CEL expression to generate a dynamic error message.
Fields
| Field | Type | Required | Description | 
|---|---|---|---|
| spec.failurePolicy | string | Yes | Defines the policy behavior when validation fails. Possible values: • FAIL → stop execution with an error. • FILTER → skip the invalid resource(s) but continue processing others. | 
| spec.selector.matchingStrategy | string | No | Strategy for combining multiple selectors. Possible values: • ALL → resource must match all conditions. • ANY → resource must match at least one condition. Default: ALL. | 
| spec.selector.matchResources | list | No | Selects resources by API version and kind. | 
| spec.selector.matchLabels | list | No | Selects resources based on labels, using operators (In, NotIn, Exists, DoesNotExist). | 
| spec.selector.matchExpressions | list | No | Selects resources using CEL expressions for advanced filtering. | 
| spec.rules | list | Yes | A list of validation rules. | 
| spec.rules[].name | string | Yes | A unique identifier for the rule. | 
| spec.rules[].expression | string | Yes | A CEL expression evaluated against the resource. The rule fails when the expression evaluates to true. | 
| spec.rules[].message | string | No | Static error message returned when validation fails. | 
| spec.rules[].messageExpression | string | No | CEL expression returning a dynamic error message string. | 
Resource Selection
Policies define which resources they apply to using a selector.
A selector can combine multiple strategies to target resources based on:
- Resource metadata (kind, apiVersion).
- Labels (with operators like In, NotIn, Exists, DoesNotExist).
- CEL expressions (arbitrary conditions on resource content).
Matching Strategy
| Strategy | Description | 
|---|---|
| ALL | The resource must match all specified selectors (matchResources, matchLabels, and matchExpressions). | 
| ANY | The resource is selected if it matches at least one of the specified selectors. | 
Default: ALL
matchResources
Selects resources by API version and/or kind.
matchResources:
  - apiVersion: core.jikkou.io/v1
    kind: KafkaTopic
- apiVersion → Optional. Restricts matching to a specific API group/version.
- kind → Required. Matches the resource kind (e.g., KafkaTopic, KafkaTopicChange).
matchLabels
Selects resources based on their metadata labels using operators.
matchLabels:
  - key: environment
    operator: In
    values: ["prod", "staging"]
  - key: team
    operator: NotIn
    values: ["test"]
  - key: critical
    operator: Exists
Supported operators:
| Operator | Description | 
|---|---|
| In | Matches if the label value is in the list of values. | 
| NotIn | Matches if the label value is not in the list of values. | 
| Exists | Matches if the label key is defined (value doesn’t matter). | 
| DoesNotExist | Matches if the label key is not defined. | 
matchExpressions
Selects resources using CEL expressions for maximum flexibility.
matchExpressions:
  - "resource.metadata.name.startsWith('topic-')"
  - "resource.spec.partitions > 10"
Examples:
- Match resources with names starting with topic-.
- Match topics with more than 10 partitions.
Examples
Example 1: Filtering DELETE operations on KafkaTopic resources
apiVersion: core.jikkou.io/v1
kind: ValidatingResourcePolicy
metadata:
  name: KafkaTopicPolicy
spec:
  failurePolicy: FILTER
  selector:
    matchResources:
      - kind: KafkaTopicChange
  rules:
    - name: FilterDeleteOperation
      expression: "size(resource.spec.changes) > 0 && resource.spec.op == 'DELETE'"
      messageExpression: "'Operation ' + resource.spec.op + ' on topics is not authorized'"
This policy prevents delete operations on Kafka topics from being executed by filtering them out.
Example 2: Validating partitions count for KafkaTopic
apiVersion: core.jikkou.io/v1
kind: ValidatingResourcePolicy
metadata:
  name: KafkaTopicPolicy
spec:
  failurePolicy: FAIL
  selector:
    matchResources:
      - kind: KafkaTopic
  rules:
    - name: MaxTopicPartitions
      expression: "resource.spec.partitions >= 50"
      messageExpression: "'Topic partition MUST be inferior to 50, but was: ' + string(resource.spec.partitions)"
    - name: MinTopicPartitions
      expression: "resource.spec.partitions < 3"
      message: "Topic must have at least 3 partitions"
This policy enforces a minimum of 3 partitions and a maximum of 49 partitions for Kafka topics.
Example 3: Match only KafkaTopic in prod environment
selector:
  matchingStrategy: ALL
  matchResources:
    - kind: KafkaTopic
  matchLabels:
    - key: environment
      operator: In
      values: ["prod"]
Example 4: Match any KafkaTopic OR resources with label critical=true
selector:
  matchingStrategy: ANY
  matchResources:
    - kind: KafkaTopic
  matchLabels:
    - key: critical
      operator: In
      values: ["true"]
Example 5: Match using CEL expression
selector:
  matchExpressions:
    - "resource.spec.replicationFactor < 3"
Use cases
- Preventing destructive operations (e.g., deleting topics, removing configs).
- Enforcing resource limits (e.g., partition count, replication factor).
- Ensuring naming conventions or metadata compliance.
- Dynamically generating error messages with contextual information.
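For instance, a policy enforcing a naming convention could combine a resource selector with a single CEL rule. Below is a minimal sketch; the team- prefix convention is a hypothetical example. Remember that a rule fails when its expression evaluates to true, hence the negation:
apiVersion: core.jikkou.io/v1
kind: ValidatingResourcePolicy
metadata:
  name: TopicNamingConventionPolicy
spec:
  failurePolicy: FAIL
  selector:
    matchResources:
      - kind: KafkaTopic
  rules:
    - name: TopicNameMustBePrefixed
      expression: "!resource.metadata.name.startsWith('team-')"
      messageExpression: "'Topic name ' + resource.metadata.name + ' must start with team-'"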
2 - Apache Kafka
Here, you will find information to use the Apache Kafka extensions.
2.1 - Configuration
This section describes how to configure the Apache Kafka extension.
Configuration
The Apache Kafka extension is built on top of the Kafka Admin Client. You can configure the properties to be passed to
the Kafka client through the Jikkou client configuration property jikkou.provider.kafka.
Example:
jikkou {
  provider.kafka {
    enabled = true
    type = io.streamthoughts.jikkou.kafka.KafkaExtensionProvider
    config = {
      client {
        bootstrap.servers = "localhost:9092"
        security.protocol = "SSL"
        ssl.keystore.location = "/tmp/client.keystore.p12"
        ssl.keystore.password = "password"
        ssl.keystore.type = "PKCS12"
        ssl.truststore.location = "/tmp/client.truststore.jks"
        ssl.truststore.password = "password"
        ssl.key.password = "password"
      }
    }
  }
}
In addition, the extension supports configuration settings to wait for at least a minimal number of brokers before processing.
jikkou {
  provider.kafka {
    enabled = true
    type = io.streamthoughts.jikkou.kafka.KafkaExtensionProvider
    config = {
      brokers {
        # If 'true', wait for brokers to be available before processing
        waitForEnabled = true
        waitForEnabled = ${?JIKKOU_KAFKA_BROKERS_WAIT_FOR_ENABLED}
        # The minimal number of brokers that should be alive before the CLI stops waiting.
        waitForMinAvailable = 1
        waitForMinAvailable = ${?JIKKOU_KAFKA_BROKERS_WAIT_FOR_MIN_AVAILABLE}
        # The amount of time to wait before verifying that brokers are available.
        waitForRetryBackoffMs = 1000
        waitForRetryBackoffMs = ${?JIKKOU_KAFKA_BROKERS_WAIT_FOR_RETRY_BACKOFF_MS}
        # Wait until brokers are available or this timeout is reached.
        waitForTimeoutMs = 60000
        waitForTimeoutMs = ${?JIKKOU_KAFKA_BROKERS_WAIT_FOR_TIMEOUT_MS}
      }
    }
  }
}
2.2 - Resources
Here, you will find the list of resources supported for Apache Kafka.
Apache Kafka Resources
2.2.1 - Kafka Brokers
This section describes the resource definition format for KafkaBroker entities, which can be used to define the
brokers you plan to manage on a specific Kafka cluster.
Listing KafkaBroker
You can retrieve the state of Kafka brokers using the jikkou get kafkabrokers (or jikkou get kb) command.
Usage
Usage:
Get all 'KafkaBroker' resources.
jikkou get kafkabrokers [-hV] [--default-configs] [--dynamic-broker-configs]
                        [--list] [--static-broker-configs]
                        [--logger-level=<level>] [-o=<format>]
                        [-s=<expressions>]...
DESCRIPTION:
Use jikkou get kafkabrokers when you want to describe the state of all
resources of type 'KafkaBroker'.
OPTIONS:
      --default-configs   Describe built-in default configuration for configs
                            that have a default value.
      --dynamic-broker-configs
                          Describe dynamic configs that are configured as
                            default for all brokers or for specific broker in
                            the cluster.
  -h, --help              Show this help message and exit.
      --list              Get resources as ResourceListObject.
      --logger-level=<level>
                          Specify the log level verbosity to be used while
                            running a command.
                          Valid level values are: TRACE, DEBUG, INFO, WARN,
                            ERROR.
                          For example, `--logger-level=INFO`
  -o, --output=<format>   Prints the output in the specified format. Allowed
                            values: json, yaml (default yaml).
  -s, --selector=<expressions>
                          The selector expression used for including or
                            excluding resources.
      --static-broker-configs
                          Describe static configs provided as broker properties
                            at start up (e.g. server.properties file).
  -V, --version           Print version information and exit.
(The output from your current Jikkou version may be different from the above example.)
Examples
(command)
$ jikkou get kafkabrokers --static-broker-configs
(output)
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaBroker"
metadata:
  name: "101"
  labels: {}
  annotations:
    kafka.jikkou.io/cluster-id: "xtzWWN4bTjitpL3kfd9s5g"
spec:
  id: "101"
  host: "localhost"
  port: 9092
  configs:
    advertised.listeners: "PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092"
    authorizer.class.name: "org.apache.kafka.metadata.authorizer.StandardAuthorizer"
    broker.id: "101"
    controller.listener.names: "CONTROLLER"
    controller.quorum.voters: "101@kafka:29093"
    inter.broker.listener.name: "PLAINTEXT"
    listener.security.protocol.map: "CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT"
    listeners: "PLAINTEXT://kafka:29092,PLAINTEXT_HOST://0.0.0.0:9092,CONTROLLER://kafka:29093"
    log.dirs: "/var/lib/kafka/data"
    node.id: "101"
    offsets.topic.replication.factor: "1"
    process.roles: "broker,controller"
    transaction.state.log.replication.factor: "1"
    zookeeper.connect: ""
2.2.2 - Kafka Consumer Groups
This section describes the resource definition format for KafkaConsumerGroup entities, which can be used to define the
consumer groups you plan to manage on a specific Kafka cluster.
Listing KafkaConsumerGroup
You can retrieve the state of Kafka Consumer Groups using the jikkou get kafkaconsumergroups (or jikkou get kcg) command.
Usage
$ jikkou get kafkaconsumergroups --help
Usage:
Get all 'KafkaConsumerGroup' resources.
jikkou get kafkaconsumergroups [-hV] [--list] [--offsets]
                               [--logger-level=<level>] [-o=<format>]
                               [--in-states=PARAM]... [-s=<expressions>]...
DESCRIPTION:
Use jikkou get kafkaconsumergroups when you want to describe the state of all
resources of type 'KafkaConsumerGroup'.
OPTIONS:
  -h, --help              Show this help message and exit.
      --in-states=PARAM   If states is set, only groups in these states will be
                            returned. Otherwise, all groups are returned. This
                            operation is supported by brokers with version
                            2.6.0 or later
      --list              Get resources as ResourceListObject.
      --logger-level=<level>
                          Specify the log level verbosity to be used while
                            running a command.
                          Valid level values are: TRACE, DEBUG, INFO, WARN,
                            ERROR.
                          For example, `--logger-level=INFO`
  -o, --output=<format>   Prints the output in the specified format. Allowed
                            values: json, yaml (default yaml).
      --offsets           Specify whether consumer group offsets should be
                            described.
  -s, --selector=<expressions>
                          The selector expression used for including or
                            excluding resources.
  -V, --version           Print version information and exit
(The output from your current Jikkou version may be different from the above example.)
Examples
(command)
$ jikkou get kafkaconsumergroups --in-states STABLE --offsets
(output)
---
apiVersion: "kafka.jikkou.io/v1beta1"
kind: "KafkaConsumerGroup"
metadata:
  name: "my-group"
  labels:
    kafka.jikkou.io/is-simple-consumer: false
status:
  state: "STABLE"
  members:
    - memberId: "console-consumer-b103994e-bcd5-4236-9d03-97065057e594"
      clientId: "console-consumer"
      host: "/127.0.0.1"
      assignments:
        - "my-topic-0"
      offsets:
        - topic: "my-topic"
          partition: 0
          offset: 0
  coordinator:
    id: "101"
    host: "localhost"
    port: 9092
2.2.3 - Kafka Topics
KafkaTopic resources are used to define the topics you want to manage on your Kafka cluster(s). A KafkaTopic resource defines the number of partitions, the replication factor, and the configuration properties to be associated with a topic.
KafkaTopic
Specification
Here is the resource definition file for defining a KafkaTopic.
apiVersion: "kafka.jikkou.io/v1beta2"  # The api version (required)
kind: "KafkaTopic"                     # The resource kind (required)
metadata:
  name: <The name of the topic>        # (required)
  labels: { }
  annotations: { }
spec:
  partitions: <Number of partitions>   # (optional)
  replicas: <Number of replicas>       # (optional)
  configs:
    <config_key>: <Config Value>       # The topic config properties keyed by name to override (optional)
  configMapRefs: [ ]                   # The list of ConfigMap to be applied to this topic (optional)
The metadata.name property is mandatory and specifies the name of the Kafka topic.
To use the cluster default values for the number of partitions and replicas, you can set the properties
spec.partitions and spec.replicas to -1.
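For instance, to delegate both settings to the cluster defaults:
spec:
  partitions: -1
  replicas: -1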
Example
Here is a simple example that shows how to define a single YAML file containing two topic definitions using
the KafkaTopic resource type.
file: kafka-topics.yaml
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: KafkaTopic
metadata:
  name: 'my-topic-p1-r1'  # Name of the topic
  labels:
    environment: example
spec:
  partitions: 1           # Number of topic partitions (use -1 to use cluster default)
  replicas: 1             # Replication factor per partition (use -1 to use cluster default)
  configs:
    min.insync.replicas: 1
    cleanup.policy: 'delete'
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: KafkaTopic
metadata:
  name: 'my-topic-p2-r1'   # Name of the topic 
  labels:
    environment: example
spec:
  partitions: 2             # Number of topic partitions (use -1 to use cluster default)
  replicas: 1               # Replication factor per partition (use -1 to use cluster default)
  configs:
    min.insync.replicas: 1
    cleanup.policy: 'delete'
See the official Apache Kafka documentation for details about the topic-level configs.
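Assuming the file above is saved as kafka-topics.yaml, you can then apply it with the Jikkou CLI, for example:
$ jikkou apply --files kafka-topics.yaml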
KafkaTopicList
If you need to define multiple topics (e.g. using a template), it may be easier to use a KafkaTopicList resource.
Specification
Here is the resource definition file for defining a KafkaTopicList.
apiVersion: "kafka.jikkou.io/v1beta2"  # The api version (required)
kind: "KafkaTopicList"                 # The resource kind (required)
metadata: # (optional)
  labels: { }
  annotations: { }
items: [ ]                             # An array of KafkaTopic
Example
Here is a simple example that shows how to define a single YAML file containing two topic definitions using
the KafkaTopicList resource type. In addition, the example uses a ConfigMap object to define the topic configuration
only once.
file: kafka-topics.yaml
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: KafkaTopicList
metadata:
  labels:
    environment: example
items:
  - metadata:
      name: 'my-topic-p1-r1'
    spec:
      partitions: 1
      replicas: 1
      configMapRefs: [ "TopicConfig" ]
  - metadata:
      name: 'my-topic-p2-r1'
    spec:
      partitions: 2
      replicas: 1
      configMapRefs: [ "TopicConfig" ]
---
apiVersion: "core.jikkou.io/v1beta2"
kind: ConfigMap
metadata:
  name: 'TopicConfig'
data:
  min.insync.replicas: 1
  cleanup.policy: 'delete'
2.2.4 - Kafka Users
This section describes the resource definition format for KafkaUser entities, which can be used to
manage SCRAM Users for Apache Kafka.
Definition Format of KafkaUser
Below is the overall structure of the KafkaUser resource.
---
apiVersion: kafka.jikkou.io/v1  # The api version (required)
kind: KafkaUser                 # The resource kind (required)
metadata:
  name: <string>
  annotations:
    # force update
    kafka.jikkou.io/force-password-renewal: <boolean>
spec:
  authentications:
    - type: <enum>        # One of: scram-sha-256, scram-sha-512 (required)
      password: <string>  # Leave empty to generate a secure password
See below for details about all these fields.
Metadata
metadata.name [required]
The name of the User.
kafka.jikkou.io/force-password-renewal [optional]
Specification
spec.authentications [required]
The list of authentications to manage for the user.
spec.authentications[].type [required]
The authentication type:
- scram-sha-256
- scram-sha-512
spec.authentications[].password [optional]
The password of the user. Leave empty to have Jikkou generate a secure password.
Examples
The following is an example of a resource describing a User:
---
# Example: file: kafka-scram-users.yaml
apiVersion: "kafka.jikkou.io/v1"
kind: "User"
metadata:
  name: "Bob"
spec:
  authentications:
    - type: scram-sha-256
      password: null
    - type: scram-sha-512
      password: null
Listing Kafka Users
You can retrieve the SCRAM users of a Kafka cluster using the jikkou get kafkausers (or jikkou get ku) command.
Usage
$ jikkou get ku --help
Usage:
Get all 'KafkaUser' resources.
jikkou get kafkausers [-hV] [--list] [--logger-level=<level>] [--name=<name>]
                      [-o=<format>]
                      [--selector-match=<selectorMatchingStrategy>]
                      [-s=<expressions>]...
DESCRIPTION:
Use jikkou get kafkausers when you want to describe the state of all resources
of type 'KafkaUser'.
OPTIONS:
  -h, --help              Show this help message and exit.
      --list              Get resources as ResourceListObject (default: false).
      --logger-level=<level>
                          Specify the log level verbosity to be used while
                            running a command.
                          Valid level values are: TRACE, DEBUG, INFO, WARN,
                            ERROR.
                          For example, `--logger-level=INFO`
      --name=<name>       The name of the resource.
  -o, --output=<format>   Prints the output in the specified format. Valid
                            values: JSON, YAML (default: YAML).
  -s, --selector=<expressions>
                          The selector expression used for including or
                            excluding resources.
      --selector-match=<selectorMatchingStrategy>
                          The selector matching strategy. Valid values: NONE,
                            ALL, ANY (default: ALL)
  -V, --version           Print version information and exit.
(The output from your current Jikkou version may be different from the above example.)
Examples
(command)
$ jikkou get ku
(output)
apiVersion: "kafka.jikkou.io/v1"
kind: "KafkaUser"
metadata:
  name: "Bob"
  labels: {}
  annotations:
    kafka.jikkou.io/cluster-id: "xtzWWN4bTjitpL3kfd9s5g"
spec:
  authentications:
    - type: "scram-sha-256"
      iterations: 8192
    - type: "scram-sha-512"
      iterations: 8192
2.2.5 - Kafka Authorizations
KafkaPrincipalAuthorization resources are used to define Access Control Lists (ACLs) for principals authenticated to your Kafka cluster.
Jikkou can be used to describe all the ACL policies that need to be created on the Kafka cluster.
KafkaPrincipalAuthorization
Specification
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaPrincipalAuthorization"
metadata:
  name: "User:Alice"
spec:
  roles: [ ]                     # List of roles to be added to the principal (optional)
  acls:                          # List of KafkaPrincipalACL (required)
    - resource:
        type: <The type of the resource>                            #  (required)
        pattern: <The pattern to be used for matching resources>    #  (required) 
        patternType: <The pattern type>                             #  (required) 
      type: <The type of this ACL>     # ALLOW or DENY (required) 
      operations: [ ]                  # Operation that will be allowed or denied (required) 
      host: <HOST>                     # IP address from which principal will have access or will be denied (optional)
For more information on how to define authorization and ACLs, see the official Apache Kafka documentation: Security
Operations
The list below describes the valid values for the spec.acls.[].operations property:
- READ
- WRITE
- CREATE
- DELETE
- ALTER
- DESCRIBE
- CLUSTER_ACTION
- DESCRIBE_CONFIGS
- ALTER_CONFIGS
- IDEMPOTENT_WRITE
- CREATE_TOKEN
- DESCRIBE_TOKENS
- ALL
For more information see official Apache Kafka documentation: Operations in Kafka
Resource Types
The list below describes the valid values for the spec.acls.[].resource.type property:
- TOPIC
- GROUP
- CLUSTER
- USER
- TRANSACTIONAL_ID
For more information see official Apache Kafka documentation: Resources in Kafka
Pattern Types
The list below describes the valid values for the spec.acls.[].resource.patternType property:
- LITERAL: Used to allow or deny a principal access to a specific resource name.
- MATCH: Used to allow or deny a principal access to all resources matching the given regex.
- PREFIXED: Used to allow or deny a principal access to all resources having the given prefix.
Example
---
apiVersion: "kafka.jikkou.io/v1beta2"    # The api version (required)
kind: "KafkaPrincipalAuthorization"      # The resource kind (required)
metadata:
  name: "User:Alice"
spec:
  acls:
    - resource:
        type: 'topic'
        pattern: 'my-topic-'
        patternType: 'PREFIXED'
      type: "ALLOW"
      operations: [ 'READ', 'WRITE' ]
      host: "*"
    - resource:
        type: 'topic'
        pattern: 'my-other-topic-.*'
        patternType: 'MATCH'
      type: 'ALLOW'
      operations: [ 'READ' ]
      host: "*"
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaPrincipalAuthorization"
metadata:
  name: "User:Bob"
spec:
  acls:
    - resource:
        type: 'topic'
        pattern: 'my-topic-'
        patternType: 'PREFIXED'
      type: 'ALLOW'
      operations: [ 'READ', 'WRITE' ]
      host: "*"
KafkaPrincipalRole
Specification
apiVersion: "kafka.jikkou.io/v1beta2"    # The api version (required)
kind: "KafkaPrincipalRole"               # The resource kind (required)
metadata:
  name: <Name of role>                   # The name of the role (required)  
spec:
  acls: [ ]                              # A list of KafkaPrincipalACL (required)
Example
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaPrincipalRole"
metadata:
  name: "KafkaTopicPublicRead"
spec:
  acls:
    - type: "ALLOW"
      operations: [ 'READ' ]
      resource:
        type: 'topic'
        pattern: '/public-([.-])*/'
        patternType: 'MATCH'
      host: "*"
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaPrincipalRole"
metadata:
  name: "KafkaTopicPublicWrite"
spec:
  acls:
    - type: "ALLOW"
      operations: [ 'WRITE' ]
      resource:
        type: 'topic'
        pattern: '/public-([.-])*/'
        patternType: 'MATCH'
      host: "*"
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaPrincipalAuthorization"
metadata:
  name: "User:Alice"
spec:
  roles:
    - "KafkaTopicPublicRead"
    - "KafkaTopicPublicWrite"
---
apiVersion: "kafka.jikkou.io/v1beta2"
kind: "KafkaPrincipalAuthorization"
metadata:
  name: "User:Bob"
spec:
  roles:
    - "KafkaTopicPublicRead"
2.2.6 - Kafka Quotas
KafkaClientQuota resources are used to define the quota limits to be applied to Kafka consumers and producers.
A KafkaClientQuota resource can be used to apply limits to consumers and/or producers identified by a client-id or a
user principal.
KafkaClientQuota
Specification
Here is the resource definition file for defining a KafkaClientQuota.
apiVersion: "kafka.jikkou.io/v1beta2" # The api version (required)
kind: "KafkaClientQuota"              # The resource kind (required)
metadata: # (optional)
  labels: { }
  annotations: { }
spec:
  type: <The quota type> # (required)       
  entity:
    clientId: <The id of the client>    # (required depending on the quota type).
    user: <The principal of the user>   # (required depending on the quota type).
  configs:
    requestPercentage: <The quota in percentage (%) of total requests>      # (optional)
    producerByteRate: <The quota in bytes for restricting data production>  # (optional)
    consumerByteRate: <The quota in bytes for restricting data consumption> # (optional)
Quota Types
The list below describes the supported quota types:
- USERS_DEFAULT: Set default quotas for all users.
- USER: Set quotas for a specific user principal.
- USER_CLIENT: Set quotas for a specific user principal and a specific client-id.
- USER_ALL_CLIENTS: Set default quotas for a specific user and all clients.
- CLIENT: Set default quotas for a specific client.
- CLIENTS_DEFAULT: Set default quotas for all clients.
Example
Here is a simple example that shows how to define a single YAML file containing two quota definitions using
the KafkaClientQuota resource type.
file: kafka-quotas.yaml
---
apiVersion: 'kafka.jikkou.io/v1beta2'
kind: 'KafkaClientQuota'
metadata:
  labels: { }
  annotations: { }
spec:
  type: 'CLIENT'
  entity:
    clientId: 'my-client'
  configs:
    requestPercentage: 10
    producerByteRate: 1024
    consumerByteRate: 1024
---
apiVersion: 'kafka.jikkou.io/v1beta2'
kind: 'KafkaClientQuota'
metadata:
  labels: { }
  annotations: { }
spec:
  type: 'USER'
  entity:
    user: 'my-user'
  configs:
    requestPercentage: 10
    producerByteRate: 1024
    consumerByteRate: 1024
KafkaClientQuotaList
If you need to define multiple quotas (e.g. using a template), it may be easier to use a KafkaClientQuotaList
resource.
Specification
Here is the resource definition file for defining a KafkaClientQuotaList.
apiVersion: "kafka.jikkou.io/v1beta2"  # The api version (required)
kind: "KafkaClientQuotaList"           # The resource kind (required)
metadata: # (optional)
  labels: { }
  annotations: { }
items: [ ]                              # An array of KafkaClientQuota
Example
Here is a simple example that shows how to define a single YAML file containing two KafkaClientQuota definitions using
the KafkaClientQuotaList resource type.
apiVersion: 'kafka.jikkou.io/v1beta2'
kind: 'KafkaClientQuotaList'
metadata:
  labels: { }
  annotations: { }
items:
  - spec:
      type: 'CLIENT'
      entity:
        clientId: 'my-client'
      configs:
        requestPercentage: 10
        producerByteRate: 1024
        consumerByteRate: 1024
  - spec:
      type: 'USER'
      entity:
        user: 'my-user'
      configs:
        requestPercentage: 10
        producerByteRate: 1024
        consumerByteRate: 1024
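Assuming this file is saved as kafka-quotas.yaml, you can check how Jikkou parses and validates the resources with, for example:
$ jikkou validate --files kafka-quotas.yaml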
2.2.7 - Kafka Table Records
A KafkaTableRecord resource can be used to produce a key/value record into a given compacted topic, i.e., a topic
with cleanup.policy=compact (a.k.a. KTable).
KafkaTableRecord
Specification
Here is the resource definition file for defining a KafkaTableRecord.
apiVersion: "kafka.jikkou.io/v1beta1" # The api version (required)
kind: "KafkaTableRecord"              # The resource kind (required)
metadata:
  labels: { }
  annotations: { }
spec:
  topic: <string>     # The topic name (required)
  headers: # The list of headers
    - name: <string>
      value: <string>
  key: # The record-key (required)
    type: <string>                   # The record-key type. Must be one of: BINARY, STRING, JSON (required)
    data: # The record-key in JSON serialized form.
      $ref: <url or path>            # Or a URL to a local file containing the JSON string value.
  value: # The record-value (required)
    type: <string>                   # The record-value type. Must be one of: BINARY, STRING, JSON (required)
    data: # The record-value in JSON serialized form.
      $ref: <url or path>            # Or a URL to a local file containing the JSON string value.
Usage
The KafkaTableRecord resource has been designed primarily to manage reference data published and shared via Kafka.
Therefore, it
is highly recommended to use this resource only with compacted Kafka topics containing a small amount of data.
Examples
Here are some examples that show how to define a KafkaTableRecord using the different supported data types.
STRING:
---
apiVersion: "kafka.jikkou.io/v1beta1"
kind: "KafkaTableRecord"
spec:
  topic: "my-topic"
  headers:
    - name: "content-type"
      value: "application/text"
  key:
    type: STRING
    data: |
      "bar"      
  value:
    type: STRING
    data: |
      "foo"      
JSON:
---
apiVersion: "kafka.jikkou.io/v1beta1"
kind: "KafkaTableRecord"
spec:
  topic: "my-topic"
  headers:
    - name: "content-type"
      value: "application/text"
  key:
    type: STRING
    data: |
      "bar"      
  value:
    type: JSON
    data: |
      {
        "foo": "bar"
      }      
BINARY:
---
apiVersion: "kafka.jikkou.io/v1beta1"
kind: "KafkaTableRecord"
spec:
  topic: "my-topic"
  headers:
    - name: "content-type"
      value: "application/text"
  key:
    type: STRING
    data: |
      "bar"      
  value:
    type: BINARY
    data: |
      "eyJmb28iOiAiYmFyIn0K"      
2.3 - Transformations
Here, you will find information to use the built-in transformations for Apache Kafka resources.
2.3.1 - KafkaTopicMaxNumPartitions
This transformation can be used to enforce a maximum value for the number of partitions of Kafka topics.
Configuration
| Name | Type | Description | Default | 
|---|---|---|---|
| maxNumPartitions | Int | Maximum value for the number of partitions to be used for Kafka topics | | 
Example
jikkou {
  transformations: [
    {
      type = io.streamthoughts.jikkou.kafka.transform.KafkaTopicMaxNumPartitions
      priority = 100
      config = {
        maxNumPartitions = 50
      }
    }
  ]
}
2.3.2 - KafkaTopicMaxRetentionMs
This transformation can be used to enforce a maximum value for the retention.ms property of Kafka topics.
Configuration
| Name | Type | Description | Default | 
|---|---|---|---|
| maxRetentionMs | Int | Maximum value of retention.ms to be used for Kafka topics | | 
Example
jikkou {
  transformations: [
    {
      type = io.streamthoughts.jikkou.kafka.transform.KafkaTopicMaxRetentionMsTransformation
      priority = 100
      config = {
        maxRetentionMs = 2592000000 # 30 days
      }
    }
  ]
}
2.3.3 - KafkaTopicMinInSyncReplicas
This transformation can be used to enforce a minimum value for the min.insync.replicas property of Kafka topics.
Configuration
| Name | Type | Description | Default | 
|---|---|---|---|
| minInSyncReplicas | Int | Minimum value of min.insync.replicas to be used for Kafka topics | | 
Example
jikkou {
  transformations: [
    {
      type = io.streamthoughts.jikkou.kafka.transform.KafkaTopicMinInSyncReplicasTransformation
      priority = 100
      config = {
        minInSyncReplicas = 2
      }
    }
  ]
}
2.3.4 - KafkaTopicMinReplicas
This transformation can be used to enforce a minimum value for the replication factor of Kafka topics.
Configuration
| Name | Type | Description | Default | 
|---|---|---|---|
| minReplicationFactor | Int | Minimum value of the replication factor to be used for Kafka topics | | 
Example
jikkou {
  transformations: [
    {
      type = io.streamthoughts.jikkou.kafka.transform.KafkaTopicMinReplicasTransformation
      priority = 100
      config = {
        minReplicationFactor = 3
      }
    }
  ]
}
2.3.5 - KafkaTopicMinRetentionMs
This transformation can be used to enforce a minimum value for the retention.ms property of Kafka topics.
Configuration
| Name | Type | Description | Default | 
|---|---|---|---|
| minRetentionMs | Int | Minimum value of retention.ms to be used for Kafka topics | | 
Example
jikkou {
  transformations: [
    {
      type = io.streamthoughts.jikkou.kafka.transform.KafkaTopicMinRetentionMsTransformation
      priority = 100
      config = {
        minRetentionMs = 604800000 # 7 days
      }
    }
  ]
}
2.4 - Validations
Jikkou ships with the following built-in validations:
Topics
NoDuplicateTopicsAllowedValidation
(auto registered)
TopicConfigKeysValidation
(auto registered)
type = io.streamthoughts.jikkou.kafka.validation.TopicConfigKeysValidation
The TopicConfigKeysValidation allows checking if the specified topic configs are all valid.
TopicMinNumPartitions
type = io.streamthoughts.jikkou.kafka.validation.TopicMinNumPartitionsValidation
The TopicMinNumPartitions allows checking if the specified number of partitions for a topic is not less than the minimum required.
Configuration
| Name | Type | Description | Default | 
|---|---|---|---|
| topicMinNumPartitions | Int | Minimum number of partitions allowed | | 
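As a sketch, a configurable validation is registered much like the transformations above, under the jikkou.validations setting (the name shown is illustrative); the other Topic* validations below follow the same pattern with their respective config keys:
jikkou {
  validations: [
    {
      name = "topicMustHaveEnoughPartitions"
      type = io.streamthoughts.jikkou.kafka.validation.TopicMinNumPartitionsValidation
      priority = 100
      config = {
        topicMinNumPartitions = 3
      }
    }
  ]
}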
TopicMaxNumPartitions
type = io.streamthoughts.jikkou.kafka.validation.TopicMaxNumPartitions
The TopicMaxNumPartitions allows checking if the number of partitions for a topic is not greater than the maximum configured.
Configuration
| Name | Type | Description | Default | 
|---|---|---|---|
| topicMaxNumPartitions | Int | Maximum number of partitions allowed | | 
TopicMinReplicationFactor
type = io.streamthoughts.jikkou.kafka.validation.TopicMinReplicationFactor
The TopicMinReplicationFactor allows checking if the specified replication factor for a topic is not less than the minimum required.
Configuration
| Name | Type | Description | Default | 
|---|---|---|---|
| topicMinReplicationFactor | Int | Minimum replication factor allowed | | 
TopicMaxReplicationFactor
type = io.streamthoughts.jikkou.kafka.validation.TopicMaxReplicationFactor
The TopicMaxReplicationFactor allows checking if the specified replication factor for a topic is not greater than the maximum configured.
Configuration
| Name | Type | Description | Default | 
|---|---|---|---|
| topicMaxReplicationFactor | Int | Maximum replication factor allowed | | 
TopicNamePrefix
type = io.streamthoughts.jikkou.kafka.validation.TopicNamePrefix
The TopicNamePrefix allows checking if the specified name for a topic starts with one of the configured prefixes.
Configuration
| Name | Type | Description | Default | 
|---|---|---|---|
| topicNamePrefixes | List | List of topic name prefixes allowed | | 
TopicNameSuffix
type = io.streamthoughts.jikkou.kafka.validation.TopicNameSuffix
The TopicNameSuffix allows checking if the specified name for a topic ends with one of the configured suffixes.
Configuration
| Name | Type | Description | Default | 
|---|---|---|---|
| topicNameSuffixes | List | List of topic name suffixes allowed | | 
ACLs
NoDuplicateUsersAllowedValidation
(auto registered)
NoDuplicateRolesAllowedValidation
(auto registered)
Quotas
QuotasEntityValidation
(auto registered)
2.5 - Annotations
Here, you will find information about the annotations provided by the Apache Kafka extension for Jikkou.
List of built-in annotations
kafka.jikkou.io/cluster-id
Used by Jikkou.
This annotation is automatically added by Jikkou to any described object that is part of an Apache Kafka cluster.
2.6 - Actions
Here, you will find the list of actions provided by the Extension Provider for Apache Kafka.
Apache Kafka Actions
2.6.1 - KafkaConsumerGroupsResetOffsets
The KafkaConsumerGroupsResetOffsets action allows resetting the offsets of consumer groups.
It supports one or multiple consumer groups, and the groups should be in the EMPTY state.
Usage (CLI)
Usage:
Execute the action.
jikkou action KafkaConsumerGroupsResetOffsets execute [-hV] [--all] [--dry-run]
[--to-earliest] [--to-latest] [--group=PARAM] [--logger-level=<level>]
[-o=<format>] [--to-datetime=PARAM] [--to-offset=PARAM] [--excludes=PARAM]...
[--groups=PARAM]... [--includes=PARAM]... --topic=PARAM [--topic=PARAM]...
DESCRIPTION:
Reset offsets of consumer group. Supports multiple consumer groups, and groups
should be in EMPTY state.
You must choose one of the following reset specifications: to-datetime,
by-duration, to-earliest, to-latest, to-offset.
OPTIONS:
      --all                 Specifies to act on all consumer groups.
      --dry-run             Only show results without executing changes on
                              Consumer Groups.
      --excludes=PARAM      List of patterns to match the consumer groups that
                              must be excluded from the reset-offset action.
      --group=PARAM         The consumer group to act on.
      --groups=PARAM        The consumer groups to act on.
  -h, --help                Show this help message and exit.
      --includes=PARAM      List of patterns to match the consumer groups that
                              must be included in the reset-offset action.
      --logger-level=<level>
                            Specify the log level verbosity to be used while
                              running a command.
                            Valid level values are: TRACE, DEBUG, INFO, WARN,
                              ERROR.
                            For example, `--logger-level=INFO`
  -o, --output=<format>     Prints the output in the specified format. Allowed
                              values: JSON, YAML (default YAML).
      --to-datetime=PARAM   Reset offsets to offset from datetime. Format:
                              'YYYY-MM-DDTHH:mm:SS.sss'
      --to-earliest         Reset offsets to earliest offset.
      --to-latest           Reset offsets to latest offset.
      --to-offset=PARAM     Reset offsets to a specific offset.
      --topic=PARAM         The topic whose partitions must be included in the
                              reset-offset action.
  -V, --version             Print version information and exit.
Examples
Reset Single Consumer Group to the earliest offsets
jikkou action kafkaconsumergroupresetoffsets execute \
--group my-group \
--topic test \
--to-earliest
(output)
---
kind: "ApiActionResultSet"
apiVersion: "core.jikkou.io/v1"
metadata:
  labels: {}
  annotations:
    configs.jikkou.io/to-earliest: "true"
    configs.jikkou.io/group: "my-group"
    configs.jikkou.io/dry-run: "false"
    configs.jikkou.io/topic: 
        - "test"
results:
- status: "SUCCEEDED"
  errors: []
  data:
    apiVersion: "kafka.jikkou.io/v1beta1"
    kind: "KafkaConsumerGroup"
    metadata:
      name: "my-group"
      labels:
        kafka.jikkou.io/is-simple-consumer: false
      annotations: {}
    status:
      state: "EMPTY"
      members: []
      offsets:
      - topic: "test"
        partition: 1
        offset: 0
      - topic: "test"
        partition: 0
        offset: 0
      - topic: "test"
        partition: 2
        offset: 0
      - topic: "--test"
        partition: 0
        offset: 0
      coordinator:
        id: "101"
        host: "localhost"
        port: 9092
Reset All Consumer Groups to the earliest offsets
jikkou action kafkaconsumergroupresetoffsets execute \
--all \
--topic test \
--to-earliest
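You can also preview the result without applying any change by adding the --dry-run flag, for example:
jikkou action kafkaconsumergroupresetoffsets execute \
--group my-group \
--topic test \
--to-earliest \
--dry-run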
2.7 - Compatibility
The Apache Kafka extension for Jikkou utilizes the Kafka Admin Client, which is compatible with any Kafka infrastructure, such as:
- Aiven
- Apache Kafka
- Confluent Cloud
- MSK
- Redpanda
- etc.
In addition, Kafka Protocol has a “bidirectional” client compatibility policy. In other words, new clients can talk to old servers, and old clients can talk to new servers.
3 - Apache Kafka Connect
Here, you will find information to use the Kafka Connect extension for Jikkou.
3.1 - Configuration
This section describes how to configure the Kafka Connect extension.
Configuration
You can configure the properties to be used to connect to the Kafka Connect cluster
through the Jikkou client configuration property: jikkou.provider.kafkaconnect.
Example:
jikkou {
  provider.kafkaconnect {
    enabled: true
    type: io.streamthoughts.jikkou.kafka.connect.KafkaConnectExtensionProvider
    # Array of Kafka Connect clusters configurations.
    clusters = [
      {
        # Name of the cluster (e.g., dev, staging, production, etc.)
        name = "locahost"
        # URL of the Kafka Connect service
        url = "http://localhost:8083"
        # Method to use for authenticating on Kafka Connect. Available values are: [none, basicauth, ssl]
        authMethod = none
        # Use when 'authMethod' is 'basicauth' to specify the username for Authorization Basic header
        basicAuthUser = null
        # Use when 'authMethod' is 'basicauth' to specify the password for Authorization Basic header
        basicAuthPassword = null
        # Enable debug logging
        debugLoggingEnabled = false
  
        # Ssl Config: Use when 'authMethod' is 'ssl'
        # The location of the key store file.
        sslKeyStoreLocation = "/certs/registry.keystore.jks"
        # The file format of the key store file.
        sslKeyStoreType = "JKS"
        # The password for the key store file.
        sslKeyStorePassword = "password"
        # The password of the private key in the key store file.
        sslKeyPassword = "password"
        # The location of the trust store file.
        sslTrustStoreLocation = "/certs/registry.truststore.jks"
        # The file format of the trust store file.
        sslTrustStoreType = "JKS"
        # The password for the trust store file.
        sslTrustStorePassword = "password"
        # Specifies whether to ignore the hostname verification.
        sslIgnoreHostnameVerification = true
      }
    ]
  }
}
3.2 - Resources
Here, you will find the list of resources supported by the Kafka Connect Extension.
Kafka Connect Resources
3.2.1 - KafkaConnectors
This section describes the resource definition format for KafkaConnector entities, which can be used to define the
configuration and status of connectors you plan to create and manage on specific Kafka Connect clusters.
Definition Format of KafkaConnector
Below is the overall structure of the KafkaConnector resource.
---
apiVersion: "kafka.jikkou.io/v1beta1"  # The api version (required)
kind: "KafkaConnector"                 # The resource kind (required)
metadata:
  name: <string>                       # The name of the connector (required)
  labels:
    # Name of the Kafka Connect cluster to create the connector instance in (required).
    kafka.jikkou.io/connect-cluster: <string>
  annotations:
    # Override client properties to connect to Kafka Connect cluster (optional).
    jikkou.io/config-override: | 
      <json>
spec:
  connectorClass: <string>            # Name or alias of the class for this connector.
  tasksMax: <integer>                 # The maximum number of tasks for the Kafka Connector.
  config:                             # Configuration properties of the connector.
    <key>: <value>
  state: <string>                     # The state the connector should be in. Defaults to running.
See below for details about all these fields.
Metadata
metadata.name [required]
The name of the connector.
labels.kafka.jikkou.io/connect-cluster [required]
The name of the Kafka Connect cluster to create the connector instance in.
The cluster name must be configured through the provider.kafkaconnect.clusters[] Jikkou configuration setting (see: Configuration).
jikkou.io/config-override: [optional]
The JSON client configurations to override for connecting to the Kafka Connect cluster. The configuration properties passed through this annotation override any cluster properties defined in the Jikkou configuration (see: Configuration).
apiVersion: "kafka.jikkou.io/v1beta1"
kind: "KafkaConnector"
metadata:
  name: "my-connector"
  labels:
    kafka.jikkou.io/connect-cluster: "my-connect-cluster"
  annotations:
    jikkou.io/config-override: |
      { "url": "http://localhost:8083" }      
Specification
spec.connectorClass [required]
The name or alias of the class for this connector.
spec.tasksMax [optional]
The maximum number of tasks for the Kafka Connector. Default is 1.
spec.config [required]
The connector’s configuration properties.
spec.state [optional]
The state the connector should be in. Defaults to running.
Below are the valid values:
- running: Transition the connector and its tasks to RUNNING state.
- paused: Pause the connector and its tasks, which stops message processing until the connector is resumed.
- stopped: Completely shut down the connector and its tasks. The connector config remains present in the config topic of the cluster (if running in distributed mode), unmodified.
Examples
The following is an example of a resource describing a Kafka connector:
---
# Example: file: kafka-connector-filestream-sink.yaml
apiVersion: "kafka.jikkou.io/v1beta1"
kind: "KafkaConnector"
metadata:
  name: "local-file-sink"
  labels:
    kafka.jikkou.io/connect-cluster: "my-connect-cluster"
spec:
  connectorClass: "FileStreamSink"
  tasksMax: 1
  config:
    file: "/tmp/test.sink.txt"
    topics: "connect-test"
  state: "RUNNING"
Listing KafkaConnector
You can retrieve the state of Kafka Connector instances running on your Kafka Connect clusters using the jikkou get kafkaconnectors (or jikkou get kc) command.
Usage
$ jikkou get kc --help
Usage:
Get all 'KafkaConnector' resources.
jikkou get kafkaconnectors [-hV] [--expand-status] [-o=<format>]
                           [-s=<expressions>]...
Description:
Use jikkou get kafkaconnectors when you want to describe the state of all
resources of type 'KafkaConnector'.
Options:
      --expand-status     Retrieves additional information about the status of
                            the connector and its tasks.
  -h, --help              Show this help message and exit.
  -o, --output=<format>   Prints the output in the specified format. Allowed
                            values: json, yaml (default yaml).
  -s, --selector=<expressions>
                          The selector expression used for including or
                            excluding resources.
  -V, --version           Print version information and exit.
(The output from your current Jikkou version may be different from the above example.)
Examples
(command)
$ jikkou get kc --expand-status 
(output)
apiVersion: "kafka.jikkou.io/v1beta1"
kind: "KafkaConnector"
metadata:
  name: "local-file-sink"
  labels:
    kafka.jikkou.io/connect-cluster: "localhost"
spec:
  connectorClass: "FileStreamSink"
  tasksMax: 1
  config:
    file: "/tmp/test.sink.txt"
    topics: "connect-test"
  state: "RUNNING"
status:
  connectorStatus:
    name: "local-file-sink"
    connector:
      state: "RUNNING"
      worker_id: "localhost:8083"
    tasks:
      - id: 0
        state: "RUNNING"
        worker_id: "localhost:8083"
The status.connectorStatus provides the connector status, as reported by the Kafka Connect REST API.
3.3 - Validations
Jikkou ships with the following built-in validations:
3.4 - Annotations
This section lists a number of well-known annotations that have defined semantics. They can be attached
to KafkaConnect resources through the metadata.annotations field and consumed as needed by extensions (i.e., validations, transformations, controllers,
collectors, etc.).
List of built-in annotations
3.5 - Labels
This section lists a number of well-known labels that have defined semantics. They can be attached
to KafkaConnect resources through the metadata.labels field and consumed as needed by extensions (i.e., validations, transformations, controllers,
collectors, etc.).
Labels
kafka.jikkou.io/connect-cluster
# Example
---
apiVersion: "kafka.jikkou.io/v1beta1"
kind: "KafkaConnector"
metadata:
  labels:
    kafka.jikkou.io/connect-cluster: 'my-connect-cluster'
The value of this label defines the name of the Kafka Connect cluster to create the connector instance in.
The cluster name must be configured through the provider.kafkaconnect.clusters[] Jikkou configuration setting (see: Configuration).
3.6 - Actions
Here, you will find the list of actions provided by the Extension Provider for Kafka Connect.
Kafka Connect Actions
3.6.1 - KafkaConnectRestartConnectors
The KafkaConnectRestartConnectors action allows a user to restart all or just the
failed Connector and Task instances for one or multiple named connectors.
Usage (CLI)
Usage:
Execute the action.
jikkou action KafkaConnectRestartConnectors execute [-hV] [--include-tasks]
[--only-failed] [--connect-cluster=PARAM] [--logger-level=<level>]
[-o=<format>] [--connector-name=PARAM]...
DESCRIPTION:
The KafkaConnectRestartConnectors action allows a user to restart all or just the
failed Connector and Task instances for one or multiple named connectors.
OPTIONS:
      --connect-cluster=PARAM
                          The name of the connect cluster.
      --connector-name=PARAM
                          The connector's name.
  -h, --help              Show this help message and exit.
      --include-tasks     Specifies whether to restart the connector instance
                            and task instances (includeTasks=true) or just the
                            connector instance (includeTasks=false)
      --logger-level=<level>
                          Specify the log level verbosity to be used while
                            running a command.
                          Valid level values are: TRACE, DEBUG, INFO, WARN,
                            ERROR.
                          For example, `--logger-level=INFO`
  -o, --output=<format>   Prints the output in the specified format. Allowed
                            values: JSON, YAML (default YAML).
      --only-failed       Specifies whether to restart just the instances with
                            a FAILED status (onlyFailed=true) or all instances
                            (onlyFailed=false)
  -V, --version           Print version information and exit.
Examples
Restart all connectors for all Kafka Connect clusters.
jikkou action kafkaconnectrestartconnectors execute
(output)
---
kind: "ApiActionResultSet"
apiVersion: "core.jikkou.io/v1"
metadata:
  labels: {}
  annotations: {}
results:
- status: "SUCCEEDED"
  data:
    apiVersion: "kafka.jikkou.io/v1beta1"
    kind: "KafkaConnector"
    metadata:
      name: "local-file-sink"
      labels:
        kafka.jikkou.io/connect-cluster: "my-connect-cluster"
      annotations: {}
    spec:
      connectorClass: "FileStreamSink"
      tasksMax: 1
      config:
        file: "/tmp/test.sink.txt"
        topics: "connect-test"
      state: "RUNNING"
    status:
      connectorStatus:
        name: "local-file-sink"
        connector:
          state: "RUNNING"
          workerId: "connect:8083"
        tasks:
        - id: 0
          state: "RUNNING"
          workerId: "connect:8083"
Restart all connectors with a FAILED status on all Kafka Connect clusters.
jikkou action kafkaconnectrestartconnectors execute \
--only-failed
Restart a specific connector and its tasks on a single Kafka Connect cluster.
jikkou action kafkaconnectrestartconnectors execute \
--connect-cluster my-connect-cluster \
--connector-name local-file-sink \
--include-tasks
4 - Schema Registry
Here, you will find information to use the Schema Registry extensions.
More information:
4.1 - Configuration
Here, you will find how to configure the Schema Registry extension for Jikkou.
Configuration
You can configure the properties used to connect to the Schema Registry service
through the Jikkou client configuration property jikkou.provider.schemaregistry.
Example:
jikkou {
  provider.schemaregistry {
    enabled = true
    type = io.streamthoughts.jikkou.schema.registry.SchemaRegistryExtensionProvider
    config = {
      # Comma-separated list of URLs for schema registry instances that can be used to register or look up schemas
      url = "http://localhost:8081"
      # The name of the schema registry implementation vendor - can be any value
      vendor = generic
      # Method to use for authenticating on Schema Registry. Available values are: [none, basicauth, ssl]
      authMethod = none
      # Use when 'authMethod' is 'basicauth' to specify the username for the Authorization Basic header
      basicAuthUser = null
      # Use when 'authMethod' is 'basicauth' to specify the password for the Authorization Basic header
      basicAuthPassword = null
      # Enable debug logging
      debugLoggingEnabled = false
      # Ssl Config: Use when 'authMethod' is 'ssl'
      # The location of the key store file.
      sslKeyStoreLocation = "/certs/registry.keystore.jks"
      # The file format of the key store file.
      sslKeyStoreType = "JKS"
      # The password for the key store file.
      sslKeyStorePassword = "password"
      # The password of the private key in the key store file.
      sslKeyPassword = "password"
      # The location of the trust store file.
      sslTrustStoreLocation = "/certs/registry.truststore.jks"
      # The file format of the trust store file.
      sslTrustStoreType = "JKS"
      # The password for the trust store file.
      sslTrustStorePassword = "password"
      # Specifies whether to ignore the hostname verification.
      sslIgnoreHostnameVerification = true
    }
  }
}
4.2 - Resources
Here, you will find the list of resources supported for Schema Registry.
Schema Registry Resources
More information:
4.2.1 - Schema Registry Subjects
SchemaRegistrySubject resources are used to define the subject schemas you want to manage on your SchemaRegistry. A SchemaRegistrySubject resource defines the schema, the compatibility level, and the references to be associated with a subject version.
Definition Format of SchemaRegistrySubject
Below is the overall structure of the SchemaRegistrySubject resource.
apiVersion: "schemaregistry.jikkou.io/v1beta2"  # The api version (required)
kind: "SchemaRegistrySubject"                   # The resource kind (required)
metadata:
  name: <The name of the subject>               # (required)
  labels: { }
  annotations: { }
spec:
  schemaRegistry:
    vendor: <vendor_name>                       # (optional) The vendor of the SchemaRegistry, e.g., Confluent, Karapace, etc
  compatibilityLevel: <compatibility_level>     # (optional) The schema compatibility level for this subject.
  schemaType: <The schema format>               # (required) Accepted values are: AVRO, PROTOBUF, JSON
  schema:
    $ref: <url or path>
  references: # Specifies the names of referenced schemas (optional array).
    - name: <string>                             # The name for the reference.
      subject: <string>                          # The subject under which the referenced schema is registered.
      version: <string>                          # The exact version of the schema under the registered subject.
Metadata
The metadata.name property is mandatory and specifies the name of the Subject.
Specification
To use the Schema Registry's default value for the compatibilityLevel, you can simply omit the property.
Example
Here is a simple example that shows how to define a single subject with an AVRO schema using
the SchemaRegistrySubject resource type.
file: subject-user.yaml
---
apiVersion: "schemaregistry.jikkou.io/v1beta2"
kind: "SchemaRegistrySubject"
metadata:
  name: "User"
  labels: { }
  annotations:
    schemaregistry.jikkou.io/normalize-schema: true
spec:
  compatibilityLevel: "FULL_TRANSITIVE"
  schemaType: "AVRO"
  schema:
    $ref: ./user-schema.avsc
file: user-schema.avsc
---
{
  "namespace": "example.avro",
  "type": "record",
  "name": "User",
  "fields": [
    {
      "name": "name",
      "type": [ "null", "string" ],
      "default": null,
    },
    {
      "name": "favorite_number",
      "type": [ "null", "int" ],
      "default": null
    },
    {
      "name": "favorite_color",
      "type": [ "null", "string" ],
      "default": null
    }
  ]
}
Alternatively, we can directly pass the Avro schema as follows:
file: subject-user.yaml
---
apiVersion: "schemaregistry.jikkou.io/v1beta2"
kind: "SchemaRegistrySubject"
metadata:
  name: "User"
  labels: { }
  annotations:
    schemaregistry.jikkou.io/normalize-schema: true
spec:
  compatibilityLevel: "FULL_TRANSITIVE"
  schemaType: "AVRO"
  schema: |
    {
      "namespace": "example.avro",
      "type": "record",
      "name": "User",
      "fields": [
        {
          "name": "name",
          "type": [ "null", "string" ],
          "default": null
        },
        {
          "name": "favorite_number",
          "type": [ "null", "int" ],
          "default": null
        },
        {
          "name": "favorite_color",
          "type": [ "null", "string"],
          "default": null
        }
      ]
    }    
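Once a resource file is defined, the subject can be created or updated with the Jikkou CLI, for example:
$ jikkou apply --files ./subject-user.yaml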
SchemaRegistrySubjectList
If you need to manage multiple Schemas at once (e.g. using a template), it may be more suitable to use the resource collection SchemaRegistrySubjectList.
Specification
Here is the resource definition file for defining a SchemaRegistrySubjectList.
apiVersion: "schemaregistry.jikkou.io/v1beta2"  # The api version (required)
kind: "SchemaRegistrySubjectList"      # The resource kind (required)
metadata: # (optional)
  labels: { }
  annotations: { }
items: [ ]                             # The array of SchemaRegistrySubject
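Example
Here is an illustrative example (the subject names and schema file paths are hypothetical) that defines two subjects in a single SchemaRegistrySubjectList:
---
apiVersion: "schemaregistry.jikkou.io/v1beta2"
kind: "SchemaRegistrySubjectList"
items:
  - metadata:
      name: "User"
    spec:
      compatibilityLevel: "FULL_TRANSITIVE"
      schemaType: "AVRO"
      schema:
        $ref: ./user-schema.avsc
  - metadata:
      name: "UserEvent"
    spec:
      compatibilityLevel: "FULL_TRANSITIVE"
      schemaType: "AVRO"
      schema:
        $ref: ./user-event-schema.avsc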
4.3 - Validations
Jikkou ships with the following built-in validations:
Subject
SchemaCompatibilityValidation
type = io.streamthoughts.jikkou.schema.registry.validation.SchemaCompatibilityValidation
The SchemaCompatibilityValidation allows testing the compatibility of a schema against the latest
version already registered in the Schema Registry, using the provided compatibility-level.
AvroSchemaValidation
The AvroSchemaValidation allows checking whether the specified Avro schema matches some specific Avro schema definition
rules.
type = io.streamthoughts.jikkou.schema.registry.validation.AvroSchemaValidation
Configuration
| Name | Type | Description | Default |
|---|---|---|---|
| fieldsMustHaveDoc | Boolean | Verify that all record fields have a doc property | false |
| fieldsMustBeNullable | Boolean | Verify that all record fields are nullable | false |
| fieldsMustBeOptional | Boolean | Verify that all record fields are optional | false |
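As an illustration, here is a minimal sketch showing how the AvroSchemaValidation might be enabled through the jikkou.validations configuration list; the structure follows the name/type/config convention used elsewhere in this guide, and the name value is arbitrary:
jikkou {
  validations = [
    {
      name = "avroSchemaValidation"
      type = io.streamthoughts.jikkou.schema.registry.validation.AvroSchemaValidation
      config = {
        # Reject record fields that have no 'doc' property
        fieldsMustHaveDoc = true
        # Reject record fields that are not nullable
        fieldsMustBeNullable = true
        # Reject record fields that are not optional
        fieldsMustBeOptional = true
      }
    }
  ]
}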
4.4 - Annotations
Here, you will find information about the annotations provided by the Schema Registry extension for Jikkou.
List of built-in annotations
schemaregistry.jikkou.io/url
Used by jikkou.
The annotation is automatically added by Jikkou to describe the SchemaRegistry URL from which a subject schema is retrieved.
schemaregistry.jikkou.io/schema-version
Used by jikkou.
The annotation is automatically added by Jikkou to describe the version of a subject schema.
schemaregistry.jikkou.io/schema-id
Used by jikkou.
The annotation is automatically added by Jikkou to describe the ID of a subject schema.
schemaregistry.jikkou.io/normalize-schema
Used on: schemaregistry.jikkou.io/v1beta2:SchemaRegistrySubject
This annotation can be used to normalize the schema on the Schema Registry server side. Note that Jikkou will attempt to normalize only AVRO and JSON schemas.
See: Confluent SchemaRegistry API Reference
schemaregistry.jikkou.io/permanante-delete
Used on: schemaregistry.jikkou.io/v1beta2:SchemaRegistrySubject
The annotation can be used to specify a hard delete of the subject, which removes all associated metadata, including the schema ID. The default is false. If the annotation is not set, a soft delete is performed. Note that you must perform a soft delete before the hard delete.
See: Confluent SchemaRegistry API Reference
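For example, the annotation can be set directly on the resource to be deleted (illustrative snippet):
---
apiVersion: "schemaregistry.jikkou.io/v1beta2"
kind: "SchemaRegistrySubject"
metadata:
  name: "User"
  annotations:
    schemaregistry.jikkou.io/permanante-delete: true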
schemaregistry.jikkou.io/use-canonical-fingerprint
This annotation can be used to compare schemas using a canonical fingerprint (only supported for Avro schemas).
5 - Aiven
Here, you will find information to use the Aiven for Kafka extensions.
More information:
5.1 - Configuration
Here, you will find how to configure the Aiven extension for Jikkou.
Configuration
You can configure the properties used to connect to the Aiven service
through the Jikkou client configuration property jikkou.provider.aiven.
Example:
jikkou {
  provider.aiven {
    enabled = true
    type = io.streamthoughts.jikkou.extension.aiven.AivenExtensionProvider
    config = {
      # Aiven project name
      project = "<project-name>"
      # Aiven service name
      service = "<service-name>"
      # URL to the Aiven REST API.
      apiUrl = "https://api.aiven.io/v1/"
      # Aiven Bearer Token. Tokens can be obtained from your Aiven profile page
      tokenAuth = null
      # Enable debug logging
      debugLoggingEnabled = false
    }
  }
}
5.2 - Resources
Here, you will find the list of resources supported by the extensions for Aiven.
Aiven for Apache Kafka Resources
More information:
5.2.1 - ACL for Aiven Apache Kafka®
The KafkaTopicAclEntry resources are used to manage the Access Control Lists in Aiven for Apache Kafka®. A
KafkaTopicAclEntry resource defines the permission to be granted to a user for one or more kafka topics.
KafkaTopicAclEntry
Specification
Here is the resource definition file for defining a KafkaTopicAclEntry.
---
apiVersion: "kafka.aiven.io/v1beta1"   # The api version (required)
kind: "KafkaTopicAclEntry"             # The resource kind (required)
metadata:
  labels: { }
  annotations: { }
spec:
  permission: <>               # The permission. Accepted values are: READ, WRITE, READWRITE, ADMIN
  username: <>                 # The username
  topic: <>                    # Topic name or glob pattern
Example
Here is a simple example that shows how to define a single ACL entry using
the KafkaTopicAclEntry resource type.
file: kafka-topic-acl-entry.yaml
---
apiVersion: "kafka.aiven.io/v1beta1"
kind: "KafkaTopicAclEntry"
metadata:
  labels: { }
  annotations: { }
spec:
  permission: "READWRITE"
  username: "alice"
  topic: "public-*"
KafkaTopicAclEntryList
If you need to define multiple ACL entries (e.g. using a template), it may be easier to use a KafkaTopicAclEntryList resource.
Specification
Here is the resource definition file for defining a KafkaTopicAclEntryList.
---
apiVersion: "kafka.aiven.io/v1beta1"    # The api version (required)
kind: "KafkaTopicAclEntryList"          # The resource kind (required)
metadata: # (optional)
  labels: { }
  annotations: { }
items: [ ]                             # An array of KafkaTopicAclEntry
Example
Here is a simple example that shows how to define a single YAML file containing two ACL entry definitions using
the KafkaTopicAclEntryList resource type.
---
apiVersion: "kafka.aiven.io/v1beta1"
kind: "KafkaTopicAclEntryList"
items:
  - spec:
      permission: "READWRITE"
      username: "alice"
      topic: "public-*"
  - spec:
      permission: "READ"
      username: "bob"
      topic: "public-*"
5.2.2 - Quotas for Aiven Apache Kafka®
The KafkaQuota resources are used to manage the quotas in an Aiven for Apache Kafka® service.
For more details, see https://docs.aiven.io/docs/products/kafka/concepts/kafka-quotas
KafkaQuota
Specification
Here is the resource definition file for defining a KafkaQuota.
---
apiVersion: "kafka.aiven.io/v1beta1"   # The api version (required)
kind: "KafkaQuota"                     # The resource kind (required)
metadata:
  labels: { }
  annotations: { }
spec:
  user: <string>                     # The username (optional: 'default' if null)
  clientId: <string>                 # The client-id
  consumerByteRate: <number>         # The quota in bytes for restricting data consumption
  producerByteRate: <number>         # The quota in bytes for restricting data production
  requestPercentage: <number>        # The quota in percentage (%) of total requests
Example
Here is a simple example that shows how to define a single quota using
the KafkaQuota resource type.
file: kafka-quotas.yaml
---
apiVersion: "kafka.aiven.io/v1beta1"
kind: "KafkaQuota"
spec:
  user: "default"
  clientId: "default"
  consumerByteRate: 1048576
  producerByteRate: 1048576
  requestPercentage: 25
KafkaQuotaList
If you need to define multiple Kafka quotas (e.g. using a template), it may be easier to use a KafkaQuotaList resource.
Specification
Here is the resource definition file for defining a KafkaQuotaList.
---
apiVersion: "kafka.aiven.io/v1beta1"    # The api version (required)
kind: "KafkaQuotaList"                  # The resource kind (required)
metadata: # (optional)
  labels: { }
  annotations: { }
items: [ ]                             # An array of KafkaQuota
Example
Here is a simple example that shows how to define a single YAML file containing two quota definitions using
the KafkaQuotaList resource type.
---
apiVersion: "kafka.aiven.io/v1beta1"
kind: "KafkaQuotaList"
items:
  - spec:
      user: "default"
      clientId: "default"
      consumerByteRate: 1048576
      producerByteRate: 1048576
      requestPercentage: 5
  - spec:
      user: "avnadmin"
      consumerByteRate: 5242880
      producerByteRate: 5242880
      requestPercentage: 25
5.2.3 - ACL for Aiven Schema Registry
The SchemaRegistryAclEntry resources are used to manage the Access Control Lists in Aiven for Schema Registry. A
SchemaRegistryAclEntry resource defines the permission to be granted to a user for one or more Schema Registry
Subjects.
SchemaRegistryAclEntry
Specification
Here is the resource definition file for defining a SchemaRegistryAclEntry.
---
apiVersion: "kafka.aiven.io/v1beta1"   # The api version (required)
kind: "SchemaRegistryAclEntry"         # The resource kind (required)
metadata:
  labels: { }
  annotations: { }
spec:
  permission: <>               # The permission. Accepted values are: READ, WRITE
  username: <>                 # The username
  resource: <>                 # The Schema Registry ACL entry resource name pattern
NOTE: The resource name pattern should be Config: or Subject:<subject_name>, where subject_name must consist of
alphanumeric characters, underscores, dashes, dots, and the glob characters * and ?.
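For instance, all of the following are valid resource values (the subject names are illustrative):
resource: "Config:"             # The global Schema Registry configuration
resource: "Subject:my-subject"  # The single subject named 'my-subject'
resource: "Subject:public-*"    # All subjects whose name starts with 'public-'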
Example
Here is an example that shows how to define a simple ACL entry using
the SchemaRegistryAclEntry resource type.
file: schema-registry-acl-entry.yaml
---
apiVersion: "kafka.aiven.io/v1beta1"
kind: "SchemaRegistryAclEntry"
spec:
  permission: "READ"
  username: "Alice"
  resource: "Subject:*"
SchemaRegistryAclEntryList
If you need to define multiple ACL entries (e.g. using a template), it may be easier to use
a SchemaRegistryAclEntryList resource.
Specification
Here is the resource definition file for defining a SchemaRegistryAclEntryList.
---
apiVersion: "kafka.aiven.io/v1beta1"    # The api version (required)
kind: "SchemaRegistryAclEntryList"      # The resource kind (required)
metadata: # (optional)
  labels: { }
  annotations: { }
items: [ ]                              # An array of SchemaRegistryAclEntry
Example
Here is a simple example that shows how to define a single YAML file containing two ACL entry definitions using
the SchemaRegistryAclEntryList resource type.
---
apiVersion: "kafka.aiven.io/v1beta1"
kind: "SchemaRegistryAclEntryList"
items:
  - spec:
      permission: "READ"
      username: "alice"
      resource: "Config:"
  - spec:
      permission: "WRITE"
      username: "alice"
      resource: "Subject:*"
5.2.4 - Subject for Aiven Schema Registry
SchemaRegistrySubject resources are used to define the subject schemas you want to manage on your Schema Registry. A SchemaRegistrySubject resource defines the schema, the compatibility level, and the references to be associated with a subject version.
SchemaRegistrySubject
Specification
Here is the resource definition file for defining a SchemaRegistrySubject.
apiVersion: "kafka.aiven.io/v1beta1"  # The api version (required)
kind: "SchemaRegistrySubject"                   # The resource kind (required)
metadata:
  name: <The name of the subject>               # (required)
  labels: { }
  annotations: { }
spec:
  schemaRegistry:
    vendor: 'Karapace'                          # (optional) The vendor of the Schema Registry
  compatibilityLevel: <compatibility_level>     # (optional) The schema compatibility level for this subject.
  schemaType: <The schema format>               # (required) Accepted values are: AVRO, PROTOBUF, JSON
  schema:
    $ref: <url or path>
  references:                                   # Specifies the names of referenced schemas (optional array).
    - name: <>                                  # The name for the reference.
      subject: <>                               # The subject under which the referenced schema is registered.
      version: <>                               # The exact version of the schema under the registered subject.
The metadata.name property is mandatory and specifies the name of the Subject.
To use the Schema Registry's default value for the compatibilityLevel, you can simply omit the property.
Example
Here is a simple example that shows how to define a single subject with an AVRO schema using
the SchemaRegistrySubject resource type.
file: subject-user.yaml
---
apiVersion: "kafka.aiven.io/v1beta1"
kind: "SchemaRegistrySubject"
metadata:
  name: "User"
  labels: { }
  annotations:
    schemaregistry.jikkou.io/normalize-schema: true
spec:
  compatibilityLevel: "FULL_TRANSITIVE"
  schemaType: "AVRO"
  schema:
    $ref: ./user-schema.avsc
file: user-schema.avsc
---
{
  "namespace": "example.avro",
  "type": "record",
  "name": "User",
  "fields": [
    {
      "name": "name",
      "type": [ "null", "string" ],
      "default": null,
    },
    {
      "name": "favorite_number",
      "type": [ "null", "int" ],
      "default": null
    },
    {
      "name": "favorite_color",
      "type": [ "null", "string" ],
      "default": null
    }
  ]
}
Alternatively, we can directly pass the Avro schema as follows:
file: subject-user.yaml
---
apiVersion: "kafka.aiven.io/v1beta1"
kind: "SchemaRegistrySubject"
metadata:
  name: "User"
  labels: { }
  annotations:
    schemaregistry.jikkou.io/normalize-schema: true
spec:
  compatibilityLevel: "FULL_TRANSITIVE"
  schemaType: "AVRO"
  schema: |
    {
      "namespace": "example.avro",
      "type": "record",
      "name": "User",
      "fields": [
        {
          "name": "name",
          "type": [ "null", "string" ],
          "default": null
        },
        {
          "name": "favorite_number",
          "type": [ "null", "int" ],
          "default": null
        },
        {
          "name": "favorite_color",
          "type": [ "null", "string"],
          "default": null
        }
      ]
    }    
5.3 - Validations
Currently, Jikkou does not ship with any built-in validations for Aiven resources.
5.4 - Annotations
Here, you will find information about the annotations provided by the Aiven extension for Jikkou.
List of built-in annotations
kafka.aiven.io/acl-entry-id
Used by jikkou.
The annotation is automatically added by Jikkou to describe the ID of an ACL entry.
6 - Aws
Here, you will find information to use the AWS extensions (introduced in Jikkou v0.36).
More information:
6.1 - Configuration
Here, you will find how to configure the AWS extension for Jikkou.
Configuration
You can configure the properties used to connect to the AWS services
through the Jikkou client configuration property jikkou.provider.aws.
Example:
jikkou {
  # AWS
  provider.aws {
    enabled = true
    config = {
      # The AWS S3 Region, e.g. us-east-1
      aws.client.region = ""
      # The AWS Access Key ID.
      aws.client.accessKeyId = ""
      # The AWS Secret Access Key.
      aws.client.secretAccessKey = ""
      # The AWS session token.
      aws.client.sessionToken = ""
      # The endpoint with which the SDK should communicate allowing you to use a different S3 compatible service
      aws.client.endpointOverride = ""
      # The names of the Glue registries. Used only for lookup.
      aws.glue.registryNames = ""
    }
  }
}
6.2 - Resources
Here, you will find the list of resources supported by the extensions for AWS.
AWS Resources
More information:
6.2.1 - Schema for AWS Glue Schema Registry
AwsGlueSchema resources are used to define the schemas you want to manage on your AWS Glue Schema Registry. An AwsGlueSchema resource defines the schema definition and the compatibility mode to be associated with it.
AwsGlueSchema
Specification
Here is the resource definition file for defining an AwsGlueSchema.
apiVersion: "aws.jikkou.io/v1"                  # The api version (required)
kind: "AwsGlueSchema"                           # The resource kind (required)    
metadata:
  name: <The name of the schema>                # The schema name (required)
  labels:
    glue.aws.amazon.com/registry-name: <registry_name>  # The registry name (required)
  annotations: { }
spec:
  compatibility: <compatibility>                # The schema compatibility mode for this schema (required).
  dataFormat: <The data format>                 # Accepted values are: AVRO, PROTOBUF, JSON (required).
  schemaDefinition:
    $ref: <url or path>
The metadata.name property is mandatory and specifies the name of the schema.
Compatibility
Supported compatibility modes are:
- BACKWARD (recommended): Consumers can read both the current and the previous version.
- BACKWARD_ALL: Consumers can read the current and all previous versions.
- FORWARD: Consumers can read both the current and the subsequent version.
- FORWARD_ALL: Consumers can read both the current and all subsequent versions.
- FULL: Combination of BACKWARD and FORWARD.
- FULL_ALL: Combination of BACKWARD_ALL and FORWARD_ALL.
- NONE: No compatibility checks are performed.
- DISABLED: Prevents any versioning for this schema.
Example
Here is a simple example that shows how to define a single AVRO schema using
the AwsGlueSchema resource type.
file: subject-user.yaml
---
apiVersion: "aws.jikkou.io/v1"
kind: "AwsGlueSchema"
metadata:
  name: "User"
  labels:
    glue.aws.amazon.com/registry-name: Test
  annotations:
    glue.aws.amazon.com/normalize-schema: true
spec:
  compatibility: "BACKWARD"
  dataFormat: "AVRO"
  schemaDefinition:
    $ref: ./user-schema.avsc
file: user-schema.avsc
---
{
  "namespace": "example.avro",
  "type": "record",
  "name": "User",
  "fields": [
    {
      "name": "name",
      "type": [ "null", "string" ],
      "default": null,
    },
    {
      "name": "favorite_number",
      "type": [ "null", "int" ],
      "default": null
    },
    {
      "name": "favorite_color",
      "type": [ "null", "string" ],
      "default": null
    }
  ]
}
Alternatively, we can directly pass the Avro schema as follows:
file: subject-user.yaml
---
apiVersion: "aws.jikkou.io/v1"
kind: "AwsGlueSchema"
metadata:
  name: "User"
  labels:
    glue.aws.amazon.com/registry-name: Test
  annotations:
    glue.aws.amazon.com/normalize-schema: true
spec:
  compatibility: "BACKWARD"
  dataFormat: "AVRO"
  schemaDefinition: |
    {
      "namespace": "example.avro",
      "type": "record",
      "name": "User",
      "fields": [
        {
          "name": "name",
          "type": [ "null", "string" ],
          "default": null
        },
        {
          "name": "favorite_number",
          "type": [ "null", "int" ],
          "default": null
        },
        {
          "name": "favorite_color",
          "type": [ "null", "string"],
          "default": null
        }
      ]
    }    
6.3 - Validations
Currently, Jikkou does not ship with any built-in validations for AWS resources.
6.4 - Annotations
Here, you will find information about the annotations provided by the AWS extension for Jikkou.
List of built-in annotations
glue.aws.amazon.com/created-time
The date and time the schema was created.
The annotation is automatically added by Jikkou.
glue.aws.amazon.com/updated-time
The date and time the schema was updated.
The annotation is automatically added by Jikkou.
glue.aws.amazon.com/registry-name
The name of the registry.
The annotation is automatically added by Jikkou.
glue.aws.amazon.com/registry-arn
The Amazon Resource Name (ARN) of the registry.
The annotation is automatically added by Jikkou.
glue.aws.amazon.com/schema-arn
The Amazon Resource Name (ARN) of the schema.
The annotation is automatically added by Jikkou.
glue.aws.amazon.com/schema-version-id
The SchemaVersionId of the schema version.
The annotation is automatically added by Jikkou.
glue.aws.amazon.com/use-canonical-fingerprint
This annotation can be used to compare schemas using a canonical fingerprint (only supported for Avro schemas).
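For example, the annotation can be set directly on the resource (illustrative snippet, mirroring the normalize-schema usage shown above):
---
apiVersion: "aws.jikkou.io/v1"
kind: "AwsGlueSchema"
metadata:
  name: "User"
  annotations:
    glue.aws.amazon.com/use-canonical-fingerprint: true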