At line 1 changed one line |
!!Kafka task |
!!!Kafka task |
At line 6 changed 3 lines |
[Download kafka-clients.jar Link|kafka-clients.jar]\\ |
[Download kafka-streams.jar Link|kafka-streams.jar]\\ |
Then restart the CrushFTP service.\\ |
[Download kafka-clients.jar version 4.0.0 Link|kafka-clients.jar]\\ |
[Download kafka-streams.jar version 4.0.0 Link|kafka-streams.jar]\\ |
After placing the files, restart the CrushFTP service to apply the changes.\\ |
__More info:__\\ |
• Supported Kafka Broker Version 4.0.x\\ |
• Minimum Java Version: Java 11 (supports 11, 17, 23)\\ |
At line 25 changed one line |
bootstrap.servers=192.168.10:9092 |
bootstrap.servers=192.168.0.10:9092 |
At line 47 added 4 lines |
#Optional |
acks=all |
retries=3 |
metadata.max.age.ms=3000 |
At line 49 removed one line |
• [SASL Authentication with PLAIN Mechanism Link|https://docs.confluent.io/platform/current/kafka/authentication_sasl/authentication_sasl_plain.htm]\\ |
At line 51 changed one line |
__ KafkaTask SASL properties Explained__:\\ |
!!KafkaTask SASL Properties Explained:\\
At line 54 changed 2 lines |
• Specifies the address of the Kafka broker to connect to.\\ |
• You can list multiple brokers separated by commas for failover: 192.168.10:9092,192.168.11:9092\\ |
• ⚠️ Specifies the address of the Kafka broker to connect to.\\ |
• You can list multiple brokers separated by commas for failover: 192.168.0.10:9092,192.168.0.11:9092\\ |
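The failover list is plain text, so a client-side sanity check is easy to sketch. The helper below is illustrative only (plain Python, not part of CrushFTP or the Kafka client); the broker addresses are the example values from this page, not real hosts.\\

```python
def parse_bootstrap_servers(value: str) -> list[tuple[str, int]]:
    """Split a Kafka bootstrap.servers string into (host, port) pairs."""
    brokers = []
    for entry in value.split(","):
        # rpartition keeps IPv6-style hosts with embedded colons intact
        host, _, port = entry.strip().rpartition(":")
        if not host or not port.isdigit():
            raise ValueError(f"bad broker entry: {entry!r}")
        brokers.append((host, int(port)))
    return brokers

# Example values from this page, not real hosts.
print(parse_bootstrap_servers("192.168.0.10:9092,192.168.0.11:9092"))
# → [('192.168.0.10', 9092), ('192.168.0.11', 9092)]
```

If any entry is malformed (missing port, stray comma), the check fails immediately instead of surfacing later as a connection timeout.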
At line 64 changed one line |
• ⚠️ Do not change unless you have a custom serialization format on the receiving side.\\ |
• ⚠️ Do not change!\\ |
At line 72 removed 4 lines |
• SASL_PLAINTEXT means SASL authentication over a plaintext (non-encrypted) channel.\\ |
• Use SASL_SSL if you want to secure the connection. \\ |
• sasl.mechanism: The SASL mechanism used. PLAIN is username/password-based authentication.\\ |
\\ |
At line 78 changed one line |
• username / password: Replace these with your actual Kafka credentials.\\ |
• username / password: ⚠️ Replace these with your actual Kafka credentials.\\ |
At line 82 added 81 lines |
! Kafka Security Protocols:\\ |
\\ |
---- |
• __SASL_PLAINTEXT__ means SASL authentication over a plaintext (non-encrypted) channel.\\ |
---- |
• __SASL_SSL__ is a security protocol that combines SSL encryption with SASL authentication. SSL ensures that all data transmitted is encrypted, while SASL (e.g., PLAIN or SCRAM) handles user authentication over that secure channel. This provides both confidentiality and identity verification for Kafka clients and brokers.\\
Example: |
{{{ |
#Authentication related settings |
security.protocol=SASL_SSL |
sasl.mechanism=PLAIN |
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \ |
username="kafka-username" \ |
password="kafka-password"; |
|
# SSL Truststore (used to verify the Kafka server cert) |
# The truststore contains public certificates of trusted servers — in this case, the Kafka server’s certificate. |
ssl.truststore.location=/Users/crushftp/kafka.server.truststore.jks |
ssl.truststore.password=truststorepass |
|
# Optional: Only needed if mutual TLS is required |
# It contains the client’s private key and its signed certificate (usually self-signed or signed by a CA). |
# Broker-side setting: ssl.client.auth=required tells the broker to request and validate a client certificate during the SSL handshake.
ssl.keystore.location=/Users/crushftp/client.keystore.jks |
ssl.keystore.password=clientpass |
ssl.key.password=clientpass |
}}}\\ |
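Before handing a properties file like the one above to a producer, it can help to verify that the security-related keys are all present. The sketch below is a hypothetical pre-flight check (plain Python; the minimal .properties reader and the required-key list are this sketch's own choices, not CrushFTP or Kafka APIs):\\

```python
REQUIRED_SASL_SSL_KEYS = {
    "security.protocol",
    "sasl.mechanism",
    "sasl.jaas.config",
    "ssl.truststore.location",
    "ssl.truststore.password",
}

def load_properties(text: str) -> dict[str, str]:
    """Tiny .properties reader: key=value lines, '#' comments, '\\' continuations."""
    props, pending = {}, ""
    for raw in text.splitlines():
        line = pending + raw.strip()
        pending = ""
        if not line or line.startswith("#"):
            continue
        if line.endswith("\\"):
            pending = line[:-1]  # join continuation lines (e.g. sasl.jaas.config)
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

def missing_sasl_ssl_keys(props: dict[str, str]) -> set[str]:
    return REQUIRED_SASL_SSL_KEYS - props.keys()

# Same shape as the SASL_SSL example above (placeholder credentials).
conf = load_properties("""
# Authentication related settings
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \\
  username="kafka-username" \\
  password="kafka-password";
ssl.truststore.location=/Users/crushftp/kafka.server.truststore.jks
ssl.truststore.password=truststorepass
""")
print(sorted(missing_sasl_ssl_keys(conf)))  # → [] (nothing missing)
```

Running this against a candidate config catches a forgotten truststore password or JAAS line before the producer fails at connect time.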
---- |
• __SASL_SSL with SCRAM-SHA-512 mechanism__: This configuration provides strong security by combining __SSL__ encryption with __SCRAM-SHA-512__ authentication. SSL ensures encrypted communication between clients and brokers, while SCRAM-SHA-512 performs secure password-based authentication using salted, hashed credentials. It’s a preferred option for production environments requiring both confidentiality and strong identity verification without relying on plaintext passwords.\\ |
\\ |
Example:\\ |
{{{ |
#Authentication related settings |
security.protocol=SASL_SSL |
sasl.mechanism=SCRAM-SHA-512 |
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \ |
username="kafka-username" \ |
password="kafka-password"; |
|
# SSL Truststore (used to verify the Kafka server cert) |
ssl.truststore.location=/Users/kz/crushftp/cert_or_keys/kafka.server.truststore.jks |
ssl.truststore.password=truststorepass |
|
# Optional: Only needed if mutual TLS is required |
ssl.keystore.location=/Users/kz/crushftp/cert_or_keys/client.keystore.jks |
ssl.keystore.password=clientpass |
ssl.key.password=clientpass |
}}} |
---- |
• __SASL with GSSAPI / Kerberos__: SASL/GSSAPI leverages Kerberos to perform strong, mutual authentication: clients obtain a service ticket from the KDC and present it to the broker, eliminating the need to send passwords over the wire. Once authenticated, it can be combined with SSL to provide both encryption and integrity for all Kafka traffic. This mechanism is ideal in enterprise environments where centralized credential management and single sign-on are required.\\
\\ |
When using CrushFTP’s KafkaTask with Kerberos, you can authenticate via a __keytab__. Example:\\ |
{{{ |
# Authentication related settings |
security.protocol=SASL_SSL |
sasl.mechanism=GSSAPI |
sasl.kerberos.service.name=kafka |
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \ |
useKeyTab=true \ |
keyTab="/etc/security/kafka.keytab" \ |
principal="kafka@LMINT.COM"; |
|
# SSL Truststore (to verify the Kafka server cert) |
ssl.truststore.location=/etc/security/kafka.server.truststore.jks |
ssl.truststore.password=truststorepass |
|
# Optional: mutual TLS if broker demands it |
ssl.keystore.location=/etc/security/client.keystore.jks |
ssl.keystore.password=clientpass |
ssl.key.password=clientpass |
}}}\\ |
__Key points for GSSAPI / Kerberos setups__:\\ |
1. /etc/krb5.conf must define your realm and KDC.\\ |
2. Make sure your keytab contains the exact Kafka service principal (in our example kafka@LMINT.COM). Without that entry (and the correct KVNO [Wikipedia: Key Version Number Link|https://en.wikipedia.org/wiki/Kerberos_(protocol)#Key_Version_Number_(KVNO)]), the broker won’t be able to decrypt or verify incoming GSSAPI tickets.\\
3. When you enable ssl.client.auth=required on the broker, every client must present a valid TLS certificate during the handshake. Make sure each client has a keystore containing its private key and signed certificate, and a truststore that includes the CA (or broker) certificate, so both sides can authenticate and encrypt the connection. In short, the SSL keystore/truststore files must exist and match your broker’s certificates.\\
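For step 1, a minimal /etc/krb5.conf sketch is shown below. The realm matches the example principal above; the KDC host kdc.lmint.com is a placeholder, so substitute your own KDC address.\\
{{{
[libdefaults]
    default_realm = LMINT.COM

[realms]
    LMINT.COM = {
        kdc = kdc.lmint.com
        admin_server = kdc.lmint.com
    }

[domain_realm]
    .lmint.com = LMINT.COM
}}}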
---- |
__Optional:__\\
\\ |
__acks=all__: The producer will wait for all in-sync replicas (ISRs) to acknowledge the record before considering the write successful. Ensures that data isn’t lost if a broker crashes. Slightly increases latency compared to acks=1 or acks=0. \\ |
__retries=3__: The producer will retry sending a record up to 3 times if a transient failure occurs (e.g., timeout, disconnect).\\ |
__metadata.max.age.ms=3000__: The producer will refresh its cached metadata (e.g., topic partitions, broker info) every 3 seconds. Lower values increase responsiveness but add overhead from more frequent metadata fetches. The default is 300,000 ms (5 minutes), so 3,000 ms is very aggressive and useful only in dynamic environments.\\
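To make the retries semantics concrete, here is a rough Python sketch. It uses a hypothetical send function and is not the Kafka client's actual implementation; it only illustrates that retries=3 means the original attempt plus up to three re-sends on transient errors.\\

```python
def send_with_retries(send, record, retries=3):
    """Mimic the producer's retry behaviour: the first attempt plus up to
    `retries` re-sends after transient failures, then surface the error."""
    attempts = retries + 1  # original attempt + retries
    for attempt in range(1, attempts + 1):
        try:
            return send(record)
        except TimeoutError as err:
            if attempt == attempts:
                raise  # retries exhausted
            print(f"attempt {attempt} failed ({err}); retrying")

# Hypothetical transport that fails twice, then succeeds.
calls = {"n": 0}
def flaky_send(record):
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("broker not reachable")
    return f"acked:{record}"

print(send_with_retries(flaky_send, "hello", retries=3))
# → acked:hello (after two logged retries)
```

With acks=all layered on top, each of these attempts additionally waits for every in-sync replica before counting as success, which is why the combination trades latency for durability.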