
Set up Secure Sockets Layer (SSL) encryption and authentication for Apache Kafka in Azure HDInsight

This article shows you how to set up SSL encryption between Apache Kafka clients and Apache Kafka brokers. It also shows you how to set up authentication of clients (sometimes referred to as two-way SSL).

Important

There are two clients that you can use for Kafka applications: a Java client and a console client. Only the Java client, ProducerConsumer.java, can use SSL for both producing and consuming. The console producer client, console-producer.sh, does not work with SSL.

Apache Kafka broker setup

The Kafka SSL broker setup will use four HDInsight cluster VMs in the following way:

  • headnode 0 - Certificate Authority (CA)
  • worker node 0, 1, and 2 - brokers

Note

This guide will use self-signed certificates, but the most secure solution is to use certificates issued by trusted CAs.

The summary of the broker setup process is as follows:

  1. The following steps are repeated on each of the three worker nodes:

    1. Generate a certificate.
    2. Create a cert signing request.
    3. Send the cert signing request to the Certificate Authority (CA).
    4. Sign in to the CA and sign the request.
    5. SCP the signed certificate back to the worker node.
    6. SCP the public certificate of the CA to the worker node.
  2. Once you have all of the certificates, put the certs into the cert store.

  3. Go to Ambari and change the configurations.

Use the following detailed instructions to complete the broker setup:

Important

In the following code snippets, wnX is an abbreviation for one of the three worker nodes and should be substituted with wn0, wn1, or wn2 as appropriate. WorkerNode0_Name and HeadNode0_Name should be substituted with the names of the respective machines.

  1. Perform initial setup on head node 0, which for HDInsight will fill the role of the Certificate Authority (CA).

    # Create a new directory 'ssl' and change into it
    mkdir ssl
    cd ssl
    
  2. Perform the same initial setup on each of the brokers (worker nodes 0, 1, and 2).

    # Create a new directory 'ssl' and change into it
    mkdir ssl
    cd ssl
    
  3. On each of the worker nodes, execute the following steps using the code snippet below.

    1. Create a keystore and populate it with a new private certificate.
    2. Create a certificate signing request.
    3. SCP the certificate signing request to the CA (headnode0).
    keytool -genkey -keystore kafka.server.keystore.jks -validity 365 -storepass "MyServerPassword123" -keypass "MyServerPassword123" -dname "CN=FQDN_WORKER_NODE" -storetype pkcs12
    keytool -keystore kafka.server.keystore.jks -certreq -file cert-file -storepass "MyServerPassword123" -keypass "MyServerPassword123"
    scp cert-file sshuser@HeadNode0_Name:~/ssl/wnX-cert-sign-request
    
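Before copying the request to the CA, you can optionally sanity-check it. This is a sketch assuming the cert-file produced by the keytool -certreq command above is in the current directory:

```shell
# Inspect the subject of the certificate signing request (PEM PKCS#10)
openssl req -in cert-file -noout -subject
```

The subject should show the CN you passed in -dname (the worker node's FQDN).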
  4. On the CA machine, run the following command to create ca-cert and ca-key files:

    openssl req -new -newkey rsa:4096 -days 365 -x509 -subj "/CN=Kafka-Security-CA" -keyout ca-key -out ca-cert -nodes
    
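To confirm the CA certificate came out as expected, you can inspect its subject and validity window. A sketch, assuming the ca-cert file created by the previous command:

```shell
# Show the CA certificate's subject and validity period
openssl x509 -in ca-cert -noout -subject -dates
```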
  5. Change to the CA machine and sign all of the received cert signing requests:

    openssl x509 -req -CA ca-cert -CAkey ca-key -in wn0-cert-sign-request -out wn0-cert-signed -days 365 -CAcreateserial -passin pass:"MyServerPassword123"
    openssl x509 -req -CA ca-cert -CAkey ca-key -in wn1-cert-sign-request -out wn1-cert-signed -days 365 -CAcreateserial -passin pass:"MyServerPassword123"
    openssl x509 -req -CA ca-cert -CAkey ca-key -in wn2-cert-sign-request -out wn2-cert-signed -days 365 -CAcreateserial -passin pass:"MyServerPassword123"
    
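You can check that signing succeeded before sending anything back. This sketch assumes the ca-cert and the three signed certificates from the previous step are in the current directory:

```shell
# Each signed certificate should validate against the CA certificate
openssl verify -CAfile ca-cert wn0-cert-signed wn1-cert-signed wn2-cert-signed
```

Each file should be reported as OK; anything else indicates the request was signed with a different CA key.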
  6. Send the signed certificates back to the worker nodes from the CA (headnode0).

    scp wn0-cert-signed sshuser@WorkerNode0_Name:~/ssl/cert-signed
    scp wn1-cert-signed sshuser@WorkerNode1_Name:~/ssl/cert-signed
    scp wn2-cert-signed sshuser@WorkerNode2_Name:~/ssl/cert-signed
    
  7. Send the public certificate of the CA to each worker node.

    scp ca-cert sshuser@WorkerNode0_Name:~/ssl/ca-cert
    scp ca-cert sshuser@WorkerNode1_Name:~/ssl/ca-cert
    scp ca-cert sshuser@WorkerNode2_Name:~/ssl/ca-cert
    
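The per-node copies in steps 6 and 7 can also be expressed as a loop over the worker nodes. The sketch below only echoes the commands (a dry run) so you can review the expansion; drop the echo to actually run the transfers, and note that WorkerNodeN_Name is the same placeholder used above:

```shell
# Dry run: print the scp commands from steps 6 and 7 for each worker node
for i in 0 1 2; do
  echo scp "wn${i}-cert-signed" "sshuser@WorkerNode${i}_Name:~/ssl/cert-signed"
  echo scp ca-cert "sshuser@WorkerNode${i}_Name:~/ssl/ca-cert"
done
```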
  8. On each worker node, add the CA's public certificate to the truststore and keystore. Then add the worker node's own signed certificate to the keystore.

    keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert -storepass "MyServerPassword123" -keypass "MyServerPassword123" -noprompt
    keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert -storepass "MyServerPassword123" -keypass "MyServerPassword123" -noprompt
    keytool -keystore kafka.server.keystore.jks -import -file cert-signed -storepass "MyServerPassword123" -keypass "MyServerPassword123" -noprompt
    
    

Update Kafka configuration to use SSL and restart brokers

You have now set up each Kafka broker with a keystore and truststore, and imported the correct certificates. Next, modify related Kafka configuration properties using Ambari and then restart the Kafka brokers.

To complete the configuration modification, do the following steps:

  1. Sign in to the Azure portal and select your Azure HDInsight Apache Kafka cluster.

  2. Go to the Ambari UI by clicking Ambari home under Cluster dashboards.

  3. Under Kafka Broker, set the listeners property to PLAINTEXT://localhost:9092,SSL://localhost:9093

  4. Under Advanced kafka-broker, set the security.inter.broker.protocol property to SSL

    (Screenshot: editing the Kafka SSL configuration properties in Ambari)

  5. Under Custom kafka-broker, set the ssl.client.auth property to required. This step is only required if you are setting up authentication and encryption.

    (Screenshot: editing the Kafka SSL configuration properties in Ambari)

  6. Add new configuration properties to the server.properties file.

    # Configure Kafka to advertise IP addresses instead of FQDN
    IP_ADDRESS=$(hostname -i)
    echo advertised.listeners=$IP_ADDRESS
    # Remove any existing advertised.listeners entries from server.properties
    sed -i.bak -e '/advertised/{/advertised@/!d;}' /usr/hdp/current/kafka-broker/conf/server.properties
    echo "advertised.listeners=PLAINTEXT://$IP_ADDRESS:9092,SSL://$IP_ADDRESS:9093" >> /usr/hdp/current/kafka-broker/conf/server.properties
    echo "ssl.keystore.location=/home/sshuser/ssl/kafka.server.keystore.jks" >> /usr/hdp/current/kafka-broker/conf/server.properties
    echo "ssl.keystore.password=MyServerPassword123" >> /usr/hdp/current/kafka-broker/conf/server.properties
    echo "ssl.key.password=MyServerPassword123" >> /usr/hdp/current/kafka-broker/conf/server.properties
    echo "ssl.truststore.location=/home/sshuser/ssl/kafka.server.truststore.jks" >> /usr/hdp/current/kafka-broker/conf/server.properties
    echo "ssl.truststore.password=MyServerPassword123" >> /usr/hdp/current/kafka-broker/conf/server.properties
    
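A quick way to confirm the appended lines actually landed in the file is to count them. This is a sketch; adjust the path if your Kafka installation differs:

```shell
# Path to the broker configuration (adjust if your install differs)
PROPS=/usr/hdp/current/kafka-broker/conf/server.properties

# Count the lines appended in the previous step; expect 6
grep -cE '^(advertised\.listeners|ssl\.)' "$PROPS"
```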
  7. Go to the Ambari configuration UI and verify that the new properties show up under Advanced kafka-env and the kafka-env template property.

    (Screenshot: editing the kafka-env template property in Ambari)

  8. Restart all Kafka brokers.

  9. Start the admin client with producer and consumer options to verify that both producers and consumers are working on port 9093.

Client setup (without authentication)

If you don't need authentication, the summary of the steps to set up only SSL encryption is:

  1. Sign in to the CA (active head node).
  2. Copy the CA cert to the client machine from the CA machine (wn0).
  3. Sign in to the client machine (hn1) and navigate to the ~/ssl folder.
  4. Import the CA cert to the truststore.
  5. Import the CA cert to the keystore.

These steps are detailed in the following code snippets.

  1. Sign in to the CA node.

    ssh sshuser@HeadNode0_Name
    cd ssl
    
  2. Copy the ca-cert to the client machine.

    scp ca-cert sshuser@HeadNode1_Name:~/ssl/ca-cert
    
  3. Sign in to the client machine (standby head node).

    ssh sshuser@HeadNode1_Name
    cd ssl
    
  4. Import the CA certificate to the truststore.

    keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt
    
  5. Import the CA certificate to the keystore.

    keytool -keystore kafka.client.keystore.jks -alias CARoot -import -file ca-cert -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt
    
  6. Create the file client-ssl-auth.properties. It should have the following lines:

    security.protocol=SSL
    ssl.truststore.location=/home/sshuser/ssl/kafka.client.truststore.jks
    ssl.truststore.password=MyClientPassword123
    

Client setup (with authentication)

Note

The following steps are required only if you are setting up both SSL encryption and authentication. If you are only setting up encryption, then see Client setup (without authentication).

The following four steps summarize the tasks needed to complete the client setup:

  1. Sign in to the client machine (standby head node).
  2. Create a Java keystore and get a signed certificate for the broker. Then copy the certificate to the VM where the CA is running.
  3. Switch to the CA machine (active head node) to sign the client certificate.
  4. Go to the client machine (standby head node) and navigate to the ~/ssl folder. Copy the signed cert to the client machine.

The details of each step are given below.

  1. Sign in to the client machine (standby head node).

    ssh sshuser@HeadNode1_Name
    
  2. Remove any existing ssl directory.

    rm -R ~/ssl
    mkdir ssl
    cd ssl
    
  3. Create a Java keystore and create a certificate signing request.

    keytool -genkey -keystore kafka.client.keystore.jks -validity 365 -storepass "MyClientPassword123" -keypass "MyClientPassword123" -dname "CN=HEADNODE1_FQDN" -storetype pkcs12
    
    keytool -keystore kafka.client.keystore.jks -certreq -file client-cert-sign-request -storepass "MyClientPassword123" -keypass "MyClientPassword123"
    
  4. Copy the certificate signing request to the CA.

    scp client-cert-sign-request sshuser@HeadNode0_Name:~/ssl/client-cert-sign-request
    
  5. Switch to the CA machine (active head node) and sign the client certificate.

    ssh sshuser@HeadNode0_Name
    cd ssl
    openssl x509 -req -CA ca-cert -CAkey ca-key -in ~/ssl/client-cert-sign-request -out ~/ssl/client-cert-signed -days 365 -CAcreateserial -passin pass:MyClientPassword123
    
  6. Copy the signed client cert from the CA (active head node) to the client machine.

    scp client-cert-signed sshuser@HeadNode1_Name:~/ssl/client-signed-cert
    
  7. Copy the ca-cert to the client machine.

    scp ca-cert sshuser@HeadNode1_Name:~/ssl/ca-cert
    
  8. Create the client store with the signed cert, and import the ca-cert into the keystore and truststore:

    keytool -keystore kafka.client.keystore.jks -import -file client-cert-signed -storepass MyClientPassword123 -keypass MyClientPassword123 -noprompt
    
    keytool -keystore kafka.client.keystore.jks -alias CARoot -import -file ca-cert -storepass MyClientPassword123 -keypass MyClientPassword123 -noprompt
    
    keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert -storepass MyClientPassword123 -keypass MyClientPassword123 -noprompt
    
  9. Create the file client-ssl-auth.properties. It should have the following lines:

    security.protocol=SSL
    ssl.truststore.location=/home/sshuser/ssl/kafka.client.truststore.jks
    ssl.truststore.password=MyClientPassword123
    ssl.keystore.location=/home/sshuser/ssl/kafka.client.keystore.jks
    ssl.keystore.password=MyClientPassword123
    ssl.key.password=MyClientPassword123
    
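A simple sanity check before running a client is to confirm that the files the *.location properties point at actually exist. This sketch assumes the properties file and keystores created in the steps above:

```shell
# Check that every *.location file referenced by the client config exists
while IFS='=' read -r key value; do
  case "$key" in
    *.location) [ -f "$value" ] && echo "OK: $value" || echo "MISSING: $value" ;;
  esac
done < ~/ssl/client-ssl-auth.properties
```

Any MISSING line means the client would fail at startup when it tries to open that store.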

Verification

Note

If HDInsight 4.0 and Kafka 2.1 are installed, you can use the console producer and consumer to verify your setup. If not, run the Kafka producer on port 9092 and send messages to the topic, and then use the Kafka consumer on port 9093, which uses SSL.

Kafka 2.1 or above

  1. Create a topic if it doesn't exist already.

    /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper <ZOOKEEPER_NODE>:2181 --create --topic topic1 --partitions 2 --replication-factor 2
    
  2. Start the console producer and provide the path to client-ssl-auth.properties as a configuration file for the producer.

    /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <FQDN_WORKER_NODE>:9093 --topic topic1 --producer.config ~/ssl/client-ssl-auth.properties
    
  3. Open another ssh connection to the client machine and start the console consumer, providing the path to client-ssl-auth.properties as a configuration file for the consumer.

    /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server <FQDN_WORKER_NODE>:9093 --topic topic1 --consumer.config ~/ssl/client-ssl-auth.properties --from-beginning
    

Kafka 1.1

  1. Create a topic if it doesn't exist already.

    /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper <ZOOKEEPER_NODE_0>:2181 --create --topic topic1 --partitions 2 --replication-factor 2
    
  2. Start the console producer on port 9092. The console producer does not work with SSL, so no SSL configuration file is passed here.

    /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <FQDN_WORKER_NODE>:9092 --topic topic1 
    
  3. Open another ssh connection to the client machine and start the console consumer, providing the path to client-ssl-auth.properties as a configuration file for the consumer.

    /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server <FQDN_WORKER_NODE>:9093 --topic topic1 --consumer.config ~/ssl/client-ssl-auth.properties --from-beginning
    

Next steps