Configuring Apache Kafka with Hyperledger Fabric 1.2 on Multiple Hosts

Mallikarjun Sarvepalli
Jan 10, 2019

Introduction

In this article, I am going to integrate Apache Kafka with Hyperledger Fabric 1.2 and use Kafka Manager to monitor the installed Kafka brokers and topics.

Topology

The topology below is used as a reference throughout this document; the associated scripts, however, can be configured to work with any topology.

  • 2 organizations, each with 2 peers
  • 1 Orderer
  • 4 Kafka brokers
  • 3 Zookeepers
  • Kafka-Manager

Deployment Topology

Host-1

  • Orderer0 (orderer0.example.com) - port 7050
  • Zookeeper0 (zookeeper0.example.com) - ports 2181, 2888, 3888
  • Kafka0 (kafka0.example.com) - port 9092

Host-2

  • CA1 (ca.org1.example.com) - port 7054
  • Org1 Peer0 (peer0.org1.example.com) - ports 7051, 7053
  • Org1 Peer1 (peer1.org1.example.com) - ports 7053, 8053
  • Kafka1 (kafka1.example.com) - port 9092
  • Zookeeper1 (zookeeper1.example.com) - ports 2181, 2888, 3888

Host-3

  • CA2 (ca.org2.example.com) - port 7054
  • Org2 Peer0 (peer0.org2.example.com) - ports 7051, 7053
  • Org2 Peer1 (peer1.org2.example.com) - ports 7053, 8053
  • Kafka2 (kafka2.example.com) - port 9092
  • Zookeeper2 (zookeeper2.example.com) - ports 2181, 2888, 3888

Host-4

  • Kafka3 (kafka3.example.com) - port 9092
  • Kafka-Manager - port 9000

Make sure the ports listed above are not blocked by a firewall. Also open ports 2377, 7946, and 4789 for Docker Swarm.
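For example, if you use ufw on Ubuntu, the swarm ports can be opened as follows (adapt this to whatever firewall you actually run; the Fabric and Kafka ports above are opened the same way):

sudo ufw allow 2377/tcp   # swarm cluster management
sudo ufw allow 7946/tcp   # node discovery
sudo ufw allow 7946/udp   # node discovery
sudo ufw allow 4789/udp   # overlay (VXLAN) traffic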

Prerequisites

Follow this link to install all the dependencies on each host.

List of components that I used (a quick version check is shown after the list)

  • Ubuntu 16.04
  • Fabric 1.2
  • Go 1.9.3 (install on all hosts)
  • Docker 18.03.1-ce (install on all hosts)
  • Docker Compose 1.18.0 (install on all hosts)
  • Node v8.11.3 (install on all hosts)
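A quick sanity check on each host, to confirm the versions listed above are actually installed:

go version
docker --version
docker-compose --version
node --version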

Fabric 1.2 Setup with Apache Kafka

Execute steps 1-3 below on all hosts (host-1, 2, 3, and 4).

1) Clone Hyperledger fabric-samples

2) Go to the fabric-samples directory

3) Clone the Fabric Multi Network source code from

https://github.com/mallikprojects/fabric-multi-network.git

git clone -b release-1.2 https://github.com/hyperledger/fabric-samples
cd fabric-samples
curl -sSL http://bit.ly/2ysbOFE | bash -s 1.2.0
git clone -b fabric-1.2-kafka-couch-support https://github.com/mallikprojects/fabric-multi-network.git
cd fabric-multi-network

4) Configure the environment variables in bymn.sh and ensure bymn.sh is identical across all hosts:

  • export ORDERER0_HOSTNAME=<host name of host-1>
  • export ORG1_HOSTNAME=<host name of host-2>
  • export ORG2_HOSTNAME=<host name of host-3>
  • export SWARM_NETWORK="fabric" (change this only if you want to create the swarm network under a different name)
  • export DOCKER_STACK="fabric" (change this only if you want to deploy the stack under a different name)
  • export KAFKA0_HOSTNAME=<host name of host-1>
  • export KAFKA1_HOSTNAME=<host name of host-2>
  • export KAFKA2_HOSTNAME=<host name of host-3>
  • export KAFKA3_HOSTNAME=<host name of host-4>
  • export ZK0_HOSTNAME=<host name of host-1>
  • export ZK1_HOSTNAME=<host name of host-2>
  • export ZK2_HOSTNAME=<host name of host-3>

To get the hostname of each machine, run the following command (an example set of exports follows):

hostname
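For illustration only, with hypothetical machine hostnames fabric-host1 through fabric-host4, the exports in bymn.sh would end up looking like this:

export ORDERER0_HOSTNAME=fabric-host1
export ORG1_HOSTNAME=fabric-host2
export ORG2_HOSTNAME=fabric-host3
export SWARM_NETWORK="fabric"
export DOCKER_STACK="fabric"
export KAFKA0_HOSTNAME=fabric-host1
export KAFKA1_HOSTNAME=fabric-host2
export KAFKA2_HOSTNAME=fabric-host3
export KAFKA3_HOSTNAME=fabric-host4
export ZK0_HOSTNAME=fabric-host1
export ZK1_HOSTNAME=fabric-host2
export ZK2_HOSTNAME=fabric-host3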

5) Fabric configuration for Apache Kafka

Set OrdererType to kafka in configtx.yaml:

OrdererType: kafka

Update configtx.yaml with the Kafka brokers:

Brokers:
  - kafka0.example.com:9092
  - kafka1.example.com:9092
  - kafka2.example.com:9092
  - kafka3.example.com:9092
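Put together, the Kafka-related part of the Orderer section in configtx.yaml ends up looking roughly like this (the orderer address comes from the topology above; keep whatever BatchTimeout/BatchSize values your configtx.yaml already has):

Orderer:
    OrdererType: kafka
    Addresses:
        - orderer0.example.com:7050
    Kafka:
        Brokers:
            - kafka0.example.com:9092
            - kafka1.example.com:9092
            - kafka2.example.com:9092
            - kafka3.example.com:9092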

Rename kafka_scripts_bkup to kafka_scripts and zk_scripts_bkup to zk_scripts

mv kafka_scripts_bkup kafka_scripts
mv zk_scripts_bkup zk_scripts

Update the Kafka broker list whenever brokers are added or removed.

6) Generate crypto material and docker-compose templates, and copy them to all hosts

./bymn.sh generate crypto-config.yaml

# If you want to use CouchDB as the state database, use:
./bymn.sh generate -s couchdb crypto-config.yaml

Copy the generated crypto-config and channel-artifacts directories to all hosts.
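A minimal sketch of the copy step, assuming the other hosts are reachable as host2/host3/host4 (hypothetical names) and the repository sits at the same path on every machine:

scp -r crypto-config channel-artifacts user@host2:~/fabric-samples/fabric-multi-network/
scp -r crypto-config channel-artifacts user@host3:~/fabric-samples/fabric-multi-network/
scp -r crypto-config channel-artifacts user@host4:~/fabric-samples/fabric-multi-network/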

7) Create & set up Swarm

On host-1

docker swarm init --advertise-addr <host-1 IP address>
docker swarm join-token manager

The output of this command should be executed on host-2, host-3, and host-4 as follows:

docker swarm join --token SWMTKN-1-3anjn4oxwcn278hie3413zaakr4higjdqr2x89r5605p1dosui-a4u407pt6c5ta2ont7pqdnm 137.116.147.36:2377 --advertise-addr <host-2 IP address>
docker swarm join --token SWMTKN-1-3anjn4oxwcn278hie3413zaakr4higjdqr2x89r5605p1dosui-a4u407pt6c5ta2ont7pqdnm 137.116.147.36:2377 --advertise-addr <host-3 IP address>
docker swarm join --token SWMTKN-1-3anjn4oxwcn278hie3413zaakr4higjdqr2x89r5605p1dosui-a4u407pt6c5ta2ont7pqdnm 137.116.147.36:2377 --advertise-addr <host-4 IP address>

Replace the token with the one returned by the previous command on host-1.
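To confirm that all four hosts have joined the swarm, list the nodes from host-1:

docker node ls   # should show four nodes in Ready state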

8) Create an overlay network

On host-1

docker network create --attachable  --driver overlay fabric
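You can verify the overlay network was created and is attachable:

docker network ls --filter driver=overlay
docker network inspect fabric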

9) Start the Fabric network with Kafka

On host-1

./bymn.sh up

Check the logs and make sure every service is up (a quick check is shown below).
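One quick check from host-1, assuming the default stack name fabric set in bymn.sh:

docker stack services fabric            # every service should report REPLICAS 1/1
docker service logs <orderer service>   # look for Kafka connection errors in the orderer logs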

10) Test your network

To test the network, open the CLI on host-2:

docker exec -it <org2cli docker container id> bash
./scripts/script.sh

If the script completes with a success message in your console, the network is set up and running correctly.
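Beyond the scripted test, you can also query the ledger manually from the same CLI container. A hedged sketch, assuming script.sh was run with the fabric-samples defaults of channel mychannel and chaincode mycc:

peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'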

Install and configure Kafka Manager to manage the installed Kafka brokers

This section lists the steps required to install Kafka Manager on host-4. Kafka Manager uses port 9000; make sure this port is open.

Reference: https://github.com/yahoo/kafka-manager

On host-4, run the following commands:

1) cd ~
2) git clone https://github.com/yahoo/kafka-manager.git
3) cd kafka-manager
4) sudo apt-get update
5) sudo apt install default-jdk
6) vim conf/application.conf

Configure the Zookeeper settings in conf/application.conf:

kafka-manager.zkhosts="<ip address of zookeeper1>:2181,<ip address of zookeeper2>:2181,<ip address of zookeeper3>:2181"

Close the file, then build the distribution:

./sbt clean dist
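The build typically places the distributable zip under target/universal/ (the exact file name depends on the kafka-manager version); a minimal sketch of unpacking it:

unzip target/universal/kafka-manager-*.zip -d ~/
cd ~/kafka-manager-*/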

After extracting the produced zip file and changing the working directory to it, you can run the service like this:

bin/kafka-manager

This will start Kafka Manager on port 9000.
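If you need to point at a different config file or run on another port (9000 is the default), the usual Play framework flags can be passed:

bin/kafka-manager -Dconfig.file=conf/application.conf -Dhttp.port=9000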

This completes the setup of Kafka Manager

Open a browser, go to http://<host-4 IP address>:9000/, and add a cluster with <ip address of zookeeper1>:2181,<ip address of zookeeper2>:2181,<ip address of zookeeper3>:2181 as the Cluster Zookeeper Hosts.

On successful creation, you should be able to see the cluster details, including the Kafka topics for the channels created in Fabric.

Success! Your Kafka Manager is now configured against the Fabric network.
