Installing Apache Kafka on Ubuntu 18.04

This article explains how to install the Apache Kafka message broker on a server. Ubuntu Server 18.04 is used as the operating system.

A bit of theory

Apache Kafka is more than a simple message queue: it handles large volumes of data and delivers messages to each consumer according to its subscriptions. The service works as follows.

In any organization there are many applications and programs that generate messages and alerts. On the other side there are systems that need that information — for example, a monitoring system that tracks user authorization in a 1C database. Apache Kafka provides a reliable link between the two sides and guarantees message delivery. An added advantage is that the stream of messages is updated automatically.

The system scales horizontally and offers a higher level of fault tolerance than comparable products. It is used by large companies such as Netflix, Twitter, and others.

Preliminary preparation

Apache Kafka is written in Java, so the server needs a Java runtime. First, install the default-jre package:

sudo apt update

sudo apt install default-jre

Next, create a user named kafka that will run the broker, and set its password:

sudo useradd kafka -m

sudo passwd kafka

Now let’s add the new account to the sudo group to grant it privileged access:

sudo adduser kafka sudo

Log in as this user and proceed with the installation of the product:

su -l kafka

Distribution download

First, create a directory in the kafka user’s home directory to store the downloaded files:

mkdir ~/Downloads

Download the binary distribution from the official website of the product (note that the source archive, kafka-2.3.0-src.tgz, would have to be compiled first, so use the pre-built package):

curl "http://www-eu.apache.org/dist/kafka/2.3.0/kafka_2.12-2.3.0.tgz" -o ~/Downloads/kafka.tgz

Important! As of August 22, 2019, the current release is 2.3.0.

Next, create a directory for the unpacked files, change into it, and extract the archive:

mkdir ~/kafka

cd ~/kafka

tar -xvzf ~/Downloads/kafka.tgz --strip 1
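The --strip 1 flag (GNU tar’s abbreviation of --strip-components=1) removes the archive’s top-level directory during extraction, so the files land directly in ~/kafka instead of a nested versioned folder. A minimal sketch of the behavior, using a throwaway archive under hypothetical /tmp paths:

```shell
# Build a tiny archive whose entries all live under a top-level "inner/" directory.
mkdir -p /tmp/strip-demo/inner
echo "hello" > /tmp/strip-demo/inner/file.txt
tar -czf /tmp/strip-demo.tgz -C /tmp/strip-demo inner

# Extract with --strip-components=1: the leading "inner/" is dropped from every path.
mkdir -p /tmp/strip-out
tar -xzf /tmp/strip-demo.tgz -C /tmp/strip-out --strip-components=1
cat /tmp/strip-out/file.txt
```

Without the flag, the file would be extracted to /tmp/strip-out/inner/file.txt.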

Now let’s move on to configuring the solution.

Setting up Apache Kafka

The configuration file that ships with the broker is located at ~/kafka/config/server.properties. To adapt Kafka to the required tasks, open it with a text editor and make the appropriate changes.

For example, let’s enable the option to delete topics. To do this, open the file in the vi editor and add the following line:

delete.topic.enable = true
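A few other settings in server.properties are worth knowing about. The fragment below shows common options with their default values, purely as an illustration — none of them need to be changed for this walkthrough:

```properties
# Unique numeric ID of this broker within the cluster
broker.id=0

# Directory where Kafka stores its log segments (the message data)
log.dirs=/tmp/kafka-logs

# How long messages are retained before deletion, in hours (168 = 7 days)
log.retention.hours=168

# Connection string for the ZooKeeper ensemble
zookeeper.connect=localhost:2181
```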

Initial start of Kafka on the server

Let’s configure the message broker to run as a system service. Kafka depends on the ZooKeeper service, which keeps track of the broker’s state and configuration.

Let’s create a systemd unit file for ZooKeeper:

sudo vi /etc/systemd/system/zookeeper.service

Let’s add the following information to it:

[Unit]
Requires=network.target remote-fs.target
After=network.target remote-fs.target

[Service]
Type=simple
User=kafka
ExecStart=/home/kafka/kafka/bin/zookeeper-server-start.sh /home/kafka/kafka/config/zookeeper.properties
ExecStop=/home/kafka/kafka/bin/zookeeper-server-stop.sh
Restart=on-abnormal

[Install]
WantedBy=multi-user.target

The file consists of several sections. [Unit] declares that ZooKeeper requires the network and file system to be available, the Restart=on-abnormal line in [Service] restarts the service automatically after an abnormal exit, and [Install] attaches it to the multi-user.target so that it can be enabled at boot.

Now let’s create a systemd template for Apache Kafka.

sudo nano /etc/systemd/system/kafka.service

Let’s also add the following lines to it:

[Unit]
Requires=zookeeper.service
After=zookeeper.service

[Service]
Type=simple
User=kafka
ExecStart=/bin/sh -c '/home/kafka/kafka/bin/kafka-server-start.sh /home/kafka/kafka/config/server.properties > /home/kafka/kafka/kafka.log 2>&1'
ExecStop=/home/kafka/kafka/bin/kafka-server-stop.sh
Restart=on-abnormal

[Install]
WantedBy=multi-user.target

The sections of this file play the same roles as in the ZooKeeper unit.

After creating the unit files, reload systemd and start the kafka service (ZooKeeper is started automatically as a dependency):

sudo systemctl daemon-reload

sudo systemctl start kafka

If the service should also start together with the server at boot, enable it:

sudo systemctl enable kafka

Let’s test the functionality of the message broker.

Testing

To test, let’s publish the phrase “Good Morning” and then read it back. Publishing a message requires a producer (the one who publishes) and a consumer (the one who reads).

1. Create a topic to publish to. Let’s name it Demo:

~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic Demo

2. Using the broker’s built-in kafka-console-producer.sh script, create a producer and publish a message to the Demo topic:

echo "Good Morning" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic Demo > /dev/null

3. Now let’s create a consumer that will read the message from the topic. To do this, we again use the built-in script kafka-console-consumer.sh:

~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic Demo --from-beginning

4. As a result, “Good Morning” will appear in the terminal. To stop the test, press Ctrl+C.
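The producer command in step 2 relies on standard shell plumbing: the pipe feeds the message into the script’s standard input, while the redirect controls where its standard output goes (here, discarded to /dev/null). The same mechanics can be sketched with a stand-in command — tr below is only a placeholder for the producer script, and /tmp/demo-out.log is a hypothetical path used for this illustration:

```shell
# The pipe supplies stdin to the "producer" (tr); the redirect captures its stdout.
echo "Good Morning" | tr 'a-z' 'A-Z' > /tmp/demo-out.log
cat /tmp/demo-out.log
```

With > /dev/null in place of the file, the output is simply thrown away while the message is still consumed from stdin.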

 
