Once you have Kafka up and running and are writing events to a topic, you might want to do some preprocessing. Kafka Streams is the tool of choice here. In the following I’ll give an example that is closely based on the official documentation and an example from the book “Kafka Streams in Action” by Bill Bejeck.
Install Maven:
sudo apt install maven
mvn archetype:generate \
    -DarchetypeGroupId=org.apache.kafka \
    -DarchetypeArtifactId=streams-quickstart-java \
    -DarchetypeVersion=2.7.0 \
    -DgroupId=streams.examples \
    -DartifactId=streams.examples \
    -Dversion=0.1 \
    -Dpackage=myapps
mvn clean install
mvn exec:java -Dexec.mainClass=myapps.Filter
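The article doesn’t show the `myapps.Filter` class itself, so here is a minimal sketch of what it might look like. The source topic ("test"), the `mapValues` step, and the sink topic ("filteredIP") are taken from the printed topology; the String serdes, the application id, and the body of the `mapValues` lambda are assumptions (judging by the output topic name, the real class presumably extracts or filters IP addresses).

```java
package myapps;

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;

public class Filter {

    // Build the topology: source topic "test" -> mapValues -> sink topic "filteredIP".
    public static Topology buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("test")
               // Placeholder transformation: the real class presumably
               // filters/extracts IP addresses from each record value.
               .mapValues(value -> value.trim())
               .to("filteredIP");
        return builder.build();
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed configuration; adjust application id and bootstrap servers as needed.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-filter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        Topology topology = buildTopology();
        // Printing the description is what produces the topology output in the shell.
        System.out.println(topology.describe());

        KafkaStreams streams = new KafkaStreams(topology, props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

A class like this would live at src/main/java/myapps/Filter.java in the generated project and be run with the `mvn exec:java` command above.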
Since the topology gets printed, you should see output like this in the shell:
Topologies:
   Sub-topology: 0
    Source: KSTREAM-SOURCE-0000000000 (topics: [test])
      --> KSTREAM-MAPVALUES-0000000001
    Processor: KSTREAM-MAPVALUES-0000000001 (stores: [])
      --> KSTREAM-SINK-0000000002
      <-- KSTREAM-SOURCE-0000000000
    Sink: KSTREAM-SINK-0000000002 (topic: filteredIP)
      <-- KSTREAM-MAPVALUES-0000000001
This means the program reads from the topic “test” and writes to the topic “filteredIP”. You can check this by attaching a Kafka console consumer to the topic “filteredIP”:
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic filteredIP --from-beginning