Forwarding logs with Elasticsearch + Logstash + Kafka + Filebeat, and viewing them in Kibana
Setting up Elasticsearch
- Install apt-transport-https.
apt-get install apt-transport-https
- Add the Elasticsearch PGP key.
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
apt-get update
apt-get install elasticsearch
- Installation prints the following information. What we will use this time are the generated password and the token-generation commands.
--------------------------- Security autoconfiguration information ------------------------------
Authentication and authorization are enabled.
TLS for the transport and HTTP layers is enabled and configured.
The generated password for the elastic built-in superuser is : 9x5Yi6NsOXW+iwUOkr21
If this node should join an existing cluster, you can reconfigure this with
'/usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <token-here>'
after creating an enrollment token on your existing cluster.
You can complete the following actions at any time:
Reset the password of the elastic built-in superuser with
'/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic'.
Generate an enrollment token for Kibana instances with
'/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana'.
Generate an enrollment token for Elasticsearch nodes with
'/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node'.
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
sudo systemctl start elasticsearch.service
root@ttmp-virtual-machine:~#
root@ttmp-virtual-machine:~# systemctl enable elasticsearch.service
Created symlink /etc/systemd/system/multi-user.target.wants/elasticsearch.service → /lib/systemd/system/elasticsearch.service.
root@ttmp-virtual-machine:~#
- Check that Elasticsearch responds (enter the elastic password generated at install time):
root@ttmp-virtual-machine:~/logstash/logstash-8.12.0# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200
Enter host password for user 'elastic':
{
"name" : "ttmp-virtual-machine",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "-3HRShG-RP2ZJcQ2e-tNdw",
"version" : {
"number" : "8.12.0",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "1665f706fd9354802c02146c1e6b5c0fbcddfbc9",
"build_date" : "2024-01-11T10:05:27.953830042Z",
"build_snapshot" : false,
"lucene_version" : "9.9.1",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}
root@ttmp-virtual-machine:~/logstash/logstash-8.12.0#
- To create an index, run the following:
root@ttmp-virtual-machine:~# export index_name="test"
root@ttmp-virtual-machine:~# curl -X PUT --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic "https://localhost:9200/${index_name}?pretty"
Enter host password for user 'elastic':
{
"acknowledged" : true,
"shards_acknowledged" : true,
"index" : "test"
}
root@ttmp-virtual-machine:~#
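Once the index exists, documents can be added and searched over the same HTTPS endpoint. A minimal sketch, assuming the cluster and certificate path above are in place (the field names in the JSON body are made up for illustration):

```shell
# Index one document into the "test" index (hypothetical fields)
curl -X POST --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic \
  -H 'Content-Type: application/json' \
  "https://localhost:9200/test/_doc?pretty" \
  -d '{"Level": "INFO", "ErrorMessage": "hello"}'

# Search the index to confirm the document was stored
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic \
  "https://localhost:9200/test/_search?pretty"
```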
Setting up Kibana
apt install kibana
systemctl daemon-reload
systemctl enable --now kibana.service
http://${IP}:5601/
Access this URL; if a screen like the following appears, you are all set.
- Generate an enrollment token. Paste the string printed here into the screen shown earlier and click Configure Elastic.
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
- You will then be asked for a numeric verification code, and told to run the following command. Run it to obtain the code, then enter the code on the screen.
/usr/share/kibana/bin/kibana-verification-code
- The Kibana screen is now displayed.
Setting up Logstash
wget https://artifacts.elastic.co/downloads/logstash/logstash-8.12.0-linux-x86_64.tar.gz
tar -xzf logstash-8.12.0-linux-x86_64.tar.gz
cd logstash-8.12.0
- Edit the configuration file (create the target index beforehand). For example, to ingest CSV files, use something like this:
root@ttmp-virtual-machine:~/logstash/logstash-8.12.0# cat config/logstash.conf
input {
  file {
    mode => "tail"
    path => ["/root/logstash/logstash-8.12.0/log/*_log.csv"]
    sincedb_path => "/root/logstash/logstash-8.12.0/log/sincedb"
    start_position => "beginning"
    codec => plain {
      charset => "UTF-8"
    }
  }
}

filter {
  csv {
    columns => ["Date", "Level", "ErrorMessage", "UserId"]
    convert => {
      "UserId" => "integer"
    }
    skip_header => true
  }
  date {
    match => ["Date", "yyyy-MM-dd HH:mm:ss"]
  }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "log"
    ssl_certificate_verification => false
    user => "elastic"
    password => "elastic"
  }
  stdout {
    codec => rubydebug
  }
}
root@ttmp-virtual-machine:~/logstash/logstash-8.12.0#
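The csv filter above expects four comma-separated columns plus a header row. A quick way to generate a matching sample file for testing (the directory and contents here are made up; note the input only watches files matching *_log.csv):

```shell
# Create a sample CSV matching the pipeline's columns: Date, Level, ErrorMessage, UserId
# LOG_DIR stands in for /root/logstash/logstash-8.12.0/log
LOG_DIR="${LOG_DIR:-/tmp/logstash-demo/log}"
mkdir -p "$LOG_DIR"
cat > "$LOG_DIR/app_log.csv" <<'EOF'
Date,Level,ErrorMessage,UserId
2024-01-30 12:00:00,ERROR,connection refused,1001
2024-01-30 12:00:05,INFO,login ok,1002
EOF
# Show what Logstash will tail
cat "$LOG_DIR/app_log.csv"
```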
- To integrate with Kafka, use a config like the following. Kafka is not set up yet, so that is as far as we go for now.
root@ttmp-virtual-machine:~/logstash/logstash-8.12.0# cat config/kafka.conf
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["quickstart-events"]
  }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    index => "log2"
    ssl_certificate_verification => false
    user => "elastic"
    password => "elastic"
  }
  stdout {
    codec => rubydebug
  }
}
root@ttmp-virtual-machine:~/logstash/logstash-8.12.0#
Setting up Kafka
wget https://downloads.apache.org/kafka/3.5.1/kafka_2.12-3.5.1.tgz
tar xzf kafka_2.12-3.5.1.tgz
mv kafka_2.12-3.5.1 /usr/local/kafka
apt install openjdk-8-jre-headless
sudo systemctl restart zookeeper
sudo systemctl restart kafka
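Note that the Kafka tarball does not ship systemd unit files, so the systemctl commands above only work after you create units yourself. A minimal sketch, assuming the /usr/local/kafka layout used here (the unit contents are illustrative, not official):

```ini
# /etc/systemd/system/zookeeper.service (illustrative minimal unit)
[Unit]
Description=Apache ZooKeeper
After=network.target

[Service]
ExecStart=/usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties
ExecStop=/usr/local/kafka/bin/zookeeper-server-stop.sh
Restart=on-abnormal

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/kafka.service (illustrative minimal unit)
[Unit]
Description=Apache Kafka
After=zookeeper.service
Requires=zookeeper.service

[Service]
ExecStart=/usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties
ExecStop=/usr/local/kafka/bin/kafka-server-stop.sh
Restart=on-abnormal

[Install]
WantedBy=multi-user.target
```

Run `systemctl daemon-reload` after creating the files.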
- Let's verify it works. Move into the Kafka directory. (These steps follow the official Kafka quickstart.)
cd /usr/local/kafka
bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092
bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092
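The console producer reads from stdin, so a test message can also be piped in non-interactively; the consumer should then print it (the message text is arbitrary):

```shell
echo "hello kafka" | bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
```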
Setting up Filebeat
apt -y install filebeat
/etc/filebeat/filebeat.yml
Configure this file, specifying the Kafka address and related settings.
root@ttmp-virtual-machine:~/logstash/logstash-8.12.0# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: filestream
  id: my-filestream-id
  enabled: true
  paths:
    - /var/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:

output.kafka:
  hosts: ["localhost:9092"]
  topic: "quickstart-events"
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
root@ttmp-virtual-machine:~/logstash/logstash-8.12.0#
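Before starting Filebeat, the configuration can be sanity-checked with Filebeat's built-in test subcommands (requires Filebeat installed; `test output` also needs the Kafka broker reachable):

```shell
# Validate configuration syntax
filebeat test config -c /etc/filebeat/filebeat.yml
# Verify connectivity to the configured Kafka output
filebeat test output -c /etc/filebeat/filebeat.yml
```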
/etc/filebeat/filebeat.reference.yml
Configure this file as well. It is quite long, so only an excerpt is shown.
######################## Filebeat Configuration ############################

# This file is a full configuration example documenting all non-deprecated
# options in comments. For a shorter configuration example, that contains only
# the most common options, please see filebeat.yml in the same directory.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html


#========================== Modules configuration =============================
filebeat.modules:

#-------------------------------- System Module --------------------------------
- module: system
  # Syslog
  syslog:
    enabled: true
Running everything
- Elasticsearch is already running as a service, and so is Kafka, so all that remains is to start Logstash and Filebeat.
- Start Logstash:
bin/logstash -f config/kafka.conf
- Start Filebeat:
systemctl enable --now filebeat
- Logstash prints the ingested data:
{
"event" => {
"original" => "{\"@timestamp\":\"2024-01-30T13:30:04.537Z\",\"@metadata\":{\"beat\":\"filebeat\",\"type\":\"_doc\",\"version\":\"8.12.0\"},\"log\":{\"offset\":23898,\"file\":{\"device_id\":\"2051\",\"inode\":\"538374\",\"path\":\"/var/log/auth.log\"}},\"message\":\"Jan 30 22:30:01 ttmp-virtual-machine CRON[5123]: pam_unix(cron:session): session closed for user root\",\"input\":{\"type\":\"filestream\"},\"ecs\":{\"version\":\"8.0.0\"},\"host\":{\"containerized\":false,\"ip\":[\"192.168.11.11\",\"fe80::e996:c5c2:2a4d:604c\"],\"mac\":[\"00-0C-29-2A-D6-DD\"],\"name\":\"ttmp-virtual-machine\",\"hostname\":\"ttmp-virtual-machine\",\"architecture\":\"x86_64\",\"os\":{\"version\":\"22.04.3 LTS (Jammy Jellyfish)\",\"family\":\"debian\",\"name\":\"Ubuntu\",\"kernel\":\"6.5.0-15-generic\",\"codename\":\"jammy\",\"type\":\"linux\",\"platform\":\"ubuntu\"},\"id\":\"57212799f4764d7694dd1596f388870b\"},\"agent\":{\"version\":\"8.12.0\",\"ephemeral_id\":\"282a73ce-7ae7-4b70-a951-6afb4d2a673f\",\"id\":\"8494c139-c901-4048-bd69-3ca6a2c519fe\",\"name\":\"ttmp-virtual-machine\",\"type\":\"filebeat\"}}"
},
"@timestamp" => 2024-01-30T13:30:16.036577925Z,
"@version" => "1",
"message" => "{\"@timestamp\":\"2024-01-30T13:30:04.537Z\",\"@metadata\":{\"beat\":\"filebeat\",\"type\":\"_doc\",\"version\":\"8.12.0\"},\"log\":{\"offset\":23898,\"file\":{\"device_id\":\"2051\",\"inode\":\"538374\",\"path\":\"/var/log/auth.log\"}},\"message\":\"Jan 30 22:30:01 ttmp-virtual-machine CRON[5123]: pam_unix(cron:session): session closed for user root\",\"input\":{\"type\":\"filestream\"},\"ecs\":{\"version\":\"8.0.0\"},\"host\":{\"containerized\":false,\"ip\":[\"192.168.11.11\",\"fe80::e996:c5c2:2a4d:604c\"],\"mac\":[\"00-0C-29-2A-D6-DD\"],\"name\":\"ttmp-virtual-machine\",\"hostname\":\"ttmp-virtual-machine\",\"architecture\":\"x86_64\",\"os\":{\"version\":\"22.04.3 LTS (Jammy Jellyfish)\",\"family\":\"debian\",\"name\":\"Ubuntu\",\"kernel\":\"6.5.0-15-generic\",\"codename\":\"jammy\",\"type\":\"linux\",\"platform\":\"ubuntu\"},\"id\":\"57212799f4764d7694dd1596f388870b\"},\"agent\":{\"version\":\"8.12.0\",\"ephemeral_id\":\"282a73ce-7ae7-4b70-a951-6afb4d2a673f\",\"id\":\"8494c139-c901-4048-bd69-3ca6a2c519fe\",\"name\":\"ttmp-virtual-machine\",\"type\":\"filebeat\"}}"
}
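Besides Kibana, it can also be confirmed from the command line that documents reached the log2 index defined in kafka.conf (requires the running cluster and the elastic password):

```shell
# Count documents ingested into the log2 index
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic \
  "https://localhost:9200/log2/_count?pretty"
```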
- Check in Kibana whether the data has been ingested.
- We can confirm that the data has been ingested.