Quickly Building the Centralized Logging Platform You May Need with ELK



       In the early days of a project everyone is rushing to ship, so logging usually gets little thought; the log volume is small anyway, and log4net is enough. As the number of applications grows, though, logs end up scattered across the logs folders of each server, which is genuinely inconvenient. At that point you might think of configuring a MySQL data source in log4net, but there is a pitfall here: as anyone familiar with log4net knows, writing to MySQL is gated by a batch threshold. For example, only once the batch cache holds 100 entries are they flushed to MySQL, which introduces a delay, and while the cache holds fewer than 100 entries you simply won't see the latest logs in MySQL. On top of that, a centralized MySQL means TCP transport, with the performance cost you would expect, and MySQL has no decent log UI, so you would have to build one yourself. So the search for a better solution continued, which brings us to the subject of this post: ELK.

 

Part 1: What the Name ELK Means

    ELK is ElasticSearch + LogStash + Kibana. The three really do work very well together; here is what each piece does.

 

1. LogStash

     It is deployed onto each server to collect the local log files, and ships the parsed events into ES through its built-in ElasticSearch output plugin.

 

2. ElasticSearch

   This is a Lucene-based distributed full-text search framework. It can store the logs in a distributed fashion, somewhat like HDFS.

 

3. Kibana

   Once all the logs have landed in ElasticSearch, we need to display them, right? That is where Kibana steps in: it can visualize the data in ES along many dimensions, which also solves the visualization problem that storing logs in MySQL left us with.

 

Part 2: Quick Setup

      The above was just names and definitions. For the demo, I set everything up on a single CentOS machine.

 

1. Official downloads: https://www.elastic.co/cn/products. Find the three products on that page and download each one.

 

[root@slave1 myapp]# ls
elasticsearch               kafka_2.11-1.0.0.tgz              nginx-1.13.6.tar.gz
elasticsearch-5.6.4.tar.gz  kibana                            node
elasticsearch-head          kibana-5.2.0-linux-x86_64.tar.gz  node-v8.9.1-linux-x64.tar.xz
images                      logstash                          portal
java                        logstash-5.6.3.tar.gz             service
jdk1.8                      logstash-tutorial-dataset         sql
jdk-8u144-linux-x64.tar.gz  nginx
kafka                       nginx-1.13.6
[root@slave1 myapp]# 

 

Here I downloaded elasticsearch 5.6.4, kibana 5.2.0 and logstash 5.6.3, then unpacked each with tar -xzvf.
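If you prefer the command line, the same three packages can be fetched with wget. A sketch assuming the standard artifacts.elastic.co layout for the 5.x releases (verify the exact URLs on the download page):

# assumed artifact URLs for the three versions used in this post
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.4.tar.gz
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.6.3.tar.gz
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.2.0-linux-x86_64.tar.gz
tar -xzvf elasticsearch-5.6.4.tar.gz
tar -xzvf logstash-5.6.3.tar.gz
tar -xzvf kibana-5.2.0-linux-x86_64.tar.gz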

 

2. Logstash configuration

     After unpacking, go to the config directory and create a new logstash.conf.

[root@slave1 config]# ls
jvm.options  log4j2.properties  logstash.conf  logstash.yml  startup.options
[root@slave1 config]# pwd
/usr/myapp/logstash/config
[root@slave1 config]# vim logstash.conf

   

     Then fill in the three main blocks: input, filter and output. Here input picks up every file with a .log suffix under the logs folder, filter is a filtering stage that we leave unconfigured, and output ships the events to the elasticsearch at 127.0.0.1:9200, one index per day.

input {
    # tail every *.log file under /logs, reading each file from the beginning
    file {
        type => "log"
        path => "/logs/*.log"
        start_position => "beginning"
    }
}

output {
    # echo every event to the console for debugging
    stdout {
        codec => rubydebug { }
    }

    # ship events to the local Elasticsearch, one index per day
    elasticsearch {
        hosts => "127.0.0.1"
        index => "log-%{+YYYY.MM.dd}"
    }
}
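Before starting it for real, you can also have Logstash just validate this file and exit. A quick sanity check, using the --config.test_and_exit flag available in Logstash 5.x:

[root@slave1 bin]# ./logstash -f ../config/logstash.conf --config.test_and_exit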

 

With the configuration in place, we can go to the bin directory and start logstash, passing our logstash.conf with -f. As the log below shows, the Logstash API endpoint comes up on port 9600.

[root@slave1 bin]# ls
cpdump             logstash      logstash.lib.sh  logstash-plugin.bat  setup.bat
ingest-convert.sh  logstash.bat  logstash-plugin  ruby                 system-install
[root@slave1 bin]# ./logstash -f ../config/logstash.conf
Sending Logstash's logs to /usr/myapp/logstash/logs which is now configured via log4j2.properties
[2017-11-28T17:11:53,411][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/myapp/logstash/modules/fb_apache/configuration"}
[2017-11-28T17:11:53,414][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/myapp/logstash/modules/netflow/configuration"}
[2017-11-28T17:11:54,063][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[2017-11-28T17:11:54,066][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2017-11-28T17:11:54,199][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[2017-11-28T17:11:54,244][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-11-28T17:11:54,247][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-11-28T17:11:54,265][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1"]}
[2017-11-28T17:11:54,266][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
[2017-11-28T17:11:54,427][INFO ][logstash.pipeline        ] Pipeline main started
[2017-11-28T17:11:54,493][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
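Once the pipeline is up, you can also poke that 9600 endpoint directly; the root of Logstash's monitoring API should answer with basic node info:

curl 'http://127.0.0.1:9600/?pretty'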

 

3. ElasticSearch

    This is really the core of ELK. Take care when starting it: ES refuses to run under the root account, so you also need to create an elsearch account.

groupadd elsearch                              # create the elsearch group
useradd elsearch -g elsearch -p elasticsearch  # create an elsearch user in that group
chown -R elsearch:elsearch ./elasticsearch     # hand the elasticsearch directory over to the elsearch user
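Depending on the machine, ES's bootstrap checks (enforced once it binds to a non-loopback address, as it does in the log further below) may also ask you to raise a couple of OS limits first. A commonly needed sketch, not part of the original steps, so adjust to your system:

sysctl -w vm.max_map_count=262144    # raise the mmap count ES's bootstrap check expects (run as root)
# and in /etc/security/limits.conf, raise the open-file limit for the elsearch user:
# elsearch soft nofile 65536
# elsearch hard nofile 65536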

 

     Next we just start it with the defaults, nothing else needs configuring, and in the log you can see ports 9200 and 9300 being opened.

[elsearch@slave1 bin]$ ./elasticsearch
[2017-11-28T17:19:36,893][INFO ][o.e.n.Node               ] [] initializing ...
[2017-11-28T17:19:36,973][INFO ][o.e.e.NodeEnvironment    ] [0bC8MSi] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [17.9gb], net total_space [27.6gb], spins? [unknown], types [rootfs]
[2017-11-28T17:19:36,974][INFO ][o.e.e.NodeEnvironment    ] [0bC8MSi] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-11-28T17:19:36,982][INFO ][o.e.n.Node               ] node name [0bC8MSi] derived from node ID [0bC8MSi_SUywaqz_Zl-MFA]; set [node.name] to override
[2017-11-28T17:19:36,982][INFO ][o.e.n.Node               ] version[5.6.4], pid[12592], build[8bbedf5/2017-10-31T18:55:38.105Z], OS[Linux/3.10.0-327.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_144/25.144-b01]
[2017-11-28T17:19:36,982][INFO ][o.e.n.Node               ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/myapp/elasticsearch]
[2017-11-28T17:19:37,780][INFO ][o.e.p.PluginsService     ] [0bC8MSi] loaded module [aggs-matrix-stats]
[2017-11-28T17:19:37,780][INFO ][o.e.p.PluginsService     ] [0bC8MSi] loaded module [ingest-common]
[2017-11-28T17:19:37,780][INFO ][o.e.p.PluginsService     ] [0bC8MSi] loaded module [lang-expression]
[2017-11-28T17:19:37,780][INFO ][o.e.p.PluginsService     ] [0bC8MSi] loaded module [lang-groovy]
[2017-11-28T17:19:37,780][INFO ][o.e.p.PluginsService     ] [0bC8MSi] loaded module [lang-mustache]
[2017-11-28T17:19:37,780][INFO ][o.e.p.PluginsService     ] [0bC8MSi] loaded module [lang-painless]
[2017-11-28T17:19:37,780][INFO ][o.e.p.PluginsService     ] [0bC8MSi] loaded module [parent-join]
[2017-11-28T17:19:37,780][INFO ][o.e.p.PluginsService     ] [0bC8MSi] loaded module [percolator]
[2017-11-28T17:19:37,781][INFO ][o.e.p.PluginsService     ] [0bC8MSi] loaded module [reindex]
[2017-11-28T17:19:37,781][INFO ][o.e.p.PluginsService     ] [0bC8MSi] loaded module [transport-netty3]
[2017-11-28T17:19:37,781][INFO ][o.e.p.PluginsService     ] [0bC8MSi] loaded module [transport-netty4]
[2017-11-28T17:19:37,781][INFO ][o.e.p.PluginsService     ] [0bC8MSi] no plugins loaded
[2017-11-28T17:19:39,782][INFO ][o.e.d.DiscoveryModule    ] [0bC8MSi] using discovery type [zen]
[2017-11-28T17:19:40,409][INFO ][o.e.n.Node               ] initialized
[2017-11-28T17:19:40,409][INFO ][o.e.n.Node               ] [0bC8MSi] starting ...
[2017-11-28T17:19:40,539][INFO ][o.e.t.TransportService   ] [0bC8MSi] publish_address {192.168.23.151:9300}, bound_addresses {[::]:9300}
[2017-11-28T17:19:40,549][INFO ][o.e.b.BootstrapChecks    ] [0bC8MSi] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-11-28T17:19:43,638][INFO ][o.e.c.s.ClusterService   ] [0bC8MSi] new_master {0bC8MSi}{0bC8MSi_SUywaqz_Zl-MFA}{xcbC53RVSHajdLop7sdhpA}{192.168.23.151}{192.168.23.151:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-11-28T17:19:43,732][INFO ][o.e.h.n.Netty4HttpServerTransport] [0bC8MSi] publish_address {192.168.23.151:9200}, bound_addresses {[::]:9200}
[2017-11-28T17:19:43,733][INFO ][o.e.n.Node               ] [0bC8MSi] started
[2017-11-28T17:19:43,860][INFO ][o.e.g.GatewayService     ] [0bC8MSi] recovered [1] indices into cluster_state
[2017-11-28T17:19:44,035][INFO ][o.e.c.r.a.AllocationService] [0bC8MSi] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
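To double-check that ES is really up, the standard REST endpoints answer on port 9200:

curl http://127.0.0.1:9200/                          # basic node info
curl http://127.0.0.1:9200/_cluster/health?pretty    # should report yellow or green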

 

4. Kibana

    Its configuration is also very simple: in the kibana.yml file you only need to point it at the ElasticSearch address it should read from and set a bind address that outside machines can reach.

[root@slave1 config]# pwd
/usr/myapp/kibana/config

[root@slave1 config]# vim kibana.yml

elasticsearch.url: "http://localhost:9200"   # the ES instance Kibana reads from
server.host: 0.0.0.0                         # bind to all interfaces so the UI is reachable from outside

   

      Then just start it. The log shows that port 5601 is now open.

[root@slave1 kibana]# cd bin
[root@slave1 bin]# ls
kibana  kibana-plugin  nohup.out
[root@slave1 bin]# ./kibana
  log   [01:23:27.650] [info][status][plugin:kibana@5.2.0] Status changed from uninitialized to green - Ready
  log   [01:23:27.748] [info][status][plugin:elasticsearch@5.2.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [01:23:27.786] [info][status][plugin:console@5.2.0] Status changed from uninitialized to green - Ready
  log   [01:23:27.794] [warning] You're running Kibana 5.2.0 with some different versions of Elasticsearch. Update Kibana or Elasticsearch to the same version to prevent compatibility issues: v5.6.4 @ 192.168.23.151:9200 (192.168.23.151)
  log   [01:23:27.811] [info][status][plugin:elasticsearch@5.2.0] Status changed from yellow to green - Kibana index ready
  log   [01:23:28.250] [info][status][plugin:timelion@5.2.0] Status changed from uninitialized to green - Ready
  log   [01:23:28.255] [info][listening] Server running at http://0.0.0.0:5601
  log   [01:23:28.259] [info][status][ui settings] Status changed from uninitialized to green - Ready

 

5. Enter http://192.168.23.151:5601/ in the browser and the kibana page opens. By default it asks you to specify an index to view.

 

     Next we create a simple 1.log file under the local /logs folder with the content "hello world". Then, on the kibana page, change logstash-* to log* and the Create button appears by itself.

[root@slave1 logs]# echo "hello world" > 1.log
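You can also confirm the event reached ES without going through kibana at all, by querying the daily index produced by the log-%{+YYYY.MM.dd} pattern in our config:

curl 'http://127.0.0.1:9200/log-*/_search?q=message:hello&pretty'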

 

  Once in, click Discover and you can find the content you just entered. Pretty slick, right?

 

If you have the head plugin installed, you can also see that it really did create the date-patterned index, and with the default 5 primary shards.
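If you don't have head installed, the _cat API shows the same thing: the daily log-* indices, with the primary shard count in the pri column:

curl 'http://127.0.0.1:9200/_cat/indices/log-*?v'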

 

Well, that's all for this post. I hope it helps.
