
9. Elasticsearch 7.6.1: Transferring Data Between ES and HDFS with ES-Hadoop


Elasticsearch series articles

1. Introduction to Lucene, with four examples: building an index, searching for a word, a phrase, and a sentence
2. Elasticsearch 7.6.1 basics, two deployment methods with verification, head plugin installation, analyzer installation and verification
3. Elasticsearch 7.6.1 search examples (index operations; data operations such as add, delete, import; search and pagination)
4. Elasticsearch 7.6.1 Java API operations (CRUD, two pagination approaches, highlighting) and detailed Elasticsearch SQL examples
5. Elasticsearch 7.6.1 Filebeat introduction and an example of collecting Kafka logs into ES
6. Elasticsearch 7.6.1, Logstash, and Kibana introduction with a comprehensive example (ELK, grok plugin)
7. Elasticsearch 7.6.1: collecting nginx logs and monitoring metrics
8. Elasticsearch 7.6.1: collecting MySQL slow-query logs and monitoring
9. Elasticsearch 7.6.1: transferring data between ES and HDFS with ES-Hadoop



This article gives a brief introduction to the ES-Hadoop component and demonstrates, through examples, writing data in both directions between Elasticsearch and HDFS.
It assumes a working Elasticsearch environment and a working Hadoop environment.
The article has three parts: an introduction to ES-Hadoop, writing ES data into HDFS, and writing HDFS data into ES.

I. Introduction to ES-Hadoop

ES-Hadoop is Elasticsearch's tool for integrating with the Hadoop ecosystem. It lets data move in both directions between Elasticsearch and Hadoop, connecting the two services seamlessly so that Elasticsearch's fast search and Hadoop's batch-processing capabilities can be combined for interactive data processing.

The examples in this article read and write Elasticsearch data from Hadoop MapReduce jobs via ES-Hadoop.

The strength of the Hadoop ecosystem is processing large-scale datasets, but its weakness is equally clear: when used for interactive analysis, query latency is high. Elasticsearch, on the other hand, excels at interactive analysis; for many query types, especially ad-hoc queries, it can respond within seconds. ES-Hadoop makes it possible to combine the strengths of both: with only small code changes you can process data stored in Elasticsearch and benefit from the speed-up Elasticsearch provides.

ES-Hadoop works by treating Elasticsearch as a data source for processing engines such as MapReduce, Spark, or Hive, playing the storage role in a compute-storage-separated architecture. From the engine's point of view it is just another data source, but compared with the usual ones, Elasticsearch can select and filter data much faster, which is one of the most critical capabilities for an analytics engine.
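
To make this concrete, the following is a minimal sketch (an illustration only, not one of this article's jobs; the host and index names are placeholders) of the three configuration keys through which ES-Hadoop exposes Elasticsearch to a Hadoop job. The full MapReduce examples in the next sections use exactly these settings together with EsInputFormat and EsOutputFormat.

import org.apache.hadoop.conf.Configuration;

public class EsHadoopWiringSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // HTTP endpoints of the Elasticsearch cluster (placeholder hosts)
        conf.set("es.nodes", "es-node1:9200,es-node2:9200");
        // Index to read from or write to (placeholder name)
        conf.set("es.resource", "some_index");
        // Optional query pushed down to Elasticsearch so only matching documents are read
        conf.set("es.query", "{\"query\":{\"match_all\":{}}}");
        // A real job then sets EsInputFormat (reading from ES) or EsOutputFormat (writing to ES)
        System.out.println("reading/writing " + conf.get("es.resource") + " on " + conf.get("es.nodes"));
    }
}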


II. Writing ES Data to HDFS

Assume the index data already exists in ES; the steps below simply read the ES data and store it in HDFS.

1. Writing in txt format

1) pom.xml

    <dependency>
        <groupId>org.elasticsearch</groupId>
        <artifactId>elasticsearch-hadoop</artifactId>
        <version>7.6.1</version>
    </dependency>
    <dependency>
        <groupId>org.elasticsearch.client</groupId>
        <artifactId>transport</artifactId>
        <version>7.6.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>3.1.4</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>3.1.4</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>3.1.4</version>
    </dependency>
    <dependency>
        <groupId>jdk.tools</groupId>
        <artifactId>jdk.tools</artifactId>
        <version>1.8</version>
        <scope>system</scope>
        <systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-core</artifactId>
        <version>3.1.4</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/commons-httpclient/commons-httpclient -->
    <dependency>
        <groupId>commons-httpclient</groupId>
        <artifactId>commons-httpclient</artifactId>
        <version>3.1</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/com.google.code.gson/gson -->
    <dependency>
        <groupId>com.google.code.gson</groupId>
        <artifactId>gson</artifactId>
        <version>2.10.1</version>
    </dependency>

2) Example 1: order_idx

Store the data of the order_idx index from ES into HDFS (an HA HDFS cluster). The data is stored in HDFS as txt, with fields separated by commas.

  • Data structure
key 5000,value {status=已付款, pay_money=3820.0, payway=3, userid=4405460,operation_date=2020-04-25 12:09:51, category=維修;手機;}
  • Implementation
import java.io.IOException;
import java.util.Iterator;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.elasticsearch.hadoop.mr.EsInputFormat;
import org.elasticsearch.hadoop.mr.LinkedMapWritable;

import lombok.Data;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class ESToHdfs extends Configured implements Tool {
    private static String out = "/ES-Hadoop/test";

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        int status = ToolRunner.run(conf, new ESToHdfs(), args);
        System.exit(status);
    }

    static class ESToHdfsMapper extends Mapper<Text, LinkedMapWritable, NullWritable, Text> {
        Text outValue = new Text();

        protected void map(Text key, LinkedMapWritable value, Context context) throws IOException, InterruptedException {
//            log.info("key {} , value {}", key.toString(), value);

            Order order = new Order();
//            order.setId(Integer.parseInt(key.toString()));

            Iterator it = value.entrySet().iterator();
            order.setId(key.toString());
            String name = null;
            String data = null;
            while (it.hasNext()) {
                Map.Entry entry = (Map.Entry) it.next();
                name = entry.getKey().toString();
                data = entry.getValue().toString();
                switch (name) {
                case "userid":
                    order.setUserid(Integer.parseInt(data));
                    break;
                case "operation_date":
                    order.setOperation_date(data);
                    break;
                case "category":
                    order.setCategory(data);
                    break;
                case "pay_money":
                    order.setPay_money(Double.parseDouble(data));
                    break;
                case "status":
                    order.setStatus(data);
                    break;
                case "payway":
                    order.setPayway(data);
                    break;
                }
            }
            //log.info("order={}", order);

            outValue.set(order.toString());
            context.write(NullWritable.get(), outValue);
        }
    }

    @Data
    static class Order {
        // key 5000 value {status=已付款, pay_money=3820.0, payway=3, userid=4405460, operation_date=2020-04-25 12:09:51, category=維修;手機;}
        private String id;
        private int userid;
        private String operation_date;
        private String category;
        private double pay_money;
        private String status;
        private String payway;

        public String toString() {
            return new StringBuilder(id).append(",").append(userid).append(",").append(operation_date).append(",").append(category).append(",").append(pay_money).append(",")
                    .append(status).append(",").append(payway).toString();
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        conf.set("fs.defaultFS", "hdfs://HadoopHAcluster");
        conf.set("dfs.nameservices", "HadoopHAcluster");
        conf.set("dfs.ha.namenodes.HadoopHAcluster", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn1", "server1:8020");
        conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn2", "server2:8020");
        conf.set("dfs.client.failover.proxy.provider.HadoopHAcluster", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        System.setProperty("HADOOP_USER_NAME", "alanchan");

        conf.setBoolean("mapred.map.tasks.speculative.execution", false);
        conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
        conf.set("es.nodes", "server1:9200,server2:9200,server3:9200");
        // Elasticsearch index name
        conf.set("es.resource", "order_idx");
        // Query all documents in the index; a filter condition can be used instead
        conf.set("es.query", "{\"query\": {\"match_all\": {}}}");

        Job job = Job.getInstance(conf, ESToHdfs.class.getName());
        // Set the job driver class
        job.setJarByClass(ESToHdfs.class);
        // Use Elasticsearch as the input format
        job.setInputFormatClass(EsInputFormat.class);

        job.setMapperClass(ESToHdfsMapper.class);

        job.setMapOutputKeyClass(NullWritable.class);
        job.setMapOutputValueClass(Text.class);

        FileOutputFormat.setOutputPath(job, new Path(out));

        job.setNumReduceTasks(0);

        return job.waitForCompletion(true) ? 0 : 1;
    }
}
  • Verification
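    Based on the field order in Order.toString() (id, userid, operation_date, category, pay_money, status, payway), each line written to /ES-Hadoop/test should look roughly like the following (illustrative, built from the sample document shown above):
    5000,4405460,2020-04-25 12:09:51,維修;手機;,3820.0,已付款,3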

3) Example 2: tomcat_log_2023-03

Store the data of the tomcat_log_2023-03 index from ES into HDFS (an HA HDFS cluster). The data is stored in HDFS as txt, with fields separated by commas.

  • Data structure
key Uzm_44YBH2rQ2w9r5vqK , value {message=2023-03-15 13:30:00.001 [schedulerJobAllTask_Worker-1] INFO c.o.d.s.t.QuartzTask.executeAllTaskList-{37} - 生成消息記錄任務(wù)停止執(zhí)行結(jié)束*******, tags=[_dateparsefailure], class=c.o.d.s.t.QuartzTask.executeAllTaskList-{37}, level=INFO, date=2023-03-15 13:30:00.001, thread=schedulerJobAllTask_Worker-1, fields={source=catalina}, @timestamp=2023-03-15T05:30:06.812Z, log={file={path=/opt/apache-tomcat-9.0.43/logs/catalina.out}, offset=76165371}, info=- 生成消息記錄任務(wù)停止執(zhí)行結(jié)束*******}
  • Implementation
import java.io.IOException;
import java.util.Iterator;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.elasticsearch.hadoop.mr.EsInputFormat;
import org.elasticsearch.hadoop.mr.LinkedMapWritable;

import lombok.Data;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class ESToHdfs2 extends Configured implements Tool {
    private static String out = "/ES-Hadoop/tomcatlog";

    static class ESToHdfs2Mapper extends Mapper<Text, LinkedMapWritable, NullWritable, Text> {
        Text outValue = new Text();

        protected void map(Text key, LinkedMapWritable value, Context context) throws IOException, InterruptedException {
//            log.info("key {} , value {}", key.toString(), value);
            TomcatLog tLog = new TomcatLog();
            Iterator it = value.entrySet().iterator();
            String name = null;
            String data = null;
            while (it.hasNext()) {
                Map.Entry entry = (Map.Entry) it.next();
                name = entry.getKey().toString();
                data = entry.getValue().toString();
                switch (name) {
                case "date":
                    tLog.setDate(data.replace('/', '-'));
                    break;
                case "thread":
                    tLog.setThread(data);
                    break;
                case "level":
                    tLog.setLogLevel(data);
                    break;
                case "class":
                    tLog.setClazz(data);
                    break;
                case "info":
                    tLog.setLogMsg(data);
                    break;
                }
            }
            outValue.set(tLog.toString());
            context.write(NullWritable.get(), outValue);
        }
    }

    @Data
    static class TomcatLog {
        private String date;
        private String thread;
        private String logLevel;
        private String clazz;
        private String logMsg;

        public String toString() {
            return new StringBuilder(date).append(",").append(thread).append(",").append(logLevel).append(",").append(clazz).append(",").append(logMsg).toString();
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        int status = ToolRunner.run(conf, new ESToHdfs2(), args);
        System.exit(status);
    }

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        conf.set("fs.defaultFS", "hdfs://HadoopHAcluster");
        conf.set("dfs.nameservices", "HadoopHAcluster");
        conf.set("dfs.ha.namenodes.HadoopHAcluster", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn1", "server1:8020");
        conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn2", "server2:8020");
        conf.set("dfs.client.failover.proxy.provider.HadoopHAcluster", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        System.setProperty("HADOOP_USER_NAME", "alanchan");

        conf.setBoolean("mapred.map.tasks.speculative.execution", false);
        conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
        conf.set("es.nodes", "server1:9200,server2:9200,server3:9200");
        // Elasticsearch index name
        conf.set("es.resource", "tomcat_log_2023-03");
        conf.set("es.query", "{\"query\":{\"bool\":{\"must\":[{\"match_all\":{}}],\"must_not\":[],\"should\":[]}},\"from\":0,\"size\":10,\"sort\":[],\"aggs\":{}}");

        Job job = Job.getInstance(conf, ESToHdfs2.class.getName());
        // Set the job driver class
        job.setJarByClass(ESToHdfs2.class);
        job.setInputFormatClass(EsInputFormat.class);

        job.setMapperClass(ESToHdfs2Mapper.class);

        job.setMapOutputKeyClass(NullWritable.class);
        job.setMapOutputValueClass(Text.class);

        FileOutputFormat.setOutputPath(job, new Path(out));

        job.setNumReduceTasks(0);

        return job.waitForCompletion(true) ? 0 : 1;
    }

}
  • Verification
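    Based on the field order in TomcatLog.toString() (date, thread, logLevel, clazz, logMsg), each line written to /ES-Hadoop/tomcatlog should look roughly like the following (illustrative, built from the sample document shown above):
    2023-03-15 13:30:00.001,schedulerJobAllTask_Worker-1,INFO,c.o.d.s.t.QuartzTask.executeAllTaskList-{37},- 生成消息記錄任務(wù)停止執(zhí)行結(jié)束*******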

2. Writing in JSON format

Store the data of the tomcat_log_2023-03 index from ES into HDFS (an HA HDFS cluster). The data is stored in HDFS as JSON.

  • Data structure
key Uzm_44YBH2rQ2w9r5vqK , value {message=2023-03-15 13:30:00.001 [schedulerJobAllTask_Worker-1] INFO c.o.d.s.t.QuartzTask.executeAllTaskList-{37} - 生成消息記錄任務(wù)停止執(zhí)行結(jié)束*******, tags=[_dateparsefailure], class=c.o.d.s.t.QuartzTask.executeAllTaskList-{37}, level=INFO, date=2023-03-15 13:30:00.001, thread=schedulerJobAllTask_Worker-1, fields={source=catalina}, @timestamp=2023-03-15T05:30:06.812Z, log={file={path=/opt/apache-tomcat-9.0.43/logs/catalina.out}, offset=76165371}, info=- 生成消息記錄任務(wù)停止執(zhí)行結(jié)束*******}
  • Implementation
import java.io.IOException;
import java.util.Iterator;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.elasticsearch.hadoop.mr.EsInputFormat;
import org.elasticsearch.hadoop.mr.LinkedMapWritable;

import com.google.gson.Gson;

import lombok.Data;

  
public class ESToHdfsByJson extends Configured implements Tool {
    private static String out = "/ES-Hadoop/tomcatlog_json";

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        int status = ToolRunner.run(conf, new ESToHdfsByJson(), args);
        System.exit(status);
    }

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        conf.set("fs.defaultFS", "hdfs://HadoopHAcluster");
        conf.set("dfs.nameservices", "HadoopHAcluster");
        conf.set("dfs.ha.namenodes.HadoopHAcluster", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn1", "server1:8020");
        conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn2", "server2:8020");
        conf.set("dfs.client.failover.proxy.provider.HadoopHAcluster", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        System.setProperty("HADOOP_USER_NAME", "alanchan");

        conf.setBoolean("mapred.map.tasks.speculative.execution", false);
        conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
        conf.set("es.nodes", "server1:9200,server2:9200,server3:9200");
        // Elasticsearch index name
        conf.set("es.resource", "tomcat_log_2023-03");
        conf.set("es.query", "{\"query\":{\"bool\":{\"must\":[{\"match_all\":{}}],\"must_not\":[],\"should\":[]}},\"from\":0,\"size\":10,\"sort\":[],\"aggs\":{}}");

        Job job = Job.getInstance(conf, ESToHdfsByJson.class.getName());
        // Set the job driver class
        job.setJarByClass(ESToHdfsByJson.class);
        job.setInputFormatClass(EsInputFormat.class);

        job.setMapperClass(ESToHdfsByJsonMapper.class);

        job.setMapOutputKeyClass(NullWritable.class);
        job.setMapOutputValueClass(Text.class);

        FileOutputFormat.setOutputPath(job, new Path(out));

        job.setNumReduceTasks(0);

        return job.waitForCompletion(true) ? 0 : 1;
    }

    static class ESToHdfsByJsonMapper extends Mapper<Text, LinkedMapWritable, NullWritable, Text> {
        Text outValue = new Text();
        private Gson gson = new Gson();
        
        protected void map(Text key, LinkedMapWritable value, Context context) throws IOException, InterruptedException {
//            log.info("key {} , value {}", key.toString(), value);
            TomcatLog tLog = new TomcatLog();
//            tLog.setId(key.toString());
            Iterator it = value.entrySet().iterator();
            String name = null;
            String data = null;
            while (it.hasNext()) {
                Map.Entry entry = (Map.Entry) it.next();
                name = entry.getKey().toString();
                data = entry.getValue().toString();
                switch (name) {
                case "date":
                    tLog.setDate(data.replace('/', '-'));
                    break;
                case "thread":
                    tLog.setThread(data);
                    break;
                case "level":
                    tLog.setLogLevel(data);
                    break;
                case "class":
                    tLog.setClazz(data);
                    break;
                case "info":
                    tLog.setLogMsg(data);
                    break;
                }
            }
            outValue.set(gson.toJson(tLog));
            context.write(NullWritable.get(), outValue);
        }

    }

    @Data
    static class TomcatLog {
//        private String id ;
        private String date;
        private String thread;
        private String logLevel;
        private String clazz;
        private String logMsg;

    }
}
  • Verification
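    With Gson serializing the TomcatLog fields, each line written to /ES-Hadoop/tomcatlog_json should look roughly like the following (illustrative, built from the sample document shown above):
    {"date":"2023-03-15 13:30:00.001","thread":"schedulerJobAllTask_Worker-1","logLevel":"INFO","clazz":"c.o.d.s.t.QuartzTask.executeAllTaskList-{37}","logMsg":"- 生成消息記錄任務(wù)停止執(zhí)行結(jié)束*******"}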

III. Writing HDFS Data to ES

These examples use the data written to HDFS by the examples above. In testing, only JSON-formatted data could be imported into ES this way.
For pom.xml, refer to the example above.

1. Importing txt files into ES

The txt data is first converted to JSON in the mapper and then written to ES.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.elasticsearch.hadoop.mr.EsOutputFormat;

import com.google.gson.Gson;

import lombok.Data;
import lombok.extern.slf4j.Slf4j;


@Slf4j
public class HdfsTxtDataToES extends Configured implements Tool {
    private static String out = "/ES-Hadoop/tomcatlog"; // HDFS input path: the txt data written by the earlier ES-to-HDFS example

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        int status = ToolRunner.run(conf, new HdfsTxtDataToES(), args);
        System.exit(status);
    }

    @Data
    static class TomcatLog {
        private String date;
        private String thread;
        private String logLevel;
        private String clazz;
        private String logMsg;
    }

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        conf.set("fs.defaultFS", "hdfs://HadoopHAcluster");
        conf.set("dfs.nameservices", "HadoopHAcluster");
        conf.set("dfs.ha.namenodes.HadoopHAcluster", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn1", "server1:8020");
        conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn2", "server2:8020");
        conf.set("dfs.client.failover.proxy.provider.HadoopHAcluster", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        System.setProperty("HADOOP_USER_NAME", "alanchan");

        conf.setBoolean("mapred.map.tasks.speculative.execution", false);
        conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
        conf.set("es.nodes", "server1:9200,server2:9200,server3:9200");

        // Elasticsearch index name; it does not need to be created in advance
        conf.set("es.resource", "tomcat_log_2024");
        // The value emitted by the mapper is already a JSON document, so it can be indexed directly
        conf.set("es.input.json", "yes");

        Job job = Job.getInstance(conf, HdfsTxtDataToES.class.getName());
        // Set the job driver class
        job.setJarByClass(HdfsTxtDataToES.class);
        // Use Elasticsearch as the output format
        job.setOutputFormatClass(EsOutputFormat.class);

        job.setMapperClass(HdfsTxtDataToESMapper.class);

        job.setMapOutputKeyClass(NullWritable.class);
        job.setMapOutputValueClass(Text.class);

        FileInputFormat.setInputPaths(job, new Path(out));

        job.setNumReduceTasks(0);

        return job.waitForCompletion(true) ? 0 : 1;
    }

    static class HdfsTxtDataToESMapper extends Mapper<LongWritable, Text, NullWritable, Text> {
        Text outValue = new Text();
        TomcatLog tLog = new TomcatLog();
        Gson gson = new Gson();

        protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            log.info("key={},value={}", key, value);
            // date:2023-03-13 17:33:00.001,
            // thread:schedulerJobAllTask_Worker-1,
            // loglevel:INFO,
            // clazz:o.q.c.QuartzScheduler.start-{461},
            // logMsg:- Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
            String[] lines = value.toString().split(",", 5); // limit of 5 keeps commas inside the log message in lines[4]
            tLog.setDate(lines[0]);
            tLog.setThread(lines[1]);
            tLog.setLogLevel(lines[2]);
            tLog.setClazz(lines[3]);
            tLog.setLogMsg(lines[4]);
            outValue.set(gson.toJson(tLog));
            context.write(NullWritable.get(), outValue);
        }
    }
}
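
A simple way to verify the import (an assumption on my part, not shown in the original article) is to count the documents in the target index after the job finishes, for example with GET tomcat_log_2024/_count in Kibana or with curl against any of the configured es.nodes.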

2. Importing JSON files into ES

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.elasticsearch.hadoop.mr.EsOutputFormat;

import lombok.extern.slf4j.Slf4j;

@Slf4j
public class HdfsJsonDataToES extends Configured implements Tool {
    private static String out = "/ES-Hadoop/tomcatlog_json"; // HDFS input path: the JSON data written by the earlier ES-to-HDFS example

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        int status = ToolRunner.run(conf, new HdfsJsonDataToES(), args);
        System.exit(status);
    }

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        conf.set("fs.defaultFS", "hdfs://HadoopHAcluster");
        conf.set("dfs.nameservices", "HadoopHAcluster");
        conf.set("dfs.ha.namenodes.HadoopHAcluster", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn1", "server1:8020");
        conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn2", "server2:8020");
        conf.set("dfs.client.failover.proxy.provider.HadoopHAcluster", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        System.setProperty("HADOOP_USER_NAME", "alanchan");

        conf.setBoolean("mapred.map.tasks.speculative.execution", false);
        conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
        conf.set("es.nodes", "server1:9200,server2:9200,server3:9200");

        // Elasticsearch index name; it does not need to be created in advance
        conf.set("es.resource", "tomcat_log_2023");
        // The data on HDFS is already JSON, so it can be indexed directly
        conf.set("es.input.json", "yes");
        
        Job job = Job.getInstance(conf, HdfsJsonDataToES.class.getName());
        // Set the job driver class
        job.setJarByClass(HdfsJsonDataToES.class);
        job.setOutputFormatClass(EsOutputFormat.class);

        job.setMapperClass(HdfsJsonDataToESMapper.class);

        job.setMapOutputKeyClass(NullWritable.class);
        job.setMapOutputValueClass(Text.class);

        FileInputFormat.setInputPaths(job, new Path(out));

        job.setNumReduceTasks(0);

        return job.waitForCompletion(true) ? 0 : 1;
    }

    static class HdfsJsonDataToESMapper extends Mapper<LongWritable, Text, NullWritable, Text> {
        protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            log.info("key={},value={}",key,value);
            context.write(NullWritable.get(), value);
        }
    }
}
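
As above, the result can be checked (again an assumption, not from the original article) by counting or sampling documents in the new index, for example GET tomcat_log_2023/_count or a small GET tomcat_log_2023/_search.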

This concludes the brief introduction to the ES-Hadoop component, shown through examples of writing data in both directions between ES and HDFS.
