Elasticsearch article series
1. Introduction to Lucene's features, with four examples: building an index, searching single terms, searching phrases, and searching sentences
2. Elasticsearch 7.6.1 basics, two deployment methods with verification, head plugin installation, analyzer installation and verification
3. Elasticsearch 7.6.1 search examples (index operations; data operations such as add, delete and import; search and pagination)
4. Operating ES through the Elasticsearch 7.6.1 Java API (CRUD, two pagination approaches, highlighting) and detailed Elasticsearch SQL examples
5. Elasticsearch 7.6.1: introduction to Filebeat and an example of shipping Kafka logs into ES
6. Introduction to Elasticsearch 7.6.1, Logstash and Kibana with a combined example (ELK, grok plugin)
7. Elasticsearch 7.6.1: collecting nginx logs and monitoring metrics
8. Elasticsearch 7.6.1: collecting and monitoring MySQL slow-query logs
9. Elasticsearch 7.6.1: moving data between ES and HDFS with ES-Hadoop
This article gives a brief introduction to the ES-Hadoop component and demonstrates, through examples, how data can be written in both directions between Elasticsearch and HDFS.
It assumes that working Elasticsearch and Hadoop environments are available.
The article has three parts: an introduction to ES-Hadoop, writing ES data into HDFS, and writing HDFS data into ES.
I. Introduction to ES-Hadoop
ES-Hadoop is the connector published by Elasticsearch for integrating with the Hadoop ecosystem. It lets data move in both directions between Elasticsearch and Hadoop, bridging the two services so that Elasticsearch's fast search can be combined with Hadoop's batch-processing capability for interactive data processing.
This article shows how to use ES-Hadoop to read and write Elasticsearch data from Hadoop MapReduce jobs.
The strength of the Hadoop ecosystem is processing large data sets, but its obvious weakness is query latency when it is used for interactive analysis. Elasticsearch, on the other hand, excels at interactive analysis: for many query types, ad-hoc queries in particular, it responds within seconds. ES-Hadoop makes it possible to combine the strengths of both. With only small changes to your code you can process data stored in Elasticsearch and benefit from the speed it provides.
The principle behind ES-Hadoop is to use Elasticsearch as the data source of a processing engine such as MapReduce, Spark or Hive, playing the storage role in a compute-storage-separated architecture. To the engine it looks like any other data source, but compared with the usual sources Elasticsearch can select and filter data much faster, and that filtering ability is one of the most important capabilities for an analytics engine.
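At the MapReduce level the integration comes down to a handful of es.* properties on the Hadoop Configuration plus EsInputFormat/EsOutputFormat. The snippet below is only a minimal sketch of the read side (host and index names are placeholders); the complete, runnable jobs follow in the next sections.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.elasticsearch.hadoop.mr.EsInputFormat;
public class EsHadoopSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("es.nodes", "es-node1:9200");                      // Elasticsearch HTTP endpoint(s)
        conf.set("es.resource", "my_index");                        // index to read from
        conf.set("es.query", "{ \"query\": {\"match_all\": { }}}"); // optional filter pushed down to ES
        Job job = Job.getInstance(conf, "es-hadoop-sketch");
        job.setInputFormatClass(EsInputFormat.class);               // map input: doc id (Text) -> LinkedMapWritable document
        // ... set the mapper, output format and output path as in the full examples below ...
    }
}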
II. Writing ES data to HDFS
It is assumed that the index data already exists in ES; the examples below simply read that data from ES and store it in HDFS. (A quick sanity check on the source index is sketched below.)
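Before running the jobs you can confirm that the source index holds data with a plain REST query, for example (the host comes from the es.nodes setting used later; adjust it to your cluster):
curl -X GET "http://server1:9200/order_idx/_search?size=1&pretty"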
1. Writing as txt files
1) pom.xml
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-hadoop</artifactId>
<version>7.6.1</version>
</dependency>
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>transport</artifactId>
<version>7.6.1</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>3.1.4</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>3.1.4</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-hdfs</artifactId>
<version>3.1.4</version>
</dependency>
<dependency>
<groupId>jdk.tools</groupId>
<artifactId>jdk.tools</artifactId>
<version>1.8</version>
<scope>system</scope>
<systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-mapreduce-client-core</artifactId>
<version>3.1.4</version>
</dependency>
<!-- https://mvnrepository.com/artifact/commons-httpclient/commons-httpclient -->
<dependency>
<groupId>commons-httpclient</groupId>
<artifactId>commons-httpclient</artifactId>
<version>3.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.google.code.gson/gson -->
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>2.10.1</version>
</dependency>
2) Example 1: order_idx
Store the data of the order_idx index in ES into HDFS; the HDFS cluster is HA. The data is written to HDFS as txt, with the fields separated by commas.
- Data structure
key 5000,value {status=已付款, pay_money=3820.0, payway=3, userid=4405460,operation_date=2020-04-25 12:09:51, category=維修;手機;}
- Implementation
import java.io.IOException;
import java.util.Iterator;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.elasticsearch.hadoop.mr.EsInputFormat;
import org.elasticsearch.hadoop.mr.LinkedMapWritable;
import lombok.Data;
import lombok.extern.slf4j.Slf4j;
@Slf4j
public class ESToHdfs extends Configured implements Tool {
private static String out = "/ES-Hadoop/test";
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
int status = ToolRunner.run(conf, new ESToHdfs(), args);
System.exit(status);
}
static class ESToHdfsMapper extends Mapper<Text, LinkedMapWritable, NullWritable, Text> {
Text outValue = new Text();
protected void map(Text key, LinkedMapWritable value, Context context) throws IOException, InterruptedException {
// log.info("key {} , value {}", key.toString(), value);
Order order = new Order();
// order.setId(Integer.parseInt(key.toString()));
Iterator it = value.entrySet().iterator();
order.setId(key.toString());
String name = null;
String data = null;
while (it.hasNext()) {
Map.Entry entry = (Map.Entry) it.next();
name = entry.getKey().toString();
data = entry.getValue().toString();
switch (name) {
case "userid":
order.setUserid(Integer.parseInt(data));
break;
case "operation_date":
order.setOperation_date(data);
break;
case "category":
order.setCategory(data);
break;
case "pay_money":
order.setPay_money(Double.parseDouble(data));
break;
case "status":
order.setStatus(data);
break;
case "payway":
order.setPayway(data);
break;
}
}
//log.info("order={}", order);
outValue.set(order.toString());
context.write(NullWritable.get(), outValue);
}
}
@Data
static class Order {
// key 5000 value {status=已付款, pay_money=3820.0, payway=3, userid=4405460, operation_date=2020-04-25 12:09:51, category=維修;手機;}
private String id;
private int userid;
private String operation_date;
private String category;
private double pay_money;
private String status;
private String payway;
public String toString() {
return new StringBuilder(id).append(",").append(userid).append(",").append(operation_date).append(",").append(category).append(",").append(pay_money).append(",")
.append(status).append(",").append(payway).toString();
}
}
@Override
public int run(String[] args) throws Exception {
Configuration conf = getConf();
conf.set("fs.defaultFS", "hdfs://HadoopHAcluster");
conf.set("dfs.nameservices", "HadoopHAcluster");
conf.set("dfs.ha.namenodes.HadoopHAcluster", "nn1,nn2");
conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn1", "server1:8020");
conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn2", "server2:8020");
conf.set("dfs.client.failover.proxy.provider.HadoopHAcluster", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
System.setProperty("HADOOP_USER_NAME", "alanchan");
conf.setBoolean("mapred.map.tasks.speculative.execution", false);
conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
conf.set("es.nodes", "server1:9200,server2:9200,server3:9200");
// Elasticsearch index name (for ES 7.x only the index name is needed, no type)
conf.set("es.resource", "order_idx");
// Query all documents in the index; a filter condition can be used here instead
conf.set("es.query", "{ \"query\": {\"match_all\": { }}}");
Job job = Job.getInstance(conf, ESToHdfs.class.getName());
// Set the job driver class
job.setJarByClass(ESToHdfs.class);
// Read the job input from Elasticsearch
job.setInputFormatClass(EsInputFormat.class);
job.setMapperClass(ESToHdfsMapper.class);
job.setMapOutputKeyClass(NullWritable.class);
job.setMapOutputValueClass(Text.class);
FileOutputFormat.setOutputPath(job, new Path(out));
job.setNumReduceTasks(0);
return job.waitForCompletion(true) ? 0 : 1;
}
}
- Verification
The results are shown in the figure below; a sample way to submit the job and inspect the HDFS output follows.
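For reference, one way to submit the job and look at the output (the jar name and the unqualified class name are assumptions; adjust them to your project and package):
hadoop jar es-hadoop-demo.jar ESToHdfs
hdfs dfs -cat /ES-Hadoop/test/part-m-00000 | head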
3) Example 2: tomcat_log_2023-03
Store the data of the tomcat_log_2023-03 index in ES into HDFS; the HDFS cluster is HA. The data is written to HDFS as txt, with the fields separated by commas.
- Data structure
key Uzm_44YBH2rQ2w9r5vqK , value {message=2023-03-15 13:30:00.001 [schedulerJobAllTask_Worker-1] INFO c.o.d.s.t.QuartzTask.executeAllTaskList-{37} - 生成消息記錄任務(wù)停止執(zhí)行結(jié)束*******, tags=[_dateparsefailure], class=c.o.d.s.t.QuartzTask.executeAllTaskList-{37}, level=INFO, date=2023-03-15 13:30:00.001, thread=schedulerJobAllTask_Worker-1, fields={source=catalina}, @timestamp=2023-03-15T05:30:06.812Z, log={file={path=/opt/apache-tomcat-9.0.43/logs/catalina.out}, offset=76165371}, info=- 生成消息記錄任務(wù)停止執(zhí)行結(jié)束*******}
- Implementation
import java.io.IOException;
import java.util.Iterator;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.elasticsearch.hadoop.mr.EsInputFormat;
import org.elasticsearch.hadoop.mr.LinkedMapWritable;
import lombok.Data;
import lombok.extern.slf4j.Slf4j;
@Slf4j
public class ESToHdfs2 extends Configured implements Tool {
private static String out = "/ES-Hadoop/tomcatlog";
static class ESToHdfs2Mapper extends Mapper<Text, LinkedMapWritable, NullWritable, Text> {
Text outValue = new Text();
protected void map(Text key, LinkedMapWritable value, Context context) throws IOException, InterruptedException {
// log.info("key {} , value {}", key.toString(), value);
TomcatLog tLog = new TomcatLog();
Iterator it = value.entrySet().iterator();
String name = null;
String data = null;
while (it.hasNext()) {
Map.Entry entry = (Map.Entry) it.next();
name = entry.getKey().toString();
data = entry.getValue().toString();
switch (name) {
case "date":
tLog.setDate(data.replace('/', '-'));
break;
case "thread":
tLog.setThread(data);
break;
case "level":
tLog.setLogLevel(data);
break;
case "class":
tLog.setClazz(data);
break;
case "info":
tLog.setLogMsg(data);
break;
}
}
outValue.set(tLog.toString());
context.write(NullWritable.get(), outValue);
}
}
@Data
static class TomcatLog {
private String date;
private String thread;
private String logLevel;
private String clazz;
private String logMsg;
public String toString() {
return new StringBuilder(date).append(",").append(thread).append(",").append(logLevel).append(",").append(clazz).append(",").append(logMsg).toString();
}
}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
int status = ToolRunner.run(conf, new ESToHdfs2(), args);
System.exit(status);
}
@Override
public int run(String[] args) throws Exception {
Configuration conf = getConf();
conf.set("fs.defaultFS", "hdfs://HadoopHAcluster");
conf.set("dfs.nameservices", "HadoopHAcluster");
conf.set("dfs.ha.namenodes.HadoopHAcluster", "nn1,nn2");
conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn1", "server1:8020");
conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn2", "server2:8020");
conf.set("dfs.client.failover.proxy.provider.HadoopHAcluster", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
System.setProperty("HADOOP_USER_NAME", "alanchan");
conf.setBoolean("mapred.map.tasks.speculative.execution", false);
conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
conf.set("es.nodes", "server1:9200,server2:9200,server3:9200");
// Elasticsearch index name
conf.set("es.resource", "tomcat_log_2023-03");
conf.set("es.query", "{\"query\":{\"bool\":{\"must\":[{\"match_all\":{}}],\"must_not\":[],\"should\":[]}},\"from\":0,\"size\":10,\"sort\":[],\"aggs\":{}}");
Job job = Job.getInstance(conf, ESToHdfs2.class.getName());
// Set the job driver class
job.setJarByClass(ESToHdfs2.class);
job.setInputFormatClass(EsInputFormat.class);
job.setMapperClass(ESToHdfs2Mapper.class);
job.setMapOutputKeyClass(NullWritable.class);
job.setMapOutputValueClass(Text.class);
FileOutputFormat.setOutputPath(job, new Path(out));
job.setNumReduceTasks(0);
return job.waitForCompletion(true) ? 0 : 1;
}
}
- Verification
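The output of this job can be inspected the same way as before, e.g. hdfs dfs -cat /ES-Hadoop/tomcatlog/part-m-00000 | head (the path comes from the out field of the job above).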
2. Writing as JSON files
Store the data of the tomcat_log_2023-03 index in ES into HDFS; the HDFS cluster is HA. The data is written to HDFS as JSON.
- Data structure
key Uzm_44YBH2rQ2w9r5vqK , value {message=2023-03-15 13:30:00.001 [schedulerJobAllTask_Worker-1] INFO c.o.d.s.t.QuartzTask.executeAllTaskList-{37} - 生成消息記錄任務(wù)停止執(zhí)行結(jié)束*******, tags=[_dateparsefailure], class=c.o.d.s.t.QuartzTask.executeAllTaskList-{37}, level=INFO, date=2023-03-15 13:30:00.001, thread=schedulerJobAllTask_Worker-1, fields={source=catalina}, @timestamp=2023-03-15T05:30:06.812Z, log={file={path=/opt/apache-tomcat-9.0.43/logs/catalina.out}, offset=76165371}, info=- 生成消息記錄任務(wù)停止執(zhí)行結(jié)束*******}
- Implementation
import java.io.IOException;
import java.util.Iterator;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.elasticsearch.hadoop.mr.EsInputFormat;
import org.elasticsearch.hadoop.mr.LinkedMapWritable;
import com.google.gson.Gson;
import lombok.Data;
public class ESToHdfsByJson extends Configured implements Tool {
private static String out = "/ES-Hadoop/tomcatlog_json";
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
int status = ToolRunner.run(conf, new ESToHdfsByJson(), args);
System.exit(status);
}
@Override
public int run(String[] args) throws Exception {
Configuration conf = getConf();
conf.set("fs.defaultFS", "hdfs://HadoopHAcluster");
conf.set("dfs.nameservices", "HadoopHAcluster");
conf.set("dfs.ha.namenodes.HadoopHAcluster", "nn1,nn2");
conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn1", "server1:8020");
conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn2", "server2:8020");
conf.set("dfs.client.failover.proxy.provider.HadoopHAcluster", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
System.setProperty("HADOOP_USER_NAME", "alanchan");
conf.setBoolean("mapred.map.tasks.speculative.execution", false);
conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
conf.set("es.nodes", "server1:9200,server2:9200,server3:9200");
// Elasticsearch index name
conf.set("es.resource", "tomcat_log_2023-03");
conf.set("es.query", "{\"query\":{\"bool\":{\"must\":[{\"match_all\":{}}],\"must_not\":[],\"should\":[]}},\"from\":0,\"size\":10,\"sort\":[],\"aggs\":{}}");
Job job = Job.getInstance(conf, ESToHdfsByJson.class.getName());
// Set the job driver class
job.setJarByClass(ESToHdfsByJson.class);
job.setInputFormatClass(EsInputFormat.class);
job.setMapperClass(ESToHdfsByJsonMapper.class);
job.setMapOutputKeyClass(NullWritable.class);
job.setMapOutputValueClass(Text.class);
FileOutputFormat.setOutputPath(job, new Path(out));
job.setNumReduceTasks(0);
return job.waitForCompletion(true) ? 0 : 1;
}
static class ESToHdfsByJsonMapper extends Mapper<Text, LinkedMapWritable, NullWritable, Text> {
Text outValue = new Text();
private Gson gson = new Gson();
protected void map(Text key, LinkedMapWritable value, Context context) throws IOException, InterruptedException {
// log.info("key {} , value {}", key.toString(), value);
TomcatLog tLog = new TomcatLog();
// tLog.setId(key.toString());
Iterator it = value.entrySet().iterator();
String name = null;
String data = null;
while (it.hasNext()) {
Map.Entry entry = (Map.Entry) it.next();
name = entry.getKey().toString();
data = entry.getValue().toString();
switch (name) {
case "date":
tLog.setDate(data.replace('/', '-'));
break;
case "thread":
tLog.setThread(data);
break;
case "level":
tLog.setLogLevel(data);
break;
case "class":
tLog.setClazz(data);
break;
case "info":
tLog.setLogMsg(data);
break;
}
}
outValue.set(gson.toJson(tLog));
context.write(NullWritable.get(), outValue);
}
}
@Data
static class TomcatLog {
// private String id ;
private String date;
private String thread;
private String logLevel;
private String clazz;
private String logMsg;
}
}
- Verification
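Here each output line is the Gson serialization of a TomcatLog object, so hdfs dfs -cat /ES-Hadoop/tomcatlog_json/part-m-00000 | head should print one JSON object per line of the form {"date":"...","thread":"...","logLevel":"...","clazz":"...","logMsg":"..."} (the values depend on the logs that were indexed).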
III. Writing HDFS data to ES
This part reuses the data written to HDFS by the examples above. In testing, only JSON records could be imported into ES this way (es.input.json is set to yes, so ES-Hadoop expects every record it receives to be a JSON document); plain-text records therefore have to be converted to JSON first.
The pom.xml is the same as in the examples above.
1. Writing txt files to ES
The text records are first converted to JSON in the mapper and then written to ES.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.elasticsearch.hadoop.mr.EsOutputFormat;
import com.google.gson.Gson;
import lombok.Data;
import lombok.extern.slf4j.Slf4j;
@Slf4j
public class HdfsTxtDataToES extends Configured implements Tool {
private static String out = "/ES-Hadoop/tomcatlog";
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
int status = ToolRunner.run(conf, new HdfsTxtDataToES(), args);
System.exit(status);
}
@Data
static class TomcatLog {
private String date;
private String thread;
private String logLevel;
private String clazz;
private String logMsg;
}
@Override
public int run(String[] args) throws Exception {
Configuration conf = getConf();
conf.set("fs.defaultFS", "hdfs://HadoopHAcluster");
conf.set("dfs.nameservices", "HadoopHAcluster");
conf.set("dfs.ha.namenodes.HadoopHAcluster", "nn1,nn2");
conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn1", "server1:8020");
conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn2", "server2:8020");
conf.set("dfs.client.failover.proxy.provider.HadoopHAcluster", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
System.setProperty("HADOOP_USER_NAME", "alanchan");
conf.setBoolean("mapred.map.tasks.speculative.execution", false);
conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
conf.set("es.nodes", "server1:9200,server2:9200,server3:9200");
// Elasticsearch index name; it does not have to be created in advance
conf.set("es.resource", "tomcat_log_2024");
// The mapper emits JSON strings, so tell ES-Hadoop to index each value as a JSON document
conf.set("es.input.json", "yes");
Job job = Job.getInstance(conf, HdfsTxtDataToES.class.getName());
// Set the job driver class
job.setJarByClass(HdfsTxtDataToES.class);
// Write the job output to Elasticsearch via EsOutputFormat
job.setOutputFormatClass(EsOutputFormat.class);
job.setMapperClass(HdfsTxtDataToESMapper.class);
job.setMapOutputKeyClass(NullWritable.class);
job.setMapOutputValueClass(Text.class);
FileInputFormat.setInputPaths(job, new Path(in));
job.setNumReduceTasks(0);
return job.waitForCompletion(true) ? 0 : 1;
}
static class HdfsTxtDataToESMapper extends Mapper<LongWritable, Text, NullWritable, Text> {
Text outValue = new Text();
TomcatLog tLog = new TomcatLog();
Gson gson = new Gson();
protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
log.info("key={},value={}", key, value);
// date:2023-03-13 17:33:00.001,
// thread:schedulerJobAllTask_Worker-1,
// loglevel:INFO,
// clazz:o.q.c.QuartzScheduler.start-{461},
// logMsg:- Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
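// Note: a plain split(",") assumes the log message itself contains no commas;
// if it might, use value.toString().split(",", 5) so the remainder stays in lines[4]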
String[] lines = value.toString().split(",");
tLog.setDate(lines[0]);
tLog.setThread(lines[1]);
tLog.setLogLevel(lines[2]);
tLog.setClazz(lines[3]);
tLog.setLogMsg(lines[4]);
outValue.set(gson.toJson(tLog));
context.write(NullWritable.get(), outValue);
}
}
}
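After the job completes you can verify that the documents reached the new index, e.g. curl -X GET "http://server1:9200/tomcat_log_2024/_search?size=1&pretty" (host taken from es.nodes above). With the configuration shown, the document ids are auto-generated by Elasticsearch.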
2. Writing JSON files to ES
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.elasticsearch.hadoop.mr.EsOutputFormat;
import lombok.extern.slf4j.Slf4j;
@Slf4j
public class HdfsJsonDataToES extends Configured implements Tool {
private static String out = "/ES-Hadoop/tomcatlog_json";
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
int status = ToolRunner.run(conf, new HdfsJsonDataToES(), args);
System.exit(status);
}
@Override
public int run(String[] args) throws Exception {
Configuration conf = getConf();
conf.set("fs.defaultFS", "hdfs://HadoopHAcluster");
conf.set("dfs.nameservices", "HadoopHAcluster");
conf.set("dfs.ha.namenodes.HadoopHAcluster", "nn1,nn2");
conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn1", "server1:8020");
conf.set("dfs.namenode.rpc-address.HadoopHAcluster.nn2", "server2:8020");
conf.set("dfs.client.failover.proxy.provider.HadoopHAcluster", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
System.setProperty("HADOOP_USER_NAME", "alanchan");
conf.setBoolean("mapred.map.tasks.speculative.execution", false);
conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
conf.set("es.nodes", "server1:9200,server2:9200,server3:9200");
// Elasticsearch index name; it does not have to be created in advance
conf.set("es.resource", "tomcat_log_2023");
// The data on HDFS is already JSON, so each line can be indexed as-is
conf.set("es.input.json", "yes");
Job job = Job.getInstance(conf, HdfsJsonDataToES.class.getName());
// Set the job driver class
job.setJarByClass(HdfsJsonDataToES.class);
job.setOutputFormatClass(EsOutputFormat.class);
job.setMapperClass(HdfsJsonDataToESMapper.class);
job.setMapOutputKeyClass(NullWritable.class);
job.setMapOutputValueClass(Text.class);
FileInputFormat.setInputPaths(job, new Path(in));
job.setNumReduceTasks(0);
return job.waitForCompletion(true) ? 0 : 1;
}
static class HdfsJsonDataToESMapper extends Mapper<LongWritable, Text, NullWritable, Text> {
protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
log.info("key={},value={}",key,value);
context.write(NullWritable.get(), value);
}
}
}
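In both HDFS-to-ES examples Elasticsearch generates the document ids. If the JSON records carry a natural key, ES-Hadoop can be told to use it through the es.mapping.id setting; a hedged example (the field name "id" is illustrative only, the TomcatLog records above do not have such a field):
conf.set("es.mapping.id", "id"); // use the "id" field of each JSON document as the ES document _id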
This concludes the brief introduction to the ES-Hadoop component, demonstrated through examples of writing data in both directions between Elasticsearch and HDFS.