Chapter 2. Data Migration: Scheduling DataX with DolphinScheduler for a Full Load from MySQL into Hive

Introduction: Requirements

Extract the full contents of the source MySQL table into Hive as the ODS layer; historical changes are not kept.

Source table and sample data:

create table T_YYBZB_TGH_BANKINFO
(
    id        int(8),
    bank_id   int(8),
    bank_name varchar(200)
);

insert into T_YYBZB_TGH_BANKINFO (ID, BANK_ID, BANK_NAME)
values (11, 11, '工商银行(广州)');

1. Create the target table

create table ods_t_yybzb_tgh_bankinfo_di
(
    id        int,
    bank_id   int,
    bank_name string
);

2. Write the scripts

2.1 Write the database configuration file (db_conf)

mysql_username=root
mysql_password=123456
mysql_ip=192.168.6.102
mysql_port=3306
mysql_sid=source
hadoop102_ip=192.168.6.102
hadoop102_port=8020

2.2 Write the DataX job JSON

{
    "job": {
        "content": [
            {
                "reader": {
                    "name": "mysqlreader",
                    "parameter": {
                        "connection": [
                            {
                                "jdbcUrl": ["jdbc:mysql://$ip:$port/$sid"],
                                "querySql": ["select id,bank_id,bank_name from T_YYBZB_TGH_BANKINFO"]
                            }
                        ],
                        "password": "$password",
                        "username": "$username"
                    }
                },
                "writer": {
                    "name": "hdfswriter",
                    "parameter": {
                        "defaultFS": "hdfs://$hdfs_ip:$hdfs_port",
                        "fileType": "text",
                        "path": "/user/hive/warehouse/ods_t_yybzb_tgh_bankinfo_di",
                        "fileName": "ods_t_yybzb_tgh_bankinfo_di",
                        "column": [
                            {"name": "id",        "type": "int"},
                            {"name": "bank_id",   "type": "int"},
                            {"name": "bank_name", "type": "string"}
                        ],
                        "writeMode": "append",
                        "fieldDelimiter": "\t",
                        "encoding": "utf-8"
                    }
                }
            }
        ],
        "setting": {
            "speed": {
                "channel": "1"
            }
        }
    }
}

2.3 Write the shell script that passes the parameters

#!/bin/bash

# Configuration file path
config_file=../db_conf/

# Source system name
src_system=mysql
# Target system name
tag_system=hadoop102

export in_username=`grep -w ${src_system}_username ${config_file} | awk -F '=' '{print $2}'`
export in_password=`grep -w ${src_system}_password ${config_file} | awk -F '=' '{print $2}'`
export in_ip=`grep -w ${src_system}_ip ${config_file} | awk -F '=' '{print $2}'`
export in_port=`grep -w ${src_system}_port ${config_file} | awk -F '=' '{print $2}'`
export in_sid=`grep -w ${src_system}_sid ${config_file} | awk -F '=' '{print $2}'`
export in_hdfs_ip=`grep -w ${tag_system}_ip ${config_file} | awk -F '=' '{print $2}'`
export in_hdfs_port=`grep -w ${tag_system}_port ${config_file} | awk -F '=' '{print $2}'`

pre_day=`date -d -1day +%Y%m%d`
pre_day_mon=`date -d -1day +%Y%m`

echo ${in_username}
echo ${in_password}
echo ${in_ip}
echo ${in_port}
echo ${in_sid}
echo ${in_hdfs_ip}
echo ${in_hdfs_port}
echo ${pre_day}
echo ${pre_day_mon}

# Full load: truncate the target table first
hive -e "truncate table test_;"

# Run the DataX job, passing the connection parameters with -p
python ../../ -p"-Dusername=$in_username -Dpassword=$in_password -Dip=$in_ip -Dport=$in_port -Dsid=$in_sid -Dhdfs_ip=$in_hdfs_ip -Dhdfs_port=$in_hdfs_port" ../json_conf/bank_

3. DolphinScheduler configuration

3.1 Resource center setup

(1) Create the main folder datax_text.

(2) Create three sub-folders: sh_start, json_conf, db_conf.

(3) Upload each file to its corresponding folder:
① the shell script goes into sh_start;
② the bank_ JSON file goes into json_conf;
③ the database configuration file goes into db_conf.

3.2 Project creation

Create projects by business subject area.

3.3 Workflow creation

3.3.1 Scheduling DataX from a shell script (this route did not work out)

(1) Create the workflow: Workflow Definition -> Create Workflow.

(2) Configure the SQL and shell task nodes:
Configure the data source.
Configure the SQL node (truncate the table):

truncate table ods_t_yybzb_tgh_bankinfo_di

Configure the shell node (load the data).

(3) Save the shell node into the workflow.

(4) Schedule and run:
Step 1: bring the workflow online.
Step 2: run it manually.

(5) Check the run status:
Open the workflow canvas and check the status of each sub-task (highly recommended).
Check the status from the workflow instance list.
Check the status from the Gantt chart.

3.3.2 Scheduling with the DataX component

(1) Write the JSON script

{
    "job": {
        "content": [
            {
                "reader": {
                    "name": "mysqlreader",
                    "parameter": {
                        "connection": [
                            {
                                "jdbcUrl": ["jdbc:mysql://${ip}:${port}/${sid}?useSSL=false"],
                                "querySql": ["select id,bank_id,bank_name from T_YYBZB_TGH_BANKINFO"]
                            }
                        ],
                        "password": "${password}",
                        "username": "${username}"
                    }
                },
                "writer": {
                    "name": "hdfswriter",
                    "parameter": {
                        "defaultFS": "hdfs://${hdfs_ip}:${hdfs_port}",
                        "fileType": "text",
                        "path": "/user/hive/warehouse/ods_t_yybzb_tgh_bankinfo_di",
                        "fileName": "ods_t_yybzb_tgh_bankinfo_di",
                        "column": [
                            {"name": "id",        "type": "int"},
                            {"name": "bank_id",   "type": "int"},
                            {"name": "bank_name", "type": "string"}
                        ],
                        "writeMode": "append",
                        "fieldDelimiter": "\t",
                        "encoding": "utf-8"
                    }
                }
            }
        ],
        "setting": {
            "speed": {
                "channel": "1"
            }
        }
    }
}
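Before handing this JSON to the scheduler, it is worth a quick manual run so that connection or path problems surface outside DolphinScheduler. The sketch below is only an illustration under assumptions that are not part of the original setup: DataX is taken to live under /opt/module/datax (the startup directory mentioned in section 3.4), the job above is assumed to be saved locally as bank_info.json (a hypothetical file name), and the -D values simply repeat the db_conf entries from section 2.1.

# Hedged sketch: dry-run the DataX job from the command line before scheduling it.
# Assumed: DataX installed at /opt/module/datax; JSON saved as bank_info.json (hypothetical name).
python /opt/module/datax/bin/datax.py \
  -p "-Dusername=root -Dpassword=123456 -Dip=192.168.6.102 -Dport=3306 -Dsid=source -Dhdfs_ip=192.168.6.102 -Dhdfs_port=8020" \
  bank_info.json

If the run ends with a summary like the one shown in (5) below and the record count matches the source table, the same JSON can be used in the DataX task component as-is.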
(2) Deploy the components

SQL component: truncate the table.

truncate table ods_t_yybzb_tgh_bankinfo_di

DataX component: perform the full load.
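As a sanity check that the Hive data source accepts this statement, the truncate can also be run by hand against the same HiveServer2 endpoint the data source points at. This is only a sketch and not part of the original workflow; the address and user are the ones shown later in section 5.2.

# Hedged sketch: run the SQL component's statement manually through beeline first.
# Endpoint and user taken from section 5.2; adjust them to your cluster.
beeline -u jdbc:hive2://192.168.6.102:10000 -n atguigu \
  -e "truncate table ods_t_yybzb_tgh_bankinfo_di;"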
(3) Pass in the parameters.

(4) Bring it online and run:
Step 1: bring the workflow online.
Step 2: run it manually.

(5) Check the run results

SQL component log:

[INFO] 2021-12-07 16:04:20.697 - [taskAppId=TASK-10-23-85]:[115] - create dir success /tmp/dolphinscheduler/exec/process/4/10/23/85
[INFO] 2021-12-07 16:04:20.750 - [taskAppId=TASK-10-23-85]:[112] - sql task params {"postStatements":[],"connParams":"","receiversCc":"","udfs":"","type":"HIVE","title":"","sql":"truncate table ods_t_yybzb_tgh_bankinfo_di","preStatements":[],"sqlType":"1","sendEmail":false,"receivers":"","datasource":7,"displayRows":10,"limit":10000,"showType":"TABLE","localParams":[]}
[INFO] 2021-12-07 16:04:20.755 - [taskAppId=TASK-10-23-85]:[128] - Full sql parameters: SqlParameters{type='HIVE', datasource=7, sql='truncate table ods_t_yybzb_tgh_bankinfo_di', sqlType=1, sendEmail=false, displayRows=10, limit=10000, udfs='', showType='TABLE', connParams='', title='', receivers='', receiversCc='', preStatements=[], postStatements=[]}
[INFO] 2021-12-07 16:04:20.755 - [taskAppId=TASK-10-23-85]:[129] - sql type : HIVE, datasource : 7, sql : truncate table ods_t_yybzb_tgh_bankinfo_di , localParams : [],udfs : ,showType : TABLE,connParams : , query max result limit : 10000
[INFO] 2021-12-07 16:04:20.765 - [taskAppId=TASK-10-23-85]:[549] - after replace sql , preparing : truncate table ods_t_yybzb_tgh_bankinfo_di
[INFO] 2021-12-07 16:04:20.765 - [taskAppId=TASK-10-23-85]:[558] - Sql Params are replaced sql , parameters:
[INFO] 2021-12-07 16:04:20.767 - [taskAppId=TASK-10-23-85]:[52] - can't find udf function resource
[INFO] 2021-12-07 16:04:20.974 - [taskAppId=TASK-10-23-85]:[458] - prepare statement replace sql : eparedStatement@43ecab0e
DataX component log:

2021-12-07 16:04:36.123 [job-0] INFO  JobContainer - PerfTrace not enable!
2021-12-07 16:04:36.123 [job-0] INFO  StandAloneJobContainerCommunicator - Total 15 records, 134 bytes | Speed 13B/s, 1 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2021-12-07 16:04:36.124 [job-0] INFO  JobContainer -
任务启动时刻 : 2021-12-07 16:04:25
任务结束时刻 : 2021-12-07 16:04:36
任务总计耗时 : 11s
任务平均流量 : 13B/s
记录写入速度 : 1rec/s
读出记录总数 : 15
读写失败总数 : 0
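After a successful run the load can be double-checked outside the scheduler. The following is only a rough sketch; it uses the HDFS path and table name from the job JSON above, and the count it prints should match the 15 records reported in the DataX summary.

# Hedged sketch: confirm what hdfswriter wrote and what Hive can read back.
hdfs dfs -ls /user/hive/warehouse/ods_t_yybzb_tgh_bankinfo_di
hive -e "select count(*) from ods_t_yybzb_tgh_bankinfo_di;"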
3.4 Workflow debugging

(1) Check the run log: Task Instance -> Operation -> View Log shows the detailed log for the run.

(2) Take the task offline: Workflow Definition -> Operation -> Offline.

(3) Adjust the task: edit the node and correct the DataX startup directory for your own environment; in my case it is /opt/module/datax/bin.

(4) Bring the workflow back online and run it again.

5. Common problems

5.1 No available master node

(1) Symptom: running the workflow during debugging fails with "no available master node".

(2) Cause: the MasterServer service has gone down.

(3) Solution: restart the MasterServer service.
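On a typical DolphinScheduler deployment the MasterServer can be restarted with the bundled daemon script; the following is a sketch only, and the install path is an assumption rather than something stated in this chapter.

# Hedged sketch: restart the MasterServer with DolphinScheduler's daemon script.
# /opt/module/dolphinscheduler is an assumed install path; adjust to your environment.
cd /opt/module/dolphinscheduler
sh ./bin/dolphinscheduler-daemon.sh stop master-server
sh ./bin/dolphinscheduler-daemon.sh start master-server
# Confirm the process is back:
jps | grep MasterServer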
5.2 Hive data source configuration

(1) Hive multi-user setup: Hive does not create users of its own; its users are simply the Linux users.

(2) Start the Hive server:

Step 1: run the bin/hiveserver2 script under the Hive home directory.

./hiveserver2

Step 2: open a new Xshell window and connect with beeline.

beeline

# connect to HiveServer2
!connect jdbc:hive2://192.168.6.102:10000

# enter the username and password
Enter username for jdbc:hive2://192.168.6.102:10000: atguigu
Enter password for jdbc:hive2://192.168.6.102:10000: ******

(3) Create the data source in DolphinScheduler.

5.3 Hive data source connection failure

(1) Symptom: the Hive data source fails to connect.

(2) Cause: check whether HiveServer2 is running.

netstat -anp | grep 10000

(3) Solution: start hiveserver2.
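HiveServer2 started in a foreground terminal stops as soon as that session closes, so in practice it is usually launched in the background. A sketch, assuming HIVE_HOME is set as in section 5.7:

# Hedged sketch: start HiveServer2 in the background so it survives the session.
nohup $HIVE_HOME/bin/hiveserver2 > /tmp/hiveserver2.log 2>&1 &
# Wait a bit, then repeat the port check from above:
sleep 30
netstat -anp | grep 10000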
5.4 No task instance appears after the workflow is submitted

Check the DolphinScheduler logs:

current cpu load average 0.35 is too high or available memory 0.14G is too low, under =16.0 and =0.3G
[WARN] 2021-12-06 17:17:45.031 eightHostManager:[159] - worker 192.168.6.102:1234 current cpu load average 0.42 is too high or available memory 0.14G is too low
[WARN] 2021-12-06 17:17:45.032 eightHostManager:[159] - worker 192.168.6.102:1234 current cpu load average 0.42 is too high or available memory 0.14G is too low
[WARN] 2021-12-06 17:17:50.032 eightHostManager:[159] - worker 192.168.6.102:1234 current cpu load average 0.42 is too high or available memory 0.14G is too low
[WARN] 2021-12-06 17:17:50.032 eightHostManager:[159] - worker 192.168.6.102:1234 current cpu load average 0.42 is too high or available memory 0.14G is too low
[WARN] 2021-12-06 17:17:51.317 eatTask:[80] - current cpu load average 0.3 is too high or available memory 0.14G is too low, under =16.0 and =0.3G

Solution: spend a fortune on a better machine; in other words, the host does not have enough free CPU and memory for the worker to accept tasks, so no task instance is created.

5.5 Wrong DataX file path

(1) Symptom: the log complains that the file path does not exist, yet the DataX startup directory does exist on the cluster.

(2) Cause: the DataX path configured in DolphinScheduler cannot be found on the node that ran the task, because DataX was deployed on hadoop102 but not on the 103 and 104 nodes.

(3) Solution: scp the DataX directory to the other nodes of the cluster (see the sketch at the end of this section).

5.6 /bin/sh: java: command not found

(1) Symptom: the shell task fails with "java: command not found".

(2) Cause: not identified yet; still unresolved (the logs are not much help…).

(3) Outcome: gave up on scheduling DataX through the shell script.

5.7 [Errno 2] No such file or directory

(1) Symptom: the task fails with "[Errno 2] No such file or directory".

(2) Cause: the dolphinscheduler_env.sh file has not been configured.

(3) Solution: add the following to dolphinscheduler_env.sh:

#JAVA_HOME
export JAVA_HOME=/opt/module/jdk1.8.0_212
export PATH=$PATH:$JAVA_HOME/bin

#HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin

#HIVE_HOME
export HIVE_HOME=/opt/module/hive
export PATH=$PATH:$HIVE_HOME/bin

#KAFKA_HOME
export KAFKA_HOME=/opt/module/kafka
export PATH=$PATH:$KAFKA_HOME/bin

#HBASE_HOME
export HBASE_HOME=/opt/module/hbase
export PATH=$PATH:$HBASE_HOME/bin

#FLINK_HOME
export FLINK_HOME=/opt/module/flink-1.10.1
export PATH=$PATH:$FLINK_HOME/bin

#SPARK_HOME
export SPARK_HOME=/opt/module/spark-3.1.2
export PATH=$PATH:$SPARK_HOME/bin
export PATH=$PATH:$SPARK_HOME/sbin

#DATAX_HOME
export DATAX_HOME=/opt/module/datax
export PATH=$PATH:$DATAX_HOME/bin

Then distribute it to the other nodes:

xsync dolphinscheduler_env.sh

5.8 AccessControlException: Permission denied: test123

(1) Symptom: the job cannot write the file.

(2) Cause: the scheduler runs the job under the test123 tenant, while the Hive table's HDFS files belong to the atguigu user, so the job lacks permission to write.

(3) Solution: add the atguigu user under Security Center -> Tenant Management, and select the atguigu tenant when saving the workflow.

5.9 MySQL connection failure

(1) Symptom:

DataX cannot connect to the database. Possible causes: 1) the configured ip/port/database/jdbc settings are wrong and the connection cannot be made; 2) the configured username/password is wrong and authentication fails. Please confirm the connection information for this database with the DBA.

(2) Cause: the JDBC parameters are misconfigured.

(3) Solution: add useSSL=false to the JDBC connection string:

"jdbcUrl": ["jdbc:mysql://$ip:$port/$sid?useSSL=false"],
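For the DataX deployment issue in 5.5, copying the install from hadoop102 to the other workers can look roughly like the following. The host names follow the hadoop102/103/104 naming used in this chapter, and the user and path are assumptions.

# Hedged sketch for 5.5: distribute the DataX install from hadoop102 to the other workers.
# User, host names, and path are assumptions based on the naming used in this chapter.
scp -r /opt/module/datax atguigu@hadoop103:/opt/module/
scp -r /opt/module/datax atguigu@hadoop104:/opt/module/
# Verify the startup script is in place on each node:
ssh atguigu@hadoop103 "ls /opt/module/datax/bin/datax.py"
ssh atguigu@hadoop104 "ls /opt/module/datax/bin/datax.py"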