Common GBase 8a Operations Commands
Switching the cluster mode:
1. Set the cluster mode to read-only
gcadmin switchmode readonly
2. Set the cluster mode back to normal
gcadmin switchmode normal
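A read-only window is usually wrapped around maintenance work so the cluster always returns to normal mode, even if the task fails. A minimal sketch (only the two gcadmin commands above are GBase-specific; the wrapper itself is plain Python):

```python
import subprocess

VALID_MODES = ("readonly", "normal")

def switchmode_cmd(mode: str) -> list:
    """Build the gcadmin argument list for switching the cluster mode."""
    if mode not in VALID_MODES:
        raise ValueError(f"unsupported cluster mode: {mode!r}")
    return ["gcadmin", "switchmode", mode]

def run_in_readonly_window(task):
    """Switch to readonly, run the task, then always switch back to normal."""
    subprocess.run(switchmode_cmd("readonly"), check=True)
    try:
        task()
    finally:
        # restore normal mode even when the task raises
        subprocess.run(switchmode_cmd("normal"), check=True)
```

The `finally` block is the point: a failed maintenance task must not leave the cluster stuck in read-only mode.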
Creating users and granting privileges:
1. Create a user
create user <username> identified by '<password>';
2. Grant database-level privileges to the user (first pick the database the grant applies to)
grant all on <database>.* to <username>;
3. Grant the user the privilege to export data (a global-level privilege)
grant file on *.* to <username>;
4. Show a given user's privileges (here, user slp)
show grants for slp;
5. Show the current user and its privileges
show grants for current_user();
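When many users are provisioned, the create/grant steps above are easier to script. The sketch below only assembles the SQL strings (the identifier check is a plain sanity filter, not a full SQL escaper, and slp/testdb below are hypothetical names):

```python
import re

# conservative identifier whitelist: letters, digits, underscore
IDENT = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def _check(name: str) -> str:
    if not IDENT.match(name):
        raise ValueError(f"unsafe identifier: {name!r}")
    return name

def create_user_sql(user: str, password: str) -> str:
    pw = password.replace("'", "''")  # escape single quotes in the literal
    return f"create user {_check(user)} identified by '{pw}';"

def grant_db_sql(user: str, db: str) -> str:
    return f"grant all on {_check(db)}.* to {_check(user)};"

def grant_file_sql(user: str) -> str:
    return f"grant file on *.* to {_check(user)};"
```

Example: `grant_db_sql("slp", "testdb")` yields `grant all on testdb.* to slp;`.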
Creating tables:
1. In your own database, copy table userinfo into a hash-distributed table hashinfo, with no as the hash distribution column
create table hashinfo distributed by ('no') as select * from userinfo;
2. In your own database, copy table userinfo into a replicated table replinfo
create table replinfo replicated as select * from userinfo;
3. Copy the structure and table type of userinfo into a new table infocopy, without data
create table infocopy like userinfo;
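The three CREATE TABLE variants above differ only in the distribution clause. A small helper makes that choice explicit (a sketch that builds the statement strings; table and column names are those from the examples):

```python
def copy_table_sql(new_table: str, src_table: str,
                   hash_column: str = None,
                   replicated: bool = False,
                   structure_only: bool = False) -> str:
    """Build one of the three table-copy statements shown above."""
    if structure_only:
        # structure and table type only, no data
        return f"create table {new_table} like {src_table};"
    if hash_column is not None:
        # hash-distributed copy with data
        return (f"create table {new_table} distributed by ('{hash_column}') "
                f"as select * from {src_table};")
    if replicated:
        # full copy of the data on every node
        return f"create table {new_table} replicated as select * from {src_table};"
    return f"create table {new_table} as select * from {src_table};"
```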
Inspecting database information:
1. Show the system variables related to compression
show variables like '%compress%';
2. List all tables in the test database of the current cluster
select * from information_schema.tables where TABLE_SCHEMA ='test';
3. List all views in the current cluster
select * from information_schema.tables where TABLE_TYPE like 'VIEW';
4. Check data-loading progress
select TB_NAME,IP,ELAPSED_TIME,AVG_SPEED,PROGRESS,SKIPPED_RECORDS from information_schema.load_status order by tb_name;
5. Check table sizes
select table_schema,table_name,table_data_size,table_storage_size from information_schema.cluster_tables where table_schema='test' and table_name='test';
6. Check column attributes
select column_name,table_name,column_type from information_schema.columns;
7. Check hash distribution keys (note: the hash_column is null filter lists the tables in usr_sod that have no hash distribution column; drop that filter to see the hash keys themselves)
select dbName,tbName,isReplicate,hash_column from gbase.table_distribution where dbName = 'usr_sod' and hash_column is null;
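The size columns in step 5 (table_data_size / table_storage_size) are raw numbers, assumed here to be byte counts (check your version's documentation). A small formatter makes the output readable:

```python
def human_size(n: int) -> str:
    """Format a byte count with binary units (1 KiB = 1024 bytes)."""
    units = ["B", "KiB", "MiB", "GiB", "TiB"]
    size = float(n)
    for unit in units:
        if size < 1024 or unit == units[-1]:
            return f"{size:.1f} {unit}"
        size /= 1024
```

Example: a table_storage_size of 3221225472 prints as 3.0 GiB.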
Loading and exporting data:
1. Load delimited text
load data infile 'ftp://<user>:<password>@10.33.37.130//home/data/bms.dat' into table test.bms data_format 3 character set gbk null_value 'nullnull' fields terminated by '|' datetime format '%Y-%m-%d日 %H:%i:%s' lines terminated by '\n' autofill trace 1;
2. Load fixed-length text
LOAD DATA INFILE 'http://127.0.0.1/data/b.tbl' INTO TABLE test.b DATA_FORMAT 4 FIELDS DEFINER '4,6,10,12,1,19' TABLE_FIELDS 'no,pwd,birth date "%Y年%m月%d日",filler,sex,loginTime date "%Y-%m-%d %T"';
3. Export to a local file
rmt:select * from nation into outfile '/home/gbase/new.txt' FIELDS TERMINATED BY '|' ENCLOSED BY '"' lines terminated by '@@@@@';
4. Load from HDFS with high-availability namenodes
When the namenode runs as an active/standby pair, set the following parameter to the namenode IPs:
set gbase_hdfs_namenodes="192.168.10.1,192.168.10.2";
Load statement: <user>@<active namenode IP>:50070/<absolute HDFS path of the data file>, then <database>.<table>
load data infile 'hdp://root@192.168.10.1:50070/why/test6/test.txt' into table test.test1 ;
5. Export to HDFS
First add the Hadoop host names to /etc/hosts.
select * from test.test1 into outfile 'hdp://root@10.10.3.117:50070/why/test1' outfilemode by hdfs fields terminated by '|' ENCLOSED by '"' ;
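The LOAD DATA options above stack up quickly. The sketch below assembles a delimited-text load statement from keyword options (option names and defaults follow the examples above; it covers only the options shown and does not validate them against the server):

```python
def load_data_sql(url: str, table: str, *,
                  data_format: int = 3,
                  charset: str = None,
                  null_value: str = None,
                  fields_sep: str = "|",
                  lines_sep: str = "\\n",
                  autofill: bool = False) -> str:
    """Assemble a delimited-text LOAD DATA INFILE statement."""
    parts = [f"load data infile '{url}'",
             f"into table {table}",
             f"data_format {data_format}"]
    if charset:
        parts.append(f"character set {charset}")
    if null_value is not None:
        parts.append(f"null_value '{null_value}'")
    parts.append(f"fields terminated by '{fields_sep}'")
    parts.append(f"lines terminated by '{lines_sep}'")
    if autofill:
        parts.append("autofill")
    return " ".join(parts) + ";"
```

Example: `load_data_sql("ftp://u:p@10.33.37.130//home/data/bms.dat", "test.bms", charset="gbk", null_value="nullnull")` reproduces the shape of the first load statement above.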
Reclaiming space:
Release table space after DML operations
alter table <table> shrink space full;
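To reclaim space across a whole schema, you would fetch the table names (e.g. from information_schema.tables) and emit one shrink statement per table. A sketch of the statement generation, with the schema and table names as inputs:

```python
def shrink_sql(schema: str, table: str) -> str:
    """Build the shrink statement for one table."""
    return f"alter table {schema}.{table} shrink space full;"

def shrink_all(schema: str, tables) -> list:
    """One shrink statement per table in the schema."""
    return [shrink_sql(schema, t) for t in tables]
```

Run the resulting statements one at a time during a quiet window; shrink rewrites table storage and is best kept out of peak hours.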