Installing GBase 8a MPP Cluster V9 9.5.3.28 on CentOS 7.9

Environment preparation
Node | Role | OS | IP Address | Specs | GBase Version |
---|---|---|---|---|---|
gbase01.gbase.cn | GCWARE,COOR,DATA | CentOS 7.9 | 192.168.20.142 | 2C4G | GBase 8a MPP Cluster V9 9.5.3.28.12 |
gbase02.gbase.cn | GCWARE,COOR,DATA | CentOS 7.9 | 192.168.20.143 | 2C4G | GBase 8a MPP Cluster V9 9.5.3.28.12 |
gbase03.gbase.cn | GCWARE,COOR,DATA | CentOS 7.9 | 192.168.20.144 | 2C4G | GBase 8a MPP Cluster V9 9.5.3.28.12 |
VMware download: https://support.broadcom.com/group/ecx/productdownloads?subfamily=VMware%20Workstation%20Pro
VirtualBox download: https://www.virtualbox.org/wiki/Downloads
OS: download the CentOS 7.9 installation ISO. Recommended mirror: https://mirrors.aliyun.com/centos/7.9.2009/isos/x86_64/?spm=a2c6h.25603864.0.0.5cb3f5adC5fVSC
GBase 8a MPP Cluster V9 9.5.3.28.12 download: https://www.gbase.cn/download/gbase-8a?category=INSTALL_PACKAGE
gbase01.gbase.cn serves as the installation and management master node.
When installing the OS, it is recommended to tick the "Development Tools" add-on under "Server with GUI" in the Software Selection step.
All node IPs must be in the same subnet and mutually reachable; the SSH service must be enabled; the firewall and SELinux must be disabled; clock synchronization must be enabled.
Check the OS version
# cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)
Configure a static IP
My CentOS 7 VMs tended to change their IP automatically under DHCP, so static IPs are needed.
Edit the NIC config file /etc/sysconfig/network-scripts/ifcfg-ens33:
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="a560d680-6c0a-44eb-b079-6d6677cdc418"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.20.129"
NETMASK="255.255.255.0"
GATEWAY="192.168.20.2"
DNS1="101.226.4.6"
The file above is the modified config for gbase01; on the other nodes change IPADDR accordingly. The fields that must change are the boot mode (static), IP address, netmask, gateway, and DNS:
BOOTPROTO="static"
IPADDR="192.168.20.142"
NETMASK="255.255.255.0"
GATEWAY="192.168.20.2"
DNS1="101.226.4.6"
Make sure the gateway and DNS are correct. The current gateway can be obtained with: ip route | grep default | awk '{print $3}'
The DNS server used here is the ISP's resolver.
After editing, restart the network service:
systemctl restart network
Set the hostname
Do this as root or as a user with sudo rights; root is used here for convenience.
Run the matching command on each node. Note: each node is set separately.
hostnamectl set-hostname gbase01.gbase.cn # gbase01
hostnamectl set-hostname gbase02.gbase.cn # gbase02
hostnamectl set-hostname gbase03.gbase.cn # gbase03
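A quick verification after setting the name (a sketch; the expected values shown are for gbase01):

```shell
hostname       # full name, e.g. gbase01.gbase.cn on the first node
hostname -s    # short name, e.g. gbase01
```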
Edit /etc/hosts
On every node, set /etc/hosts to the following:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
# gbase 8a mpp cluster
192.168.20.142 gbase01.gbase.cn gbase01
192.168.20.143 gbase02.gbase.cn gbase02
192.168.20.144 gbase03.gbase.cn gbase03
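Since the three cluster entries follow a fixed pattern, they can also be generated rather than typed; a small sketch using the IPs from the table above (review the output before appending it to /etc/hosts):

```shell
# Print the three cluster entries; the last IP octet is 141+i, the hostname index is i
for i in 1 2 3; do
  printf '192.168.20.14%d gbase0%d.gbase.cn gbase0%d\n' "$((i + 1))" "$i" "$i"
done
```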
Configure the yum repository
Run on every node.
1. If the nodes can reach the Internet, use a remote mirror so packages can be downloaded directly.
CentOS 7 has reached end of life and the official yum repositories are no longer served, so a third-party mirror is required; the Aliyun mirror is used here:
mkdir /etc/yum.repos.d/yum_bak
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/yum_bak
curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache
2. In an offline environment, mount the ISO image instead:
mkdir /mnt/iso
mount -o loop /dev/sr0 /mnt/iso
# If the ISO is not auto-mounted, upload the ISO file and mount it manually:
mount -o loop /root/CentOS-7.9-x86_64-Everything-2009.iso /mnt/iso
Create the repo file:
mkdir /etc/yum.repos.d/yum_bak
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/yum_bak
# create local-iso.repo
cat << EOF > /etc/yum.repos.d/local-iso.repo
[local-iso]
name=Local ISO Repository
baseurl=file:///mnt/iso
enabled=1
gpgcheck=0
EOF
Rebuild the yum cache:
yum clean all
yum makecache
Configure passwordless SSH
Use the script below to set up mutual passwordless SSH between the nodes; running it once on the installation node is enough.
vim agent.py
python agent.py
When prompted, enter the node list, then the password.
The agent.py script (Python 2, the system default on CentOS 7):
#!/usr/bin/env python
import os
import re
import sys
import pty
import subprocess
import getpass
from threading import Thread
from optparse import OptionParser

SSH_CFG = 'StrictHostKeyChecking=no\nNoHostAuthenticationForLocalhost=yes\n'
TMP_SSH_DIR = '/tmp/.ssh'
SSH_CFG_FILE = TMP_SSH_DIR + '/config'
PRIVATE_KEY = TMP_SSH_DIR + '/id_rsa'
PUB_KEY = TMP_SSH_DIR + '/id_rsa.pub'
AUTH_KEY = TMP_SSH_DIR + '/authorized_keys'

class SSHRemote(object):
    def __init__(self, host, user='', pwd=''):
        self.host = host
        self.user = user
        self.rc = 0
        self.pwd = pwd

    def _execute(self, cmd):
        try:
            master, slave = pty.openpty()
            p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            self.stdout, self.stderr = p.communicate()
            if p.returncode:
                self.rc = p.returncode
            print self.stdout
        except Exception as e:
            err('Failed to run commands on remote host: %s' % e)

    def copy(self, local_folder, remote_folder='.'):
        """ copy file to user's home folder """
        if not os.path.exists(local_folder):
            err('Copy file error: %s doesn\'t exist' % local_folder)
        cmd = []
        if self.pwd: cmd = ['sshpass', '-p', self.pwd]
        cmd += ['scp', '-oStrictHostKeyChecking=no', '-r']
        cmd += [local_folder]
        if self.user:
            cmd += ['%s@%s:%s/' % (self.user, self.host, remote_folder)]
        else:
            cmd += ['%s:%s/' % (self.host, remote_folder)]
        self._execute(cmd)
        if self.rc != 0: err('Failed to copy files to host [%s] using ssh, check your password' % self.host)

def ok(msg):
    print '\n\33[32m***[OK]: %s \33[0m' % msg

def info(msg):
    print '\n\33[33m***[INFO]: %s \33[0m' % msg

def err(msg):
    sys.stderr.write('\n\33[31m***[ERROR]: %s \33[0m\n' % msg)
    sys.exit(1)

def run_cmd(cmd):
    """ check command return value and return stdout """
    p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
    stdout, stderr = p.communicate()
    if p.returncode != 0:
        msg = stderr if stderr else stdout
        err('Failed to run command %s: %s' % (cmd, msg))
    return stdout.strip()

def gen_key_file():
    run_cmd('mkdir -p %s' % TMP_SSH_DIR)
    run_cmd('echo -e "y" | ssh-keygen -t rsa -N "" -f %s' % PRIVATE_KEY)
    run_cmd('cp -f %s %s' % (PUB_KEY, AUTH_KEY))
    with open(SSH_CFG_FILE, 'w') as f:
        f.write(SSH_CFG)
    run_cmd('chmod 600 %s %s; chmod 700 %s' % (SSH_CFG_FILE, AUTH_KEY, TMP_SSH_DIR))

def del_key_file():
    run_cmd('rm -rf %s' % TMP_SSH_DIR)

def get_nodes():
    node_list = ''
    try:
        node_lists = raw_input('Enter node list to set passwordless SSH(separated by comma): ')
        if not node_lists: err('Empty value')
        node_list = ' '.join(expNumRe(node_lists))
    except KeyboardInterrupt:
        info('Aborted ...')
    return node_list.split()

def expNumRe(text):
    """ expand patterns like gbase[01-03] into a list of node names """
    explist = []
    for regex in text.split(','):
        regex = regex.strip()
        r = re.match(r'(.*)\[(\d+)-(\d+)\](.*)', regex)
        if r:
            h = r.group(1)
            d1 = r.group(2)
            d2 = r.group(3)
            t = r.group(4)
            convert = lambda d: str(('%0' + str(min(len(d1), len(d2))) + 'd') % d)
            if int(d1) > int(d2): d1, d2 = d2, d1
            explist.extend([h + convert(c) + t for c in range(int(d1), int(d2) + 1)])
        else:
            # keep original value if not matched
            explist.append(regex)
    return explist

def get_options():
    usage = 'usage: %prog [options]\n'
    usage += ' This tool is used to set up passwordless SSH on specific nodes for specific user.'
    parser = OptionParser(usage=usage)
    parser.add_option("-u", "--user", dest="user", metavar="USER",
                      help="User name to set up passwordless SSH for.")
    (options, args) = parser.parse_args()
    return options

def main():
    del_key_file()
    options = get_options()
    if options.user:
        user = options.user
    else:
        user = getpass.getuser()
    gen_key_file()
    nodes = get_nodes()
    remote_folder = '/root' if user == 'root' else '/home/' + user
    info('Setting up passwordless SSH across nodes [%s] for user [%s]' % (','.join(nodes), user))
    remotes = [SSHRemote(node, user=user, pwd='') for node in nodes]
    for remote in remotes:
        info('Setting up ssh on host [%s]' % remote.host)
        remote.copy(TMP_SSH_DIR, remote_folder)
    del_key_file()
    ok('Success!')

if __name__ == '__main__':
    main()
Install the cluster management tool
Install clustershell on the installation node:
yum --enablerepo=extras install epel-release
yum install clustershell
Verify: pass the managed hosts with -w and check that every node responds normally:
[root@gbase01 ~]# clush -w gbase0[1-3] hostname
gbase01: gbase01.gbase.cn
gbase02: gbase02.gbase.cn
gbase03: gbase03.gbase.cn
This requires that passwordless SSH is already configured across the cluster.
Check dependencies
Check on every node, ideally with a cluster-wide command. The exact list of required packages is in the dependRpms file inside the GBase 8a install package:
yum update -y
rpm -qa | grep psmisc
rpm -qa | grep libcgroup
rpm -qa | grep python2
# ...
yum install -y libcgroup libcgroup-tools psmisc ncurses-libs libdb glibc keyutils-libs libidn libgpg-error libgomp libstdc++ libgcc python-libs libgcrypt nss-softokn-freebl
# If the default python is not python2, point it at python2:
# alternatives --set python /usr/bin/python2
python --version # verify the default version
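Rather than running one `rpm -qa | grep` per package, the whole list can be checked in a loop; a sketch (the package names here are a sample subset, in practice read them from the dependRpms file):

```shell
# Query each package with rpm -q and report only the ones that are missing
pkgs="psmisc libcgroup ncurses-libs libgomp"
for pkg in $pkgs; do
  rpm -q "$pkg" >/dev/null 2>&1 || echo "missing: $pkg"
done
```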
Disk partitioning keeps the installer defaults: default layout, the default xfs filesystem, and swap unchanged.
Disabling CPU hyper-threading and automatic frequency scaling is recommended; these are test VMs, so that was skipped here.
The network must allow all three nodes to reach one another.
Firewall settings
In this test environment the firewall is disabled outright; if it cannot be disabled, open every required port instead. Check on every node:
systemctl status firewalld.service
systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service
Check ports
The default ports used by GBase 8a MPP Cluster services are:
Component | Default Port | Protocol | Description |
---|---|---|---|
Gcluster | 5258 | TCP | service port of GCluster (coordinator) nodes |
Gnode | 5050 | TCP | service port of data cluster nodes |
Gcware | 5918 | TCP/UDP | communication port between gcware nodes |
gcware | 5919 | TCP | port for external connections to gcware nodes |
syncServer | 5288 | TCP | syncServer service port |
GcrecoverMonit | 6268 | TCP | Gcrecover service port |
Remote data export | 16066~16166 | TCP | port range for remote data export |
lsof -i:5258 -i:5050 -i:5918 -i:5919 -i:5288 -i:6268
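If lsof is not installed, ss from iproute2 (present on a default CentOS 7 install) gives an equivalent check; no matching socket line means the ports are free:

```shell
# List listening TCP/UDP sockets and filter for the GBase default ports
ss -lntu | grep -E ':(5258|5050|5918|5919|5288|6268)\b' || echo "ports free"
```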
Check the sshd service
Run on every node:
systemctl status sshd.service
systemctl enable sshd.service
Disable SELinux
Run on every node:
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# verify the change
grep "^SELINUX=" /etc/selinux/config
setenforce 0 # disable immediately for the current boot
# getenforce returns Permissive while temporarily disabled,
# and Disabled (fully off) after a reboot.
getenforce
sestatus
Set virtual memory to unlimited
Run on every node:
cat << EOF >> /etc/security/limits.conf
* soft as unlimited
* hard as unlimited
EOF
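The new limits only apply to sessions started after the change; from a fresh login they can be verified with ulimit (the `as` entry corresponds to the virtual memory limit queried by `-v`):

```shell
# Should report "unlimited" once the limits.conf change is in effect
ulimit -v
```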
Set up chrony clock synchronization
Run on every node:
systemctl status chronyd.service
systemctl start chronyd.service
systemctl enable chronyd.service
For convenience, chronyd is simply started with its defaults here. In an offline environment, or in a large cluster, it is better to have two nodes synchronize with an upstream clock source and have all remaining nodes synchronize with those two; this keeps clock drift across the cluster small.
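In an offline or large cluster, a common layout is to let two nodes sync with an upstream source and have the rest follow them. A sketch of the corresponding /etc/chrony.conf changes (the upstream server ntp.aliyun.com is an assumption; the files are written to /tmp here purely for illustration):

```shell
# On the two time-server nodes (e.g. gbase01/gbase02): sync upstream, serve the subnet
cat > /tmp/chrony-server.conf <<'EOF'
server ntp.aliyun.com iburst
allow 192.168.20.0/24
EOF
# On all remaining nodes: sync with the two internal servers
cat > /tmp/chrony-client.conf <<'EOF'
server 192.168.20.142 iburst
server 192.168.20.143 iburst
EOF
```

After merging such lines into /etc/chrony.conf, restart chronyd and check the peers with `chronyc sources`.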
Install GBase 8a
Create the DBA account
Create it on every node:
useradd gbase
echo "gbase:Gbase@2024" | chpasswd
The DBA account's username, password, uid, and gid must be identical on every node, otherwise the installation fails:
[root@gbase01 opt]# clush -w gbase0[1-3] id gbase
gbase02: uid=1001(gbase) gid=1001(gbase) groups=1001(gbase)
gbase03: uid=1001(gbase) gid=1001(gbase) groups=1001(gbase)
gbase01: uid=1001(gbase) gid=1001(gbase) groups=1001(gbase)
Extract the GBase 8a install package
Upload the package to gbase01:/opt (gbase01 is the master node) and extract it; bzip2 is needed for extraction:
yum install -y bzip2
cd /opt
tar xjf GBase8a_MPP_Cluster-NoLicense-FREE-9.5.3.28.12-redhat7-x86_64.tar.bz2
chown -R gbase:gbase /opt/gcinstall
Create the GBase 8a directory
Run on every node:
mkdir -p /opt/gbase
chown -R gbase:gbase /opt/gbase
Configure the system environment with SetSysEnv.py
The vendor ships a python script, SetSysEnv.py, that configures the system parameter files in one step:
Configures the /etc/sysctl.conf file
Configures the /etc/security/limits.conf file
Configures the /etc/pam.d/su file
Configures the /etc/security/limits.d/*-nproc.conf files
Configures the /etc/cgconfig.conf file
Use clustershell to copy the script to /opt on every node and run it:
clush -w gbase0[1-3] --copy /opt/gcinstall/SetSysEnv.py --dest /opt/
clush -w gbase0[1-3] "python /opt/SetSysEnv.py --dbaUser=gbase --installPrefix=/opt/gbase --cgroup"
Parameter | Description |
---|---|
--dbaUser=gbase | the DBA user is gbase; must match dbaUser in demo.options |
--installPrefix=/opt/gbase | install directory is /opt/gbase; must match installPrefix in demo.options |
--cgroup | enable the resource management feature, provided by libcgroup-tools |
Edit the install configuration demo.options
On the installation node gbase01:
su - gbase
cp /opt/gcinstall/demo.options /opt/gcinstall/demo.options_bak
cat << EOF > /opt/gcinstall/demo.options
installPrefix= /opt/gbase
coordinateHost = 192.168.20.142,192.168.20.143,192.168.20.144
coordinateHostNodeID = 101,102,103
dataHost = 192.168.20.142,192.168.20.143,192.168.20.144
#existCoordinateHost =
#existDataHost =
#existGcwareHost=
gcwareHost = 192.168.20.142,192.168.20.143,192.168.20.144
gcwareHostNodeID = 201,202,203
dbaUser = gbase
dbaGroup = gbase
dbaPwd = 'Gbase@2024'
rootPwd = 'Gbase@2024'
#dbRootPwd = ''
#rootPwdFile = rootPwd.json
#characterSet = utf8
#sshPort = 22
EOF
Note: V953 differs from V952 in that the gcware module can now be deployed on its own and no longer has to be co-located with the gcluster nodes, so demo.options gains the gcware-related parameters gcwareHost and gcwareHostNodeID. dbaPwd is the password of the gbase account; rootPwd is the password of the root account.
Run the install script
On the installation node, execute as the DBA user gbase:
su - gbase
cd /opt/gcinstall
./gcinstall.py --silent=demo.options
cat dependRpms
Confirm the prompts with Y to proceed.
On success the installer prints: InstallCluster Successfully
If the installer instead shows:
command "lssubsys" not found on host [192.168.20.142]
command "lssubsys" not found on host [192.168.20.144]
command "lssubsys" not found on host [192.168.20.143]
Cgconfig service is not exist on host ['192.168.20.142', '192.168.20.143', '192.168.20.144'], resource manangement can not be used, continue ([Y,y]/[N,n])?
The cause is that cgroups support is missing on the system; either enter y to continue without it, or install libcgroup-tools first.
Check the result
The environment variables set during installation are not loaded yet, so a fresh login is required.
Log out and back in, then check the cluster with gcadmin; both gcware and gcluster must show OPEN to be healthy.
exit
su - gbase
gcadmin
Output:
[gbase@gbase01 ~]$ gcadmin
CLUSTER STATE: ACTIVE
======================================
| GBASE GCWARE CLUSTER INFORMATION |
======================================
| NodeName | IpAddress | gcware |
--------------------------------------
| gcware1 | 192.168.20.142 | OPEN |
--------------------------------------
| gcware2 | 192.168.20.143 | OPEN |
--------------------------------------
| gcware3 | 192.168.20.144 | OPEN |
--------------------------------------
========================================================
| GBASE COORDINATOR CLUSTER INFORMATION |
========================================================
| NodeName | IpAddress | gcluster | DataState |
--------------------------------------------------------
| coordinator1 | 192.168.20.142 | OPEN | 0 |
--------------------------------------------------------
| coordinator2 | 192.168.20.143 | OPEN | 0 |
--------------------------------------------------------
| coordinator3 | 192.168.20.144 | OPEN | 0 |
--------------------------------------------------------
===============================================================
| GBASE CLUSTER FREE DATA NODE INFORMATION |
===============================================================
| NodeName | IpAddress | gnode | syncserver | DataState |
---------------------------------------------------------------
| FreeNode1 | 192.168.20.143 | OPEN | OPEN | 0 |
---------------------------------------------------------------
| FreeNode2 | 192.168.20.142 | OPEN | OPEN | 0 |
---------------------------------------------------------------
| FreeNode3 | 192.168.20.144 | OPEN | OPEN | 0 |
---------------------------------------------------------------
0 virtual cluster
3 coordinator node
3 free data node
Configure distribution (sharding)
Generate the distribution (on a coordinator node)
gcadmin distribution <gcChangeInfo.xml> <p number> [d number] [pattern 1 | 2]
gcChangeInfo.xml: a file describing the mapping between cluster nodes and racks; it resides in the gcinstall directory by default.
p: the number of primary segments stored on each data node. Note: under pattern 1, p must satisfy 1 <= p < number of nodes in the rack.
d: the number of replicas per primary segment; valid values are 0, 1, or 2, default 1.
pattern: the template describing the replica placement rule; 1 means rack-level high availability, 2 means node-level high availability. Default 1.
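For the 3-node cluster in this guide, run with p 2 and d 1, the segment counts can be worked out directly; a small sanity-check sketch:

```shell
# 3 nodes x 2 primaries each = 6 primary segments in total;
# d=1 means every segment exists in 1+1 = 2 copies (primary + replica)
nodes=3; p=2; d=1
total=$((nodes * p))
copies=$((1 + d))
echo "$total primary segments, each stored $copies times"   # 6 primary segments, each stored 2 times
```

This matches the "Total segment num: 6" reported by gcadmin showdistribution.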
Create the data distribution
1. Edit /opt/gcinstall/gcChangeInfo.xml in the install directory so that all nodes belong to one rack:
<?xml version="1.0" encoding="utf-8"?>
<servers>
<rack>
<node ip="192.168.20.143"/>
<node ip="192.168.20.142"/>
<node ip="192.168.20.144"/>
</rack>
</servers>
2. Run in the install directory:
gcadmin distribution gcChangeInfo.xml p 2 d 1 pattern 1
This creates the distribution: each node holds 2 primary segments, each segment has 1 replica, with rack-level high availability.
3. Check the cluster again
A DistributionId column now appears:
[gbase@gbase01 gcinstall]$ gcadmin
CLUSTER STATE: ACTIVE
VIRTUAL CLUSTER MODE: NORMAL
======================================
| GBASE GCWARE CLUSTER INFORMATION |
======================================
| NodeName | IpAddress | gcware |
--------------------------------------
| gcware1 | 192.168.20.142 | OPEN |
--------------------------------------
| gcware2 | 192.168.20.143 | OPEN |
--------------------------------------
| gcware3 | 192.168.20.144 | OPEN |
--------------------------------------
========================================================
| GBASE COORDINATOR CLUSTER INFORMATION |
========================================================
| NodeName | IpAddress | gcluster | DataState |
--------------------------------------------------------
| coordinator1 | 192.168.20.142 | OPEN | 0 |
--------------------------------------------------------
| coordinator2 | 192.168.20.143 | OPEN | 0 |
--------------------------------------------------------
| coordinator3 | 192.168.20.144 | OPEN | 0 |
--------------------------------------------------------
=========================================================================================================
| GBASE DATA CLUSTER INFORMATION |
=========================================================================================================
| NodeName | IpAddress | DistributionId | gnode | syncserver | DataState |
---------------------------------------------------------------------------------------------------------
| node1 | 192.168.20.143 | 1 | OPEN | OPEN | 0 |
---------------------------------------------------------------------------------------------------------
| node2 | 192.168.20.142 | 1 | OPEN | OPEN | 0 |
---------------------------------------------------------------------------------------------------------
| node3 | 192.168.20.144 | 1 | OPEN | OPEN | 0 |
---------------------------------------------------------------------------------------------------------
View the distribution with gcadmin showdistribution node:
[gbase@gbase01 gcinstall]$ gcadmin showdistribution node
Distribution ID: 1 | State: new | Total segment num: 6
====================================================================================================================================
| nodes | 192.168.20.143 | 192.168.20.142 | 192.168.20.144 |
------------------------------------------------------------------------------------------------------------------------------------
| primary | 1 | 2 | 3 |
| segments | 4 | 5 | 6 |
------------------------------------------------------------------------------------------------------------------------------------
|duplicate | 3 | 1 | 2 |
|segments 1| 5 | 6 | 4 |
====================================================================================================================================
After the database has been initialized, the same information can be queried directly from the database:
gbase> show nodes;
+------------+----------------+-------+--------------+----------------+--------+-----------+
| Id | ip | name | primary part | duplicate part | status | datastate |
+------------+----------------+-------+--------------+----------------+--------+-----------+
| 2400495808 | 192.168.20.143 | node1 | n1,n4 | n3,n5 | online | 0 |
| 2383718592 | 192.168.20.142 | node2 | n2,n5 | n1,n6 | online | 0 |
| 2417273024 | 192.168.20.144 | node3 | n3,n6 | n2,n4 | online | 0 |
+------------+----------------+-------+--------------+----------------+--------+-----------+
3 rows in set (Elapsed: 00:00:00.00)
Or query the segment placement of a specific table (mydb.t1 in this example):
select * from information_schema.CLUSTER_TABLE_SEGMENTS a where table_schema='mydb' and table_name='t1';
Initialize the database
On the management node, log in with gccli -u root -p and run the initnodedatamap command; the default root password is empty:
[gbase@gbase01 gcinstall]$ gccli -u root -p
Enter password:
GBase client Free Edition 9.5.3.28.12509af27. Copyright (c) 2004-2024, GBase. All Rights Reserved.
gbase> initnodedatamap;
Query OK, 1 row affected (Elapsed: 00:00:00.44)
gbase> exit
Bye
The database installation is now complete.
Using GBase 8a
Connect
Connect with the gbase client; -D selects the database, -h the host, -u the user, -p the password, and -P the port (5258 is the gcluster service port):
gbase -D gbase -h 192.168.20.142 -u root -P 5258
gbase -Dmydb -hgbase01 -umyuser -pMy@2024 -P 5258