Copying ACLs

To copy one directory's ACL settings onto another directory, pipe getfacl into setfacl.

For example, to copy the ACLs of d1 onto d2:

getfacl d1 | setfacl -b -R -n -M - d2

Note: avoid absolute paths here, as setfacl sometimes errors out on them.
If d1 lives at /root/d1, cd to / or /root first and use relative paths instead.
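Put together, a session might look like the sketch below. The getfacl/setfacl line is kept as a dry run (stored in a variable and echoed), since actually running it needs the acl package and real ACL entries; the directory names are placeholders:

```shell
BASE=$(mktemp -d)        # stand-in for the parent directory (e.g. /root)
mkdir "$BASE/d1" "$BASE/d2"
cd "$BASE"               # work from the parent so relative paths can be used
# dry run of the copy; drop the echo and run the command itself for real
CMD="getfacl d1 | setfacl -b -R -n -M - d2"
echo "$CMD"
```

The -b flag clears d2's existing ACLs first, -R applies recursively, and -M - reads the ACL spec from stdin (i.e. from getfacl's output).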

Configuring Ceph on CentOS 7, including object storage and CephFS

I went through a lot of CentOS 7 setup guides and tried them many times; every one of them was incomplete in some way, so in the end I consolidated my own version.

  • My environment

Two machines: one acting as admin, the other as storage.

Both have CentOS 7 installed and IPs configured so the two machines can ping each other.

 

  • Commands to run on both nodes

Install an NTP client and sync the time

yum -y install ntpdate
ntpdate time.windows.com

 

Add entries to /etc/hosts to simplify the commands that follow

echo "10.4.10.233 storage">> /etc/hosts #for storage
echo "10.4.10.234 admin">> /etc/hosts #for admin
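Re-running those echo lines appends duplicate entries; a small guard keeps the step idempotent. A sketch against a temp file standing in for /etc/hosts:

```shell
HOSTS=$(mktemp)                      # stand-in for /etc/hosts
add_host() {                         # append "ip name" only if name is absent
  grep -qw "$2" "$HOSTS" || echo "$1 $2" >> "$HOSTS"
}
add_host 10.4.10.233 storage
add_host 10.4.10.234 admin
add_host 10.4.10.233 storage         # re-running is a no-op
```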

 

Set each machine's hostname to its node name. CentOS 7 makes this easy: just edit /etc/hostname, then log in again for the change to take effect.

vi /etc/hostname

 

Disable SELinux

setenforce 0
sed -i 's/SELINUX.*=.*enforcing/SELINUX=disabled/g' /etc/selinux/config
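The sed quoting is easy to get wrong, so it is worth dry-running the substitution against a copy of the file first. A sketch with a fabricated two-line config:

```shell
CONF=$(mktemp)                       # stand-in for /etc/selinux/config
printf 'SELINUXTYPE=targeted\nSELINUX=enforcing\n' > "$CONF"
sed -i 's/SELINUX.*=.*enforcing/SELINUX=disabled/g' "$CONF"
grep '^SELINUX=' "$CONF"             # should now read SELINUX=disabled
```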

 

Open the ports Ceph needs in the firewall

firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent
firewall-cmd --reload

#Alternatively, simply stop the firewall entirely

systemctl stop firewalld.service

 

Add a ceph user

useradd -m ceph
passwd ceph #Set password for ceph user

 

Grant the ceph user sudo rights: run visudo and append the following at the end of the file

Defaults:ceph !requiretty #To be able to run commands as superuser without tty
ceph ALL=(ALL) NOPASSWD:ALL #Allow this user to run commands as superuser without password

 

 

  • Run the following as root on the admin node

Add the Ceph YUM repository

rpm -Uhv http://ceph.com/rpm-giant/el7/noarch/ceph-release-1-0.el7.noarch.rpm

Update the system and install the ceph-deploy tool

yum update -y && yum install ceph-deploy -y

 

  • Run the following as the ceph user on the admin node

Create an SSH key and copy it over so the other nodes can be reached without a password

ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
ssh-copy-id ceph@storage

 

Edit ~/.ssh/config and add the following, so SSH connects to the storage node as the ceph user. Note that this file must have mode 600.

Host storage
Hostname storage
User ceph
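The config-plus-permissions step can be scripted; sketched here against a temp file standing in for ~/.ssh/config:

```shell
SSHCONF=$(mktemp)          # stand-in for ~/.ssh/config
cat > "$SSHCONF" <<'EOF'
Host storage
    Hostname storage
    User ceph
EOF
chmod 600 "$SSHCONF"       # ssh rejects the file if others can access it
stat -c '%a' "$SSHCONF"    # prints 600
```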

Create a working directory to hold the files generated during Ceph configuration; the directory name is arbitrary

mkdir ~/ceph-cluster && cd ~/ceph-cluster

Initialize the Ceph configuration

ceph-deploy new storage

After the command above completes, a few files appear in the current directory. Edit ceph.conf among them and append the following (a single replica matches this one-OSD setup; the public network setting allows access from outside):

osd pool default size = 1
public network = 10.4.10.0/24

Install Ceph. ceph-deploy installs it on the specified nodes automatically; this command only needs to be run once, on the admin node.

ceph-deploy install admin storage

Initialize the monitor

ceph-deploy mon create-initial

Gather the keys from the storage node

ceph-deploy gatherkeys storage

 

#List the disks on the node; for the install to succeed, delete any existing partitions on the data disks first

ceph-deploy disk list storage

Proceed once this command shows lines like /dev/sdc unknown.

 

Zap (initialize) the data disks

ceph-deploy disk zap storage:sdb
ceph-deploy disk zap storage:sdc

 

Prepare the OSD storage devices

ceph-deploy osd prepare storage:sdc:/dev/sdb

Here sdc is the data disk, with sdb serving as sdc's journal disk.

Of course, you can also run

ceph-deploy osd prepare storage:sdc

in which case sdc acts as its own journal disk, which hurts performance badly.
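With several data disks sharing one journal disk, the zap/prepare pairs can be generated in a loop. A sketch with hypothetical disk names, collected as a dry-run plan so nothing gets wiped by accident:

```shell
JOURNAL=/dev/sdb                     # shared journal disk (hypothetical)
PLAN=""
for disk in sdc sdd; do              # data disks (hypothetical)
  PLAN="$PLAN
ceph-deploy disk zap storage:$disk
ceph-deploy osd prepare storage:$disk:$JOURNAL"
done
echo "$PLAN"                         # review the plan before running any of it
```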

 

Activate the OSD

ceph-deploy osd activate storage:/dev/sdc1:/dev/sdb1

 

Distribute the configuration to all nodes

ceph-deploy admin admin storage

This command copies the config files to /etc/ceph on each node.

Set the correct permissions

sudo chmod +r /etc/ceph/ceph.client.admin.keyring

 

Create the MDS

ceph-deploy mds create storage

 

Check the status

ceph -s

As long as it does not report HEALTH_ERR, you are fine;
normally it shows HEALTH_OK.

 

  • Configuring object storage

Create a data pool; poolname can be anything you like

ceph osd pool create poolname 128

 

That's all there is to it. Test it:

#PUT
echo "I am YQ:)" > test.txt
rados put test test.txt --pool=poolname

#LS
rados ls --pool=poolname

#GET
rm test.txt
rados get --pool=poolname test test.txt
cat test.txt

#REMOVE
rados rm test --pool=poolname

 

#And how do you delete a pool? Note the name must be typed twice, plus the confirmation flag

ceph osd pool delete poolname poolname --yes-i-really-really-mean-it

 

  • Configuring CephFS

CephFS needs some setup of its own; mounting the cluster as-is fails with error=5 input/output error.
Create two pools, one for CephFS file data and one for its metadata

ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128

 

Create the CephFS filesystem

ceph fs new cephfs cephfs_metadata cephfs_data

 

Check the various statuses

ceph mds stat

This should show something like: e5: 1/1/1 up {0=storage=up:active}

ceph -s

    cluster c89d8b75-0ade-499d-b472-923a6d5671af
     health HEALTH_WARN
            too many PGs per OSD (448 > max 300)
     monmap e1: 1 mons at {storage=10.4.10.233:6789/0}
            election epoch 1, quorum 0 storage
     mdsmap e5: 1/1/1 up {0=storage=up:active}
     osdmap e18: 1 osds: 1 up, 1 in
      pgmap v37: 448 pgs, 4 pools, 1978 bytes data, 21 objects
            37444 kB used, 1862 GB / 1862 GB avail
                 448 active+clean
  client io 546 B/s wr, 2 op/s

ceph osd tree

ID WEIGHT  TYPE NAME         UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 1.81999 root default
-2 1.81999     host storage
 0 1.81999         osd.0          up  1.00000          1.00000
At this point, any client machine whose kernel supports CephFS can mount the filesystem:

mkdir /mnt/cephfs
mount -t ceph storage:6789:/ /mnt/cephfs -o name=admin,secret=AQABV4BVkn66AhAAdvNPjWuf9o1IZSYKMelT6Q==

The secret value comes from /etc/ceph/ceph.client.admin.keyring.

The stock CentOS 7 kernel does not have CephFS enabled, so a rebuilt kernel is needed.
If you just want to test the mount on one of the cluster nodes, though, it is easy: installing Ceph earlier already pulled in a kernel with CephFS enabled, so simply run grub2-set-default 0 and reboot to boot into that kernel.
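Rather than pasting the secret by hand, it can be pulled out of the keyring with awk. A sketch against a fabricated keyring file in the usual format:

```shell
KEYRING=$(mktemp)          # stand-in for /etc/ceph/ceph.client.admin.keyring
cat > "$KEYRING" <<'EOF'
[client.admin]
    key = AQABV4BVkn66AhAAdvNPjWuf9o1IZSYKMelT6Q==
EOF
SECRET=$(awk '/key *= */{print $3}' "$KEYRING")
echo "$SECRET"
```

The mount line then becomes mount -t ceph storage:6789:/ /mnt/cephfs -o name=admin,secret="$SECRET".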

Installing CentOS 7 from a USB stick

I never thought installing a system from a USB stick was hard, until CentOS 7 put me through all kinds of pain today.

Recording it here for future reference.

First, creating the USB stick: do not use the usual Windows tools; a tool that can produce a Win7 boot stick will not necessarily work for CentOS 7.

The most reliable way is to find a Linux machine or VM and run

dd bs=4M if=CentOS-7.0-1406-x86_64-DVD.iso of=/dev/sdd

 

During installation you will most likely hit a /dev/root does not exist error and drop into dracut, unable to continue.

At that point, run ls /dev to determine the USB stick's device name; suppose it turns out to be sdb.

Then reboot the machine.

On the "Install CentOS 7" menu entry, press TAB (on some hardware it is the e key) and change

inst.stage2=hd:LABEL=CentOS\x207\x20x86_64 quiet 

to

inst.stage2=hd:/dev/sdb quiet

 

Press Enter and installation proceeds normally.

 

Build error [php_screw.lo] Error 1 when compiling php_screw

  1. yum install php-devel
  2. phpize
  3. ./configure
  4. vi my_screw.h (change the encryption seed however you like)
  5. vi php_screw.c
    Search and replace throughout the file:
    org_compile_file(file_handle, type); -> org_compile_file(file_handle, type TSRMLS_CC);
    CG(extended_info) = 1; -> CG(compiler_options) |= ZEND_COMPILE_EXTENDED_INFO;
  6. make
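The two substitutions in step 5 can be done with sed instead of editing by hand. This sketch runs them against a stub file containing just the two affected lines, rather than the real php_screw.c:

```shell
SRC=$(mktemp)              # stand-in for php_screw.c
cat > "$SRC" <<'EOF'
    org_compile_file(file_handle, type);
    CG(extended_info) = 1;
EOF
sed -i \
  -e 's/org_compile_file(file_handle, type);/org_compile_file(file_handle, type TSRMLS_CC);/' \
  -e 's/CG(extended_info) = 1;/CG(compiler_options) |= ZEND_COMPILE_EXTENDED_INFO;/' "$SRC"
cat "$SRC"
```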

NFS fails to start with svc_tli_create: could not open connection for tcp6/udp6

Comment out the tcp6 and udp6 lines in /etc/netconfig:

udp        tpi_clts      v     inet     udp     -       -
tcp        tpi_cots_ord  v     inet     tcp     -       -
#udp6       tpi_clts      v     inet6    udp     -       -
#tcp6       tpi_cots_ord  v     inet6    tcp     -       -
rawip      tpi_raw       -     inet      -      -       -
local      tpi_cots_ord  -     loopback  -      -       -
unix       tpi_cots_ord  -     loopback  -      -       -
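Commenting those lines can also be scripted; sketched here against a temp copy rather than the real /etc/netconfig:

```shell
NETCONF=$(mktemp)          # stand-in for /etc/netconfig
cat > "$NETCONF" <<'EOF'
udp        tpi_clts      v     inet     udp     -       -
tcp        tpi_cots_ord  v     inet     tcp     -       -
udp6       tpi_clts      v     inet6    udp     -       -
tcp6       tpi_cots_ord  v     inet6    tcp     -       -
EOF
sed -i 's/^\(udp6\|tcp6\)/#\1/' "$NETCONF"
grep '^#' "$NETCONF"       # the two IPv6 entries, now commented out
```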

 

Binding urllib2 to a specific NIC in Python

I needed to test load balancing, so I ran several Python test clients on a multi-NIC machine, yet all the traffic kept leaving through the same NIC, and the load balancer kept forwarding everything to the same backend node.
How do you make a Python program send and receive on a chosen NIC?
After searching Google and Stack Overflow,
I finally found the answer:
just add the following snippet before any urllib2 calls.

import socket

true_socket = socket.socket
def bound_socket(*a, **k):
    # create the socket as usual, then bind it to the chosen NIC's address
    sock = true_socket(*a, **k)
    sock.bind(("10.4.10.194", 0))  # port 0 = any free local port
    return sock
socket.socket = bound_socket

 

Building OpenStack on CentOS 7: 3, installing the Identity Service on the Controller Node

There are several Identity Service concepts worth understanding (user, credentials, authentication, token, tenant, service, endpoint, role); if you ever want to drive OpenStack through its HTTP API, this background is essential.

OpenStack Identity concepts – OpenStack Installation Guide for Red Hat Enterprise Linux 7, CentOS 7, and Fedora 20 – juno

User
Digital representation of a person, system, or service who uses OpenStack cloud services. The Identity service validates that incoming requests are made by the user who claims to be making the call. Users have a login and may be assigned tokens to access resources. Users can be directly assigned to a particular tenant and behave as if they are contained in that tenant.

Credentials
Data that confirms the user’s identity. For example: user name and password, user name and API key, or an authentication token provided by the Identity Service.

Authentication
The process of confirming the identity of a user. OpenStack Identity confirms an incoming request by validating a set of credentials supplied by the user.

These credentials are initially a user name and password, or a user name and API key. When user credentials are validated, OpenStack Identity issues an authentication token which the user provides in subsequent requests.

Token
An alpha-numeric string of text used to access OpenStack APIs and resources. A token may be revoked at any time and is valid for a finite duration.

While OpenStack Identity supports token-based authentication in this release, the intention is to support additional protocols in the future. Its main purpose is to be an integration service, and not aspire to be a full-fledged identity store and management solution.

Tenant
A container used to group or isolate resources. Tenants also group or isolate identity objects. Depending on the service operator, a tenant may map to a customer, account, organization, or project.

Service
An OpenStack service, such as Compute (nova), Object Storage (swift), or Image Service (glance). It provides one or more endpoints in which users can access resources and perform operations.

Endpoint
A network-accessible address where you access a service, usually a URL address. If you are using an extension for templates, an endpoint template can be created, which represents the templates of all the consumable services that are available across the regions.

Role
A personality with a defined set of user rights and privileges to perform a specific set of operations.

In the Identity service, a token that is issued to a user includes the list of roles. Services that are being called by that user determine how they interpret the set of roles a user has and to which operations or resources each role grants access.

For details see: http://docs.openstack.org/juno/install-guide/install/yum/content/keystone-concepts.html

1. Create the database

mysql -u root -p

At the MySQL prompt,

create the keystone database and grant the keystone user its privileges:

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone123';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone123';

2. Install the Keystone packages

yum install openstack-keystone python-keystoneclient

3. Edit /etc/keystone/keystone.conf

[DEFAULT]
admin_token = ADMIN_TOKEN      #a random string; openssl rand -hex 10 is a good way to generate one

[database]
connection = mysql://keystone:keystone123@10.4.10.213/keystone

[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token
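A token of the suggested form can be generated and sanity-checked like this (the sed line showing how to splice it into keystone.conf is left commented out as an illustration):

```shell
ADMIN_TOKEN=$(openssl rand -hex 10)   # 10 random bytes -> 20 hex characters
echo "$ADMIN_TOKEN"
# then substitute it for the ADMIN_TOKEN placeholder, e.g.:
# sed -i "s/^admin_token = .*/admin_token = $ADMIN_TOKEN/" /etc/keystone/keystone.conf
```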

4. Create the PKI certificates and keys, set directory permissions, and populate the database

keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /var/log/keystone
chown -R keystone:keystone /etc/keystone/ssl
chmod -R o-rwx /etc/keystone/ssl
su -s /bin/sh -c "keystone-manage db_sync" keystone

5. Enable and start the service

systemctl enable openstack-keystone.service
systemctl start openstack-keystone.service

6. Create a cron job to clean up expired tokens, flushing them every hour

(crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone
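The grep -q guard is what makes that line safe to re-run: it only appends when no token_flush entry exists yet. The same pattern, sketched against a temp file instead of the real crontab:

```shell
CRONTAB=$(mktemp)          # stand-in for /var/spool/cron/keystone
append_flush() {
  grep -q token_flush "$CRONTAB" || \
    echo '@hourly /usr/bin/keystone-manage token_flush' >> "$CRONTAB"
}
append_flush
append_flush               # second call appends nothing
wc -l < "$CRONTAB"         # prints 1
```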

That completes the installation; the next section covers configuration.

 

Building OpenStack on CentOS 7: 2, initializing the Controller Node

With only 4 servers, intended mainly for Swift object storage, the plan is: server1 as controller node + network node + compute node, and servers 2, 3 and 4 as peer object storage nodes.

1. Prepare the database

  • Install the packages. OpenStack now recommends MariaDB, which carries no closed-source licensing risk, instead of MySQL; you can still use MySQL, and its configuration and usage are identical.
yum install mariadb mariadb-server MySQL-python
  • Edit /etc/my.cnf
[mysqld]

bind-address = 10.4.10.231
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
  • Enable and start the service
systemctl enable mariadb.service
systemctl start mariadb.service
  • Set the root password
mysql_secure_installation

**Note**: in these examples every password follows the pattern username + 123, to keep them easy to remember.

 

 

2. Install the message broker

OpenStack officially supports RabbitMQ, Qpid and ZeroMQ; we naturally use the most popular one, RabbitMQ.

  • Install the package
yum install rabbitmq-server
  • Enable and start the service
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
  • Set the RabbitMQ password
rabbitmqctl change_password guest rabbit123

 

Building OpenStack on CentOS 7: 1, preparing the environment

Today the company handed me a task: build a complete OpenStack on 4 blade servers. The process is bound to be long and complicated, with plenty of problems along the way, so I am recording it here for future reference.

1. Network topology:

(network topology diagram)

 

The servers are connected to each other by an internal network, called the internal network in OpenStack, and to the office machines by an external network.

This "external" network differs somewhat from OpenStack's own notion of an external network, because our OpenStack is only for internal company testing and is not exposed publicly.

The actual external network entry point is the server's 10.4.10.231 NIC; the servers sit on the 10.4.10.x subnet mainly for convenient connectivity.

2. OS

Install the CentOS 7 minimal edition on every machine and configure the network per the diagram.

3. Install the tool packages

Install yum-plugin-priorities, epel, rdo (the OpenStack repository) and openstack-selinux:

yum install yum-plugin-priorities

yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm

yum install http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm

yum install openstack-selinux

Run yum upgrade to update the system.

These packages must be installed on every server; they are all needed during the later build steps.

 

With that, the environment is ready.

CentOS 7 installer cannot find the hard disk

Today I was installing CentOS 7 on one of the company's servers.

For some reason it simply could not find the hard disk; I spent the whole afternoon chasing the problem in the BIOS.

Finally Google came to the rescue:

it turned out the old disk carried a GPT partition table, which was the culprit.

So, straight to a live CD:

parted /dev/sda
mklabel msdos
unit gb
mkpart primary 0 160
quit
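The same interactive session can be run non-interactively with parted -s. Since the real command destroys the disk's partition table, this sketch just builds and echoes it as a dry run (adjust the disk and size to your hardware):

```shell
DISK=/dev/sda              # target disk (the real command wipes its partition table)
CMD="parted -s $DISK unit gb mklabel msdos mkpart primary 0 160"
echo "$CMD"                # dry run; run the command itself to execute for real
```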

Done.