OpenStack Setup 10: Scaling Out Compute Nodes

OpenStack can pool the hardware resources of multiple compute nodes, including memory, disk, and CPU, into a single resource pool; when creating a VM, we simply request the memory, disk, and CPU we need from that pool. A single server's resources are always finite, so when the pool needs to grow, the fix is simple: add more compute nodes. OpenStack makes this kind of horizontal scaling straightforward, and the steps are essentially the same as adding the first compute node.

Adding another compute node:

The installation and configuration process mirrors that of the first compute node, with only minor differences.

1) Configure the basic underlying network:

root@compute2:~# cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback
# The external network interface (em1 is later bridged into br100 by nova-network)
auto em1
iface em1 inet static
address 192.168.3.14
netmask 255.255.255.0
gateway 192.168.3.254

auto em2
iface em2 inet static
address 10.0.0.51
netmask 255.255.255.0
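
To apply the new addressing without rebooting, ifupdown can bring up the newly added management interface (a sketch; note that reconfiguring em1 from an SSH session running over em1 will drop the connection, so a full reboot is the safer option there):

#ifup em2      //bring up the newly added management interface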

2) Edit the /etc/hosts configuration file to map IPs to hostnames.

root@compute2:~# cat /etc/hosts
#127.0.0.1      localhost
#127.0.1.1      compute2

# controller
10.0.0.11       controller
#compute
10.0.0.31       compute
# block1
#10.0.0.21       block1
#compute1
10.0.0.41       compute1
#compute2
10.0.0.51       compute2


# The following lines are desirable for IPv6 capable hosts
#::1     localhost ip6-localhost ip6-loopback
#ff02::1 ip6-allnodes
#ff02::2 ip6-allrouters
root@compute2:~# 

With the hosts entries in place, test the connectivity between the controller and the compute nodes.

root@compute2:~# ping compute1
PING compute1 (10.0.0.41) 56(84) bytes of data.
64 bytes from compute1 (10.0.0.41): icmp_seq=1 ttl=64 time=0.220 ms
64 bytes from compute1 (10.0.0.41): icmp_seq=2 ttl=64 time=0.212 ms
^C
--- compute1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.212/0.216/0.220/0.004 ms
root@compute2:~# ping compute2
PING compute2 (10.0.0.51) 56(84) bytes of data.
64 bytes from compute2 (10.0.0.51): icmp_seq=1 ttl=64 time=0.056 ms
64 bytes from compute2 (10.0.0.51): icmp_seq=2 ttl=64 time=0.054 ms
^C
--- compute2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.054/0.055/0.056/0.001 ms
root@compute2:~# ping controller
PING controller (10.0.0.11) 56(84) bytes of data.
64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.258 ms
64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.246 ms
^C
--- controller ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.246/0.252/0.258/0.006 ms

3) Configure the DNS servers on Ubuntu:

To make the change permanent, edit the /etc/resolvconf/resolv.conf.d/base configuration file.
Unlike CentOS, if you only edit /etc/resolv.conf, the file is regenerated on every reboot, so changes made there are not saved permanently.
After editing the base file, run #resolvconf -u to apply it immediately; you will see that /etc/resolv.conf now picks up its settings from /etc/resolvconf/resolv.conf.d/base.

root@compute2:~# cat  /etc/resolvconf/resolv.conf.d/base 
nameserver 114.114.114.114
root@compute2:~# cat  /etc/resolv.conf 
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
root@compute2:~# resolvconf -u
root@compute2:~# cat /etc/resolv.conf 
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 114.114.114.114

4) Install the NTP service and enable time synchronization:

1. Install the NTP service:
    #apt-get install ntp

2. Edit the /etc/ntp.conf configuration file and add the controller as its NTP server:
    server controller  iburst
3. Restart the NTP service:
    #service ntp restart
4. Check the synchronization status:

    # ntpq -c assoc
    ind assid status  conf reach auth condition  last_event cnt
    ===========================================================
      1 32829  963a   yes   yes  none  sys.peer    sys_peer  3
    root@compute2:~# ntpq -c peers
         remote           refid      st t when poll reach   delay   offset  jitter
    ==============================================================================
    *controller      91.189.89.199    3 u  452 1024  377    0.187   -1.106   0.940

5) Install the OpenStack packages:

#apt-get install ubuntu-cloud-keyring
#echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" \
"trusty-updates/juno main" > /etc/apt/sources.list.d/cloudarchive-juno.list 

Update the packages on the system:
#apt-get update && apt-get dist-upgrade  

6) Install and configure the nova compute service:

1. Install nova-compute:
#apt-get install nova-compute sysfsutils
2. Edit the /etc/nova/nova.conf configuration file. The changes mirror those made on the first compute node, and there are not many of them; a sketch follows below.
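
For reference, here is a minimal sketch of the nova.conf sections that usually change on a new Juno compute node, following the standard Juno install guide. RABBIT_PASS and NOVA_PASS are placeholders for your own passwords; the only truly per-node values are my_ip and vncserver_proxyclient_address, which use compute2's management address here. Copy the rest from your first compute node rather than taking this as authoritative:

[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
auth_strategy = keystone
my_ip = 10.0.0.51
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.0.0.51
novncproxy_base_url = http://controller:6080/vnc_auto.html

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS

[glance]
host = controller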

7) Add the network service on the newly added compute node.

1) Install the network service:
#apt-get install nova-network nova-api-metadata
2) Edit the nova.conf configuration file; refer to the configuration posted earlier, as only a few settings change (see the sketch after these steps).
3) Restart the services:
#service nova-network restart
#service nova-api-metadata restart

4) Create a symbolic link so the /usr/bin/nova-dhcpbridge binary is also available as /usr/local/bin/dhcpbridge:
#ln -s /usr/bin/nova-dhcpbridge /usr/local/bin/dhcpbridge
Then restart the nova services:
#service nova-network restart
#service nova-compute restart
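
As promised above, here is a sketch of the legacy-networking (nova-network FlatDHCP) settings in /etc/nova/nova.conf that this setup relies on. The bridge and interface names (br100, em1) match the route/brctl output shown in step 8 below; the remaining values follow the standard Juno legacy-networking guide and should be copied from your existing compute nodes:

[DEFAULT]
network_api_class = nova.network.api.API
security_group_api = nova
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
network_manager = nova.network.manager.FlatDHCPManager
network_size = 254
allow_same_net_traffic = False
multi_host = True
send_arp_for_ha = True
share_dhcp_address = True
force_dhcp_release = True
flat_network_bridge = br100
flat_interface = em1
public_interface = em1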

To enable address forwarding, edit the /etc/sysctl.conf file and add the following:
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Apply the settings:
#sysctl -p

8) Verify operation:

If everything above is configured correctly, and VMs are running on this compute node, the network-side output looks like this:
#route      //the default route goes out through br100
root@compute2:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         192.168.3.254   0.0.0.0         UG    0      0        0 br100
10.0.0.0        *               255.255.255.0   U     0      0        0 em2
192.168.3.0     *               255.255.255.0   U     0      0        0 br100
192.168.3.32    *               255.255.255.224 U     0      0        0 br100
192.168.122.0   *               255.255.255.0   U     0      0        0 virbr0

#brctl show      //the network interfaces of all running VMs are bridged onto br100


root@compute2:~# brctl show
bridge name     bridge id               STP enabled     interfaces
br100           8000.842b2b614546       no              em1
                                                        vnet1
                                                        vnet2
                                                        vnet3
                                                        vnet4
                                                        vnet5
                                                        vnet6
                                                        vnet7
                                                        vnet8
virbr0          8000.000000000000       yes


Next, check the current services on the controller node: compute2's nova-network and nova-compute services have both started, and their State is up.
This means the newly added compute node is ready for use.

root@controller:~# nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-cert        | controller | internal | enabled | up    | 2015-10-21T13:10:56.000000 | -               |
| 2  | nova-consoleauth | controller | internal | enabled | up    | 2015-10-21T13:10:54.000000 | -               |
| 3  | nova-conductor   | controller | internal | enabled | up    | 2015-10-21T13:10:52.000000 | -               |
| 4  | nova-scheduler   | controller | internal | enabled | up    | 2015-10-21T13:11:00.000000 | -               |
| 6  | nova-compute     | compute    | nova     | enabled | up    | 2015-10-21T13:10:57.000000 | -               |
| 7  | nova-network     | compute    | internal | enabled | up    | 2015-10-21T13:10:52.000000 | -               |
| 10 | nova-compute     | compute1   | nova     | enabled | up    | 2015-10-21T13:10:51.000000 | None            |
| 11 | nova-compute     | compute2   | nova     | enabled | up    | 2015-10-21T13:10:57.000000 | None            |
| 12 | nova-network     | compute1   | internal | enabled | up    | 2015-10-21T13:10:51.000000 | -               |
| 13 | nova-network     | compute2   | internal | enabled | up    | 2015-10-21T13:10:55.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
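
As an optional extra check (not part of the original walkthrough), the Juno nova client can also list the registered hypervisors; compute2 should show up there as well:

#nova hypervisor-list      //compute2 should appear under the "Hypervisor hostname" column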