SETUP
- SETUP
- Network setup
- Simple schema - one ethernet for everything
- Default schema - two ethernets, one for the interlink (ganeti interoperation + drbd link) and one for the lan
- Many bridges with routing and direct Internet access
- Datacenter schema - separate interfaces for lan, ganeti interoperation, drbd link
- VLAN schema
- DEFINING ENVIRONMENT
- SETUP CLUSTER
- SETUP SERVICE INSTANCE
- SERVICE INSTANCE BOOTSTRAP
- PREPARING FOR NEW INSTANCES
- SCI OPERATIONS
Before you begin, make sure both nodes are powered on and functioning.
If you plan to use a second network adapter for the drbd link, you must configure it before initializing the cluster.
Log in to the first node via ssh. Because DNS is not yet available on the node, there may be a pause
of up to a minute before the password prompt appears.
Network setup
Network configurations can vary widely.
Below we look at several common examples.
Simple schema - one ethernet for everything
One interface, one subnet; Internet access is provided by external network equipment that is not part of the cluster.
In this setup both nodes should be connected to a gigabit switch.
By default the installer creates the bridge xen-br0. You can change its parameters by editing /etc/network/interfaces.
Initially it looks roughly like this:
auto xen-br0
iface xen-br0 inet static
    address 192.168.5.88
    netmask 255.255.255.0
    network 192.168.5.0
    broadcast 192.168.5.255
    gateway 192.168.5.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
#    up ifconfig eth0 mtu 9000
#    up ifconfig xen-br0 mtu 9000
The important parameters, besides the ipv4 settings, are:

bridge_ports eth0

This defines which network device is attached to the bridge.

up ifconfig eth0 mtu 9000
up ifconfig xen-br0 mtu 9000

These enable jumbo frames on the bridge, for higher network throughput and lower CPU load.
This is relevant for the interface that will carry the drbd link.
However, enabling jumbo frames will break connectivity with any network equipment
that does not support them, so these lines are commented out by default.
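After uncommenting those lines and re-activating the bridge, you can check that the new MTU took effect; a minimal check (the exact output format depends on your ifconfig version) is:
# ifconfig xen-br0 | grep -i mtu
The output should report MTU 9000 for the bridge.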
Default schema - two ethernets, one for the interlink (ganeti interoperation + drbd link) and one for the lan
This schema is preferable in most situations.
It does not require gigabit switches and provides decent reliability and performance at low cost.
The two gigabit interfaces on the nodes are connected to each other directly or through a gigabit switch (if you want more than two nodes in the cluster).
The remaining interfaces are connected to the lan.
In this schema a failure in the local network does not affect the operation of the cluster.
An example /etc/network/interfaces configuration for this schema:
auto xen-br0
iface xen-br0 inet static
    address 192.168.236.1
    netmask 255.255.255.0
    network 192.168.236.0
    broadcast 192.168.236.255
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
#    up ifconfig eth0 mtu 9000
#    up ifconfig xen-br0 mtu 9000

auto xen-lan
iface xen-lan inet static
    address 192.168.5.55
    netmask 255.255.255.0
    network 192.168.5.0
    broadcast 192.168.5.255
    gateway 192.168.5.1
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
The bridge xen-br0 is used for drbd and ganeti traffic; it is configured during node installation.
The DNS server address is also set by the installer - it will be the address of our service machine (sci).
The bridge xen-lan is used to connect to the local network and must be configured manually.
With this configuration you must fill in the following variables in sci.conf (a filled-in sketch follows the list):
NODE1_IP - already configured by the installer.
NODE1_NAME - already configured by the installer.
NODE2_IP - the ip address of the second node on the interlink (e.g. 192.168.236.2).
NODE2_NAME - the name of the second node (e.g. gnt2).
NODE1_LAN_IP - the ip address of the first node in the local network. It will be available in dns as $NODE1_NAME-lan.$DOMAIN.
NODE2_LAN_IP - the ip address of the second node in the local network. It will be available in dns as $NODE2_NAME-lan.$DOMAIN.
CLUSTER_IP - the cluster address in the local network. It must NOT coincide with any existing address. E.g. 192.168.5.35.
CLUSTER_NAME - the cluster name in the local network. It will be available in dns as $CLUSTER_NAME.
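A minimal sketch of the resulting /etc/sci/sci.conf fragment, assuming shell-style VAR=value assignments and reusing the example values above (NODE2_LAN_IP and CLUSTER_NAME here are placeholders):
# example values only; adjust names and addresses to your network
NODE2_IP=192.168.236.2
NODE2_NAME=gnt2
NODE1_LAN_IP=192.168.5.55
NODE2_LAN_IP=192.168.5.56
CLUSTER_IP=192.168.5.35
CLUSTER_NAME=gnt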
Many bridges with routing and direct Internet access
Here is a slightly more complicated network setup.
In this setup we have, for example, two private networks and a wan uplink on ethernet. All routing and firewalling
is performed by a separate firewall instance running in the cluster. This setup fits when you do not have expensive hardware routers and firewalls.
This is the /etc/network/interfaces file for this setup:
auto lan
iface lan inet static
    address 192.168.21.10
    netmask 255.255.255.0
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

auto server
iface server inet static
    address 192.168.20.10
    netmask 255.255.255.0
    gateway 192.168.20.1
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
    up ifconfig eth1 mtu 9000
    up ifconfig server mtu 9000

auto wan1
iface wan1 inet manual
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0
In this example we have a separate lan interface, a server interface (in this case the servers are separated from the lan, and
clients reach the servers through the router) and a wan interface. The server interface carries ganeti interoperation and the drbd link,
hence mtu 9000 on it.
In this example you must also change MASTER_NETDEV and LINK_NETDEV in /etc/sci/sci.conf from the default xen-br0 to server.
The hypervisor has no address on the wan, although we recommend obtaining a subnet from
your ISP so that you can assign IP addresses to the nodes and manage them even if the router instance
is down.
Here is an example /etc/network/interfaces inside the router instance:
auto eth0
iface eth0 inet static
    address 192.168.20.1
    netmask 255.255.255.0

auto eth1
iface eth1 inet static
    address 192.168.21.1
    netmask 255.255.255.0

auto eth2
iface eth2 inet static
    address 1.1.1.2
    netmask 255.255.255.0
    gateway 1.1.1.1
Here eth0 is attached to the server bridge, eth1 to the lan bridge, and eth2 to the wan1 bridge.
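SCI-CD does not configure the firewall instance itself; as a rough illustration only, IPv4 forwarding and NAT towards the wan interface inside the router instance could be enabled like this:
# echo 1 > /proc/sys/net/ipv4/ip_forward
# iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE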
Datacenter schema - separate interfaces for lan, ganeti interoperation, drbd link
This schema suits a more powerful networking infrastructure.
Here we have separate interfaces for ganeti interoperation (in this case it may be called the management interface), for the drbd link (the xen-san bridge) and for the lan:
auto mgmt
iface mgmt inet static
    address 192.168.236.1
    netmask 255.255.255.0
    network 192.168.236.0
    gateway 192.168.236.1
    broadcast 192.168.236.255
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

auto xen-san
iface xen-san inet static
    address 192.168.237.1
    netmask 255.255.255.0
    network 192.168.237.0
    broadcast 192.168.237.255
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
    up ifconfig eth1 mtu 9000
    up ifconfig xen-san mtu 9000

auto xen-lan
iface xen-lan inet manual
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0
In this example the nodes have no addresses in the lan.
You must fill in these variables in sci.conf to create a cluster that fits this network config (a sketch of the additional values follows the list):
NODE1_IP - already configured by the installer.
NODE1_NAME - already configured by the installer.
NODE2_IP - the interlink ip address of the second node, e.g. 192.168.236.2.
NODE2_NAME - the name of the second node, e.g. gnt2.
NODE1_SAN_IP - the san ip of the first node. It will be available under the dns name $NODE1_NAME-san, e.g. 192.168.237.1.
NODE2_SAN_IP - the san ip of the second node. It will be available under the dns name $NODE2_NAME-san, e.g. 192.168.237.2.
CLUSTER_IP - the cluster address. It must not coincide with any existing host address, e.g. 192.168.236.35.
CLUSTER_NAME - the cluster name. It will be available under the dns name $CLUSTER_NAME.
SCI_LAN_IP - if you want the sci instance to be present in your lan, assign it an ip, e.g. 192.168.35.5.
SCI_LAN_NETMASK - your nodes have no addresses in the lan, so you must enter the netmask for this segment by hand, e.g. 255.255.255.0.
SCI_LAN_GATEWAY - your nodes have no addresses in the lan, so you must enter the gateway for this segment by hand, e.g. 192.168.35.1.
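A sketch of these additional /etc/sci/sci.conf values, again assuming shell-style assignments and using the example addresses above:
# example values; adjust to your network
NODE1_SAN_IP=192.168.237.1
NODE2_SAN_IP=192.168.237.2
SCI_LAN_IP=192.168.35.5
SCI_LAN_NETMASK=255.255.255.0
SCI_LAN_GATEWAY=192.168.35.1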
Of course, it is easy to use VLANs in a datacenter environment; the next example explains how. However, remember that it is recommended
to keep the drbd link on a separate physical ethernet.
VLAN schema
If you have managed switches, you can set up networking with VLANs.
You should add something like this for each VLAN:
auto eth0.55
iface eth0.55 inet manual
    up ifconfig eth0.55 up

auto bridge-example-vlan
iface bridge-example-vlan inet manual
    up brctl addbr bridge-example-vlan
    up brctl addif bridge-example-vlan eth0.55
    up brctl stp bridge-example-vlan off
    up ifconfig bridge-example-vlan up
    down ifconfig bridge-example-vlan down
    down brctl delbr bridge-example-vlan
Here 55 is the VLAN number.
In this example the node has no ip address in this VLAN, although you could
assign an ip to the bridge just as with a regular bridge.
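For instance, to give the node an address in that VLAN, the bridge stanza could be written statically instead (the address below is a placeholder):
auto bridge-example-vlan
iface bridge-example-vlan inet static
    address 192.168.55.10
    netmask 255.255.255.0
    bridge_ports eth0.55
    bridge_stp off
    bridge_fd 0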
An alternative way to define the VLAN interface is:
auto vlan55
iface vlan55 inet manual
    vlan_raw_device eth0

auto bridge-example-vlan
iface bridge-example-vlan inet manual
    bridge_ports vlan55
    bridge_stp off
    bridge_fd 0
It does the same thing in a different way.
DEFINING ENVIRONMENT
Edit /etc/sci/sci.conf.
Most of the values depend on your network setup; the most common cases are described in the Network setup section above.
Here are some additional notes on configuring sci.conf:
- You should specify node1 and node2 data as you have installed them.
NOTE: You can set up the cluster even with only one node. In this case just leave the NODE2_ lines as they are. In fact this is a dangerous setup, so you will be warned about it during the procedure.
- You should specify the cluster's name and IP.
- NODE#_SAN_IP should be specified on both nodes or on neither.
- NODE#_LAN_IP should be specified on both nodes or on neither.
- If you have no Internet uplink, or you have local package mirrors, you should correct the APT_ settings.
- If you need to uplink to a DNS hierarchy other than the root hint zones, specify DNS_FORWARDERS (note the trailing ';'; see the sketch after this list).
- MASTER_NETDEV - the master interface name for the cluster address. Auto-detected by default.
- LAN_NETDEV - the network interface to attach to virtual machines by default. Auto-detected by default.
- RESERVED_VOLS - a list of volumes ignored by ganeti, comma separated. You must specify the vg for every volume in this list.
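A hedged sketch of how those last settings might look in /etc/sci/sci.conf (all values here are placeholders, and shell-style assignments are assumed):
# forward DNS queries to an upstream server; note the trailing ';'
DNS_FORWARDERS="192.168.5.1;"
# LVM volumes ganeti should ignore, as vg/lv, comma separated
RESERVED_VOLS="vg0/swap,vg0/backup"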
SETUP CLUSTER
Issue:
# sci-setup cluster
Check and confirm the settings that are printed; the process will then continue.
Next you will be prompted to accept the ssh key of node2 and to enter the root password for node2.
When it finishes you will see something like this:
Verify
Wed Jan 12 15:36:10 2011 * Verifying global settings
Wed Jan 12 15:36:10 2011 * Gathering data (1 nodes)
Wed Jan 12 15:36:11 2011 * Verifying node status
Wed Jan 12 15:36:11 2011 * Verifying instance status
Wed Jan 12 15:36:11 2011 * Verifying orphan volumes
Wed Jan 12 15:36:11 2011 * Verifying orphan instances
Wed Jan 12 15:36:11 2011 * Verifying N+1 Memory redundancy
Wed Jan 12 15:36:11 2011 * Other Notes
Wed Jan 12 15:36:11 2011 * Hooks Results
Node                      DTotal  DFree MTotal MNode MFree Pinst Sinst
gnt1.ganeti.example.org   100.0G 100.0G  1020M  379M  625M     0     0
gnt2.ganeti.example.org   100.0G 100.0G  1020M  379M  625M     0     0
If all is ok, proceed with /usr/local/sbin/sci-setup service
SETUP SERVICE INSTANCE
The service instance is named 'sci' and has a few aliases.
During setup, its IP address is determined from the /etc/resolv.conf of your first node.
This instance is hardcoded into the /etc/hosts file of all cluster nodes and instances.
Issue:
# sci-setup service
You will see the progress of DRBD syncing the disks, then the message
* running the instance OS create scripts...
appears. This step may take a while. The process finishes with the message
* starting instance...
Now you can log on to the sci instance using:
# gnt-instance console sci
Log in as root, the password is empty.
NOTE: Because of the empty password, all remote connections to the new instance are prohibited.
You should change the password and install the openssh-server package manually after
the bootstrap procedure has completed successfully.
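On the sci console, that could look roughly like this (standard Debian commands, run as root):
# passwd
# apt-get update && apt-get install openssh-server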
SERVICE INSTANCE BOOTSTRAP
The system will set itself up via puppet. This is an iterative process. You can monitor
it by looking into /var/log/daemon.log. At the start there is no less command yet, so
you can use more, cat, tail or tail -f until less gets installed automatically.
By default the iterations are repeated every 20 minutes. To shorten the wait you can issue
# /etc/init.d/puppet restart
and then watch in daemon.log how it finishes.
Repeat this a few times until a puppet run no longer changes anything.
PREPARING FOR NEW INSTANCES
New instances are created just by regular Ganeti commands such as:
gnt-instance add -t drbd -o debootstrap+default -s 10g -B memory=256m -n NODE1_NAME:NODE2_NAME INSTANCE_NAME
However, some tuning hooks are provided by the SCI-CD project (a worked example follows the list):
- Each instance has puppet installed for autoconfiguration and openssh-client for file transfers etc.
- The instance uses pygrub to boot the kernel from /vmlinuz & Co on the instance's own disk.
- The instance's network interfaces may be set up automatically as described below.
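For example, creating an instance named web1.ganeti.example.org (a hypothetical name) on the nodes gnt1 and gnt2 from the earlier examples might look like this:
# gnt-instance add -t drbd -o debootstrap+default -s 10g -B memory=256m -n gnt1:gnt2 web1.ganeti.example.org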
INSTANCE INTERFACE AUTOCONFIGURATION
If your instances may sit on several networks and you need static addressing in them, you should fill
the file /etc/ganeti/networks with all the known networks to which you want to attach your instances.
Each line in the file has the format:
NETWORK NETMASK BROADCAST GATEWAY
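For example, a line describing the 192.168.5.0/24 lan used earlier in this document would look like this (assuming whitespace-separated fields as in the format above):
192.168.5.0 255.255.255.0 192.168.5.255 192.168.5.1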
The ganeti instance debootstrap hook looks in this file for the network matching the address of the bootstrapped
instance and fills in its /etc/network/interfaces accordingly.
NOTE: If you have only one default network, you do not need to do anything because its data is preinstalled.
NOTE: The networks file must be copied to all cluster nodes (this is not automated yet).
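For a two-node cluster this can be done with a single scp, e.g. (node name taken from the earlier examples):
# scp /etc/ganeti/networks gnt2:/etc/ganeti/networks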
SCI OPERATIONS
Read OPERATIONS next.