h1. SETUP
{{toc}}
Ensure both nodes are up.
If you are planning to use a secondary network for SAN and DRBD synchronization, you
should configure the secondary IP interfaces manually on both nodes at this point.
Log in to the first node via ssh. Due to the lack of DNS there may be
a timeout of about a minute before the password prompt appears.
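For example, a secondary interface for the SAN/DRBD network could be configured with a stanza like the following in /etc/network/interfaces (a sketch only: the interface name eth1 and the 192.168.236.0/24 addressing are assumptions, use whatever matches your hardware and addressing plan):
<pre>
# hypothetical secondary interface for SAN/DRBD traffic (adjust name and addresses)
auto eth1
iface eth1 inet static
    address 192.168.236.1
    netmask 255.255.255.0
</pre>
Repeat on the second node with its own address (e.g. 192.168.236.2).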
h2. NETWORK CONFIGURATION
Network configurations can vary widely.
Here we describe several schemas.
h3. Basic schema - one Ethernet for everything.
One Ethernet segment, one subnet, Internet connectivity provided by an external (not in the cluster) router.
By default the installer creates a bridge named xen-br0. You can customize its parameters by editing /etc/network/interfaces.
In this case the nodes must be connected to a gigabit Ethernet switch.
By default it looks like this:
<pre>
auto xen-br0
iface xen-br0 inet static
    address 192.168.5.88
    netmask 255.255.255.0
    network 192.168.5.0
    broadcast 192.168.5.255
    gateway 192.168.5.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    # up ifconfig eth0 mtu 9000
    # up ifconfig xen-br0 mtu 9000
</pre>
The important parameters besides the IPv4 settings are:
bridge_ports eth0 - the physical interface eth0 is enslaved to this bridge.
up ifconfig eth0 mtu 9000
up ifconfig xen-br0 mtu 9000 - enable jumbo frames on the bridge for higher network throughput and lower CPU utilization.
This matters on the interface that carries the DRBD link.
However, setting the MTU higher than 1500 will cause problems with any network equipment that
doesn't support jumbo frames. That is why this option is commented out by default.
It is also useful to specify the broadcast and network addresses - this helps to automatically
fill in the /etc/ganeti/networks file (the file that describes networks for instances).
However, this isn't required.
h3. Default schema - two Ethernets, one for the interlink (Ganeti interoperation + DRBD link) and one for the LAN.
This schema suits most cases. It doesn't require a gigabit switch, and provides good performance and reliability.
Two gigabit network interfaces on the nodes are connected directly, or via a gigabit switch (if you want more than two nodes in the cluster).
The other interfaces are connected to the LAN. Routing, firewalling, DHCP and DNS in the LAN are performed by an external router or server.
A LAN failure doesn't affect the cluster in this setup.
This is the /etc/network/interfaces file for this setup:
<pre>
auto xen-br0
iface xen-br0 inet static
    address 192.168.236.1
    netmask 255.255.255.0
    network 192.168.236.0
    broadcast 192.168.236.255
    gateway 192.168.236.15
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    # up ifconfig eth0 mtu 9000
    # up ifconfig xen-br0 mtu 9000

auto xen-lan
iface xen-lan inet static
    address 192.168.5.55
    netmask 255.255.255.0
    network 192.168.5.0
    broadcast 192.168.5.255
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
</pre>
xen-br0 is used for Ganeti interoperation and the DRBD link; it was configured by the installer.
The gateway and DNS server were also configured in the installer - they point to our service instance (sci) address.
xen-lan is used for the LAN connection; its configuration must be added by hand.
In this network configuration you must fill in the following variables in sci.conf:
NODE1_IP - already configured by the installer.
NODE1_NAME - already configured by the installer.
NODE2_IP - the interlink IP address of the second node, e.g. 192.168.236.2.
NODE2_NAME - the name of the second node, e.g. gnt2.
NODE1_LAN_IP - the LAN IP of the first node. It will be available under the DNS name $NODE1_NAME-lan, e.g. 192.168.5.55.
NODE2_LAN_IP - the LAN IP of the second node. It will be available under the DNS name $NODE2_NAME-lan, e.g. 192.168.5.58.
CLUSTER_IP - the cluster address in the LAN. It must not match any existing host address in the LAN, e.g. 192.168.5.35.
CLUSTER_NAME - the cluster name in the LAN. It will be available under the DNS name $CLUSTER_NAME.
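Put together, the relevant part of @/etc/sci/sci.conf@ for this schema might look like this (a sketch only, assuming shell-style NAME=value assignments; the node names and addresses are just the example values used above, and CLUSTER_NAME=gnt is an arbitrary choice):
<pre>
# example values for the two-Ethernet (interlink + LAN) schema
NODE1_IP=192.168.236.1
NODE1_NAME=gnt1
NODE2_IP=192.168.236.2
NODE2_NAME=gnt2
NODE1_LAN_IP=192.168.5.55
NODE2_LAN_IP=192.168.5.58
CLUSTER_IP=192.168.5.35
CLUSTER_NAME=gnt
</pre>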
h3. Multiple bridges with routing, firewalling and WAN access.
Here is a slightly more complicated network setup.
In this setup we have, for example, two private networks and a WAN delivered over Ethernet. All routing and firewalling
is performed by a separate firewall instance in our cluster. This setup fits when you don't have expensive hardware routers and firewalls.
This is the /etc/network/interfaces file for this setup:
<pre>
auto lan
iface lan inet static
    address 192.168.21.10
    netmask 255.255.255.0
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

auto dmz
iface dmz inet static
    address 192.168.20.10
    netmask 255.255.255.0
    gateway 192.168.20.1
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
    up ifconfig eth1 mtu 9000
    up ifconfig dmz mtu 9000

auto wan1
iface wan1 inet manual
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0
</pre>
In this example we have a separate lan interface, a dmz interface (it isn't actually a DMZ,
it is just named that) and a wan interface. The dmz interface carries the Ganeti master traffic and the DRBD link,
so it is set to MTU 9000.
In this example you must also change MASTER_NETDEV and LINK_NETDEV in /etc/sci/sci.conf from the default xen-br0 to dmz.
The hypervisor has no address on the WAN, although we recommend obtaining a subnet from
your ISP in order to assign IP addresses to the nodes, so that you can manage them even if the router instance
is down.
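The corresponding change in @/etc/sci/sci.conf@ would look roughly like this (a sketch, assuming shell-style NAME=value assignments as in the example above):
<pre>
# point the cluster master and DRBD replication interfaces at the dmz bridge
MASTER_NETDEV=dmz
LINK_NETDEV=dmz
</pre>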
Here is an example /etc/network/interfaces in the router instance:
<pre>
auto eth0
iface eth0 inet static
    address 192.168.20.1
    netmask 255.255.255.0

auto eth1
iface eth1 inet static
    address 192.168.21.1
    netmask 255.255.255.0

auto eth2
iface eth2 inet static
    address 1.1.1.2
    netmask 255.255.255.0
    gateway 1.1.1.1
</pre>
Here the instance's eth0 is attached to the dmz bridge, eth1 to lan, and eth2 to wan1.
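As a purely hypothetical illustration of "all routing and firewalling in the firewall instance", forwarding and NAT inside the router instance could be enabled roughly like this (this is not part of SCI-CD; the rule and the eth2 WAN interface name are assumptions for this example):
<pre>
# enable packet forwarding between the attached networks
echo 1 > /proc/sys/net/ipv4/ip_forward
# masquerade traffic from the private networks out of the WAN interface (eth2)
iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE
</pre>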
h3. Datacenter schema - separate interfaces for the LAN, Ganeti interoperation and the DRBD link.
If you have a powerful networking infrastructure, you can dedicate a separate interface to each of these roles: one for the LAN, one for Ganeti interoperation and one for the DRBD link.
h3. VLAN schema
If you have managed switches, you can set up networking with VLANs.
You should add something like this for each VLAN:
<pre>
auto eth0.55
iface eth0.55 inet manual
    up ifconfig eth0.55 up

auto bridge-example-vlan
iface bridge-example-vlan inet manual
    up brctl addbr bridge-example-vlan
    up brctl addif bridge-example-vlan eth0.55
    up brctl stp bridge-example-vlan off
    up ifconfig bridge-example-vlan up
    down ifconfig bridge-example-vlan down
    down brctl delbr bridge-example-vlan
</pre>
Here 55 is the VLAN number.
In this example the node doesn't have an IP address in this VLAN, although you could
assign an IP to the bridge just as for a standard bridge, as sketched below.
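For example, to give the node an address in that VLAN, the bridge stanza could be made static instead of manual (a sketch only; the 192.168.55.0/24 addressing is an assumption):
<pre>
auto bridge-example-vlan
iface bridge-example-vlan inet static
    address 192.168.55.10
    netmask 255.255.255.0
    bridge_ports eth0.55
    bridge_stp off
    bridge_fd 0
</pre>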
An alternative schema is:
<pre>
auto vlan55
iface vlan55 inet manual
    vlan_raw_device eth0

auto bridge-example-vlan
iface bridge-example-vlan inet manual
    bridge_ports vlan55
    bridge_stp off
    bridge_fd 0
</pre>
It does the same thing, but in a different way (this form relies on the @vlan@ package's ifupdown integration).
h2. DEFINING ENVIRONMENT
Edit @/etc/sci/sci.conf@.
Most of the values depend on your network setup, which is covered for the most common cases in the NETWORK CONFIGURATION section above.
Here are some additional notes on configuring sci.conf:
* You should specify the node1 and node2 data exactly as you installed the nodes.
*NOTE*: You can set up the cluster even with just one node. In that case simply leave the NODE2_
lines as they are. This is in fact a dangerous setup, so you will be warned about it during
the procedures.
* You should specify the cluster's name and IP.
* NODE#_SAN_IP should be specified on both nodes or on none.
* NODE#_LAN_IP should be specified on both nodes or on none.
* If you have no Internet uplink, or you have local package mirrors, you should adjust the
APT_ settings.
* If you need to uplink to a DNS hierarchy other than the root hint zones, specify DNS_FORWARDERS
(note the trailing ';').
* MASTER_NETDEV - master interface name for the cluster address. Auto-detected by default.
* LAN_NETDEV - the network interface to attach virtual machines to by default. Auto-detected by default.
* RESERVED_VOLS - a comma-separated list of volumes ignored by Ganeti. You must specify the volume group for every volume in this list (see the sketch after this list).
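For illustration, the last two settings might look like this (a hypothetical sketch: the exact value syntax may differ in your sci.conf, and the forwarder address and volume names are made up for this example):
<pre>
DNS_FORWARDERS="192.168.5.1;"          # upstream DNS server, note the trailing ';'
RESERVED_VOLS="xenvg/backup,xenvg/tmp" # volumes Ganeti must not touch, each prefixed with its VG
</pre>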
h2. SETUP CLUSTER
Issue:
<pre>
# sci-setup cluster
</pre>
Check and confirm the printed settings.
The process will then continue.
Next you will be prompted to accept the ssh key of node2 and to enter the root password for node2.
When it finishes you will see something like this:
<pre>
Verify
Wed Jan 12 15:36:10 2011 * Verifying global settings
Wed Jan 12 15:36:10 2011 * Gathering data (1 nodes)
Wed Jan 12 15:36:11 2011 * Verifying node status
Wed Jan 12 15:36:11 2011 * Verifying instance status
Wed Jan 12 15:36:11 2011 * Verifying orphan volumes
Wed Jan 12 15:36:11 2011 * Verifying orphan instances
Wed Jan 12 15:36:11 2011 * Verifying N+1 Memory redundancy
Wed Jan 12 15:36:11 2011 * Other Notes
Wed Jan 12 15:36:11 2011 * Hooks Results
Node DTotal DFree MTotal MNode MFree Pinst Sinst
gnt1.ganeti.example.org 100.0G 100.0G 1020M 379M 625M 0 0
gnt2.ganeti.example.org 100.0G 100.0G 1020M 379M 625M 0 0
If all is ok, proceed with /usr/local/sbin/sci-setup service
</pre>
h2. SETUP SERVICE INSTANCE
The service instance is named 'sci' and has a few aliases.
During setup, its IP address is determined from @/etc/resolv.conf@ of your first node.
This instance will be hardcoded into the @/etc/hosts@ file of all cluster nodes and instances.
Issue:
<pre>
# sci-setup service
</pre>
You'll see the progress of the DRBD disk sync, then the message
<pre>
* running the instance OS create scripts...
</pre>
appears. The rest of the process may take a while. It finishes with the
<pre>
* starting instance...
</pre>
message.
Now you can log on to the sci instance using:
<pre>
# gnt-instance console sci
</pre>
Log in as root; the password is empty.
*NOTE*: Because of the empty password, all remote connections to the new instance are prohibited.
You should change the password and install the @openssh-server@ package manually after
a successful bootstrap.
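For example, from the sci console (ordinary Debian commands, shown here only as a reminder):
<pre>
# passwd
# apt-get install openssh-server
</pre>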
h2. SERVICE INSTANCE BOOTSTRAP
The system will set itself up via puppet. This is an iterative process. You can monitor
it by looking into @/var/log/daemon.log@. At the start there is no @less@ command yet, so
you can use @more@, @cat@, @tail@ or @tail -f@ until @less@ gets auto-installed, for example:
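<pre>
# tail -f /var/log/daemon.log
</pre>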
By default the iterations are repeated every 20 minutes. To shorten the wait time you can
issue
<pre>
# /etc/init.d/puppet restart
</pre>
and then check in @daemon.log@ how it finishes.
Repeat this a few times until a puppet run no longer changes anything.
h2. PREPARING FOR NEW INSTANCES
New instances are created with regular Ganeti commands, such as:
<pre>
gnt-instance add -t drbd -o debootstrap+default -s 10g -B memory=256m -n NODE1_NAME:NODE2_NAME INSTANCE_NAME
</pre>
However, some tuning hooks are provided by the SCI-CD project:
# Each instance has @puppet@ installed for autoconfiguration and @openssh-client@ for file transfers etc.
# The instance uses pygrub to boot the kernel from /vmlinuz & Co on the instance's own disk.
# The instance's network interfaces may be set up automatically as described below.
h3. INSTANCE INTERFACE AUTOCONFIGURATION
If your instances may sit on several networks and you need static addressing in them, you should fill in
the file @/etc/ganeti/networks@ with all known networks you want to attach your instances to.
Each line in the file has the format
|NETWORK|NETMASK|BROADCAST|GATEWAY|
The Ganeti instance debootstrap hook looks in this file for the network matching the address of the bootstrapped
instance and fills in its @/etc/network/interfaces@ accordingly.
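For example, a line for the LAN from the basic schema above might look like this (a sketch only; the field separator is assumed to be whitespace, following the column order shown):
<pre>
192.168.5.0   255.255.255.0   192.168.5.255   192.168.5.1
</pre>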
*NOTE*: If you have only one, default, network, you don't need to do anything: its data is preinstalled.
*NOTE*: the networks file must be copied to all cluster nodes (this is not automated yet); see the example below.
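For example, from the master node (assuming the second node is named gnt2, as in the examples above):
<pre>
# scp /etc/ganeti/networks gnt2:/etc/ganeti/networks
</pre>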
h2. SCI OPERATIONS
Read [[OPERATIONS]] next.