SETUP

OVERVIEW | INSTALL | SETUP | OPERATIONS | LICENSE

Ensure both nodes are up.

If you plan to use a separate network for SAN and DRBD synchronization, you
should configure the second IP interface manually on both nodes at this point.
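
As a sketch, a minimal static stanza for such a dedicated interface could be appended to /etc/network/interfaces on each node (the interface name eth2 and the 192.168.1.0/24 addresses are assumptions; adjust to your hardware and addressing plan):

```shell
# Assumed example: dedicated SAN/DRBD interface on each node.
# Use .1 on the first node and .2 on the second.
auto eth2
iface eth2 inet static
    address 192.168.1.1
    netmask 255.255.255.0
```

Bring the interface up with ifup eth2 afterwards.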

Log in to the first node.
If you are using ssh, there may be a timeout of up to a minute (due to the lack of DNS)
before the server answers with the password prompt.

NETWORK CONFIGURATION

Network configurations vary widely.
Here we describe a typical scheme with two interfaces: one for the interlink (Ganeti interoperation + DRBD link)
and one for the LAN.

This scheme suits most cases. It doesn't require a gigabit switch, and it provides good performance and reliability.
Two gigabit network interfaces on the nodes are connected directly (if you want
more than two nodes in the cluster, use a gigabit switch instead).
The other interfaces are connected to the LAN.
Any LAN failure doesn't affect the cluster nodes in this setup.
This is the /etc/network/interfaces file for this setup:

auto xen-br0
iface xen-br0 inet static
    address 192.168.236.1
    netmask 255.255.255.0
    network 192.168.236.0
    broadcast 192.168.236.255
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
#    up ifconfig eth0 mtu 9000
#    up ifconfig xen-br0 mtu 9000

auto xen-lan
iface xen-lan inet static
    address 192.168.5.55
    netmask 255.255.255.0
    network 192.168.5.0
    broadcast 192.168.5.255
    gateway 192.168.5.1
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0

xen-br0 is used for Ganeti interoperation and the DRBD link; it was configured by the installer.
The DNS server and the gateway were also configured by the installer: they point to the address of our service instance (sci).
xen-lan is used for the LAN connection; its configuration must be added by hand (the 'interfaces' file contains a template).

With this network configuration you must fill in these variables in sci.conf (described later):
NODE2_IP - the interlink IP address of the second node, e.g. 192.168.2.2
NODE2_NAME - the second node's name, e.g. gnt2
NODE1_LAN_IP - the LAN IP of the first node. It will be available under the DNS name $NODE1_NAME-lan, e.g. 192.168.5.51
NODE2_LAN_IP - the LAN IP of the second node. It will be available under the DNS name $NODE2_NAME-lan, e.g. 192.168.5.52
CLUSTER_IP - the cluster address in the LAN. It must not match any existing host address in the LAN, e.g. 192.168.5.50
CLUSTER_NAME - the cluster name in the LAN.
SCI_LAN_IP - if you want the sci instance to be present in your LAN, assign it an IP, e.g. 192.168.5.59
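
Put together, the relevant fragment of /etc/sci/sci.conf for the scheme above might look like this (all values are the examples from this section; the cluster name is an assumption, substitute your own):

```shell
# Example values only, taken from the scheme described above.
NODE2_IP=192.168.2.2
NODE2_NAME=gnt2
NODE1_LAN_IP=192.168.5.51
NODE2_LAN_IP=192.168.5.52
CLUSTER_IP=192.168.5.50
CLUSTER_NAME=gnt-cluster    # assumed name; pick your own
SCI_LAN_IP=192.168.5.59
```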

On the Network setup page you can view and pick other schemes for different cases.

DEFINING ENVIRONMENT

Edit /etc/sci/sci.conf

Most of the values depend on your network setup. The NETWORK CONFIGURATION section above describes them for the typical case.

Here are some additional notes on configuring sci.conf:

  • You should specify the NODE1 and NODE2 data as you have installed them.
    NOTE: You can set up the cluster even with one node. In this case just leave the NODE2_
    lines as they are. In fact this is a dangerous setup, so you will be warned about it during
    the procedures.
  • You should specify the cluster's name and IP.
  • NODE#_SAN_IP should be specified on both nodes or on none.
  • NODE#_LAN_IP should be specified on both nodes or on none.
  • If you have no Internet uplink, or have local package mirrors, you should correct
    the APT_ settings.
  • If you need to uplink to a DNS hierarchy other than the root hint zones, specify DNS_FORWARDERS
    (note the trailing ';').
  • MASTER_NETDEV - the master interface name for the cluster address. Auto-detected by default.
  • LAN_NETDEV - the network interface to bind virtual machines to by default. Auto-detected by default.
  • RESERVED_VOLS - a comma-separated list of volumes ignored by Ganeti. You should specify the vg for every volume in this list. It is preset with a reasonable default needed for SCI-CD.
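
For instance, forwarding DNS queries to an upstream server (the address 192.168.5.1 here is an assumption) would look like this in sci.conf; note the trailing ';':

```shell
# Assumed upstream DNS server; the trailing ';' is required.
DNS_FORWARDERS="192.168.5.1;"
```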

SETUP THE CLUSTER

Issue:

# sci-setup cluster

Check and confirm the settings that are printed.

The process will go on.

Next you will be prompted to accept the ssh key of node2 and to enter the root password for node2.

On finish you will see something like this:

Verify
Wed Jan 12 15:36:10 2011 * Verifying global settings
Wed Jan 12 15:36:10 2011 * Gathering data (1 nodes)
Wed Jan 12 15:36:11 2011 * Verifying node status
Wed Jan 12 15:36:11 2011 * Verifying instance status
Wed Jan 12 15:36:11 2011 * Verifying orphan volumes
Wed Jan 12 15:36:11 2011 * Verifying orphan instances
Wed Jan 12 15:36:11 2011 * Verifying N+1 Memory redundancy
Wed Jan 12 15:36:11 2011 * Other Notes
Wed Jan 12 15:36:11 2011 * Hooks Results
Node                    DTotal  DFree MTotal MNode MFree Pinst Sinst
gnt1.ganeti.example.org 100.0G 100.0G  1020M  379M  625M     0     0
gnt2.ganeti.example.org 100.0G 100.0G  1020M  379M  625M     0     0
If all is ok, proceed with /usr/local/sbin/sci-setup service

SETUP THE SERVICE INSTANCE

The service instance is named 'sci' and has a few aliases.
On setup, its IP address is determined from /etc/resolv.conf of your first node.
This instance will be hardcoded into the /etc/hosts file of all cluster nodes and instances.

Issue:

# sci-setup service

You'll see the progress of DRBD syncing the disks, then the message

* running the instance OS create scripts...

appears. The next stage may take a while. The process finishes with the message

* starting instance...

Now you can log on to the sci instance using:

# gnt-instance console sci

Log in as root; the password is empty.
NOTE: Due to the empty password, all remote connections to the new instance are prohibited.
You should change the password and install the openssh-server package manually after
a successful bootstrap procedure.
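
Once the bootstrap has finished, this could be done from the instance console roughly as follows (a sketch, run inside the sci instance; it assumes the APT sources set up during bootstrap are reachable):

```shell
# Set a root password so remote logins become possible.
passwd

# Install the ssh server.
apt-get update
apt-get install openssh-server
```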

SERVICE INSTANCE BOOTSTRAP

The system will set itself up via puppet. This is an iterative process. You can monitor
it by looking at /var/log/daemon.log. At the start there is no less command yet, so
you can use more, cat, tail or tail -f until less is auto-installed.

By default the iterations repeat every 20 minutes. To shorten the wait time you can
issue

# /etc/init.d/puppet restart

and then watch in daemon.log how the run finishes.
Repeat this a few times until a puppet run makes no further changes. But be careful, because
there is a GPG key generation procedure which may take a long time.

PREPARING FOR NEW INSTANCES

New instances are created just by regular Ganeti commands such as:

gnt-instance add -t drbd -o debootstrap+default -s 10g -B memory=256m -n NODE1_NAME:NODE2_NAME INSTANCE_NAME
However, some tuning hooks are provided by the SCI-CD project:
  1. Each instance has puppet installed for autoconfiguration and openssh-client for file transfers etc.
  2. The instance uses pygrub to boot the kernel from /vmlinuz & Co on the instance's own disk.
  3. The instance's network interfaces may be set up automatically as described below.
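
As a concrete example, the template command above could be invoked like this (the node names gnt1 and gnt2 and the instance name web1 are assumptions, chosen to match the sample output earlier):

```shell
# Create a DRBD-mirrored instance (assumed names: gnt1, gnt2, web1).
gnt-instance add -t drbd -o debootstrap+default \
    -s 10g -B memory=256m \
    -n gnt1:gnt2 web1.ganeti.example.org
```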

INSTANCE INTERFACE AUTOCONFIGURATION

If your cluster has several networks attached to it, instances may be placed in any of them,
and you need static addressing in them, you should fill
the file /etc/ganeti/networks with all known networks to which you want to attach your instances.
Each line in the file has the format

NETWORK NETMASK BROADCAST GATEWAY

The Ganeti instance debootstrap hook looks in this file for the network matching the address of the bootstrapped
instance and fills in its /etc/network/interfaces accordingly.

NOTE: If you have only one default network, you needn't care, because its data is preinstalled.
NOTE: The networks file must be copied to all cluster nodes (not automated yet).
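
For the LAN described earlier, adding an entry and copying the file by hand might look like this (the values come from this section's example network; the node name gnt2 is an assumption):

```shell
# Example line for /etc/ganeti/networks (NETWORK NETMASK BROADCAST GATEWAY):
echo "192.168.5.0 255.255.255.0 192.168.5.255 192.168.5.1" >> /etc/ganeti/networks

# Copy the file to the other cluster node (assumed name: gnt2).
scp /etc/ganeti/networks gnt2:/etc/ganeti/networks
```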

SCI OPERATIONS

Read OPERATIONS next.