
h1. SETUP

{{toc}}

Ensure both nodes are up.

If you plan to use a secondary network for SAN and DRBD synchronization, you
should configure the secondary IP interfaces manually on both nodes at this point.
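
For example, on Debian a secondary interface can be declared in @/etc/network/interfaces@. This is a minimal sketch only; the interface name @eth1@ and the 10.1.1.0/24 addresses are assumptions, substitute your own:

<pre>
# /etc/network/interfaces (fragment) - hypothetical interface name and addresses
auto eth1
iface eth1 inet static
    address 10.1.1.1      # use 10.1.1.2 on the second node
    netmask 255.255.255.0
</pre>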

Log in to the first node via ssh. Because DNS is not available yet, there may be
a one-minute timeout before the password prompt appears.
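
For example (the address is a placeholder for your first node's IP):

<pre>
# ssh root@192.168.1.1
</pre>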

h2. NETWORK CONFIGURATION

Network configuration can vary widely.
Here we describe several schemes.

h2. DEFINING ENVIRONMENT

Edit @/etc/sci/sci.conf@

* You should specify the node1 and node2 data exactly as you installed them.
*NOTE*: You can set up the cluster even with one node. In that case just leave the NODE2_
lines as they are. This is in fact a dangerous setup, so you will be warned about it during
the procedures.

* You should specify the cluster's name and IP.

* NODE#_SAN_IP should be specified on both nodes or none.

* If you have no Internet uplink, or you have local package mirrors, you should adjust
the APT_ settings accordingly.

* If you need to forward DNS queries to a DNS hierarchy other than the root hint zones, specify DNS_FORWARDERS
(note the trailing ';'). A sample configuration sketch follows this list.
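
To illustrate, here is a minimal sketch of what such a configuration might look like. All values are placeholders, and any variable names not mentioned above (NODE1_NAME, CLUSTER_IP and the like) are assumptions for illustration; the comments in @/etc/sci/sci.conf@ itself are the authoritative reference:

<pre>
# /etc/sci/sci.conf (sketch; names and values are illustrative)
NODE1_NAME=gnt1                 # first node, as installed
NODE1_IP=192.168.1.1
NODE2_NAME=gnt2                 # leave NODE2_ lines as is for a one-node cluster
NODE2_IP=192.168.1.2
NODE1_SAN_IP=10.1.1.1           # specify on both nodes or on none
NODE2_SAN_IP=10.1.1.2
CLUSTER_NAME=sci-cluster        # the cluster's name...
CLUSTER_IP=192.168.1.10         # ...and IP
DNS_FORWARDERS="8.8.8.8;"       # note the trailing ';'
</pre>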

h2. SETUP CLUSTER

Issue:

<pre>
# sci-setup cluster
</pre>

Check the printed settings and confirm them.

The process will then continue.

Next you will be prompted to accept the ssh key of node2 and to enter node2's root password.

At the end you will see something like this:

<pre>
Verify
Wed Jan 12 15:36:10 2011 * Verifying global settings
Wed Jan 12 15:36:10 2011 * Gathering data (1 nodes)
Wed Jan 12 15:36:11 2011 * Verifying node status
Wed Jan 12 15:36:11 2011 * Verifying instance status
Wed Jan 12 15:36:11 2011 * Verifying orphan volumes
Wed Jan 12 15:36:11 2011 * Verifying orphan instances
Wed Jan 12 15:36:11 2011 * Verifying N+1 Memory redundancy
Wed Jan 12 15:36:11 2011 * Other Notes
Wed Jan 12 15:36:11 2011 * Hooks Results
Node DTotal DFree MTotal MNode MFree Pinst Sinst
gnt1.ganeti.example.org 100.0G 100.0G 1020M 379M 625M 0 0
gnt2.ganeti.example.org 100.0G 100.0G 1020M 379M 625M 0 0
If all is ok, proceed with /usr/local/sbin/sci-setup service
</pre>

h2. SETUP SERVICE INSTANCE

The service instance is named 'sci' and has a few aliases.
At setup time, its IP address is determined from @/etc/resolv.conf@ on your first node.
This instance will be hardcoded in the @/etc/hosts@ file of all cluster nodes and instances.

Issue:

<pre>
# sci-setup service
</pre>

You'll see the progress of DRBD syncing the disks, then the message
<pre>
* running the instance OS create scripts...
</pre>
appears. The next step may take a while. The process finishes with the
<pre>
* starting instance...
</pre>
message.

Now you can log on to the sci instance using:

<pre>
# gnt-instance console sci
</pre>

Log in as root; the password is empty.
*NOTE*: Because of the empty password, all remote connections to the new instance are prohibited.
You should change the password and install the @openssh-server@ package manually after
the bootstrap procedure completes successfully.
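
For example, a minimal sketch of those two manual steps, run on the instance's console:

<pre>
# passwd root
# apt-get update
# apt-get install openssh-server
</pre>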

h2. SERVICE INSTANCE BOOTSTRAP

The system will set itself up via puppet. This is an iterative process. You can monitor
it by looking at @/var/log/daemon.log@. At the start there is no @less@ command yet, so
you can use @more@, @cat@, @tail@ or @tail -f@ until @less@ is auto-installed.
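
For example, to follow the log as puppet works:

<pre>
# tail -f /var/log/daemon.log
</pre>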

By default the iterations are repeated every 20 minutes. To shorten the wait time you can
issue

<pre>
# /etc/init.d/puppet restart
</pre>

and then watch in @daemon.log@ how the run finishes.

Repeat this a few times until a puppet run makes no further changes.

h2. PREPARING FOR NEW INSTANCES

New instances are created just by regular Ganeti commands such as:

<pre>
gnt-instance add -t drbd -o debootstrap+default -s 10g -B memory=256m -n NODE1_NAME:NODE2_NAME INSTANCE_NAME
</pre>

However, some tuning hooks are provided by the SCI-CD project:
# Each instance has @puppet@ installed for autoconfiguration and @openssh-client@ for file transfers etc.
# The instance uses pygrub to boot the kernel from /vmlinuz & Co on the instance's own disk.
# The instance's network interfaces may be set up automatically as described below.

h3. INSTANCE INTERFACE AUTOCONFIGURATION

If your instances may sit on several networks and you need static addressing in them, you should fill in
the file @/etc/ganeti/networks@ with all the known networks to which you want to attach your instances.
Each line in the file has the format:

|NETWORK|NETMASK|BROADCAST|GATEWAY|
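
A hypothetical example line, assuming whitespace-separated fields (the addresses are placeholders):

<pre>
192.168.1.0 255.255.255.0 192.168.1.255 192.168.1.1
</pre>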

The Ganeti instance debootstrap hook looks in this file for the network matching the address of the bootstrapped
instance and fills in its @/etc/network/interfaces@ accordingly.

*NOTE*: If you have only one, default network, you needn't care: its data is preinstalled.
*NOTE*: The networks file must be copied to all cluster nodes manually (this is not automated yet); see the sketch below.
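
For example, from the master node (the node name @gnt2@ is a placeholder for your second node's hostname):

<pre>
# scp /etc/ganeti/networks gnt2:/etc/ganeti/networks
</pre>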

h2. SCI OPERATIONS

Read [[OPERATIONS]] next.