
h1. INSTALL

{{toc}}

[[OVERVIEW]] | [[INSTALL]] | [[SETUP]] | [[OPERATIONS]] | [[LICENSE]]
[[ОБЗОР]] | [[УСТАНОВКА]] | [[НАСТРОЙКА]] | [[ОПЕРАЦИИ]] | [[ЛИЦЕНЗИЯ]]

h2. Get the ISO image

Download the distribution disk, ready to install: the latest built ISO image is available in the "Documents":https://sci.skycover.ru/projects/sci-cd/documents section (to access it you should "register":https://sci.skycover.ru/account/register on this site).

h2. Custom ISO-image

How to build your own ISO image is described in [[BUILD-ISO]] (for experienced users who want to include their own work in the distribution set).

h2. Burn the ISO image

Burn it with your favorite tool (for ex. k3b).
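
As a command-line alternative, a minimal sketch (the device path /dev/sr0 and the file name sci-cd.iso are assumptions; check your writer with @wodim --devices@):

<pre>
# Write the ISO to the CD/DVD writer (adjust dev= to your device)
wodim -v dev=/dev/sr0 sci-cd.iso
</pre>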

h2. Create bootable USB flash

At present this option is only available when you build the ISO image yourself.
Consult [[BUILD-ISO]].
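
If the image you build is a hybrid ISO (an assumption; [[BUILD-ISO]] is the authoritative procedure), it can usually be written to a flash drive directly:

<pre>
# WARNING: /dev/sdX is a placeholder for the flash drive device; verify it with lsblk first
dd if=sci-cd.iso of=/dev/sdX bs=4M && sync
</pre>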

h2. Minimal System Requirements

In a real deployment you should choose hardware according to your actual objectives.
For testing purposes the minimal system requirements are:
* 1.2 GB RAM
* 50GB HDD
* 1 Gbit Ethernet for the DRBD link (if you set up 2 or more nodes)
* Hardware virtualization support: Intel VT or AMD-V (if you want to run Windows instances)
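
To verify the last point on a running Linux system, count the relevant CPU flags; a non-zero result means Intel VT-x (vmx) or AMD-V (svm) is present (it must also be enabled in the BIOS):

<pre>
# Prints the number of CPU threads advertising hardware virtualization
egrep -c '(vmx|svm)' /proc/cpuinfo
</pre>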

h2. Install the system

Install the system on two nodes (you may use only one node, but the high
availability features will then be unavailable).

The installer will ask you:

* The node's IP-address on the interlink. For ex.: 192.168.2.1 (192.168.2.2 for the second node)
* The network mask. For ex.: 255.255.255.0
* The node's name. For ex.: gnt1 (gnt2 for the second node)
* The domain. For ex.: gnt.yourcompany.com
* The default router's address (DO NOT specify it if you use a separate network
card for the interlink!)
* The (new) DNS server's address on the interlink. For ex.: 192.168.1.254
* The root password
* Choose one of the predefined disk layouts, or lay the disks out manually.
* Specify the disks on which you want to place the GRUB loader. Choose "md0" if you use
software RAID for the system partition.
* Specify additional kernel parameters (usually you don't need any).
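
As an illustration, a planning sheet for a two-node setup, filled with the example values above, could look like this:

<pre>
                 node 1               node 2
IP (interlink):  192.168.2.1          192.168.2.2
Netmask:         255.255.255.0        255.255.255.0
Name:            gnt1                 gnt2
Domain:          gnt.yourcompany.com  (same on both nodes)
DNS server:      192.168.1.254        (the new cluster DNS)
Router:          (leave empty when the interlink has its own NIC)
</pre>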

h3. Partitioning

During the installation you will be prompted to choose one of the predefined disk layouts,
suitable for nodes with 1-8 disks, with or without hardware RAID.

If you choose the "manual layout" option, you should preserve the following partitions on your nodes:
|_.Name|_.Size|_.Purpose|
|/|10G|Root partition for all the node's data|
|swap|1G|Swap space for Dom0, sized for its 512MB of RAM|
|xenvg|the rest of the disk, or another device|An LVM volume group named 'xenvg', without any logical volumes|

The VG "xenvg" will be used as the default place for the instance's volumes.
Also the "system-stuff" volume will be automatically created on xenvg (and mounted as /stuff).
It serves for various big stuff like cdrom images, instance backups etc.
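
After the installation you can confirm this layout with the standard LVM tools (exact output depends on your disks):

<pre>
vgs xenvg      # the volume group exists and reports free space
lvs xenvg      # shows the automatically created system-stuff volume
df -h /stuff   # system-stuff is mounted as /stuff
</pre>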

At your own choice, you may defer the creation of xenvg during the installation stage and create it later.
In this case any option with "no LVM" will be useful.
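
Creating xenvg afterwards comes down to standard LVM commands; a sketch, assuming the spare device is /dev/sdb (a placeholder):

<pre>
pvcreate /dev/sdb        # initialize the device for LVM
vgcreate xenvg /dev/sdb  # create the volume group under the name Ganeti expects
</pre>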

NOTE: If you want to place your own volumes on xenvg, you should
exclude them from the Ganeti-managed volumes (see [[SETUP]]).
The simplest way is to name your volumes "system-*", because this
pattern is excluded in SCI-CD by default.
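
For example, a volume created this way will be ignored by Ganeti under the default exclusion pattern (the name and size are illustrative):

<pre>
lvcreate -L 100G -n system-backup xenvg
</pre>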

You may create additional partitions or volume groups as you see fit.

h3. DNS server's address

The cluster has its own DNS server, which is also the 'approx' and
'puppet' server. At the [[SETUP]] stage, the cluster DNS server can be linked
to other DNS servers in forwarding mode.

The DNS server's address +must not+ be the address of an existing service.

If a local domain already exists, the cluster's domain name +must not+
duplicate it (use a subdomain or a completely different name).
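
Before settling on the address and the name, it is worth checking from an existing workstation that neither is already in use (values are the examples above):

<pre>
ping -c 1 192.168.1.254   # should get no reply if the address is free
host gnt.yourcompany.com  # should return NXDOMAIN if the name is unused
</pre>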

h2. Automatic post-installation changes

During the installation phase, the postinst.sh script from the distro
will apply over 40 system tweaks: [[POST-INSTALL]]

The CD installation takes about 15 minutes per node.

h2. The cluster is ready to setup

Read [[SETUP]] next.