INSTALL » History » Version 22
Version 21 (Dmitry Chernyak, 24.02.2013 19:15) → Version 22/45 (Dmitry Chernyak, 24.02.2013 19:43)
h1. INSTALL
{{toc}}
[[OVERVIEW]] | [[BUILD-ISO]] | [[INSTALL]] | [[SETUP]] | [[OPERATIONS]] | [[GITMAGIC]] | [[LICENSE]] | [[STATUS]]
[[ОБЗОР]] | [[СБОРКА-ISO]] | [[УСТАНОВКА]] | [[НАСТРОЙКА]] | [[ОПЕРАЦИИ]] | [[МАГИЯ GIT]] | [[ЛИЦЕНЗИЯ]] | [[СОСТОЯНИЕ]]
h2. Get the ISO image
You can get the latest built ISO image here: http://sci-dev.skycover.ru/dist/SCI-CD-latest.iso
Older versions are also available at http://sci-dev.skycover.ru/dist/
Otherwise you can build your own ISO image (for experienced users): [[BUILD-ISO]]
h2. Burn the ISO image
Burn it with your favorite tool (e.g. k3b).
h2. Create bootable USB flash
At present this option is only available when you build the ISO image yourself.
Consult [[BUILD-ISO]].
h2. Minimal System Requirements
In a real deployment you must choose hardware according to your actual objectives.
For testing purposes the minimal system requirements are:
* 1GB RAM
* 50GB HDD
* 1Gbit ethernet for DRBD link (if you setup 2 or more nodes)
* Hardware virtualization support: Intel VT or AMD-V (if you want to run Windows instances)
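To see whether a candidate node meets the last requirement, you can check the CPU flags from a running Linux system; this is a generic check, not an SCI-CD tool:

```shell
# Quick check for hardware virtualization support (needed for Windows/HVM
# instances): the CPU flag is 'vmx' on Intel (VT-x) and 'svm' on AMD (AMD-V).
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    echo "HVM supported"
else
    echo "HVM not supported"
fi
```

Note that the flag may also be hidden when virtualization is disabled in the BIOS, so check there too if the flag is missing.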
h2. Install the system
Install the system on two nodes (you may use only one node, but high
availability capabilities will be unavailable).
You must know the following data at this point:
* The network number and the mask of the interlink local network
* The IP addresses of the two installed nodes (in the same network)
* The internet router's address (in the same network, if any)
* The cluster's domain name
* The hostnames for new nodes
* The address of the future DNS server (*NOT an already existing host!*), which will be installed on the
cluster service instance
* The root password
* A decision on how to build the storage system - software MD RAID, hardware RAID,
or no RAID at all (not recommended) - at your own choice
Network topology configuration is a complicated topic; it is described in [[SETUP#NETWORK-CONFIGURATION]]
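It may help to write the answers down before booting the installer. The sketch below shows one possible plan for a two-node cluster; every address and name in it is a made-up example, not an installer default:

```shell
# Hypothetical interlink plan for a two-node cluster (example values only).
NETWORK=192.168.239.0       # interlink network number
NETMASK=255.255.255.0       # interlink network mask
NODE1_IP=192.168.239.1      # first node
NODE2_IP=192.168.239.2      # second node
ROUTER_IP=                  # usually empty: no router on the interlink
DNS_IP=192.168.239.10       # future DNS server (must NOT exist yet)
DOMAIN=sci.example.org      # cluster's domain name
NODE1_NAME=gnt1             # hostname of the first node
NODE2_NAME=gnt2             # hostname of the second node
echo "$NODE1_NAME=$NODE1_IP $NODE2_NAME=$NODE2_IP dns=$DNS_IP domain=$DOMAIN"
```

Whatever values you choose, keep the node, router and DNS addresses inside the same interlink network.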
h3. Partitioning
During the installation you will be prompted to choose one of the predefined disk layouts,
suitable for nodes with 1-8 disks, with or without hardware RAID.
If you choose the "manual layout" option, then you should preserve the following partitions on your nodes:
|_.Name|_.Size|_.Purpose|
|/|10G|Root partition for all the node's data|
|swap|1G|Swap space for Dom0, sized for 512MB of Dom0 RAM|
|xenvg|the rest|An LVM volume group named 'xenvg', with no logical volumes inside|
The VG "xenvg" will be used as the default place for the instance's volumes.
Also the "system-stuff" volume will be automatically created on xenvg (and mounted as /stuff).
It holds various large files such as CD-ROM images, instance backups, etc.
At your own choice, you may defer the creation of xenvg during the installation stage and create it later.
In this case, any of the "no LVM" layout options will be useful.
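If you take that route, the volume group can be created by hand after installation. A minimal sketch, assuming the spare space lives on a hypothetical third partition /dev/sda3 (run as root and adjust the device name to your layout):

```shell
# Mark the spare partition as an LVM physical volume, then build the
# 'xenvg' group that the cluster expects (device name is an example).
pvcreate /dev/sda3
vgcreate xenvg /dev/sda3
# Optionally recreate the stuff volume by hand as well:
lvcreate -L 20G -n system-stuff xenvg
```

These are destructive commands; double-check the device name before running them.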
NOTE: If you want to place your own volumes on xenvg, you should then exclude them from the Ganeti-managed volumes (see [[SETUP]]).
The simplest way is to name your volumes "system-*", because this pattern is excluded in SCI-CD by default.
You may create more partitions or volume groups at your choice.
h3. DNS server's address
The cluster has its own DNS server, which is also the 'approx' and
'puppet' server. At the [[SETUP]] stage, the cluster DNS server can be linked
to other DNS servers in forwarding mode.
The DNS server's address +must not+ be the address of an existing service.
The cluster's domain name +must not+ be the name of an existing domain if a
local domain already exists (use a subdomain, or a completely different name).
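As an illustration of the forwarding mode mentioned above (it is configured later, at the [[SETUP]] stage), a BIND-style options fragment might look like this; the upstream address 10.0.0.1 is a placeholder, not a real default:

```
options {
    // Pass queries for outside zones to the corporate/provider DNS.
    forwarders { 10.0.0.1; };
    // Answer from the forwarders only, do not recurse to the root servers.
    forward only;
};
```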
h2. Automatic post-installation changes
During the installation phase, the postinst.sh script from the distribution
performs the following system tuning: [[POST-INSTALL]]
h2. The cluster is ready for setup
The CD installation takes about 15 minutes per node.
Read [[SETUP]] next.