OVERVIEW » History » Version 12

Dmitry Chernyak, 24.02.2013 18:46

h1. OVERVIEW
{{toc}}
[[OVERVIEW]] | [[INSTALL]] | [[BUILD-ISO]] | [[SETUP]] | [[OPERATIONS]] | [[GITMAGIC]] | [[LICENSE]] | [[STATUS]]
[[ОБЗОР]] | [[СБОРКА-ISO]] | [[УСТАНОВКА]] | [[НАСТРОЙКА]] | [[ОПЕРАЦИИ]] | [[МАГИЯ GIT]] | [[ЛИЦЕНЗИЯ]] | [[СОСТОЯНИЕ]]
h2. THE PROJECT
"SkyCover Infrastructure" (*SCI*) is a multipurpose, largely automated, high-reliability virtual server infrastructure, equipped with automated monitoring, backup, and audit processes.

"SCI-CD" is the deployment engine for an inexpensive and flexible high-reliability virtual cluster; it is the ground level of *SCI*.

h2. INTRODUCTION

SCI-CD was created to simplify the deployment of a virtual infrastructure based on the Ganeti project (http://code.google.com/p/ganeti/).

With Ganeti you can install a "farm" of several computers ("nodes"), each with its own storage, and create a cluster environment over them, in which each virtual instance runs on one node and is backed up on another node with online disk synchronization (via DRBD) and fast failover.

!cluster-sync.jpg!
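
The fast failover described above is driven with standard Ganeti commands. A minimal sketch, where the instance name "web1" is illustrative:

```
# Restart the instance on its secondary node (e.g. after a primary node
# failure); "web1" is an illustrative instance name.
gnt-instance failover web1

# While both nodes are healthy, the instance can instead be moved with
# live migration:
gnt-instance migrate web1
```

Both commands rely on the DRBD-synchronized disk copy already present on the secondary node, which is why failover is fast.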

This provides a way to build inexpensive, redundant, highly reliable systems.

Ganeti is also able to manage "regular", stand-alone virtual instances.

h2. SOFTWARE

SCI-CD is based on the Debian GNU/Linux project and uses its installer (simple-cdd) as well as its package repositories. Installing SCI-CD does not give you something new: you get a regular Debian GNU/Linux system, just installed the easy way and tuned as a complex cluster platform.

SCI-CD contains a minimum of original software; instead, it focuses on the proper use of existing open source components.

h2. CONTENTS

The SCI core platform consists of two Ganeti nodes and one *service instance* (a virtual machine named *sci*).

The service instance provides:
* DNS for local zones, with forwarders and root hints (chrooted BIND9),
* DHCP server for the LAN, with DDNS (disabled by default),
* Configuration management, able to tune up new nodes and instances (Puppet),
* Apt proxy with an uplink to Debian mirrors and a local repository copied from the SCI-CD CD-ROM (Approx).

More virtual instances may be created, and more functions assigned to them, using the regular Ganeti commands.

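Such an additional instance is created with the standard Ganeti command. A sketch with illustrative names and sizes (the nodes "node1"/"node2", the instance name, the OS variant, and the resource values are all assumptions, not fixed by SCI-CD):

```
# Create a DRBD-mirrored instance with node1 as primary and node2 as
# secondary; all names and sizes here are illustrative.
gnt-instance add -t drbd -n node1:node2 \
  -o debootstrap+default -s 10g -B memory=1024m web1.example.com
```

The `-t drbd` template is what gives the instance a synchronized disk copy on the secondary node; a stand-alone instance would use `-t plain` instead.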
h2. HARDWARE

The minimal SCI core hardware is two nodes, which can be any two computers: major brands such as HP or IBM, "China brands" such as SuperMicro, self-made servers, or even workstations.

Of course, the performance of your system will depend heavily on the hardware you have chosen: the CPU, memory, and RAID subsystem all matter for performance, as usual, but mostly NOT for the high-reliability capabilities.

The high-reliability features are provided by two separate nodes with their own storage subsystems and online storage synchronization between them. In most cases this level of redundancy is sufficient to cover data losses and service interruptions even with cheap hardware.

h2. LOCAL AREA NETWORKING

All nodes should be attached to the same TCP/IP LAN segment (or segments) to be able to launch clustered virtual instances.

The SCI setup supports 802.1q VLAN tagging, so you can easily give your nodes and instances interfaces in different network segments (a network switch with 802.1q support is required).

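On Debian, a tagged sub-interface for a node can be declared in /etc/network/interfaces (the vlan package must be installed). A sketch, where VLAN id 100 and the addresses are illustrative assumptions:

```
# /etc/network/interfaces fragment: 802.1q tagged sub-interface eth0.100.
# VLAN 100 and the 192.168.100.0/24 addressing are illustrative.
auto eth0.100
iface eth0.100 inet static
    address 192.168.100.2
    netmask 255.255.255.0
    vlan-raw-device eth0
```

The switch port carrying this link must be configured as a trunk that passes the tagged VLAN.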
h2. STORAGE NETWORKING

The nodes need to be interconnected with a fast TCP/IP interlink to transfer storage data (DRBD synchronization). 1GigE is usually sufficient.

It is possible (and recommended) to separate the storage interlink from the main TCP/IP access interfaces. In other words, each node should have two network adapters, and at least one of them should support 1GigE.

With only two nodes, the storage network may be implemented with a cat5e patch cord, without an Ethernet switch.

For simple and demo tasks it is possible to use only one network adapter per node, but it *MUST* support 1GigE speed.

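A dedicated back-to-back storage link is simply a separate subnet on the second adapter of each node. A sketch, where the interface name and the 10.10.10.0/24 subnet are illustrative assumptions:

```
# /etc/network/interfaces fragment on the first node; the second node
# would use 10.10.10.2. This subnet carries only DRBD replication
# traffic, keeping it off the main LAN interface.
auto eth1
iface eth1 inet static
    address 10.10.10.1
    netmask 255.255.255.0
```

Keeping DRBD traffic on its own link prevents disk synchronization bursts from degrading client-facing network performance.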
h2. INTERNET NETWORKING

To access the Internet, you should connect your local network segment to an Internet router.

The simplest way is to use a separate router with NAT.

A more advanced way is to construct a router/firewall on one of the cluster's virtual instances, possibly with a separate network interface.

h2. DEBIAN REPOSITORIES

The service instance provides the "approx" apt proxy, which can be uplinked to regular Debian mirrors, or to intermediate mirrors or apt proxies in your LAN.

In any case, the service instance's "approx" is loaded with a copy of the repository from the SCI-CD CD-ROM, so it can be used even in the absence of an uplink to external sources.

The standard repository list is pushed automatically into the sources.list of the puppet-client instances (which, by default, are all nodes and instances).

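An approx uplink is configured by mapping a repository name to a mirror URL. A sketch of /etc/approx/approx.conf and a matching client sources.list line (the mirror URL and the Debian release name are illustrative; "sci" is the service instance's host name as described above):

```
# /etc/approx/approx.conf: map the name "debian" to an uplink mirror
# (the mirror URL is illustrative).
debian    http://ftp.debian.org/debian

# Client /etc/apt/sources.list entry pointing at the proxy on "sci";
# approx listens on port 9999 by default. The release name is
# illustrative.
deb http://sci:9999/debian wheezy main
```

Clients then fetch packages through the proxy, which serves them from its local copy when the uplink is unavailable.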
h2. INSTALLATION
Please read [[INSTALL]] for instructions.