h1. OVERVIEW
{{toc}}
[[INSTALL]] | [[BUILD-ISO]] | [[SETUP]] | [[OPERATIONS]] | [[GITMAGIC]] | [[LICENSE]] | [[STATUS]]
in Russian: [[ОБЗОР]] | [[СБОРКА-ISO]] | [[УСТАНОВКА]] | [[НАСТРОЙКА]] | [[ОПЕРАЦИИ]] | [[МАГИЯ GIT]] | [[ЛИЦЕНЗИЯ]] | [[СОСТОЯНИЕ]]
h2. THE PROJECTS
"SkyCover Infrastructure" (*SCI*) is the multipurpose, mostly automated high
reliability availability virtual server infrastructure, equipped with automatic monitoring,
backup and audit processes.
"SCI-CD" is the deployment engine for cheap and flexible virtual high
reliability availability cluster, the ground level for *SCI*.
h2. INTRODUCTION
SCI-CD was created to simplify the deployment of a virtual infrastructure
based on the Ganeti project (http://code.google.com/p/ganeti/).
With Ganeti you can install a "farm" of several computers ("nodes"), each
with its own storage, and create a cluster environment over them, in which
each virtual instance runs on one node and is backed up on another node with
on-line disk synchronization (via DRBD) and fast failover.
!cluster-sync.jpg!
This provides a way to build cheap, redundant, highly available systems.
Ganeti is also able to manage "regular", stand-alone virtual instances.
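As a rough sketch, and assuming hypothetical node names @gnt1@ and @gnt2@ and the debootstrap OS definition (none of which are SCI-CD defaults), such a DRBD-backed instance could be created and switched to its secondary node with the standard Ganeti commands:

<pre>
# create an instance whose disk is replicated between the two nodes via DRBD
# (instance, node and OS names here are examples only)
gnt-instance add -t drbd -o debootstrap+default \
  -s 10G -B memory=1024M -n gnt1:gnt2 test1.example.org

# planned switch to the secondary node, or recovery after a primary node failure
gnt-instance failover test1.example.org
</pre>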
h2. SOFTWARE
SCI-CD is based on Debian GNU/Linux and uses its installer (simple-cdd)
as well as its package repositories.
By installing SCI-CD you don't get something new - you get a regular Debian
GNU/Linux system, just installed the easy way and tuned as a complex cluster
platform.
SCI-CD contains a minimum of original software; instead, it focuses on the
proper use of existing open source components.
h2. CONTENTS
The SCI core platform consists of two Ganeti nodes and one *service instance* (a virtual
machine named *sci*).
The service instance provides:
* DNS for local zones, forwarders and root hints (BIND9, chrooted),
* a DHCP server for the LAN (disabled by default),
* configuration management, able to tune up new nodes and instances (Puppet),
* an apt proxy (Approx) with an uplink to Debian mirrors and a local repository
copied from the SCI-CD CD-ROM.
More virtual instances may be created and more functions may be assigned to them
using the regular Ganeti commands.
h2. HARDWARE
The minimal SCI core hardware consists of two nodes, which can be any two computers -
major brands such as HP or IBM, "China brands" such as
SuperMicro, self-made servers, or even workstations.
Of course, the performance of your system will depend heavily on the
hardware you have chosen - the CPU, memory and RAID subsystem all matter
for performance, as usual, but mostly not for the high-availability
capabilities.
The HA features are provided by two separate nodes with their own
storage subsystems and on-line storage synchronization between them. In
most cases this level of redundancy is sufficient to cover data loss and
service interruptions even with cheap hardware.
Of course, systems which cannot be interrupted even for a few minutes must
be sized and analyzed separately.
h2. LOCAL AREA NETWORKING
All nodes should be attached to one common TCP/IP network segment via the main
interface (exceptions are possible - see the Ganeti manuals).
The SCI setup supports 802.1q VLAN tagging, so you can easily give your nodes and
instances interfaces in different network segments (a network switch with
802.1q support is required).
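For illustration only, a tagged VLAN interface on a Debian node is usually declared in @/etc/network/interfaces@ roughly like this (the interface name, VLAN ID and addresses are placeholders, not SCI-CD defaults, and the @vlan@ package must be installed):

<pre>
# 802.1q sub-interface for VLAN 42 on top of eth0
auto eth0.42
iface eth0.42 inet static
    address 192.168.42.2
    netmask 255.255.255.0
    vlan-raw-device eth0
</pre>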
h2. STORAGE NETWORKING
The nodes need to be interconnected with a fast TCP/IP link to transfer
storage data (DRBD synchronization). 1GigE is recommended.
It is possible (and recommended) to separate the storage interlink
from the main TCP/IP access interfaces. In other words, each
node should have two network adapters and at least one of them should
support 1GigE.
With only two nodes, the storage network may be built without an Ethernet
switch, using a simple cat5e cable.
For simple and demo setups it is possible to use only one network adapter
per node, but it *MUST* support 1GigE speed.
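A minimal sketch of such a dedicated back-to-back storage link, assuming the second adapter is @eth1@ and using an arbitrarily chosen private subnet; plain Ganeti can then be told to use this network for replication traffic via its secondary IP option:

<pre>
# /etc/network/interfaces - dedicated DRBD/replication link on the second NIC
auto eth1
iface eth1 inet static
    address 192.168.100.1    # e.g. 192.168.100.2 on the other node
    netmask 255.255.255.0

# plain Ganeti would then take this address as the node's secondary IP, e.g.:
#   gnt-cluster init -s 192.168.100.1 <clustername>
</pre>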
h2. INTERNET NETWORKING
In order to access the Internet you should connect your local network segment
to an Internet router.
The simplest way is to use a separate router with NAT.
A more advanced way is to build the router/firewall on one of the cluster's virtual
instances, possibly with a separate network interface. This setup is not part of the
SCI-CD project, but it can be implemented on top of it.
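As an illustration only (not an SCI-CD-provided configuration, and with a placeholder uplink interface name), NAT on such a router/firewall instance essentially comes down to enabling forwarding and masquerading the outgoing traffic:

<pre>
# enable IPv4 forwarding and masquerade traffic leaving via the uplink (eth1 here)
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
</pre>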
h2. DEBIAN REPOSITORIES
The service instance provides "approx" apt proxy, which can be uplinked to
regular Debian mirrors or to the intermediate mirrors or apt proxies in your LAN.
In any case, the service instance's "approx" is loaded by the copy of the
repository from the SCI-CD CD-ROM. It can be used even in absence of the uplink
to the external sources.
The standard reposiporise list is pushed automatically into the sources.lists
of the puppet-client instances (which are by default all the nodes and
instances).
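As a sketch only (the hostnames, port and distribution names are illustrative, not necessarily what SCI-CD generates), an approx uplink and the matching client entries look roughly like this:

<pre>
# /etc/approx/approx.conf on the service instance - upstream repositories to proxy
debian          http://ftp.debian.org/debian
security        http://security.debian.org/debian-security
</pre>

<pre>
# /etc/apt/sources.list on a node or instance - point apt at approx
# (approx listens on TCP port 9999 by default)
deb http://sci:9999/debian wheezy main
deb http://sci:9999/security wheezy/updates main
</pre>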
h2. INSTALLATION
Please read [[INSTALL]] for instructions.