OVERVIEW » History » Version 29
Version 28 (Dmitry Chernyak, 15.03.2016 01:31) → Version 29/35 (Dmitry Chernyak, 15.03.2016 01:36)
h1. OVERVIEW
{{toc}}
[[OVERVIEW]] | [[BUILD-ISO]] | [[INSTALL]] | [[SETUP]] | [[OPERATIONS]] | [[GITMAGIC]] | [[LICENSE]] | [[STATUS]]
[[ОБЗОР]] | [[СБОРКА-ISO]] | [[УСТАНОВКА]] | [[НАСТРОЙКА]] | [[ОПЕРАЦИИ]] | [[МАГИЯ GIT]] | [[ЛИЦЕНЗИЯ]] | [[СОСТОЯНИЕ]]
h2. THE PROJECT
"SkyCover Infrastructure" (*SCI*) is the multipurpose, mostly automated high
reliability virtual server infrastructure, equipped with automatic monitoring,
backup and audit processes.
"SCI-CD" is the deployment engine for cheap and flexible virtual high
reliability cluster, the ground level for *SCI*.
h2. INTRODUCTION
SCI-CD makes it easy to deploy a virtual infrastructure
based on the Ganeti project (http://code.google.com/p/ganeti/).
With Ganeti you can install a "farm" of several computers
("nodes"), each with its own storage, and create a cluster environment over
them in which each virtual instance runs on one node and is backed up
on another node, with online disk synchronization (via DRBD) and fast
failover.
!cluster-sync.jpg!
This provides a way to build cheap, redundant, highly reliable systems.
Ganeti can also manage "regular" stand-alone virtual instances.
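For example, moving an instance between its two nodes is a single Ganeti command. The instance name below is hypothetical:

```shell
# Fail an instance over to its secondary node
# (e.g. after a primary node crash); "web1" is a hypothetical name.
gnt-instance failover web1

# Live-migrate a running DRBD-backed instance to its secondary node
# without shutting it down.
gnt-instance migrate web1
```

Both commands rely on the DRBD mirror already holding an up-to-date copy of the disks on the secondary node.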
h2. SOFTWARE
SCI-CD is based on the Debian GNU/Linux project and uses its installer (simple-cdd)
and its package repositories.
Installing SCI-CD does not give you something exotic: you get a regular Debian GNU/Linux
system, just installed the easy way and tuned as a complex cluster platform.
SCI-CD contains a minimum of original software; instead it focuses on the
proper use of existing open source components.
h2. CONTENTS
The SCI core platform consists of two Ganeti nodes and one *service instance* (a virtual
machine named *sci*).
The service instance provides:
* DNS for local zones, forwarders, and root hints (BIND9, chrooted),
* a DHCP server for the LAN, with DDNS (disabled by default),
* configuration management, able to tune up new nodes and instances (Puppet),
* an apt proxy (Approx), with an uplink to Debian mirrors and a local repository
copied from the SCI-CD CD-ROM.
More virtual instances may be created, and more functions assigned to them,
using the regular Ganeti commands.
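A sketch of creating an additional DRBD-mirrored instance with the regular Ganeti tools; the node names, instance name, and sizes here are only examples:

```shell
# Add a new virtual instance with DRBD-replicated storage,
# primary on node1, secondary on node2 (all names are hypothetical).
gnt-instance add -t drbd -n node1:node2 \
    -o debootstrap+default -s 10G -B memory=1G newvm.example.com

# List instances with their primary/secondary nodes and status.
gnt-instance list -o name,pnode,snodes,status
```

The `-t drbd` disk template is what gives the instance a live mirror on the second node.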
h2. HARDWARE
The minimal SCI core hardware is two nodes, which can be any two computers:
major brands such as HP or IBM, "China brands" such as
SuperMicro, self-made servers, or even workstations.
Of course, the performance of your system will depend heavily on the
hardware you choose: the CPU, memory, and RAID subsystem all matter
for performance as usual, but mostly NOT for the high-reliability
capabilities.
The high-reliability features are provided by two separate nodes with
their own storage subsystems and online storage synchronization between them. In
most cases this level of redundancy is sufficient to prevent data loss and
service interruption even on cheap hardware.
h2. LOCAL AREA NETWORKING
All nodes should be attached to the same TCP/IP LAN segment (or segments) to be able
to run the clustered virtual instances.
The SCI setup supports 802.1q VLAN tagging, so you can easily give your nodes and
instances interfaces in different network segments (a network switch with
802.1q support is required).
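As an illustration, a tagged interface on a Debian node can be declared in /etc/network/interfaces; the `vlan` package provides the 802.1q support, and the VLAN ID and addresses below are hypothetical:

```
# /etc/network/interfaces fragment: VLAN 42 on top of eth0
# (requires the "vlan" package; addresses are examples only)
auto eth0.42
iface eth0.42 inet static
    address 192.168.42.10
    netmask 255.255.255.0
    vlan-raw-device eth0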
h2. STORAGE NETWORKING
The nodes must be interconnected with a fast TCP/IP link to transfer
storage data (DRBD synchronization). 1GigE is usually sufficient.
It is possible (and recommended) to separate the storage interlink from the main
TCP/IP access interfaces. In other words, each node should have two network
adapters, and at least one of them should support 1GigE.
With only two nodes, the storage network may be implemented with a cat5e patch cord,
without an Ethernet switch.
For simple and demo tasks it is possible to use only one network adapter
per node, but it *MUST* support 1GigE.
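Once the cluster is running, the replication state over the storage interlink can be checked on a node; the output format varies with the DRBD version:

```shell
# Check overall cluster health, including DRBD pairing between nodes.
gnt-cluster verify

# On a node, inspect the raw DRBD replication state (DRBD 8.x
# exposes it via /proc/drbd).
cat /proc/drbd
```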
h2. INTERNET NETWORKING
To access the Internet, you should connect your local network segment
to an Internet router.
The simplest way is to use a separate router with NAT.
A more advanced way is to build the router/firewall as a virtual
instance on the cluster, probably with a separate network interface.
h2. DEBIAN REPOSITORIES
The service instance provides the "approx" apt proxy, which can be uplinked to
regular Debian mirrors, or to intermediate mirrors or apt proxies in your LAN.
In any case, the service instance's "approx" is preloaded with a copy of the
repository from the SCI-CD CD-ROM, so it can be used even without an uplink
to external sources.
The standard repository list is pushed automatically into the sources.list
of the puppet-client machines (which are, by default, all the nodes and
instances).
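A client's /etc/apt/sources.list then points at the service instance instead of a public mirror. Approx listens on port 9999 by default; the host name *sci* matches the service instance above, but the distribution name and repository paths below are assumptions:

```
# /etc/apt/sources.list fragment on a node or instance, pointing at
# the approx proxy on the service instance "sci" (default port 9999;
# the path components map to entries in approx.conf).
deb http://sci:9999/debian wheezy main
deb http://sci:9999/security wheezy/updates main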
h2. INSTALLATION
Please read [[INSTALL]] for instructions.