SETUP » History » Version 2

Владимир Ипатов, 23.10.2012 16:05

h1. SETUP

{{toc}}

Ensure both nodes are up.

If you plan to use the secondary network for SAN and DRBD synchronization,
configure the secondary IP interfaces manually on both nodes at this point.

Log in to the first node via ssh. Because DNS is not available yet, there may be
a timeout of up to a minute before the password prompt appears.

h2. NETWORK CONFIGURATION

Network configurations can vary widely.
Here we describe several schemes.

h2. DEFINING ENVIRONMENT

Edit @/etc/sci/sci.conf@:

* Specify the node1 and node2 data exactly as you installed them.
*NOTE*: You can set up the cluster even with a single node. In that case just leave the NODE2_
lines as they are. This is a fragile setup, so you will be warned about it during
the procedures.

* Specify the cluster's name and IP.

* NODE#_SAN_IP should be specified on both nodes or on neither.

* If you have no Internet uplink, or you have local package mirrors, adjust
the APT_ settings.

* If you need to uplink to a DNS hierarchy other than the root hint zones, specify DNS_FORWARDERS
(note the trailing ';').
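
As an illustration, a filled-in @/etc/sci/sci.conf@ might contain a fragment like the one below. All hostnames and addresses here are hypothetical, and the exact variable names may differ from your version of the file; follow the comments in your own @sci.conf@.

<pre>
# Hypothetical example values -- check the comments in your sci.conf
NODE1_NAME=gnt1
NODE1_IP=192.168.1.1
NODE2_NAME=gnt2
NODE2_IP=192.168.1.2
CLUSTER_NAME=gnt
CLUSTER_IP=192.168.1.10
DNS_FORWARDERS="192.168.1.254;"
</pre>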

h2. SETUP CLUSTER

Issue:

<pre>
# sci-setup cluster
</pre>

Check and confirm the printed settings.

The process will then run.

Next you will be prompted to accept the ssh key of node2 and to enter the root password for node2.

On finish you will see something like this:

<pre>
Verify
Wed Jan 12 15:36:10 2011 * Verifying global settings
Wed Jan 12 15:36:10 2011 * Gathering data (1 nodes)
Wed Jan 12 15:36:11 2011 * Verifying node status
Wed Jan 12 15:36:11 2011 * Verifying instance status
Wed Jan 12 15:36:11 2011 * Verifying orphan volumes
Wed Jan 12 15:36:11 2011 * Verifying orphan instances
Wed Jan 12 15:36:11 2011 * Verifying N+1 Memory redundancy
Wed Jan 12 15:36:11 2011 * Other Notes
Wed Jan 12 15:36:11 2011 * Hooks Results
Node                    DTotal  DFree MTotal MNode MFree Pinst Sinst
gnt1.ganeti.example.org 100.0G 100.0G  1020M  379M  625M     0     0
gnt2.ganeti.example.org 100.0G 100.0G  1020M  379M  625M     0     0
If all is ok, proceed with /usr/local/sbin/sci-setup service
</pre>

h2. SETUP SERVICE INSTANCE

The service instance is named 'sci' and has a few aliases.
At setup time, its IP address is determined from @/etc/resolv.conf@ on your first node.
This instance will be hardcoded into the @/etc/hosts@ file of all cluster nodes and instances.
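
For example, the resulting @/etc/hosts@ entry might look like this (the address and domain below are hypothetical):

<pre>
# sci and its aliases, distributed to all nodes and instances (hypothetical values)
10.0.0.2    sci.example.org sci
</pre>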

Issue:

<pre>
# sci-setup service
</pre>

You'll see the progress of DRBD syncing the disks, then the message

<pre>
* running the instance OS create scripts...
</pre>

appears. This step may take a while. The process finishes with the message

<pre>
* starting instance...
</pre>

Now you can log in to the sci instance using:

<pre>
# gnt-instance console sci
</pre>

Log in as root; the password is empty.
*NOTE*: Because of the empty password, all remote connections to the new instance are prohibited.
You should change the password and install the @openssh-server@ package manually after
a successful bootstrap procedure.
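
Once the bootstrap has converged, this can be done from the instance console along these lines (a sketch, assuming the standard Debian package tools inside the instance):

<pre>
# passwd root
# apt-get update
# apt-get install openssh-server
</pre>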

h2. SERVICE INSTANCE BOOTSTRAP

The system will set itself up via puppet. This is an iterative process. You can monitor
it by looking into @/var/log/daemon.log@. At the start there is no @less@ command yet, so
use @more@, @cat@, @tail@ or @tail -f@ until @less@ is auto-installed.

By default the iterations are repeated every 20 minutes. To shorten the wait time you can
issue

<pre>
# /etc/init.d/puppet restart
</pre>

and then check in @daemon.log@ how the run finishes.

Repeat this a few times until puppet no longer makes any changes on a run.
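
One way to check the outcome of the last run (assuming the standard Puppet agent log messages, e.g. "Finished catalog run in ... seconds"):

<pre>
# grep "Finished catalog run" /var/log/daemon.log | tail -n 1
</pre>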

h2. PREPARING FOR NEW INSTANCES

New instances are created with the regular Ganeti commands, such as:

<pre>
gnt-instance add -t drbd -o debootstrap+default -s 10g -B memory=256m -n NODE1_NAME:NODE2_NAME INSTANCE_NAME
</pre>

In addition, some tuning hooks are provided by the SCI-CD project:
# Each instance has @puppet@ installed for autoconfiguration and @openssh-client@ for file transfers etc.
# Each instance uses pygrub to boot the kernel from /vmlinuz & Co on the instance's own disk.
# The instance's network interfaces may be set up automatically as described below.

h3. INSTANCE INTERFACE AUTOCONFIGURATION

If your instances may sit on several networks and you need static addressing in them, fill in
the file @/etc/ganeti/networks@ with all the known networks you may want to attach your instances to.
Each line in the file has the format:

|NETWORK|NETMASK|BROADCAST|GATEWAY|
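
For instance, a line describing a hypothetical 192.168.1.0/24 network with its gateway at .254 could look like this (assuming the fields follow each other in the order shown above; check the delimiter convention in your existing @/etc/ganeti/networks@):

<pre>
192.168.1.0 255.255.255.0 192.168.1.255 192.168.1.254
</pre>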

The Ganeti instance debootstrap hook looks in this file for the network matching the address of the bootstrapped
instance and fills in its @/etc/network/interfaces@ accordingly.

*NOTE*: If you have only one, default network, you don't need to do anything, because its data is preinstalled.
*NOTE*: The networks file must be copied to all cluster nodes manually (not automated yet).
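
The matching step the hook performs can be sketched as follows. This is an illustration in Python, not the actual hook code, and the sample network rows are hypothetical.

```python
import ipaddress

# Rows in the format described above: NETWORK NETMASK BROADCAST GATEWAY
# (hypothetical sample data, as if parsed from /etc/ganeti/networks)
NETWORKS = [
    ("192.168.1.0", "255.255.255.0", "192.168.1.255", "192.168.1.254"),
    ("10.0.0.0", "255.255.0.0", "10.0.255.255", "10.0.0.1"),
]

def find_network(instance_ip):
    """Return the (network, netmask, broadcast, gateway) row whose subnet
    contains instance_ip, or None if no row matches."""
    addr = ipaddress.ip_address(instance_ip)
    for network, netmask, broadcast, gateway in NETWORKS:
        # ip_network accepts the dotted-netmask form, e.g. 192.168.1.0/255.255.255.0
        if addr in ipaddress.ip_network(f"{network}/{netmask}"):
            return network, netmask, broadcast, gateway
    return None
```

The selected row supplies the address, netmask, broadcast, and gateway values that the hook writes into the instance's @/etc/network/interfaces@.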

h2. SCI OPERATIONS

Read [[OPERATIONS]] next.