SETUP » History » Version 13

Владимир Ипатов, 09.11.2012 05:25

h1. SETUP

{{toc}}

Ensure both nodes are up.

If you are planning to use the secondary network for SAN and DRBD synchronization, you
should configure the secondary IP interfaces manually on both nodes at this time.

Log in to the first node via ssh. Due to the lack of DNS there may be a minute's timeout
before the server answers you with the password prompt.

h2. NETWORK CONFIGURATION

Network configurations can vary widely.
Here we describe several common schemas.

h3. Basic schema - one ethernet for everything.

One ethernet, one subnet; the internet connection is provided by an external (not in the cluster) router.
By default the installer creates a bridge named xen-br0. You can customize its parameters by editing /etc/network/interfaces.
In this case the nodes must be connected to a gigabit ethernet switch.
By default it looks like:
<pre>
auto xen-br0
iface xen-br0 inet static
        address 192.168.5.88
        netmask 255.255.255.0
        network 192.168.5.0
        broadcast 192.168.5.255
        gateway 192.168.5.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
#       up ifconfig eth0 mtu 9000
#       up ifconfig xen-br0 mtu 9000
</pre>
The important parameters besides the IPv4 settings are:
<pre>
bridge_ports eth0
</pre>

This means that the physical interface eth0 is enslaved to this bridge.

<pre>
up ifconfig eth0 mtu 9000
up ifconfig xen-br0 mtu 9000
</pre>

This sets jumbo frames on the bridge for higher network speed and lower CPU utilization.
It matters on the interface that will carry the DRBD link.
However, setting the MTU higher than 1500 will cause problems with any network equipment
that doesn't support jumbo frames. That is why this option is commented out by default.

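
Once the bridge is up, you can verify the enslavement and the MTU from the shell. This is a quick sanity check, assuming the example address above on this node and a hypothetical peer node at 192.168.5.89:

<pre>
# show which interfaces are enslaved to each bridge
brctl show xen-br0

# check the effective MTU of the bridge
ip link show xen-br0

# test a jumbo frame end-to-end without fragmentation:
# 8972 = 9000 bytes MTU - 20 (IP header) - 8 (ICMP header)
ping -c 3 -M do -s 8972 192.168.5.89
</pre>

If the ping fails with "Message too long" the path is still limited to an MTU of 1500 somewhere.
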
h3. Default schema - two ethernets, one for the interlink (ganeti interoperation + drbd link) and one for the lan.

This schema suits most cases. It doesn't require a gigabit switch, and it provides good performance and reliability.
Two gigabit network interfaces on the nodes are connected directly or via a gigabit switch (if you want more than two nodes in the cluster).
The other interfaces are connected to the lan. Routing, firewalling, dhcp and dns in the lan are performed by an external router or server.
A lan failure doesn't affect the cluster in this setup.
This is the /etc/network/interfaces file for this setup:
<pre>auto xen-br0
iface xen-br0 inet static
	address 192.168.236.1
	netmask 255.255.255.0
	network 192.168.236.0
	broadcast 192.168.236.255
	bridge_ports eth0
	bridge_stp off
	bridge_fd 0
#	up ifconfig eth0 mtu 9000
#	up ifconfig xen-br0 mtu 9000

auto xen-lan
iface xen-lan inet static
	address 192.168.5.55
	netmask 255.255.255.0
	network 192.168.5.0
	broadcast 192.168.5.255
	gateway 192.168.5.1
	bridge_ports eth1
	bridge_stp off
	bridge_fd 0
</pre>

xen-br0 is used for ganeti interoperation and the drbd link; it was configured by the installer.
The dns server and the gateway were also configured by the installer - this will be the address of our service instance (sci).
xen-lan is used for the lan connection; its configuration must be added by hand.
For this network configuration you must fill in these variables in sci.conf:
NODE1_IP - already configured by the installer.
NODE1_NAME - already configured by the installer.
NODE2_IP - the interlink ip address of the second node, e.g. 192.168.236.2
NODE2_NAME - the second node's name, e.g. gnt2
NODE1_LAN_IP - the lan ip of the first node. It will be available by the dns name $NODE1_NAME-lan, e.g. 192.168.5.55
NODE2_LAN_IP - the lan ip of the second node. It will be available by the dns name $NODE2_NAME-lan, e.g. 192.168.5.58
CLUSTER_IP - the cluster address in the lan. Must not match any existing host address in the lan, e.g. 192.168.5.35
CLUSTER_NAME - the cluster name in the lan. It will be available by the dns name $CLUSTER_NAME.
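
Put together, the relevant fragment of @/etc/sci/sci.conf@ for this schema might look like the sketch below. It uses the example addresses from this section; the node and cluster names are illustrative:

<pre>
NODE1_IP=192.168.236.1
NODE1_NAME=gnt1
NODE2_IP=192.168.236.2
NODE2_NAME=gnt2
NODE1_LAN_IP=192.168.5.55
NODE2_LAN_IP=192.168.5.58
CLUSTER_IP=192.168.5.35
CLUSTER_NAME=gnt-cluster
</pre>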

h3. Multiple bridges with routing, firewalling and wan access.

Here is a bit more complicated network setup.
In this setup we have, for example, two private networks and wan by ethernet. All routing and firewalling
are performed by a separate firewall instance in our cluster. This setup fits when you don't have expensive hardware routers and firewalls.
This is the /etc/network/interfaces file for this setup:
<pre>
auto lan
iface lan inet static
	address 192.168.21.10
	netmask 255.255.255.0
	bridge_ports eth0
	bridge_stp off
	bridge_fd 0

auto server
iface server inet static
	address 192.168.20.10
	netmask 255.255.255.0
	gateway 192.168.20.1
	bridge_ports eth1
	bridge_stp off
	bridge_fd 0
	up ifconfig eth1 mtu 9000
	up ifconfig server mtu 9000

auto wan1
iface wan1 inet manual
	bridge_ports eth2
	bridge_stp off
	bridge_fd 0
</pre>

In this example we have a separate lan interface, a server interface (in this case the servers are separated from the lan and
clients reach the servers through the router) and a wan interface. The server interface carries ganeti interoperation and the
drbd link, so it has mtu 9000.
In this example you must also change MASTER_NETDEV and LINK_NETDEV in /etc/sci/sci.conf from the default xen-br0 to server.
The hypervisor has no address in the wan, although we recommend getting a subnet from
your ISP in order to assign IP addresses to the nodes, so that you can manage them even if the router instance
is down.

Here is an example /etc/network/interfaces in the router instance:
<pre>
auto eth0
iface eth0 inet static
   address 192.168.20.1
   netmask 255.255.255.0

auto eth1
iface eth1 inet static
   address 192.168.21.1
   netmask 255.255.255.0

auto eth2
iface eth2 inet static
   address 1.1.1.2
   netmask 255.255.255.0
   gateway 1.1.1.1
</pre>
Here eth0 is linked to the bridge server, eth1 to lan and eth2 to wan1.
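
For the router instance to actually route between these networks, IPv4 forwarding must be enabled inside it, e.g. in its @/etc/sysctl.conf@. This is a sketch; the firewall policy itself is up to you:

<pre>
net.ipv4.ip_forward=1
</pre>

Apply it with @sysctl -p@ or a reboot of the router instance.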
h3. Datacenter schema - separate interfaces for lan, ganeti interoperation, drbd link.

If you have a powerful networking infrastructure, you can use a separate interface for each role.
Here we have separate interfaces for ganeti interoperation (which in this case may be called the management interface),
for the drbd link (xen-san) and for the lan:
<pre>auto mgmt
iface mgmt inet static
    address 192.168.236.1
    netmask 255.255.255.0
    network 192.168.236.0
    gateway 192.168.236.1
    broadcast 192.168.236.255
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

auto xen-san
iface xen-san inet static
    address 192.168.237.1
    netmask 255.255.255.0
    network 192.168.237.0
    broadcast 192.168.237.255
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
    up ifconfig eth1 mtu 9000
    up ifconfig xen-san mtu 9000

auto xen-lan
iface xen-lan inet manual
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0
</pre>

In this example the nodes don't have addresses in the lan.
You must fill in these variables in sci.conf to create a cluster that fits this network configuration:
NODE1_IP - already configured by the installer.
NODE1_NAME - already configured by the installer.
NODE2_IP - the interlink ip address of the second node, e.g. 192.168.236.2
NODE2_NAME - the second node's name, e.g. gnt2
NODE1_SAN_IP - the san ip of the first node. It will be available by the dns name $NODE1_NAME-san, e.g. 192.168.237.1
NODE2_SAN_IP - the san ip of the second node. It will be available by the dns name $NODE2_NAME-san, e.g. 192.168.237.2
CLUSTER_IP - the cluster address in the management network. Must not match any existing host address, e.g. 192.168.236.35
CLUSTER_NAME - the cluster name. It will be available by the dns name $CLUSTER_NAME.
SCI_LAN_IP - if you want the sci instance to be present in your lan, assign it an ip, e.g. 192.168.35.5
SCI_LAN_NETMASK - your nodes don't have addresses in the lan, so you must enter the netmask for this segment by hand, e.g. 255.255.255.0
SCI_LAN_GATEWAY - your nodes don't have addresses in the lan, so you must enter the gateway for this segment by hand, e.g. 192.168.35.1
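
The corresponding fragment of @/etc/sci/sci.conf@ might look like the sketch below, using the example values from this section; the node and cluster names are illustrative:

<pre>
NODE1_IP=192.168.236.1
NODE1_NAME=gnt1
NODE2_IP=192.168.236.2
NODE2_NAME=gnt2
NODE1_SAN_IP=192.168.237.1
NODE2_SAN_IP=192.168.237.2
CLUSTER_IP=192.168.236.35
CLUSTER_NAME=gnt-cluster
SCI_LAN_IP=192.168.35.5
SCI_LAN_NETMASK=255.255.255.0
SCI_LAN_GATEWAY=192.168.35.1
</pre>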

Of course, it is easy to use VLANs in a datacenter environment. The next example explains how. However, remember that it is
recommended to keep the drbd link on a separate physical ethernet.

h3. VLAN schema

If you have managed switches, you can set up networking with VLANs.
You should add something like this for each VLAN:
<pre>
auto eth0.55
iface eth0.55 inet manual
        up ifconfig eth0.55 up

auto bridge-example-vlan
iface bridge-example-vlan inet manual
        up brctl addbr bridge-example-vlan
        up brctl addif bridge-example-vlan eth0.55
        up brctl stp bridge-example-vlan off
        up ifconfig bridge-example-vlan up
        down ifconfig bridge-example-vlan down
        down brctl delbr bridge-example-vlan
</pre>
Here 55 is the VLAN number.
In this example the node doesn't have an ip address in this VLAN, although you could
assign an ip to the bridge just like to a standard bridge.
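
If you do want the node to have an address in the VLAN, the bridge can be declared statically instead. This is a sketch; the 192.168.55.0/24 addressing is purely illustrative:

<pre>
auto bridge-example-vlan
iface bridge-example-vlan inet static
        address 192.168.55.10
        netmask 255.255.255.0
        bridge_ports eth0.55
        bridge_stp off
        bridge_fd 0
</pre>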

An alternative schema is:
<pre>
auto vlan55
iface vlan55 inet manual
        vlan_raw_device eth0

auto bridge-example-vlan
iface bridge-example-vlan inet manual
        bridge_ports vlan55
        bridge_stp off
        bridge_fd 0
</pre>
It does the same thing in another way.

h2. DEFINING ENVIRONMENT

Edit @/etc/sci/sci.conf@

Most of the values depend on your network setup; the NETWORK CONFIGURATION section above describes them for the most common cases.

Here are some additional notes on configuring sci.conf:

* You should specify the node1 and node2 data as you have installed them.
*NOTE*: You can set up the cluster even with one node. In this case just leave the NODE2_
lines as is. In fact this is a dangerous setup, so you will be warned about it during
the procedures.

* You should specify the cluster's name and IP.

* NODE#_SAN_IP should be specified on both nodes or on none.

* NODE#_LAN_IP should be specified on both nodes or on none.

* If you have no Internet uplink, or have local package mirrors, you should correct
the APT_ settings.

* If you need to uplink to a DNS hierarchy other than the root hint zones, specify DNS_FORWARDERS
(note the trailing ';').

* MASTER_NETDEV - the master interface name for the cluster address. Auto-detected by default.

* LAN_NETDEV - the network interface to bind virtual machines to by default. Auto-detected by default.

* RESERVED_VOLS - a list of volumes ignored by ganeti. Comma-separated. You must specify the vg for each volume in this list.

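
For instance, DNS_FORWARDERS and RESERVED_VOLS might look like this (the forwarder address and volume names are illustrative; note the trailing ';' and the vg prefix on each volume):

<pre>
DNS_FORWARDERS="192.168.5.1;"
RESERVED_VOLS="xenvg/backup,xenvg/media"
</pre>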
h2. SETUP CLUSTER

Issue:

<pre>
# sci-setup cluster
</pre>

Check and confirm the printed settings.

The process will go on.

Next you will be prompted to accept the ssh key of node2 and to enter the root password for node2.

On finish you will see something like this:

<pre>
Verify
Wed Jan 12 15:36:10 2011 * Verifying global settings
Wed Jan 12 15:36:10 2011 * Gathering data (1 nodes)
Wed Jan 12 15:36:11 2011 * Verifying node status
Wed Jan 12 15:36:11 2011 * Verifying instance status
Wed Jan 12 15:36:11 2011 * Verifying orphan volumes
Wed Jan 12 15:36:11 2011 * Verifying orphan instances
Wed Jan 12 15:36:11 2011 * Verifying N+1 Memory redundancy
Wed Jan 12 15:36:11 2011 * Other Notes
Wed Jan 12 15:36:11 2011 * Hooks Results
Node                    DTotal  DFree MTotal MNode MFree Pinst Sinst
gnt1.ganeti.example.org 100.0G 100.0G  1020M  379M  625M     0     0
gnt2.ganeti.example.org 100.0G 100.0G  1020M  379M  625M     0     0
If all is ok, proceed with /usr/local/sbin/sci-setup service
</pre>

h2. SETUP SERVICE INSTANCE

The service instance is named 'sci' and has a few aliases.
On setup, its IP address is determined from the @/etc/resolv.conf@ of your first node.
This instance will be hardcoded into the @/etc/hosts@ file of all cluster nodes and instances.

Issue:

<pre>
# sci-setup service
</pre>

You'll see the progress of DRBD syncing the disks, then the message
<pre>
* running the instance OS create scripts...
</pre>
appears. The rest may take a while. The process finishes with the
<pre>
* starting instance...
</pre>
message.

Now you can log on to the sci instance using:

<pre>
# gnt-instance console sci
</pre>

Log in as root; the password is empty.
*NOTE*: Because of the empty password, all remote connections to the new instance are prohibited.
You should change the password and install the @openssh-server@ package manually after a
successful bootstrap procedure.

h2. SERVICE INSTANCE BOOTSTRAP

The system will set itself up via puppet. This is an iterative process. You can monitor
it by looking into @/var/log/daemon.log@. At the start there is no @less@ command yet, so
you can use @more@, @cat@, @tail@ or @tail -f@ until @less@ is auto-installed.

By default the iterations are repeated every 20 minutes. To shorten the wait time you can
issue

<pre>
# /etc/init.d/puppet restart
</pre>

and then watch in @daemon.log@ how it finishes.

Repeat this a few times until a puppet run makes no further changes.

h2. PREPARING FOR NEW INSTANCES

New instances are created by regular Ganeti commands such as:

<pre>
gnt-instance add -t drbd -o debootstrap+default -s 10g -B memory=256m -n NODE1_NAME:NODE2_NAME INSTANCE_NAME
</pre>

Additionally, some tuning hooks are provided by the SCI-CD project:
# Each instance has @puppet@ installed for autoconfiguration and @openssh-client@ for file transfers etc.
# The instance uses pygrub to boot the kernel from /vmlinuz & Co on the instance's own disk.
# The instance's network interfaces may be set up automatically as described below.

h3. INSTANCE INTERFACE AUTOCONFIGURATION

If your instances may sit on several networks and you need static addressing in them, you should fill in
the file @/etc/ganeti/networks@ with all the known networks you want to attach your instances to.
Each line in the file has the format

|NETWORK|NETMASK|BROADCAST|GATEWAY|

The ganeti instance debootstrap hook looks in this file for the network matching the address of the bootstrapped
instance and fills in its @/etc/network/interfaces@ accordingly.
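
For example, an entry for the 192.168.5.0/24 lan used earlier on this page might look like this (a sketch assuming whitespace-separated fields in the order shown above; check the hook's expected separator before relying on it):

<pre>
192.168.5.0 255.255.255.0 192.168.5.255 192.168.5.1
</pre>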

*NOTE*: If you have only one default network, you needn't care, because its data is preinstalled.
*NOTE*: the networks file must be copied to all cluster nodes (not automated yet).

h2. SCI OPERATIONS
Read [[OPERATIONS]] next.