SETUP » History » Version 1

Dmitry Chernyak, 26.02.2011 22:12

h1. SETUP

{{toc}}

Ensure both nodes are up.

If you plan to use the secondary network for SAN and DRBD synchronization, you
should configure the secondary IP interfaces manually on both nodes at this point.

Log in to the first node via ssh. Because there is no DNS yet, there may be
a timeout of about a minute before the password prompt appears.

h2. DEFINING ENVIRONMENT

Edit @/etc/sci/sci.conf@

* You should specify the node1 and node2 data as you have installed them.
*NOTE*: You can set up the cluster even with one node. In that case, just leave the NODE2_
lines as they are. This is in fact a risky setup, so you will be warned about it during
the procedures.

* You should specify the cluster's name and IP.

* NODE#_SAN_IP should be specified either on both nodes or on neither.

* If you have no Internet uplink, or if you have local package mirrors, you should adjust
the APT_ settings.

* If you need to forward to a DNS hierarchy other than the root hint zones, specify DNS_FORWARDERS
(note the trailing ';').

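For reference, a filled-in @/etc/sci/sci.conf@ might look like the sketch below. Only the NODE2_, NODE#_SAN_IP, APT_ and DNS_FORWARDERS settings are named in this document; all other variable names and all addresses here are illustrative assumptions, so check the comments in the shipped file for the real ones.

```shell
# Hypothetical sketch of /etc/sci/sci.conf -- names and addresses are
# examples only; the actual variables are documented in the file itself.
NODE1_NAME=gnt1
NODE1_IP=192.168.1.1
NODE2_NAME=gnt2
NODE2_IP=192.168.1.2
# Secondary (SAN/DRBD) network: set on both nodes or on neither
NODE1_SAN_IP=10.0.0.1
NODE2_SAN_IP=10.0.0.2
# Cluster name and IP
CLUSTER_NAME=gnt
CLUSTER_IP=192.168.1.10
# Point the APT_ settings at a local mirror if there is no Internet uplink
#APT_MIRROR=http://mirror.example.org/debian
# Upstream DNS servers instead of the root hint zones (note the trailing ';')
DNS_FORWARDERS="192.168.1.4;"
```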
h2. SETUP CLUSTER

Issue:

<pre>
# sci-setup cluster
</pre>

Check and confirm the printed settings.

The process will then continue.

Next you will be prompted to accept the ssh key from node2 and to enter the root password for node2.

When it finishes, you will see something like this:

<pre>
Verify
Wed Jan 12 15:36:10 2011 * Verifying global settings
Wed Jan 12 15:36:10 2011 * Gathering data (1 nodes)
Wed Jan 12 15:36:11 2011 * Verifying node status
Wed Jan 12 15:36:11 2011 * Verifying instance status
Wed Jan 12 15:36:11 2011 * Verifying orphan volumes
Wed Jan 12 15:36:11 2011 * Verifying orphan instances
Wed Jan 12 15:36:11 2011 * Verifying N+1 Memory redundancy
Wed Jan 12 15:36:11 2011 * Other Notes
Wed Jan 12 15:36:11 2011 * Hooks Results
Node                    DTotal  DFree MTotal MNode MFree Pinst Sinst
gnt1.ganeti.example.org 100.0G 100.0G  1020M  379M  625M     0     0
gnt2.ganeti.example.org 100.0G 100.0G  1020M  379M  625M     0     0
If all is ok, proceed with /usr/local/sbin/sci-setup service
</pre>

h2. SETUP SERVICE INSTANCE

The service instance is named 'sci' and has a few aliases.
On setup, its IP address is determined from the @/etc/resolv.conf@ of your first node.
This instance will be hardcoded into the @/etc/hosts@ file of all cluster nodes and instances.

Issue:

<pre>
# sci-setup service
</pre>

You'll see the progress of DRBD syncing the disks, then the message

<pre>
* running the instance OS create scripts...
</pre>

appears. This next stage may take a while. The process finishes with the

<pre>
* starting instance...
</pre>

message.

Now you can log on to the sci instance using:

<pre>
# gnt-instance console sci
</pre>

Log in as root; the password is empty.
*NOTE*: Because of the empty password, all remote connections to the new instance are prohibited.
You should change the password and install the @openssh-server@ package manually after
a successful bootstrap procedure.

h2. SERVICE INSTANCE BOOTSTRAP

The system will set itself up via puppet. This is an iterative process. You can monitor
it by looking into @/var/log/daemon.log@. At the start there is no @less@ command yet, so
you can use @more@, @cat@, @tail@ or @tail -f@ until @less@ is auto-installed.

By default the iterations are repeated every 20 minutes. To shorten the wait time you can
issue

<pre>
# /etc/init.d/puppet restart
</pre>

and then check in @daemon.log@ how it finishes.

Repeat this a few times until a puppet run no longer makes any changes.

h2. PREPARING FOR NEW INSTANCES

New instances are created with the regular Ganeti commands, such as:

<pre>
gnt-instance add -t drbd -o debootstrap+default -s 10g -B memory=256m -n NODE1_NAME:NODE2_NAME INSTANCE_NAME
</pre>

Additionally, some tuning hooks are provided by the SCI-CD project:
# Each instance gets @puppet@ installed for autoconfiguration and @openssh-client@ for file transfers etc.
# The instance uses pygrub to boot the kernel from /vmlinuz & Co on the instance's own disk.
# The instance's network interfaces may be set up automatically as described below.

h3. INSTANCE INTERFACE AUTOCONFIGURATION

If your instances may sit on several networks and you need static addressing in them, you should fill
the file @/etc/ganeti/networks@ with all the known networks to which you want to attach your instances.
Each line in the file has the format

|NETWORK|NETMASK|BROADCAST|GATEWAY|

The Ganeti instance debootstrap hook looks in this file for the network matching the address of the bootstrapped
instance and fills in its @/etc/network/interfaces@ accordingly.
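To illustrate the format, here is a minimal sketch of how such a lookup could work. This is not the actual hook code; the function names and sample addresses are made up for the example.

```shell
# Sketch: find the /etc/ganeti/networks entry matching an instance IP.
# Field layout per line: |NETWORK|NETMASK|BROADCAST|GATEWAY|
# Function names and sample data are illustrative, not the real hook.

ip2int() {                     # dotted quad -> 32-bit integer
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

find_network() {               # usage: find_network INSTANCE_IP NETWORKS_FILE
  local ip net mask bcast gw rest
  ip=$(ip2int "$1")
  while IFS='|' read -r _ net mask bcast gw rest; do
    [ -n "$net" ] || continue
    # instance address ANDed with the netmask must equal the network address
    if [ $(( ip & $(ip2int "$mask") )) -eq "$(ip2int "$net")" ]; then
      echo "$net $mask $bcast $gw"
      return 0
    fi
  done < "$2"
  return 1
}
```

Given a line @|192.168.1.0|255.255.255.0|192.168.1.255|192.168.1.1|@, calling @find_network 192.168.1.10 /etc/ganeti/networks@ would print that network's data, from which the hook can fill in @/etc/network/interfaces@.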

*NOTE*: If you have only one default network, you needn't care, because its data are preinstalled.
*NOTE*: The networks file must be copied to all cluster nodes (not automated yet).

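Until that is automated, the standard Ganeti file-distribution command can push the file from the master node to all others (assuming a stock Ganeti installation):

```shell
# Distribute the networks file from the master node to all cluster nodes
gnt-cluster copyfile /etc/ganeti/networks
```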
h2. SCI OPERATIONS

Read [[OPERATIONS]] next.