h1. INSTALL

{{toc}}

[[OVERVIEW]] | [[INSTALL]] | [[OPERATIONS]] | [[LICENSE]]
[[ОБЗОР]] | [[УСТАНОВКА]] | [[ОПЕРАЦИИ]] | [[ЛИЦЕНЗИЯ]]

h2. Get the ISO image

Download the ready-to-install distribution image: *"Download ISO-image":https://sci.skycover.ru/projects/sci-cd/documents*
To access it, you should "register":https://sci.skycover.ru/account/register first.

h2. Burn the ISO image to a disk or prepare a bootable flash drive

You can burn the ISO image to a writable CD/DVD using any available disk burning program.

You can also prepare a bootable flash drive. For this, use any available tool, for example unetbootin.
When it asks for the "type of a system", select Debian version ... and set the path to the ISO image file.

You can also write the ISO image directly to the flash drive:

<pre>
dd if=/path/to/iso of=/dev/sdX bs=4k
</pre>
where /dev/sdX is the path to the block device file pointing to the flash drive.
(To find out which block device you need, insert the flash drive and run "dmesg" in a terminal: at the tail of the output you will see information about the flash drive that was just plugged in.)
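
A generic way to double-check which device corresponds to the flash drive (standard Linux tools, not part of the SCI toolset; device names will differ on your system):

<pre>
dmesg | tail                         # the last lines describe the drive just plugged in
lsblk -d -o NAME,SIZE,MODEL,TRAN     # removable USB drives are listed with TRAN "usb"
sync                                 # after dd finishes, flush buffers before removing the drive
</pre>
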
h2. Minimal System Requirements

In a real deployment you must choose the hardware according to your actual objectives.
For testing purposes the minimal system requirements are:
* 2 GB RAM
* 50 GB HDD
* 1 Gbit Ethernet for the DRBD link (if you set up 2 or more nodes)
* Hardware virtualization support: Intel VT or AMD-V (if you want to run Windows instances; see the check below)

For enterprise use both nodes must have the same configuration (CPU power, RAM, disk capacity and speed).
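
To check whether a node's CPU provides hardware virtualization support, a generic Linux check (not specific to SCI) is:

<pre>
egrep -c '(vmx|svm)' /proc/cpuinfo   # a non-zero count means Intel VT-x or AMD-V is available
</pre>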

h2. Nodes setup

Before setting up, connect the nodes to the LAN with a cable.
Then boot the first node from the installation image.
During the installation process:

h3. Set up the LAN connection

Specify a static IP address for the LAN connection.

!ip.JPG!

Then specify the network mask, gateway and DNS server for this connection.

h3. Specify the node's name

In the Hostname field, specify the node's name, for example: gnt1 or gnt-1.
The "1" means this will be the first node of the cluster.

!hostname.JPG!

Specify the LAN domain in the Domain field.

!domain.JPG!

h3. Specify the root password

Do not use a weak password!

h3. Partition the disks

The installer will present several automatic partitioning schemes.

If you want to use software RAID, choose
*2(4,6,8) disk with lvm*, according to the number of hard disks.
For two disks RAID1 will be used; for more disks, RAID10.
RAID10 is the recommended RAID level to use with virtualization.

If you use hardware RAID, choose
*1 disk with lvm*

If you have a server with two types of disks, for example 2 SATA drives and 8 SAS drives (bare or behind a hardware RAID), we suggest doing the initial setup entirely on the SATA drives with the *2 disk with lvm* template, and after the cluster is initialized, adding the SAS disks manually as an additional LVM volume group (VG), as sketched below.
This procedure is described in [[OPERATIONS]].
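
A minimal sketch of that later step (illustrative only; the device names /dev/sdc, /dev/sdd and the volume group name "sas" are assumptions, substitute your own):

<pre>
# run on each node after the cluster is initialized
pvcreate /dev/sdc /dev/sdd
vgcreate sas /dev/sdc /dev/sdd
vgs                                  # verify that the new volume group is visible
</pre>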

!disk.JPG!

If you are asked about deleting the old partitions or about creating a RAID, confirm it.

h3. Specify the drives where the GRUB boot loader will be installed

Check all the underlying physical drives (not partitions and not software RAID volumes!) onto which the system was installed.

!grub.JPG!

h3. Finish the installation and reboot

h3. Set up the second node in the same manner

Specify the node's name, for example gnt2 or gnt-2 (matching the first node).
Specify the same root password as on the first node.

h2. Set up the time

Check that both nodes show the same time:

<pre>
# date
Thu Mar 12 12:23:10 MSK 2015
</pre>

If not, set it with the following command:

<pre>
# date -s "12 MAR 2015 12:23:00"
</pre>
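
Optionally, you can also store the corrected time in the hardware clock so that it survives a reboot (a standard Linux step, not required by the SCI tools):

<pre>
# hwclock --systohc
</pre>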

h2. Configure the backbone (the internal link between the nodes)

Do not unplug the nodes from the LAN.
Link the nodes with a second cable via their free Gigabit network adapters and check that these adapters' "link" LEDs are lit (if the LEDs are present).
This interlink will be used for disk data synchronization and internal cluster communications.
If any Ethernet cables other than the backbone and the LAN are connected to the nodes at this moment, unplug them for the duration of the backbone setup.
After the links come up, run the following command on each node (you can do it in parallel or sequentially):
<pre>
sci-setup backbone
</pre>

The result:
<pre>
root@gnt-1:~# sci-setup backbone
Node number: 1
LAN interface: eth0
Waiting 30 seconds for links to be up
Backbone interface: eth3
Up and test backbone

Waiting for backbone to get ready (MAXWAIT is 2 seconds).
          inet addr:10.101.200.11  Bcast:10.101.200.255  Mask:255.255.255.0
ok.
</pre>

After setting it up on both nodes, check the link. Run the following command on the first node:
<pre>
ping 10.101.200.12
</pre>

<pre>
root@gnt-1:~# ping 10.101.200.12
PING 10.101.200.12 (10.101.200.12) 56(84) bytes of data.
64 bytes from 10.101.200.12: icmp_req=1 ttl=64 time=0.263 ms
64 bytes from 10.101.200.12: icmp_req=2 ttl=64 time=0.112 ms
^C
--- 10.101.200.12 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.112/0.187/0.263/0.076 ms
</pre>
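
If you want to double-check the backbone from the shell, the following standard Linux commands can help (they are not part of sci-setup):

<pre>
# ip -br link show                 # link state of all network interfaces
# ip addr show | grep 10.101.200   # the backbone address configured by sci-setup backbone
</pre>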

h2. Initialize the cluster

On the first node, run:
<pre>
sci-setup cluster
</pre>

The configuration wizard will ask for the cluster's address in the LAN:
<pre>
root@gnt-1:~# sci-setup cluster
Cluster domain name will be gnt.
Cluster IP will be 10.101.200.10 on the interlink.
We recommend to set it to some unbound LAN IP address,
but it is safe to simply press ENTER.
Set cluster IP [10.101.200.10]:
</pre>

If you specify a free static IP address in the LAN at this point, you will later be able to reach the cluster's controlling module via that IP.
This is convenient, but not mandatory, and it does not affect your ability to control the cluster. You can simply press ENTER and the controlling module will take its address from the backbone.

The configuration wizard will check connectivity to the second node and ask you to accept its ssh key and enter its root password in order to retrieve and check its configuration parameters.
<pre>
Connecting to Node2 via 10.101.200.12
You will be prompted for a root password...

The authenticity of host '10.101.200.12 (10.101.200.12)' can't be established.
ECDSA key fingerprint is 6a:5a:78:fa:af:c1:23:97:87:9f:66:46:94:7e:f2:f5.
Are you sure you want to continue connecting (yes/no)?
</pre>

Enter "yes".
<pre>
root@10.101.200.12's password:
</pre>

Enter the root password of the second node.

After all checks succeed, the wizard will print the cluster's configuration parameters:
<pre>
########################################
Parameters detected:
Domain name: example.sci

Master network interface: backbone

Cluster name: gnt
Cluster IP: 10.101.200.10

Node 1 name: gnt-1
Node 1 IP: 10.101.200.11
Node 1 LAN IP: 192.168.11.28

Node 2 name: gnt-2
Node 2 IP: 10.101.200.12
Node 2 LAN IP: 192.168.11.29
Proceed with cluster creation [y/n]?
</pre>

If everything is correct, enter "y" and press ENTER to create the cluster.

<pre>
Refilling sci.conf
Creating empty /root/.ssh
Fullfilling /etc/hosts
Fulfilling default /etc/ganeti/networks
Set random vnc password for cluster: miotaigh
add sci repo in apt sources
Initializing cluster
Tuning cluster
Adding the second node
-- WARNING -- 
Performing this operation is going to replace the ssh daemon keypair
on the target machine (gnt-2.example.sci) with the ones of the current one
and grant full intra-cluster ssh root access to/from it

The authenticity of host 'gnt-2.example.sci (10.101.200.12)' can't be established.
ECDSA key fingerprint is 6a:5a:78:fa:af:c1:23:97:87:9f:66:46:94:7e:f2:f5.
Are you sure you want to continue connecting (yes/no)?
</pre>

During the process of adding the second node to the cluster, you will again be asked to accept its ssh key and to enter the root password.
Enter "yes" and then the root password of the second node.

At the end you will see the output of the cluster diagnostic command:
<pre>
Tue Jun 28 18:37:06 2016 * Verifying cluster config
Tue Jun 28 18:37:06 2016 * Verifying cluster certificate files
Tue Jun 28 18:37:06 2016 * Verifying hypervisor parameters
Tue Jun 28 18:37:07 2016 * Verifying all nodes belong to an existing group
Waiting for job 10 ...
Tue Jun 28 18:37:07 2016 * Verifying group 'default'
Tue Jun 28 18:37:08 2016 * Gathering data (2 nodes)
Tue Jun 28 18:37:09 2016 * Gathering disk information (2 nodes)
Tue Jun 28 18:37:09 2016 * Verifying configuration file consistency
Tue Jun 28 18:37:09 2016 * Verifying node status
Tue Jun 28 18:37:09 2016 * Verifying instance status
Tue Jun 28 18:37:10 2016 * Verifying orphan volumes
Tue Jun 28 18:37:10 2016 * Verifying N+1 Memory redundancy
Tue Jun 28 18:37:10 2016 * Other Notes
Tue Jun 28 18:37:10 2016 * Hooks Results
Node              DTotal DFree MTotal MNode MFree Pinst Sinst
gnt-1.example.sci 101.2G 82.2G   3.9G  1.5G  2.4G     0     0
gnt-2.example.sci 101.3G 81.3G   3.9G  1.5G  2.4G     0     0
If all is ok, proceed with sci-setup sci
</pre>
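
The same diagnostics can be re-run at any time on the master node with the standard Ganeti commands:

<pre>
gnt-cluster verify   # the checks shown above
gnt-node list        # the per-node resource table printed at the end
</pre>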

h2. Create the controlling virtual machine

On the first node, run:
<pre>
sci-setup sci
</pre>

If you want the cluster's internal DNS to use your company's DNS servers as forwarders (i.e. to ask them to resolve external Internet addresses), run this command as follows:
<pre>
sci-setup sci -d
</pre>

Without @-d@, the cluster's internal DNS will resolve Internet addresses directly via the root DNS servers.

The configuration wizard will ask you to specify the address of the controlling virtual machine in the LAN:
<pre>
root@gnt-1:~# sci-setup sci
Set sci LAN IP or enter "none" and press ENTER: 
</pre>

Specify any free static IP in the LAN (192.168.11.2 in this example).

After all checks succeed, the wizard will print the controlling VM's configuration parameters:
<pre>
Creating service machine sci
IP: 10.101.200.2 on backbone
Second network device: lan
Second network IP: 192.168.11.2
Proceed with sci VM creation [y/n]?
</pre>

If everything is correct, enter "y" and press ENTER to create the VM.
<pre>
Adding sci to /etc/hosts
Tue Jun 28 18:44:02 2016 * creating instance disks...
Tue Jun 28 18:44:09 2016 adding instance sci to cluster config
Tue Jun 28 18:44:13 2016  - INFO: Waiting for instance sci to sync disks
Tue Jun 28 18:44:13 2016  - INFO: - device disk/0:  2.10% done, 2m 27s remaining (estimated)
Tue Jun 28 18:45:13 2016  - INFO: - device disk/0: 39.90% done, 1m 31s remaining (estimated)
Tue Jun 28 18:46:14 2016  - INFO: - device disk/0: 78.20% done, 34s remaining (estimated)
Tue Jun 28 18:46:48 2016  - INFO: - device disk/0: 100.00% done, 0s remaining (estimated)
Tue Jun 28 18:46:48 2016  - INFO: Instance sci's disks are in sync
Tue Jun 28 18:46:48 2016 * running the instance OS create scripts...
Tue Jun 28 18:49:42 2016 * starting instance...
</pre>
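
The disk synchronization shown above runs over DRBD. If you are curious, you can watch its progress directly on a node (standard DRBD tooling; this assumes the usual /proc/drbd interface is available):

<pre>
# cat /proc/drbd     # shows the state and sync progress of each DRBD resource
</pre>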

h2. Congratulations! You have just created the first virtual machine in your cluster!

After starting, the sci VM automatically performs the fine-tuning procedures on the cluster nodes and becomes the DNS server for them. This operation takes about 5-10 minutes.

Try the following commands:
<pre>
gnt-instance list
gnt-instance info sci
gnt-cluster verify
ssh sci
</pre>
323 23 Dmitry Chernyak
324 37 Dmitry Chernyak
h2. Operation
325 23 Dmitry Chernyak
326 37 Dmitry Chernyak
How to control the cluster and how to create the new virtual machines read [[OPERATIONS]]
327 23 Dmitry Chernyak
328 1 Dmitry Chernyak
----
329 31 Dmitry Chernyak
330 31 Dmitry Chernyak
[[SETUP for versions 2.3 and earlier]]