h1. INSTALL

{{toc}}

[[OVERVIEW]] | [[INSTALL]] | [[OPERATIONS]] | [[LICENSE]]
[[ОБЗОР]] | [[УСТАНОВКА]] | [[ОПЕРАЦИИ]] | [[ЛИЦЕНЗИЯ]]

h2. Get the ISO image

Download the ready-to-install distribution image: *"Download ISO-image":https://sci.skycover.ru/projects/sci-cd/documents*
To access it you should first "register":https://sci.skycover.ru/account/register

h2. Burn the ISO image to a disk or prepare a bootable flash drive

h3. Windows

You can burn the ISO image to a writable CD/DVD with any available disk-burning program.

You can also prepare a bootable flash drive with the Rufus program:
in the Rufus interface select your flash drive in the "Device" field,
set "Create a bootable disk using" to "DD image", click the CD-ROM icon and choose the ISO file.

h3. Linux

You can burn the ISO image to a writable CD/DVD with any available disk-burning program.

You can also prepare a bootable flash drive. For this, use any available tool, for example unetbootin.
When it asks for the type of the system, choose Debian of version ... and set the path to the ISO image file.

You can also write the ISO image directly to the flash drive:

<pre>
dd if=/path/to/iso of=/dev/sdX bs=4k
</pre>
Here /dev/sdX is the path to the block device file that points to the flash drive.
(To find out which block device you need, insert the flash drive and run "dmesg" in a terminal: at the tail of the output you will see information about the flash drive just plugged in.)
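For example, the following commands can help identify the device (a minimal illustration, assuming a typical Linux system with @lsblk@ available; the actual device name will differ on your machine):
<pre>
# Kernel messages about the drive that was just plugged in
dmesg | tail
# List whole disks with size, model and transport type (USB sticks show "usb" in TRAN)
lsblk -d -o NAME,SIZE,MODEL,TRAN
</pre>
Double-check the device name before running @dd@: writing to the wrong device destroys its data.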


h2. Minimal System Requirements

For real deployments you must choose hardware according to your actual objectives.
For testing purposes the minimal system requirements are:
* 4 GB RAM
* 50 GB HDD
* 1 Gbit Ethernet for the DRBD link (if you set up 2 or more nodes)
* hardware virtualization support: Intel VT-x or AMD-V (if you want to run Windows instances); see the check below
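On an already running Linux system you can quickly verify that the CPU advertises hardware virtualization (a simple sanity check, not part of the installer itself):
<pre>
# A non-zero count means the CPU reports Intel VT-x (vmx) or AMD-V (svm)
egrep -c '(vmx|svm)' /proc/cpuinfo
</pre>
Also make sure virtualization is enabled in the BIOS/UEFI settings.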

For enterprise use both nodes must have the same configuration (CPU power, RAM, capacity and speed of the disk subsystem).

h1. System install

h2. First node setup

Before setting up, connect the nodes to the LAN with a cable. *Detach all other network cables during setup.*
Next, boot the first node from the installation image.
In the welcome screen choose Install.

{{thumbnail(install.png, size=600)}}

Specify a static IP address for the LAN connection (you can use any free IP from your LAN; in our case it is 192.168.13.11). Press <Continue>.

{{thumbnail(ip_1node.png, size=600)}}

Next specify the network mask and press <Continue>.

{{thumbnail(netmask.png, size=600)}}

Enter your gateway address and press <Continue>.

{{thumbnail(gateway.png, size=600)}}

Enter your name server address and press <Continue>.


After the address settings you proceed to node naming.
In SCI, node names are gnt# (e.g. gnt1, gnt2, etc.).
*NOTICE: Node names MUST end with a number, starting from 1.*

In the Hostname field specify the node's name, e.g. gnt1 or gnt-1.
"1" means this will be the first node of the cluster.

{{thumbnail(hostname_1.png, size=600)}}

Specify the domain name in the Domain field. If you have a domain in your LAN, specify it; if not, we recommend setting it to "sci".


{{thumbnail(donainsci.png, size=600)}}

Next specify the root password twice.
Do not use a weak password!

{{thumbnail(root_pass.png, size=600)}}

After setting the root password we proceed to hard disk partitioning.
The installer will present several types of automatic partitioning.

If you want to use software RAID, choose
*2(4,6,8) disk with lvm* - according to the number of hard disks.
For two disks RAID1 will be used; for more disks, RAID10.
RAID10 is the recommended RAID level to use with virtualization.

If you use hardware RAID, choose
*1 disk with lvm*

If you have a server with two types of disks, for example 2 SATA drives and 8 SAS drives (bare or under hardware RAID), we suggest doing the initial setup entirely on the SATA drives with the "*2 disk with lvm*" template and, after the cluster is initialized, adding the SAS disks manually as an additional LVM volume group.
This procedure is described in [[OPERATIONS]]; a rough sketch is shown below.
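As an illustration only (the authoritative procedure is in [[OPERATIONS]]; the device names and the volume group name @sas@ below are hypothetical):
<pre>
# Mark the SAS drives (or the hardware RAID volume built from them) as LVM physical volumes
pvcreate /dev/sdc /dev/sdd
# Group them into an additional volume group, e.g. "sas"
vgcreate sas /dev/sdc /dev/sdd
# Verify the result
vgs
</pre>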

{{thumbnail(diskpart.png, size=600)}}

If you encounter questions about deleting the old partitions and about software RAID creation, confirm them.

{{thumbnail(diskpart_yes.png, size=600)}}

After some packages are installed, you must choose the disks on which to install the GRUB bootloader.

Check all underlying physical drives (not partitions and not software RAID volumes!) onto which the system was installed.

{{thumbnail(boot_hdd.png, size=600)}}

After the bootloader is installed, the system will install the rest of the packages and finish the installation.

{{thumbnail(install_coplete.png, size=600)}}

Eject the installation media and press <Continue>. The system will reboot into the installed OS.

h2. Second node setup

Install the second node in the same way. Specify the node's name, for example gnt2 or gnt-2 (following the same scheme as the first node).
Specify the same root password as on the first node.

h2. Set up the time

Check that both nodes show the same time.

<pre>
# date
Thu Mar 12 12:23:10 MSK 2015
</pre>

If not, set it with the following command:

<pre>
# date -s "12 MAR 2015 12:23:00"
</pre>

h2. Configure the backbone (the internal link between nodes)

Do not unplug the nodes from the LAN.
Link the nodes with a second cable via their free Gigabit network adapters and check that these adapters' "link" LEDs are lit (if LEDs are present); you can also verify the link from the console as shown below.
This interlink will be used for disk data synchronization and internal cluster communications.
If any Ethernet cables other than the backbone and LAN are connected to the nodes at this moment, unplug them for the duration of the backbone setup.
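A quick console check of the link state (an illustration; the interface name eth3 is just an example, yours may differ):
<pre>
# Link state of a specific interface ("state UP" vs "NO-CARRIER" in the output)
ip link show eth3
# Or query the physical link via ethtool
ethtool eth3 | grep "Link detected"
</pre>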
After the links come up, run the following command on each node (you can do it in parallel or sequentially):
<pre>
sci-setup backbone
</pre>

The result:
<pre>
root@gnt-1:~# sci-setup backbone
Node number: 1
LAN interface: eth0
Waiting 30 seconds for links to be up
Backbone interface: eth3
Up and test backbone

Waiting for backbone to get ready (MAXWAIT is 2 seconds).
inet addr:10.101.200.11 Bcast:10.101.200.255 Mask:255.255.255.0
ok.
</pre>


After setting it up on both nodes, check the link. Run the following command on the first node:
<pre>
ping 10.101.200.12
</pre>

<pre>
root@gnt-1:~# ping 10.101.200.12
PING 10.101.200.12 (10.101.200.12) 56(84) bytes of data.
64 bytes from 10.101.200.12: icmp_req=1 ttl=64 time=0.263 ms
64 bytes from 10.101.200.12: icmp_req=2 ttl=64 time=0.112 ms
^C
--- 10.101.200.12 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.112/0.187/0.263/0.076 ms
</pre>

h2. Initialize the cluster

On the first node run
<pre>
sci-setup cluster
</pre>

The configuration wizard will ask for the address of the cluster in the LAN:
<pre>
root@gnt-1:~# sci-setup cluster
Cluster domain name will be gnt.
Cluster IP will be 10.101.200.10 on the interlink.
We recommend to set it to some unbound LAN IP address,
but it is safe to simply press ENTER.
Set cluster IP [10.101.200.10]:
</pre>

If at this moment you specify a free static IP address in the LAN, then in the future you will be able to connect to the cluster controlling module via this IP.
This is useful, but not mandatory, and does not affect your ability to control the cluster. You can simply press ENTER and the controlling module will take its address from the backbone.

The configuration wizard will check connectivity to the second node by ping, then ask you to accept its ssh key and enter its root password in order to retrieve and check its configuration parameters.
<pre>
Connecting to Node2 via 10.101.200.12
You will be prompted for a root password...

The authenticity of host '10.101.200.12 (10.101.200.12)' can't be established.
ECDSA key fingerprint is 6a:5a:78:fa:af:c1:23:97:87:9f:66:46:94:7e:f2:f5.
Are you sure you want to continue connecting (yes/no)?
</pre>

Enter "yes"
<pre>
root@10.101.200.12's password:
</pre>

Enter the password of the second node.

After all checks succeed, the wizard will print the cluster's configuration parameters:
<pre>
########################################
Parameters detected:
Domain name: example.sci

Master network interface: backbone

Cluster name: gnt
Cluster IP: 10.101.200.10

Node 1 name: gnt-1
Node 1 IP: 10.101.200.11
Node 1 LAN IP: 192.168.11.28

Node 2 name: gnt-2
Node 2 IP: 10.101.200.12
Node 2 LAN IP: 192.168.11.29
Proceed with cluster creation [y/n]?
</pre>

If all is right, enter "y" and press ENTER to create the cluster.

<pre>
Refilling sci.conf
Creating empty /root/.ssh
Fullfilling /etc/hosts
Fulfilling default /etc/ganeti/networks
Set random vnc password for cluster: miotaigh
add sci repo in apt sources
Initializing cluster
Tuning cluster
Adding the second node
-- WARNING --
Performing this operation is going to replace the ssh daemon keypair
on the target machine (gnt-2.example.sci) with the ones of the current one
and grant full intra-cluster ssh root access to/from it

The authenticity of host 'gnt-2.example.sci (10.101.200.12)' can't be established.
ECDSA key fingerprint is 6a:5a:78:fa:af:c1:23:97:87:9f:66:46:94:7e:f2:f5.
Are you sure you want to continue connecting (yes/no)?
</pre>


During the process of adding the second node to the cluster, you will be asked again to accept its ssh key and to enter the root password.
Enter "yes" and then the password of the second node.


At the end you will see the output of a cluster diagnostic command:
<pre>
Tue Jun 28 18:37:06 2016 * Verifying cluster config
Tue Jun 28 18:37:06 2016 * Verifying cluster certificate files
Tue Jun 28 18:37:06 2016 * Verifying hypervisor parameters
Tue Jun 28 18:37:07 2016 * Verifying all nodes belong to an existing group
Waiting for job 10 ...
Tue Jun 28 18:37:07 2016 * Verifying group 'default'
Tue Jun 28 18:37:08 2016 * Gathering data (2 nodes)
Tue Jun 28 18:37:09 2016 * Gathering disk information (2 nodes)
Tue Jun 28 18:37:09 2016 * Verifying configuration file consistency
Tue Jun 28 18:37:09 2016 * Verifying node status
Tue Jun 28 18:37:09 2016 * Verifying instance status
Tue Jun 28 18:37:10 2016 * Verifying orphan volumes
Tue Jun 28 18:37:10 2016 * Verifying N+1 Memory redundancy
Tue Jun 28 18:37:10 2016 * Other Notes
Tue Jun 28 18:37:10 2016 * Hooks Results
Node DTotal DFree MTotal MNode MFree Pinst Sinst
gnt-1.example.sci 101.2G 82.2G 3.9G 1.5G 2.4G 0 0
gnt-2.example.sci 101.3G 81.3G 3.9G 1.5G 2.4G 0 0
If all is ok, proceed with sci-setup sci
</pre>

h2. Create the controlling virtual machine

On the first node run
<pre>
sci-setup sci
</pre>

If you wish the cluster's internal DNS to use your company's DNS servers as forwarders (i.e. to ask them to resolve external Internet addresses), run this command in the following way:
<pre>
sci-setup sci -d
</pre>

Without @-d@ the cluster's internal DNS will resolve Internet addresses directly via the root DNS servers.

The configuration wizard will ask you to specify the address of the controlling virtual machine in the LAN:
<pre>
root@gnt-1:~# sci-setup sci
Set sci LAN IP or enter "none" and press ENTER:
</pre>

Specify any free static IP in the LAN (192.168.11.2 in this example).

After all checks succeed, the wizard will print the controlling VM's configuration parameters:
<pre>
Creating service machine sci
IP: 10.101.200.2 on backbone
Second network device: lan
Second network IP: 192.168.11.2
Proceed with sci VM creation [y/n]?
</pre>

If all is right, enter "y" and press ENTER to create the VM.
<pre>
Adding sci to /etc/hosts
Tue Jun 28 18:44:02 2016 * creating instance disks...
Tue Jun 28 18:44:09 2016 adding instance sci to cluster config
Tue Jun 28 18:44:13 2016 - INFO: Waiting for instance sci to sync disks
Tue Jun 28 18:44:13 2016 - INFO: - device disk/0: 2.10% done, 2m 27s remaining (estimated)
Tue Jun 28 18:45:13 2016 - INFO: - device disk/0: 39.90% done, 1m 31s remaining (estimated)
Tue Jun 28 18:46:14 2016 - INFO: - device disk/0: 78.20% done, 34s remaining (estimated)
Tue Jun 28 18:46:48 2016 - INFO: - device disk/0: 100.00% done, 0s remaining (estimated)
Tue Jun 28 18:46:48 2016 - INFO: Instance sci's disks are in sync
Tue Jun 28 18:46:48 2016 * running the instance OS create scripts...
Tue Jun 28 18:49:42 2016 * starting instance...
</pre>

h2. Congratulations! You have just created the first virtual machine in your cluster!

After starting, the sci VM automatically performs the fine-tuning procedures on the cluster nodes and becomes the DNS server for them. This operation takes about 5-10 minutes.

Try the following commands:
<pre>
gnt-instance list
gnt-instance info sci
gnt-cluster verify
ssh sci
</pre>

h2. Operation

To learn how to control the cluster and how to create new virtual machines, read [[OPERATIONS]].

----

[[SETUP for versions 2.3 and earlier]]