h1. INSTALL

{{toc}}

[[OVERVIEW]] | [[INSTALL]] | [[OPERATIONS]] | [[LICENSE]]
[[ОБЗОР]] | [[УСТАНОВКА]] | [[ОПЕРАЦИИ]] | [[ЛИЦЕНЗИЯ]]

h2. Get the ISO image

Download the ready-to-install distribution disk: *"Download ISO-image":https://sci.skycover.ru/projects/sci-cd/documents*
To access it you need to "register":https://sci.skycover.ru/account/register

h2. Burn the ISO image to a disk or prepare a bootable flash drive

h3. Windows

You can burn the ISO image to a writable CD/DVD with any available disk-burning program.

You can also prepare a bootable flash drive with Rufus:
in the Rufus interface, select your flash drive in the "Device" field,
set "Create a bootable disk using" to "DD image", click the CD-ROM icon and choose the ISO file.

h3. Linux

You can burn the ISO image to a writable CD/DVD with any available disk-burning program.

You can prepare a bootable flash drive with any available tool, for example unetbootin.
When it asks for the type of the system, choose Debian of version ... and set the path to the ISO image file.

You can also write the ISO image directly to the flash drive:

<pre>
dd if=/path/to/iso of=/dev/sdX bs=4k
</pre>
Here /dev/sdX is the block device file that points to the flash drive.
(To find out which block device you need, insert the flash drive and run "dmesg" in a terminal - the tail of the output will show information about the flash drive you just plugged in.)
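
For reference, here is a minimal end-to-end sketch of writing the image from Linux, assuming the flash drive shows up as @/dev/sdb@ (the device name is only an assumption - verify it first, since writing to the wrong device destroys its data):
<pre>
# Identify the flash drive; /dev/sdb below is only an assumed example.
dmesg | tail     # the last lines describe the drive that was just plugged in
lsblk            # cross-check the device name against the drive's size

# Write the image and flush the buffers before unplugging the drive.
dd if=/path/to/iso of=/dev/sdb bs=4M
sync
</pre>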

h2. Minimal System Requirements

In a real deployment you should choose the hardware according to your actual objectives.
For testing purposes the minimal system requirements are:
* 4 GB RAM
* 50 GB HDD
* 1 Gbit Ethernet for the DRBD link (if you set up 2 or more nodes)
* Hardware virtualization support: Intel VT or AMD-V (if you want to run Windows instances; see the check below)
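
To check whether a CPU provides hardware virtualization, you can run a generic Linux test on the node (not specific to SCI):
<pre>
# A non-zero count means the CPU advertises Intel VT-x (vmx) or AMD-V (svm).
egrep -c '(vmx|svm)' /proc/cpuinfo
</pre>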

For enterprise use both nodes must have the same configuration (CPU power, RAM, disk capacity and speed).

h1. System install

h2. First node setup

Before setting up, connect the nodes to the LAN with a cable. *Detach all other network cables during setup.*
Next, boot the first node from the installation image.
On the welcome screen choose Install.

{{thumbnail(install.png, size=600)}}

Specify a static IP address for the LAN connection (you can use any free IP from your LAN; in our case it is 192.168.13.11) and press <Continue>.

{{thumbnail(ip_1node.png, size=600)}}

Next specify the network mask and press <Continue>.

{{thumbnail(netmask.png, size=600)}}

Enter your gateway address and press <Continue>.

{{thumbnail(gateway.png, size=600)}}

Enter your name server address and press <Continue>.

After the network settings you proceed to node naming.
In SCI, node names follow the pattern gnt# (e.g. gnt1, gnt2, etc.).
*NOTICE: Node names MUST end with a number, starting from 1.*

In the Hostname field specify the node's name, e.g. gnt1 or gnt-1.
The "1" means this will be the first node of the cluster.

{{thumbnail(hostname_1.png, size=600)}}

Specify the domain name in the Domain field. If you have a domain in your LAN, specify it; if not, we recommend setting it to "sci".

!domain.JPG!

Next, specify the root password twice.
Do not use a weak password!

{{thumbnail(root_pass.png, size=600)}}

After setting the root password, you proceed to hard disk partitioning.
The installer will offer several automatic partitioning templates.

If you want to use software RAID, choose
*2(4,6,8) disk with lvm* - according to the number of hard disks.
For two disks RAID1 will be used; for more disks, RAID10.
RAID10 is the recommended RAID level to use along with virtualization.

If you use hardware RAID, choose
*1 disk with lvm*

If you have a server with two types of disks, for example 2 SATA drives and 8 SAS drives (bare or under hardware RAID), we suggest doing the initial setup entirely on the SATA drives with the "*2 disk with lvm*" template and, after the cluster is initialized, adding the SAS disks manually as an additional LVM VG.
This procedure is described in [[OPERATIONS]].
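
In outline, that later step boils down to creating an additional LVM volume group on the SAS disks. The sketch below only illustrates the idea (the device names and the VG name are assumptions); the complete, cluster-aware procedure is the one described in [[OPERATIONS]]:
<pre>
# Assuming the SAS drives appeared as /dev/sdc and /dev/sdd (check with lsblk).
pvcreate /dev/sdc /dev/sdd
vgcreate sas /dev/sdc /dev/sdd
vgs    # verify that the new volume group is visible
</pre>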

{{thumbnail(diskpart.png, size=600)}}

If you are asked about deleting the old partitions and about software RAID creation, confirm it.

{{thumbnail(diskpart_yes.png, size=600)}}

After some packages are installed, you must choose the disks on which to install the GRUB bootloader.

Check all the underlying physical drives (not partitions and not software RAID volumes!) where the system was installed.

{{thumbnail(boot_hdd.png, size=600)}}

After the bootloader is installed, the system will install the rest of the packages and finish the installation.

{{thumbnail(install_coplete.png, size=600)}}

Eject the installation media and press <Continue>. The system will reboot into the installed OS.

h2. Second node setup

Install the second node following the same procedure.
Specify the node's name, for example gnt2 or gnt-2 (matching the first node's naming scheme).
Specify the same root password as on the first node.

h2. Set up the time

Check that both nodes show the same time:

<pre>
# date
Thu Mar 12 12:23:10 MSK 2015
</pre>

If not, set it with the following command:

<pre>
# date -s "12 MAR 2015 12:23:00"
</pre>
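
If you had to set the time manually, it may also be worth copying it into the hardware clock so that it survives a reboot (an optional extra step, not required by the procedure):
<pre>
# Store the current system time in the hardware (RTC) clock.
hwclock --systohc
</pre>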
144 | 32 | Dmitry Chernyak | |
145 | 35 | Dmitry Chernyak | h2. Configure a backbone (the internal link between nodes) |
146 | 32 | Dmitry Chernyak | |
147 | 35 | Dmitry Chernyak | Do not plug off the nodes from the LAN. |
148 | 35 | Dmitry Chernyak | Link the nodes with a second cable via their free Gigabit network adapters, check that these adapters "link" LEDs are lit (if the LEDs are present). |
149 | 35 | Dmitry Chernyak | This interlink will be used for disk data sychronization and internal cluster communications. |
150 | 35 | Dmitry Chernyak | If some oter ethernet cables, other than backbone and LAN) are connected to the nodet at this moment, then you should plug them off for the time of backbone setup. |
151 | 35 | Dmitry Chernyak | After the links arise, run the following comand on the each node (you can do it in parallel or sequental): |
152 | 32 | Dmitry Chernyak | <pre> |
153 | 32 | Dmitry Chernyak | sci-setup backbone |
154 | 32 | Dmitry Chernyak | </pre> |

The result:
<pre>
root@gnt-1:~# sci-setup backbone
Node number: 1
LAN interface: eth0
Waiting 30 seconds for links to be up
Backbone interface: eth3
Up and test backbone

Waiting for backbone to get ready (MAXWAIT is 2 seconds).
inet addr:10.101.200.11 Bcast:10.101.200.255 Mask:255.255.255.0
ok.
</pre>

After setting it up on both nodes, check the link. Run the following command on the first node:
<pre>
ping 10.101.200.12
</pre>

<pre>
root@gnt-1:~# ping 10.101.200.12
PING 10.101.200.12 (10.101.200.12) 56(84) bytes of data.
64 bytes from 10.101.200.12: icmp_req=1 ttl=64 time=0.263 ms
64 bytes from 10.101.200.12: icmp_req=2 ttl=64 time=0.112 ms
^C
--- 10.101.200.12 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.112/0.187/0.263/0.076 ms
</pre>
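
Optionally, verify the link in the opposite direction as well, from the second node to the first, using the backbone address assigned by @sci-setup backbone@ above:
<pre>
# Run on the second node; -c 3 stops after three probes.
ping -c 3 10.101.200.11
</pre>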

h2. Initialize the cluster

On the first node run:
<pre>
sci-setup cluster
</pre>

The configuration wizard will ask for the cluster's address in the LAN:
<pre>
root@gnt-1:~# sci-setup cluster
Cluster domain name will be gnt.
Cluster IP will be 10.101.200.10 on the interlink.
We recommend to set it to some unbound LAN IP address,
but it is safe to simply press ENTER.
Set cluster IP [10.101.200.10]:
</pre>

If at this point you specify a free static IP address in the LAN, you will later be able to reach the cluster's controlling module via this IP.
This is useful, but not obligatory, and does not affect your ability to control the cluster. You can simply press ENTER and the controlling module will take its address from the backbone.

The configuration wizard will ping the second node and ask you to accept its SSH key and to enter its root password in order to retrieve and check its configuration parameters.
<pre>
Connecting to Node2 via 10.101.200.12
You will be prompted for a root password...

The authenticity of host '10.101.200.12 (10.101.200.12)' can't be established.
ECDSA key fingerprint is 6a:5a:78:fa:af:c1:23:97:87:9f:66:46:94:7e:f2:f5.
Are you sure you want to continue connecting (yes/no)?
</pre>

Enter "yes".
<pre>
root@10.101.200.12's password:
</pre>

Enter the root password of the second node.

After all checks succeed, the wizard will print the cluster's configuration parameters:
<pre>
########################################
Parameters detected:
Domain name: example.sci

Master network interface: backbone

Cluster name: gnt
Cluster IP: 10.101.200.10

Node 1 name: gnt-1
Node 1 IP: 10.101.200.11
Node 1 LAN IP: 192.168.11.28

Node 2 name: gnt-2
Node 2 IP: 10.101.200.12
Node 2 LAN IP: 192.168.11.29
Proceed with cluster creation [y/n]?
</pre>

If everything is right, enter "y" and press ENTER to create the cluster.

<pre>
Refilling sci.conf
Creating empty /root/.ssh
Fullfilling /etc/hosts
Fulfilling default /etc/ganeti/networks
Set random vnc password for cluster: miotaigh
add sci repo in apt sources
Initializing cluster
Tuning cluster
Adding the second node
-- WARNING --
Performing this operation is going to replace the ssh daemon keypair
on the target machine (gnt-2.example.sci) with the ones of the current one
and grant full intra-cluster ssh root access to/from it

The authenticity of host 'gnt-2.example.sci (10.101.200.12)' can't be established.
ECDSA key fingerprint is 6a:5a:78:fa:af:c1:23:97:87:9f:66:46:94:7e:f2:f5.
Are you sure you want to continue connecting (yes/no)?
</pre>

During the process of adding the second node to the cluster, you will again be asked to accept its SSH key and to enter the root password.
Enter "yes" and then the password of the second node.

At the end you will see the output of the cluster diagnostic command:
<pre>
Tue Jun 28 18:37:06 2016 * Verifying cluster config
Tue Jun 28 18:37:06 2016 * Verifying cluster certificate files
Tue Jun 28 18:37:06 2016 * Verifying hypervisor parameters
Tue Jun 28 18:37:07 2016 * Verifying all nodes belong to an existing group
Waiting for job 10 ...
Tue Jun 28 18:37:07 2016 * Verifying group 'default'
Tue Jun 28 18:37:08 2016 * Gathering data (2 nodes)
Tue Jun 28 18:37:09 2016 * Gathering disk information (2 nodes)
Tue Jun 28 18:37:09 2016 * Verifying configuration file consistency
Tue Jun 28 18:37:09 2016 * Verifying node status
Tue Jun 28 18:37:09 2016 * Verifying instance status
Tue Jun 28 18:37:10 2016 * Verifying orphan volumes
Tue Jun 28 18:37:10 2016 * Verifying N+1 Memory redundancy
Tue Jun 28 18:37:10 2016 * Other Notes
Tue Jun 28 18:37:10 2016 * Hooks Results
Node              DTotal DFree MTotal MNode MFree Pinst Sinst
gnt-1.example.sci 101.2G 82.2G   3.9G  1.5G  2.4G     0     0
gnt-2.example.sci 101.3G 81.3G   3.9G  1.5G  2.4G     0     0
If all is ok, proceed with sci-setup sci
</pre>

h2. Create the controlling virtual machine

On the first node run:
<pre>
sci-setup sci
</pre>

If you want the cluster's internal DNS to use your company's DNS servers as forwarders (i.e. to ask them to resolve external Internet addresses), run the command in the following way:
<pre>
sci-setup sci -d
</pre>

Without @-d@, the cluster's internal DNS will resolve Internet addresses directly via the root DNS servers.

The configuration wizard will ask you to specify the address of the controlling virtual machine in the LAN:
<pre>
root@gnt-1:~# sci-setup sci
Set sci LAN IP or enter "none" and press ENTER:
</pre>

Specify any free static IP in the LAN (192.168.11.2 in our example).

After all checks succeed, the wizard will print the controlling VM's configuration parameters:
<pre>
Creating service machine sci
IP: 10.101.200.2 on backbone
Second network device: lan
Second network IP: 192.168.11.2
Proceed with sci VM creation [y/n]?
</pre>

If everything is right, enter "y" and press ENTER to create the VM.
<pre>
Adding sci to /etc/hosts
Tue Jun 28 18:44:02 2016 * creating instance disks...
Tue Jun 28 18:44:09 2016 adding instance sci to cluster config
Tue Jun 28 18:44:13 2016 - INFO: Waiting for instance sci to sync disks
Tue Jun 28 18:44:13 2016 - INFO: - device disk/0: 2.10% done, 2m 27s remaining (estimated)
Tue Jun 28 18:45:13 2016 - INFO: - device disk/0: 39.90% done, 1m 31s remaining (estimated)
Tue Jun 28 18:46:14 2016 - INFO: - device disk/0: 78.20% done, 34s remaining (estimated)
Tue Jun 28 18:46:48 2016 - INFO: - device disk/0: 100.00% done, 0s remaining (estimated)
Tue Jun 28 18:46:48 2016 - INFO: Instance sci's disks are in sync
Tue Jun 28 18:46:48 2016 * running the instance OS create scripts...
Tue Jun 28 18:49:42 2016 * starting instance...
</pre>

h2. Congratulations! You have just created the first virtual machine in your cluster!

After it starts, the sci VM automatically performs the fine-tuning procedures on the cluster nodes and becomes the DNS server for them. This takes about 5-10 minutes.

Try the following commands:
<pre>
gnt-instance list
gnt-instance info sci
gnt-cluster verify
ssh sci
</pre>

h2. Operation

To learn how to control the cluster and how to create new virtual machines, read [[OPERATIONS]].

----

[[SETUP for versions 2.3 and earlier]]