Xen Cluster Management With Ganeti On Debian Etch - Page 9

16 A Failover Example

Let's assume you want to shut down node2.example.com for maintenance, but inst1.example.com should keep running.

First, let's gather some information about our instances:

node1:

gnt-instance list

As you can see, node2 is the primary node:

node1:~# gnt-instance list
Instance           OS          Primary_node       Autostart Status  Memory
inst1.example.com  debian-etch node2.example.com  yes       running     64
node1:~#
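
If you want more detail than this one-line view - for example, which node acts as the secondary for the mirrored disks - gnt-instance info prints the full configuration of an instance (the exact layout of the output varies between Ganeti versions):

node1:

gnt-instance info inst1.example.com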

To fail over inst1.example.com to node1, we run the following command (again on node1). Ganeti asks for confirmation because the failover requires a short shutdown of the instance on node2 before it is restarted on node1; answer with y:

gnt-instance failover inst1.example.com

Afterwards, we run the following again:

gnt-instance list

node1 should now be the primary node:

node1:~# gnt-instance list
Instance           OS          Primary_node       Autostart Status  Memory
inst1.example.com  debian-etch node1.example.com  yes       running     64
node1:~#
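
Before powering node2 off, you can double-check that it no longer hosts any primary instances. gnt-node list shows, among other things, how many primary and secondary instances each node carries; node2 should now show 0 primary instances (with inst1.example.com counted as its secondary):

node1:

gnt-node list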

Now you can shut down node2:

node2:

shutdown -h now

After node2 has gone down, you can try to connect to inst1.example.com - it should still be running.
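
A quick way to test this from any machine that can reach the instance (assuming inst1.example.com resolves there and you have SSH access to the instance):

ping inst1.example.com
ssh root@inst1.example.com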

Once we have finished the maintenance on node2 and have rebooted it, we want to make node2 the primary node again.

To do this, we attempt another failover (again on node1):

node1:

gnt-instance failover inst1.example.com

This time we get the following:

node1:~# gnt-instance failover inst1.example.com
Failover will happen to image inst1.example.com. This requires a
shutdown of the instance. Continue?
y/[n]: <-- y
* checking disk consistency between source and target
Can't get any data from node node2.example.com
Failure: command execution error:
Disk sda is degraded on target node, aborting failover.
node1:~#

The failover doesn't work because inst1.example.com's disk on node2 is degraded, i.e., no longer in sync with the disk on node1.
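
Since Ganeti mirrors the instance disks with DRBD, you can inspect the replication state directly on the node. While the mirror is broken, /proc/drbd on node1 should show the affected device in a degraded state instead of Connected/UpToDate (the exact wording depends on your DRBD version):

node1:

cat /proc/drbd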

To fix this, we can replace inst1.example.com's disk on node2 by mirroring the disk from the current primary node, node1, over to node2:

node1:

gnt-instance replace-disks -n node2.example.com inst1.example.com

inst1.example.com can keep running during this process (which can take some time).

node1:~# gnt-instance replace-disks -n node2.example.com inst1.example.com
Waiting for instance inst1.example.com to sync disks.
- device sda: 0.47% done, 474386 estimated seconds remaining
- device sdb: 22.51% done, 593 estimated seconds remaining
- device sda: 0.68% done, 157798 estimated seconds remaining
- device sdb: 70.50% done, 242 estimated seconds remaining
- device sda: 0.87% done, 288736 estimated seconds remaining
- device sda: 0.98% done, 225709 estimated seconds remaining
- device sda: 1.10% done, 576135 estimated seconds remaining
- device sda: 1.22% done, 161835 estimated seconds remaining
- device sda: 1.32% done, 739075 estimated seconds remaining
- device sda: 1.53% done, 120064 estimated seconds remaining
- device sda: 1.71% done, 257668 estimated seconds remaining
- device sda: 1.84% done, 257310 estimated seconds remaining
- device sda: 3.43% done, 4831 estimated seconds remaining
- device sda: 6.56% done, 4774 estimated seconds remaining
- device sda: 8.74% done, 4700 estimated seconds remaining
- device sda: 11.20% done, 4595 estimated seconds remaining
- device sda: 13.49% done, 4554 estimated seconds remaining
- device sda: 15.57% done, 4087 estimated seconds remaining
- device sda: 17.49% done, 3758 estimated seconds remaining
- device sda: 19.82% done, 4166 estimated seconds remaining
- device sda: 22.11% done, 4075 estimated seconds remaining
- device sda: 23.94% done, 3651 estimated seconds remaining
- device sda: 26.69% done, 3945 estimated seconds remaining
- device sda: 29.06% done, 3745 estimated seconds remaining
- device sda: 31.07% done, 3567 estimated seconds remaining
- device sda: 33.41% done, 3498 estimated seconds remaining
- device sda: 35.77% done, 3364 estimated seconds remaining
- device sda: 38.05% done, 3274 estimated seconds remaining
- device sda: 41.17% done, 3109 estimated seconds remaining
- device sda: 44.11% done, 2974 estimated seconds remaining
- device sda: 46.21% done, 2655 estimated seconds remaining
- device sda: 48.40% done, 2696 estimated seconds remaining
- device sda: 50.84% done, 2635 estimated seconds remaining
- device sda: 53.33% done, 2449 estimated seconds remaining
- device sda: 55.75% done, 2362 estimated seconds remaining
- device sda: 58.73% done, 2172 estimated seconds remaining
- device sda: 60.91% done, 2015 estimated seconds remaining
- device sda: 63.16% done, 1914 estimated seconds remaining
- device sda: 65.41% done, 1760 estimated seconds remaining
- device sda: 68.15% done, 1681 estimated seconds remaining
- device sda: 70.61% done, 1562 estimated seconds remaining
- device sda: 73.55% done, 1370 estimated seconds remaining
- device sda: 76.01% done, 1269 estimated seconds remaining
- device sda: 78.14% done, 1108 estimated seconds remaining
- device sda: 80.59% done, 1011 estimated seconds remaining
- device sda: 82.86% done, 858 estimated seconds remaining
- device sda: 85.25% done, 674 estimated seconds remaining
- device sda: 87.74% done, 638 estimated seconds remaining
- device sda: 90.01% done, 518 estimated seconds remaining
- device sda: 92.40% done, 392 estimated seconds remaining
- device sda: 94.87% done, 265 estimated seconds remaining
- device sda: 97.10% done, 147 estimated seconds remaining
- device sda: 99.38% done, 30 estimated seconds remaining
Instance inst1.example.com's disks are in sync.
node1:~#
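
If you want to make sure the cluster is healthy again before retrying the failover, you can let gnt-cluster verify run its consistency checks across all nodes; it should no longer report problems with inst1.example.com's disks:

node1:

gnt-cluster verify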

Afterwards, we can fail inst1.example.com over to node2:

gnt-instance failover inst1.example.com

node2 should now be the primary node again:

gnt-instance list

node1:~# gnt-instance list
Instance           OS          Primary_node       Autostart Status  Memory
inst1.example.com  debian-etch node2.example.com  yes       running     64
node1:~#
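
As a final cross-check, the same gnt-node list from before should now show the primary instance back on node2, with node1 acting as its secondary again:

node1:

gnt-node list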

17 Links
