Setting Up Software RAID1 On An Already Installed System (Incl. GRUB Configuration) (Mandriva 2008.0) - Page 3

6 Preparing GRUB (Part 1)

Afterwards we must install the GRUB bootloader on the second hard drive /dev/hdb:

grub
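If you are not sure how GRUB numbers your disks, you can first let the GRUB shell search for its stage1 file (an optional check; on this setup it should report both (hd0,0) and (hd1,0)):

find /grub/stage1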

On the GRUB shell, type the following commands:

root (hd0,0)


grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0x83

grub>

setup (hd0)


grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.

grub>

root (hd1,0)


grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd

grub>

setup (hd1)


grub> setup (hd1)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd1)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.

grub>

quit
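As a side note, the same installation can also be scripted instead of typing the commands interactively; this is just an equivalent sketch using GRUB legacy's batch mode:

grub --batch <<EOF
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
EOF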

Back on the normal shell, we reboot the system and hope that it boots fine from our RAID arrays:

reboot


7 Preparing /dev/hda

If everything went fine, you should now find /dev/md0 and /dev/md2 in the output of

df -h


[root@server1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 4.4G 757M 3.4G 18% /
/dev/md0 167M 9.0M 150M 6% /boot
[root@server1 ~]#

The output of

cat /proc/mdstat

should look as follows ([2/1] [_U] means that each array currently contains only one of its two members, the /dev/hdb partitions, because /dev/hda has not been added yet):

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 hdb5[1]
417536 blocks [2/1] [_U]

md0 : active raid1 hdb1[1]
176576 blocks [2/1] [_U]

md2 : active raid1 hdb6[1]
4642688 blocks [2/1] [_U]

unused devices: <none>
[root@server1 ~]#

Now we must change the partition types of our three partitions on /dev/hda to Linux raid autodetect:

fdisk /dev/hda

[root@server1 ~]# fdisk /dev/hda


Command (m for help): <-- t
Partition number (1-6): <-- 1
Hex code (type L to list codes): <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): <-- t
Partition number (1-6): <-- 5
Hex code (type L to list codes): <-- fd
Changed system type of partition 5 to fd (Linux raid autodetect)

Command (m for help): <-- t
Partition number (1-6): <-- 6
Hex code (type L to list codes): <-- fd
Changed system type of partition 6 to fd (Linux raid autodetect)

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@server1 ~]#
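If you prefer to change the partition types non-interactively, sfdisk can do the same in one line per partition (just a sketch, assuming the sfdisk version shipped with Mandriva 2008.0 supports --change-id; the interactive fdisk session above is the way shown in this tutorial):

sfdisk --change-id /dev/hda 1 fd
sfdisk --change-id /dev/hda 5 fd
sfdisk --change-id /dev/hda 6 fd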

Now we can add /dev/hda1, /dev/hda5, and /dev/hda6 to the respective RAID arrays:

mdadm --add /dev/md0 /dev/hda1
mdadm --add /dev/md1 /dev/hda5
mdadm --add /dev/md2 /dev/hda6

Then take a look at

cat /proc/mdstat

... and you should see that the RAID arrays are being synchronized:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 hda5[2] hdb5[1]
417536 blocks [2/1] [_U]
resync=DELAYED

md0 : active raid1 hda1[0] hdb1[1]
176576 blocks [2/2] [UU]

md2 : active raid1 hda6[2] hdb6[1]
4642688 blocks [2/1] [_U]
[======>..............] recovery = 34.4% (1597504/4642688) finish=1.0min speed=50349K/sec

unused devices: <none>
[root@server1 ~]#
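You can also inspect a single array in more detail at any point, for example (the exact output depends on how far the rebuild has progressed):

mdadm --detail /dev/md0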

(You can run

watch cat /proc/mdstat

to get continuous output of the process. To leave watch, press CTRL+C.)
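If you prefer to wait for the synchronization in a script instead of watching it, a small loop like the following does the job (just a sketch; it simply polls /proc/mdstat until neither a resync nor a recovery is reported):

while grep -qE 'resync|recovery' /proc/mdstat; do sleep 10; done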

Wait until the synchronization has finished (the output should then look like this:

[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 hda5[0] hdb5[1]
417536 blocks [2/2] [UU]

md0 : active raid1 hda1[0] hdb1[1]
176576 blocks [2/2] [UU]

md2 : active raid1 hda6[0] hdb6[1]
4642688 blocks [2/2] [UU]

unused devices: <none>
[root@server1 ~]#

).

Then adjust /etc/mdadm.conf to the new situation:

cp -f /etc/mdadm.conf_orig /etc/mdadm.conf
mdadm --examine --scan >> /etc/mdadm.conf

/etc/mdadm.conf should now look something like this:

cat /etc/mdadm.conf


# mdadm configuration file
#
# mdadm will function properly without the use of a configuration file,
# but this file is useful for keeping track of arrays and member disks.
# In general, a mdadm.conf file is created, and updated, after arrays
# are created. This is the opposite behavior of /etc/raidtab which is
# created prior to array construction.
#
#
# the config file takes two types of lines:
#
# DEVICE lines specify a list of devices of where to look for
#   potential member disks
#
# ARRAY lines specify information about how to identify arrays so
#   so that they can be activated
#
# You can have more than one device line and use wild cards. The first
# example includes SCSI the first partition of SCSI disks /dev/sdb,
# /dev/sdc, /dev/sdd, /dev/sdj, /dev/sdk, and /dev/sdl. The second
# line looks for array slices on IDE disks.
#
#DEVICE /dev/sd[bcdjkl]1
#DEVICE /dev/hda1 /dev/hdb1
#
# If you mount devfs on /dev, then a suitable way to list all devices is:
#DEVICE /dev/discs/*/*
#
#
#
# ARRAY lines specify an array to assemble and a method of identification.
# Arrays can currently be identified by using a UUID, superblock minor number,
# or a listing of devices.
#
# super-minor is usually the minor number of the metadevice
# UUID is the Universally Unique Identifier for the array
# Each can be obtained using
#
#   mdadm -D <md>
#
#ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
#ARRAY /dev/md1 super-minor=1
#ARRAY /dev/md2 devices=/dev/hda1,/dev/hdb1
#
# ARRAY lines can also specify a "spare-group" for each array. mdadm --monitor
# will then move a spare between arrays in a spare-group if one array has a failed
# drive but no spare
#ARRAY /dev/md4 uuid=b23f3c6d:aec43a9f:fd65db85:369432df spare-group=group1
#ARRAY /dev/md5 uuid=19464854:03f71b1b:e0df2edd:246cc977 spare-group=group1
#
# When used in --follow (aka --monitor) mode, mdadm needs a
# mail address and/or a program. This can be given with "mailaddr"
# and "program" lines to that monitoring can be started using
# mdadm --follow --scan & echo $! > /var/run/mdadm
# If the lines are not found, mdadm will exit quietly
#MAILADDR root@mydomain.tld
#PROGRAM /usr/sbin/handle-mdadm-events
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=6b4f013f:6fe18719:5904a9bd:70e9cee6
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=63194e2e:c656857a:3237a906:0616f49e
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=edec7105:62700dc0:643e9917:176563a7
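You can cross-check the appended ARRAY lines against the running arrays; the UUIDs printed by the following command should match those now in /etc/mdadm.conf (the exact output format may differ slightly between mdadm versions):

mdadm --detail --scan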
