Create a system disk + RAID 0 over 2 disks with a Rocks 7 cluster
Posted on April 12, 2018
I have been playing with setting up a Rocks 7 cluster; our compute nodes have 3 disk slots. One should be used for the system, and the other two can be combined in a RAID 0, which gives faster reads/writes but no redundancy or safety. For a cluster where data is never permanently stored, that is fine.
Configuration
To get a custom partitioning, one needs to copy the file custom-partition.xml to the site-profiles directory:
cp /export/rocks/install/rocks-dist/x86_64/build/nodes/custom-partition.xml /export/rocks/install/site-profiles/7.0/nodes/replace-custom-partition.xml
Then the content needs to be adapted. The syntax is based on Red Hat kickstart. I used this config:
<?xml version="1.0" standalone="no"?>
<kickstart roll="base">
  <!-- Custom Partitioning Node -->
  <pre>
<!-- clean 3 disks: 50GB root, 20GB swap, 10GB /var, RAID 0 on /tmp -->
echo "clearpart --all --initlabel --drives=sda,sdb,sdc
part / --size 50000 --ondisk sda
part swap --size 20000 --ondisk sda
part /var --size 10000 --ondisk sda
part raid.00 --size 1 --grow --ondisk sdb
part raid.01 --size 1 --grow --ondisk sdc
raid /tmp --level=0 --device=md0 raid.00 raid.01" &gt; /tmp/user_partition_info
  </pre>
</kickstart>
Apart from the XML tags, the important lines are:
clearpart --all --initlabel --drives=sda,sdb,sdc
This initializes the disks; the drives should be available under /dev/sd*.
part / --size 50000 --ondisk sda
For the root filesystem I take a 50 GB partition on the primary disk (the system disk).
part raid.00 --size 1 --grow --ondisk sdb
This is where the RAID members get configured; I let each one grow to the maximal size of its disk. Be sure to change the --ondisk parameter for the second member (sdc).
raid /tmp --level=0 --device=md0 raid.00 raid.01
Here you define the RAID itself. If you want to be a lot safer, --level=1 (RAID 1) would also help with read speed, although at the cost of 50% of the storage.
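For comparison, a RAID 1 variant of that line would look like this (same kickstart syntax, same device and mount point, only the level changes):

raid /tmp --level=1 --device=md0 raid.00 raid.01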
&gt; /tmp/user_partition_info
Important to note: the “>” should be encoded as &gt; in the XML, otherwise it won’t work, as described in the documentation.
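Since the whole file has to remain valid XML, a quick well-formedness check before rebuilding the distribution can save a reinstall cycle. xmllint, if it is installed on the frontend, works for that:

xmllint --noout /export/rocks/install/site-profiles/7.0/nodes/replace-custom-partition.xml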
Force reinstall
If, like me, your cluster is already installed, you will want to force this setup onto the nodes. Either way, even if you still need to install the nodes, you first need to push this configuration into the active distribution that is served over PXE. This can be done with:
cd /export/rocks/install
rocks create distro
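To verify that the custom partitioning actually made it into a node’s kickstart profile, you can render the profile on the frontend and look for one of the lines (the node name is just an example; rocks list host profile should be available on a standard install):

rocks list host profile compute-0-0 | grep clearpart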
In case you have already installed some nodes, you also need to remove the .rocks-release file from the first partition of each node. This can be done using the following script (stored as /opt/rocks/nukeit.sh):
# remove the .rocks-release marker from every mounted partition
for file in $(mount | awk '{print $3}')
do
  if [ -f $file/.rocks-release ]
  then
    rm -f $file/.rocks-release
  fi
done
Then run it on the node (here compute-0-0):
ssh compute-0-0 'sh /opt/rocks/nukeit.sh'
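If you want to run it on all compute nodes at once, rocks run host can loop over them for you (a sketch; adjust the host selection to your cluster):

rocks run host compute 'sh /opt/rocks/nukeit.sh'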
Once that is done, you can tell the Rocks database to remove the stored partition table and reinstall the node upon its next boot.
rocks remove host partition compute-0-0
rocks set host boot action=install compute-0-0
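You can double-check that the node is now flagged for reinstallation before rebooting it:

rocks list host boot compute-0-0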
Finish off with a reboot, which in turn will initiate the reinstall.
ssh compute-0-0 'reboot'
If that does not trigger the reinstall, forcing a kickstart on the node might be required:
/boot/kickstart/cluster-kickstart
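Once the node comes back up, a quick sanity check of the new layout could look like this (assuming the mount points from the config above and compute-0-0 as the node):

ssh compute-0-0 'cat /proc/mdstat; df -h / /var /tmp'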
Happy computing!