Expand and Shrink LVM [cautious]
Jan 06, 2021 | 710 views
Be very careful when shrinking an LVM volume.
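Expanding is the low-risk direction (grow the LV first, then the filesystem); shrinking must go the other way around, filesystem first. A minimal sketch for ext4 on LVM, assuming a volume group vg0 and a logical volume data mounted at /mnt/data (all names and sizes are placeholders):
# Expand: -r/--resizefs grows the filesystem right after the LV
lvextend -r -L +50G /dev/vg0/data
# Shrink: unmount first (ext4 cannot shrink online), check the filesystem,
# then let lvreduce -r shrink the filesystem before reducing the LV
umount /mnt/data
e2fsck -f /dev/vg0/data
lvreduce -r -L 100G /dev/vg0/data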
Refer to
Tags: Storage
Dec 29, 2020 | 713 views
NFS failed to restart due to the mount error below.
### The error looks like this:
May 14 00:03:30 rbx06 systemd[1]: dev-disk-by\x2duuid-62ccfba0\x2d6394\x2d42c0\x2dbd38\x2d3da2ea4893b6.device: Job dev-disk-by\x2duuid-62ccfba0\x2d6394\x2d42c0\x2dbd38\x2d3da2ea4893b6.device/start timed out.
May 14 00:03:30 rbx06 systemd[1]: Timed out waiting for device /dev/disk/by-uuid/62ccfba0-6394-42c0-bd38-3da2ea4893b6.
May 14 00:03:30 rbx06 systemd[1]: Dependency failed for /dev/disk/by-uuid/62ccfba0-6394-42c0-bd38-3da2ea4893b6.
### or like this:
Oct 31 20:28:20 thutmose kernel: EXT4-fs (sdc1): mounting ext3 file system using the ext4 subsystem
Oct 31 20:28:20 thutmose kernel: EXT4-fs (sdc1): warning: maximal mount count reached, running e2fsck is recommended
Oct 31 20:28:20 thutmose kernel: EXT4-fs (sdc1): mounted filesystem with ordered data mode. Opts: (null)
Oct 31 20:28:20 thutmose sudo[18994]: pam_unix(sudo:session): session closed for user root
Oct 31 20:28:20 thutmose systemd[1]: mnt-attorney.mount: Unit is bound to inactive unit dev-disk-by\x2dlabel-attorney.device. Stopping, too.
Oct 31 20:28:20 thutmose systemd[1]: Unmounting /mnt/attorney...
Oct 31 20:28:21 thutmose systemd[1]: Unmounted /mnt/attorney.
Solution:
# reload systemd so it regenerates the mount/device units from /etc/fstab
systemctl daemon-reload
# re-export the shares (run after /etc/exports was updated), then restart rpcbind and NFS
exportfs -a
systemctl restart rpcbind nfs
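The device timeout above usually means the UUID or label referenced in /etc/fstab no longer matches any attached disk, or that fstab was edited without a daemon-reload. A quick cross-check (just a sketch using standard tools):
# list the UUIDs/labels the kernel currently sees
blkid
# compare them against what /etc/fstab expects
grep -iE 'uuid|label' /etc/fstab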
Refer to:
Dec 18, 2020 | 3949 views
Create RAID10:
# Create RAID10
# HELP: sudo storcli /cx add vd type=[RAID0(r0)|RAID1(r1)|...] drives=[EnclosureID:SlotID|:SlotID-SlotID|:SlotID,SlotID]
storcli64 /c0 add vd r10 drives=8:0,1,2,3 pdperarray=2
# find which virtual drive is the RAID10 (v0 in this case)
storcli64 /c0 show
storcli64 /c0/v0 show
storcli64 /c0/v0 show all
storcli64 /c0/v0 show all | less
# initialize the RAID10 and check progress
storcli64 /c0/v0 start init
storcli64 /c0/v0 show init
storcli64 /c0/v0 show all | less
storcli64 /c0/vall show all | less
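After the virtual drive is created it is exposed to the OS as an ordinary block device; a hedged follow-up to locate and format it (the /dev/sdX name is only an example, confirm with lsblk first):
# find the new block device backed by the RAID10 virtual drive
lsblk
# create a filesystem on it (example device name)
mkfs.ext4 /dev/sdb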
Create RAID 6:
# Create RAID 6
storcli64 /c0 add vd type=raid6 drives=8:4-15
storcli64 /c0/vall show all | less
# find the virtual drive ID of the RAID 6 just created (v2 in this case)
storcli64 /c0/vall show
storcli64 /c0/v2 show
storcli64 /c0/v2 show all
storcli64 /c0 show all
storcli64 /c0/v2 show all
storcli64 /c0/v2 show all | more
# initialize the RAID 6 and check progress
storcli64 /c0/v2 start init
storcli64 /c0/v2 show init
storcli64 /c0/v2 show
storcli64 /c0/v2 show all
storcli64 /c0/vall show
Examples of creating RAID 10, RAID 50, and RAID 60 with StorCLI:
# RAID 10
C:\>storcli64 /C0 add vd type=raid10 drives=83:5,6,7,8 pdperarray=2
# RAID 50
C:\>storcli64 /C0 add vd type=raid50 drives=83:5,6,7,8,9,10 pdperarray=3
# RAID 60
C:\>storcli64 /C0 add vd type=raid60 drives=83:5,6,7,8,9,10,11,12 pdperarray=4
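In these spanned levels, pdperarray is the number of physical drives per span: the RAID50 example stripes across two 3-drive RAID5 spans, and the RAID60 example across two 4-drive RAID6 spans. Before building a virtual drive it helps to list the EID:Slt identifiers to pass in drives= (a sketch; controller /c0 is an assumption):
# list every physical drive with its Enclosure:Slot ID
storcli64 /c0/eall/sall show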
Refer to
Dec 01, 2020 | 752 views
About storage controllers / RAID controllers (see page 11 of reference [5] below):
"Note that in the case where there is no cache-mirroring link between the controllers, the cache on the controllers must be disabled entirely to ensure that the file system does not become corrupted in the event of a controller failure."
Reference:
Lustre with LDISKFS vs ZFS: