I was recently presented with a situation where we needed a standard ext3 partition larger than 2.0TB on a basic server installation of RHEL5. This was for a backup disk that would host dumps and exports from an Oracle instance, sitting on a redundant SAN LUN presented as a thick-provisioned 2.5TB volume.

The downside to working with partitions larger than 2.0TB is that the regular fdisk utility, which we all know and love, doesn't support them; its MS-DOS partition table tops out at 2.0TB. So, if you want a single 2.5TB partition, you have to use parted. Otherwise you'll end up with a bunch of smaller partitions stitched together with LVM. This blog post assumes that you already have the LUN connected in one way or another; for the purposes of this example, I have used device-mapper multipath to map the SAN LUN. Also, so that there is no confusion, you do not have to put the disk into an LVM VG. I did this because it is the standard for creating disks at this location, and personally I think it's a good idea anyway: it takes nothing away from the end result, and it gives you the flexibility of LVM later on.
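If you want to sanity-check how the LUN is presented before you start carving it up, something like the following should do it (mpath1 is just the device name on this host; substitute your own). Note that fdisk will happily list the disk; it only balks at partitioning past 2.0TB:

[root@host ~]# multipath -ll mpath1
[root@host ~]# fdisk -l /dev/mapper/mpath1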

Alright, bare metal:


[root@host ~]# parted /dev/mapper/mpath1
GNU Parted 1.8.1
Using /dev/mapper/mpath1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print

Model: Linux device-mapper (dm)
Disk /dev/mapper/mpath1: 2792GB
Sector size (logical/physical): 512B/512B
Partition Table:

Number  Start   End     Size    File system  Name     Flags

(parted) mklabel gpt
(parted) mkpart primary 0 2792GB
(parted) quit
Information: Don't forget to update /etc/fstab, if necessary.
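One gotcha with multipath devices on RHEL5: the kernel doesn't always pick up the new partition table on its own, so the mpath1p1 node may not exist yet. In my experience kpartx sorts that out; skip this if /dev/mapper/mpath1p1 is already there:

[root@host ~]# kpartx -a /dev/mapper/mpath1
[root@host ~]# ls /dev/mapper/ | grep mpath1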

[root@host ~]# pvcreate /dev/mapper/mpath1p1
Physical volume "/dev/mapper/mpath1p1" successfully created
[root@host ~]# pvdisplay

<... snip ...>

"/dev/mapper/mpath1p1" is a new physical volume of "2.54 TB"
--- NEW Physical volume ---
PV Name               /dev/mapper/mpath1p1
VG Name
PV Size               2.54 TB
Allocatable           NO
PE Size (KByte)       0
Total PE              0
Free PE               0
Allocated PE          0
PV UUID               c42f5v-RJ5c-CrKr-y0cr-5On3-IqRk-P0dpmn
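If you don't need the full pvdisplay dump, pvs gives you the same confirmation in a single line:

[root@host ~]# pvs /dev/mapper/mpath1p1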

[root@host ~]# vgcreate oraback /dev/mapper/mpath1p1
Volume group "oraback" successfully created
[root@host ~]# vgdisplay

<... snip ...>

--- Volume group ---
VG Name               oraback
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  1
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                0
Open LV               0
Max PV                0
Cur PV                1
Act PV                1
VG Size               2.54 TB
PE Size               4.00 MB
Total PE              665599
Alloc PE / Size       0 / 0
Free  PE / Size       665599 / 2.54 TB
VG UUID               T2aiL8-Mwy4-JS1x-fF6K-T7M7-m0dj-BkD6Q2

<... snip ...>
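Note the Free PE count (665599); that's the number we hand to lvcreate next. If you'd rather not eyeball it out of vgdisplay, something like this should pull it directly, assuming your lvm2 supports the vg_free_count report field:

[root@host ~]# vgs --noheadings -o vg_free_count oraback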

[root@host ~]# lvcreate -n backups -l 665599 oraback
Logical volume "backups" created
[root@host ~]# lvdisplay

<... snip ...>

--- Logical volume ---
LV Name                /dev/oraback/backups
VG Name                oraback
LV UUID                5u55Vn-xVlM-6a82-XXr5-2aBo-d0Wf-ZqFbtq
LV Write Access        read/write
LV Status              available
# open                 0
LV Size                2.54 TB
Current LE             665599
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:7

<... snip ...>
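As an aside, if your lvm2 build is recent enough you can skip the extent arithmetic entirely and just ask for everything that's free; this should be equivalent to the -l 665599 above:

[root@host ~]# lvcreate -n backups -l 100%FREE oraback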

[root@host ~]# mkfs.ext3 /dev/oraback/backups
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
340787200 inodes, 681573376 blocks
34078668 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
20800 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
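Before the fstab entry below will actually do anything, the mount point has to exist and the filesystem needs mounting. The steps in between were essentially:

[root@host ~]# mkdir /orabackups
[root@host ~]# mount /dev/oraback/backups /orabackups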

[root@host /]# cat /etc/fstab | grep oraback
/dev/oraback/backups    /orabackups    ext3    defaults    0 0
[root@host /]# df -h /orabackups
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/oraback-backups
2.5T  203M  2.4T   1% /orabackups
[root@host /]#

This being a SATA-backed LUN, the format and journal creation took me about 35 minutes.
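One last tip: as the mkfs output above suggests, the periodic fsck schedule (every 22 mounts or 180 days) can be changed with tune2fs. A forced boot-time check of a 2.5TB ext3 volume is painful, so you may want to relax or disable it:

[root@host ~]# tune2fs -c 0 -i 0 /dev/oraback/backups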
