
Monday, February 7, 2011

Setting VPS Disk Space with OpenVZ

Disk space can be controlled easily via OpenVZ, but I have yet to find anyone who actually explains what the heck to "really" do when you need to add more! Everything I have found about OpenVZ just explains the parameters and never shows you how to do it easily. When I need to adjust disk space on a VPS, it is usually because someone is beating up my ear on the phone or over IM, so I needed a fast way to expand the disk without worrying about the details.

There are three parameters in OpenVZ which are directly related to disk usage: disk_quota, diskspace and diskinodes. NOTE: there are a lot of other parameters that control and affect the disk, but this tutorial will only cover the basics!

The parameter disk_quota is a YES or NO value which enables or disables file system quotas; if you are not worried about quotas, set it to NO and stop reading. Otherwise, leave it set at YES and continue.
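For reference, the quota switch lives in the global OpenVZ configuration file rather than in a per-container config. A minimal sketch (the /etc/vz/vz.conf path is typical but may vary by distribution):

```shell
# /etc/vz/vz.conf (global OpenVZ configuration -- location may vary by distro)
DISK_QUOTA=yes    # set to "no" to disable per-container disk quotas entirely
```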

The parameter diskspace is the count of 1K blocks available to the VPS, expressed as a soft and a hard limit. The hard limit is a stop point similar to filling up a physical disk – when you are out, you are out. The soft limit is when the bean counters get angry and the quotatime timer starts. On a basic installation and VPS setup you will have 1048576 1K blocks as a soft limit and 1153024 1K blocks as a hard limit. The numbers are not crazy as they are derived from base 2: 1048576 1K blocks is exactly 1GB of disk space. Add roughly 102MB (104448 1K blocks) of headroom and you arrive at the 1153024 1K blocks of the hard limit. These are the basic numbers for the basic template that ships with OpenVZ.

The parameter diskinodes is the total number of files, directories and links you can have in the container. Think of them as Post-it® notes: each file, directory and link gets a single note. The default is 200,000 inodes as a soft limit for 1GB of disk space and 220,000 as the hard limit. Normally *nix systems set aside one inode per 4K block of disk space, which for 1GB would be 262,144 inodes. The default OpenVZ template instead sets aside one inode per roughly 5.2K of disk space, which I'll write off as either (a) a magic number or (b) a unique calculation I am not familiar with. Since those are the values the template ships with, we'll use them for our calculations and simply multiply by the number of GB requested.
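Those per-GB ratios are easy to sanity-check from the shell. A quick sketch of the arithmetic described above (the 5GB figure is just an example value):

```shell
# per-GB limits taken from the default OpenVZ template
gb=5                                  # example: a 5GB container
soft_blocks=$((1048576 * gb))         # soft diskspace limit, in 1K blocks
hard_blocks=$((1153024 * gb))         # hard diskspace limit, in 1K blocks
soft_inodes=$((200000 * gb))          # soft diskinodes limit
hard_inodes=$((220000 * gb))          # hard diskinodes limit
echo "diskspace  ${soft_blocks}:${hard_blocks}"
echo "diskinodes ${soft_inodes}:${hard_inodes}"
```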

So now the question is how to adjust them quickly and easily. In this example we are going to work in units of GB. If you need more granularity you will need to divide back out to MB, but gigabytes work great for our needs:

First, we define the soft and hard limits; next, we apply the updated diskspace numbers; and finally, we set the inode numbers based on the ratio we know from the default template:

Here are the commands (and below, a quick and easy Perl script that prints them for you):

cid=1324
gb=5
vzctl set ${cid} --diskspace $((1048576 * ${gb})):$((1153024 * ${gb})) --save
vzctl set ${cid} --diskinodes $((200000 * ${gb})):$((220000 * ${gb})) --save

#!/usr/bin/perl
# display the commands to update an OpenVZ VPS with new disk space requirements
# 2009/11/15 - Chris Schuld (chris@chrisschuld.com)
use strict;

print "Enter VPS CID: ";
my $_CID = <STDIN>; chomp($_CID);
print "Enter SOFT Diskspace Limit (ex 10GB): ";
my $_SOFT = <STDIN>; chomp($_SOFT); $_SOFT =~ s/[^0-9]//g;
print "Enter HARD Diskspace Limit (ex 11GB): ";
my $_HARD = <STDIN>; chomp($_HARD); $_HARD =~ s/[^0-9]//g;
my $_INODE_SOFT = ( 200000 * $_SOFT );
my $_INODE_HARD = ( 220000 * $_HARD );
print "Run these commands:\n";
print "vzctl set $_CID --diskspace ".$_SOFT."G:".$_HARD."G --save\n";
print "vzctl set $_CID --diskinodes $_INODE_SOFT:$_INODE_HARD --save\n";

Tuesday, September 21, 2010

Setting Up An NFS Server And Client On OpenSUSE 11.3

Last edited 09/14/2010

This guide explains how to set up an NFS server and an NFS client on OpenSUSE 11.3. NFS stands for Network File System; through NFS, a client can access (read, write) a remote share on an NFS server as if it was on the local hard disk.

I do not issue any guarantee that this will work for you!

 

1 Preliminary Note

I'm using two OpenSUSE systems here:

NFS Server: server.example.com, IP address: 192.168.0.100
NFS Client: client.example.com, IP address: 192.168.0.101

 

2 Installing NFS

server:

On the NFS server we run:

yast2 -i nfs-kernel-server

Then we create the system startup links for the NFS server and start it:

chkconfig --add nfsserver
/etc/init.d/nfsserver start

client:

On the client we can install NFS as follows:

yast2 -i nfs-client

 

3 Exporting Directories On The Server

server:

I'd like to make the directories /home and /var/nfs accessible to the client; therefore we must "export" them on the server.

When a client accesses an NFS share, this normally happens as the user nobody. Usually the /home directory isn't owned by nobody (and I don't recommend changing its ownership to nobody!), and because we want to read and write on /home, we tell NFS that accesses should be made as root (if our /home share were read-only, this wouldn't be necessary). The /var/nfs directory doesn't exist, so we create it and change its ownership to nobody and nogroup:

mkdir /var/nfs
chown nobody:nogroup /var/nfs

Now we must modify /etc/exports, the file where we "export" our NFS shares. We specify /home and /var/nfs as NFS shares and tell NFS to make accesses to /home as root (to learn more about /etc/exports, its format and available options, take a look at

man 5 exports

).

vi /etc/exports

# See the exports(5) manpage for a description of the syntax of this file.
# This file contains a list of all directories that are to be exported to
# other computers via NFS (Network File System).
# This file used by rpc.nfsd and rpc.mountd. See their manpages for details
# on how make changes in this file effective.
/home 192.168.0.101(rw,sync,no_root_squash,no_subtree_check)
/var/nfs 192.168.0.101(rw,sync,no_subtree_check)

(The no_root_squash option ensures that /home is accessed as root.)

Whenever we modify /etc/exports, we must run

exportfs -a

afterwards to make the changes effective.

 

4 Mounting The NFS Shares On The Client

client:

First we create the directories where we want to mount the NFS shares, e.g.:

mkdir -p /mnt/nfs/home
mkdir -p /mnt/nfs/var/nfs

Afterwards, we can mount them as follows:

mount 192.168.0.100:/home /mnt/nfs/home
mount 192.168.0.100:/var/nfs /mnt/nfs/var/nfs

You should now see the two NFS shares in the outputs of

df -h
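If you want these mounts to survive a reboot, you can add them to /etc/fstab on the client instead of mounting by hand. A sketch (the hard,intr options are one common choice, not the only one; adjust to taste):

```shell
# /etc/fstab entries on the client (sketch)
# device                  mountpoint        type options            dump pass
192.168.0.100:/home     /mnt/nfs/home     nfs  rw,hard,intr        0    0
192.168.0.100:/var/nfs  /mnt/nfs/var/nfs  nfs  rw,hard,intr        0    0
```

After editing /etc/fstab, running mount -a should pick up the new entries.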
