Creating consistent backups for EBS with EXT4 and quota

What’s this about?
Data security and backups are important aspects of running servers, especially on cloud infrastructure. I use AWS (Amazon Web Services) as my preferred IaaS, so the following how-to is tailored for Amazon EC2 instances that use EBS as storage for the web sites' files. My instance runs Ubuntu 10.04 LTS and, on top of it, ISPConfig 3.0.4 (the latest version at the time of writing). Some of the programs required for this setup were already installed, but it should be pretty obvious if you are missing anything. If you need help you can either leave a comment or contact me via email.

The following setup gives you an EBS volume formatted with EXT4, with quota enabled on it (for ISPConfig), and weekly backups of the volume. In case of instance failure you can launch a new instance and attach the EBS volume without losing any web site files. In case of EBS failure you can recreate the volume from the most recent snapshot.

Create an EBS volume in the same availability zone as your instance and attach it to the instance as /dev/sdf. This is easily done from the AWS Management Console.

Install xfsprogs (despite the name, ec2-consistent-snapshot uses xfs_freeze, which can also freeze an EXT4 filesystem)

sudo apt-get install xfsprogs

Create EXT4 filesystem on /dev/sdf

sudo mkfs.ext4 /dev/sdf

Now mount it temporarily

sudo mkdir /mnt/ebs
sudo mount /dev/sdf /mnt/ebs

Stop the apache2 web server and copy the files to /mnt/ebs

sudo service apache2 stop
cd /mnt/ebs
sudo cp -rp /var/www/* .

Prepare quota

sudo touch quota.user quota.group
sudo chmod 600 quota.*

Add the entry to /etc/fstab

/dev/sdf /var/www ext4 noatime,nobootwait,usrjquota=quota.user,grpjquota=quota.group,jqfmt=vfsv0 0 0
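Before unmounting it is worth checking the entry, since a malformed fstab line can leave the instance unbootable. A small sketch (the line is copied from above):

```shell
# Sanity-check the fstab entry: a valid line has exactly six
# whitespace-separated fields (device, mountpoint, fstype, options, dump, pass).
LINE='/dev/sdf /var/www ext4 noatime,nobootwait,usrjquota=quota.user,grpjquota=quota.group,jqfmt=vfsv0 0 0'
FIELDS=$(echo "$LINE" | awk '{print NF}')
echo "$FIELDS"
```

You can also run "sudo mount -a" after editing /etc/fstab; it will surface syntax errors immediately.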

Unmount the EBS and remount it to /var/www

sudo umount /dev/sdf
sudo mount /dev/sdf /var/www -o noatime,usrjquota=quota.user,grpjquota=quota.group,jqfmt=vfsv0

Enable quota

sudo quotacheck -avugm
sudo quotaon -avug

Start the apache2 web server and check that the web sites are working properly

sudo service apache2 start

Install the ec2-consistent-snapshot script for weekly EBS backups

sudo add-apt-repository ppa:alestic
sudo apt-get update
sudo apt-get install -y ec2-consistent-snapshot

Prepare the first snapshot (I assume the cron job will run as root, hence I create the awssecret file in the /root directory)

sudo touch /root/.awssecret
sudo chmod 600 /root/.awssecret

Edit /root/.awssecret and add the following two lines, in this order, replacing ACCESS_KEY_ID and SECRET_ACCESS_KEY with your own (both can be found under Account->Security Credentials):

ACCESS_KEY_ID
SECRET_ACCESS_KEY

Test the snapshot creation with debug mode enabled, replacing VOLUME_ID with the right volume ID:

sudo ec2-consistent-snapshot --debug --description "snapshot $(date +\%Y-\%m-\%d-\%H:\%M:\%S)" --freeze-filesystem /var/www vol-VOLUME_ID

If everything went well you should be able to see your new snapshot in the AWS Management Console.

Finally add this to your root crontab (by running sudo crontab -e):

@weekly /usr/bin/ec2-consistent-snapshot --debug --description "snapshot $(date +'\%Y-\%m-\%d \%H:\%M:\%S')" --freeze-filesystem /var/www vol-VOLUME_ID >> /var/log/backup.log 2>&1

Make sure you put the correct VOLUME_ID!
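The crontab entry can also live in a small wrapper script, which avoids the fragile % escaping that crontab requires. This is only a sketch: VOLUME_ID and the log path are placeholders, and the AWS call is skipped when no volume ID is set, so a dry run is harmless.

```shell
#!/bin/sh
# Sketch of a snapshot wrapper for cron. Set VOLUME_ID to your real
# volume ID (e.g. vol-1a2b3c4d) before using it.
VOLUME_ID="${VOLUME_ID:-}"
DESC="snapshot $(date +'%Y-%m-%d %H:%M:%S')"
echo "$DESC"
# Only call AWS when a volume ID is actually set.
if [ -n "$VOLUME_ID" ]; then
    /usr/bin/ec2-consistent-snapshot --debug \
        --description "$DESC" \
        --freeze-filesystem /var/www "$VOLUME_ID" >> /var/log/backup.log 2>&1
fi
```

Called from a script, the date format needs no crontab escaping; the crontab line then shrinks to something like "@weekly /usr/local/bin/ebs-snapshot.sh" (hypothetical path).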

That should be all: your web sites now live on EBS, quota is enabled, and weekly backups are in place. I think this covers everything you need in order to perform this setup, but if there are any issues feel free to leave a comment. I also love getting feedback, so if you found this article useful, leave a comment too 🙂

Install ffmpeg on Ubuntu 10.04

Note: These are my instructions for installing FFmpeg on Ubuntu Linux 10.04 server (LTS). Most of the code here can also be found on the Ubuntu forums, where you will probably find more material.

Install requisite packages

sudo apt-get update
sudo apt-get install build-essential git-core checkinstall texi2html libopencore-amrnb-dev libopencore-amrwb-dev libsdl1.2-dev libtheora-dev libvorbis-dev libx11-dev libxfixes-dev zlib1g-dev automake autoconf libxvidcore-dev

Install latest version of yasm

cd
git clone git://github.com/yasm/yasm.git
cd yasm
sh autogen.sh
make
sudo checkinstall --pkgname=yasm --pkgversion="1.1.0" --backup=no --deldoc=yes --default

Install x264

cd
git clone git://git.videolan.org/x264
cd x264
./configure
make
sudo checkinstall --pkgname=x264 --pkgversion "1:0.svn`date +%Y%m%d`-0.0ubuntu1" --backup=no --deldoc=yes --fstrans=no --install=yes --default

Install LAME for mp3 support

cd
sudo apt-get install nasm
wget http://downloads.sourceforge.net/project/lame/lame/3.99/lame-3.99.tar.gz
tar xzvf lame-3.99.tar.gz
cd lame-3.99
./configure
make
sudo checkinstall --pkgname=lame-ffmpeg --pkgversion="3.99" --backup=no --default --deldoc=yes

Install opencore-amr for amr support

cd
wget http://downloads.sourceforge.net/project/opencore-amr/vo-amrwbenc/vo-amrwbenc-0.1.1.tar.gz
tar zxvf vo-amrwbenc-0.1.1.tar.gz
cd vo-amrwbenc-0.1.1
./configure --disable-shared
make
sudo checkinstall --pkgname="libopencore-amr" --pkgversion="0.1.1" --backup=no --fstrans=no --install=yes --default

Install libtheora for ogg support

cd
wget http://downloads.xiph.org/releases/theora/libtheora-1.1.1.tar.bz2
tar jxvf libtheora-1.1.1.tar.bz2
cd libtheora-1.1.1
./configure --disable-shared
make
sudo checkinstall --pkgname=libtheora --pkgversion "1.1.1" --backup=no --fstrans=no --install=yes --default

Install faac

cd
sudo apt-get install unzip
wget http://downloads.sourceforge.net/faac/faac-1.28.tar.gz
tar zxvf faac-1.28.zip
cd faac-1.28
wget http://www.linuxfromscratch.org/patches/blfs/svn/faac-1.28-glibc_fixes-1.patch
patch -Np1 -i faac-1.28-glibc_fixes-1.patch
sed -i -e '/obj-type/d' -e '/Long Term/d' frontend/main.c
make
sudo checkinstall --pkgname=libfaac --pkgversion "1.28" --backup=no --fstrans=no --install=yes --default

Install FFmpeg

cd
svn checkout svn://svn.ffmpeg.org/ffmpeg/trunk ffmpeg
cd ffmpeg
./configure --enable-gpl --enable-version3 --enable-nonfree --enable-postproc --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-x11grab
make
sudo checkinstall --pkgname=ffmpeg --pkgversion "0.8.5" --backup=no --fstrans=no --install=yes --default
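With the build done, a conversion command exercising the codecs compiled in above might look like this. It is a hypothetical example (input.avi and output.mp4 are placeholder names), shown here as a dry run:

```shell
# Hypothetical: H.264 video (libx264) plus AAC audio (libfaac) in an MP4
# container, both enabled in the configure line above. Printed, not executed.
CMD='ffmpeg -i input.avi -vcodec libx264 -acodec libfaac -ab 128k output.mp4'
echo "$CMD"
# eval "$CMD"   # uncomment to actually run the conversion
```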

Amazon RDS SUPER privileges

#1419 – You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)

This error sometimes occurs on RDS instances when you try to use stored procedures. You will soon find out that granting the SUPER privilege to a user won't work. So the only way to make things work is to set log_bin_trust_function_creators to 1.

The RDS console, available at https://console.aws.amazon.com/rds/, allows you to create a new parameter group and modify its parameters. Log in to the RDS console, go to “DB Parameter Groups” and click “Create DB Parameter Group”. Set the following:

  • DB Parameter Group Family: mysql5.1
  • DB Parameter Group Name: mygroup
  • Description: mygroup

Confirm by clicking “Yes, create” button.

Here comes the ugly part: you cannot edit the parameters from the console (for the moment; I hope they are going to change that). You will need to log in to your instance over SSH and download the RDS CLI from here: http://aws.amazon.com/developertools/2928?_encoding=UTF8&jiveRedirect=1

To do so, right-click the “Download” button and copy the link location. In the SSH session use wget to download the archive and unzip it:

wget "http://s3.amazonaws.com/rds-downloads/RDSCli.zip"
unzip RDSCli.zip

If you don’t have unzip you can quickly get it with “apt-get install unzip” (on Ubuntu) or “yum install unzip” (on CentOS). Of course you will need root privileges.

After successfully unpacking the RDSCli, cd into that directory and set a few variables. The following is an example on Ubuntu 10.04:

cd RDSCli-1.4.006
export AWS_RDS_HOME="/home/ubuntu/RDSCli-1.4.006"
export JAVA_HOME="/usr/lib/jvm/java-6-sun"
cd bin
./rds --help

If rds --help outputs no errors then you have set it up correctly. Congrats. One more command:

./rds-modify-db-parameter-group mygroup --parameters="name=log_bin_trust_function_creators,value=on,method=immediate" --I="YOUR_AWS_ACCESS_KEY_ID" --S="YOUR_AWS_SECRET_ACCESS_KEY"

The AWS keys can be obtained from your AWS account under Security Credentials->Access Credentials->Access Keys.

Go to AWS RDS console, “DB Instances”, select your instance and right click “Modify”. Set “DB Parameter group” to “mygroup” and check “Apply Immediately”. Confirm with “Yes, modify”.

You are done 🙂

Apple announces iCloud

Warning: offensive language! If you are easily offended stop reading here.

What’s iCloud? It’s Apple’s take on cloud computing:

iCloud stores your music, photos, apps, calendars, documents, and more. And wirelessly pushes them to all your devices — automatically. It’s the easiest way to manage your content. Because now you don’t have to.

To be honest it sounds more like Dropbox to me than a serious cloud service like Amazon S3. Nothing innovative, just a lot of marketing as usual. Take an existing idea, put an “i” in front of it, and let Steve Jobs present it as the new cool thing. You will get a ton of hype. I know “cloud” is a cool and overused word nowadays, but really, stop using it for every little s**t.

Looking forward to iF**k, Apple's new idea that will let people interact in a more personal way and share the joy with millions of viewers via iPhone/iPad/iDevice.

MySQL benchmark: RDS vs EC2 performance

The setup: one m1.small EC2 instance vs one db.m1.small RDS instance; the tests are run from the m1.small instance. The goal is to determine how the site will perform when moving the database from localhost to a remote instance.

I used sysbench for the MySQL benchmarks. On a server running Ubuntu 10.04 you can simply install it with the following command (it's obvious, but just in case):

sudo apt-get install sysbench

The first test compared the m1.small EC2 instance running mysql-server 5.1.41-3ubuntu12.8 against the db.m1.small RDS instance running MySQL server 5.1.50. The test database was set to 10,000 records, number of threads = 1, test mode oltp.

sysbench --test=oltp --mysql-host=smalltest.us-east-1.rds.amazonaws.com --mysql-user=root --mysql-password=password --max-time=180 --max-requests=0 prepare
sysbench --test=oltp --mysql-host=smalltest.us-east-1.rds.amazonaws.com --mysql-user=root --mysql-password=password --max-time=180 --max-requests=0 run

The results

m1.small EC2 instance:

OLTP test statistics:
    queries performed:
        read: 263354
        write: 94055
        other: 37622
        total: 395031
    transactions: 18811 (104.50 per sec.)
    deadlocks: 0 (0.00 per sec.)
    read/write requests: 357409 (1985.56 per sec.)
    other operations: 37622 (209.01 per sec.)
Test execution summary:
    total time: 180.0044s
    total number of events: 18811
    total time taken by event execution: 179.7827
    per-request statistics:
        min: 4.04ms
        avg: 9.56ms
        max: 616.04ms
        approx. 95 percentile: 38.42ms

db.m1.small RDS instance:

OLTP test statistics:
    queries performed:
        read: 188230
        write: 67225
        other: 26890
        total: 282345
    transactions: 13445 (74.67 per sec.)
    deadlocks: 0 (0.00 per sec.)
    read/write requests: 255455 (1418.74 per sec.)
    other operations: 26890 (149.34 per sec.)
Test execution summary:
    total time: 180.0573s
    total number of events: 13445
    total time taken by event execution: 179.9174
    per-request statistics:
        min: 9.08ms
        avg: 13.38ms
        max: 904.58ms
        approx. 95 percentile: 20.99ms

As you can see, the EC2 instance can perform 40% more transactions than the RDS instance. Nothing unexpected so far.
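The per-second figures sysbench prints are simply the transaction count divided by the total run time, so the 40% claim can be checked from the raw numbers above:

```shell
# Recompute the transaction rates from the sysbench output above.
EC2_TPS=$(awk 'BEGIN{printf "%.2f", 18811/180.0044}')
RDS_TPS=$(awk 'BEGIN{printf "%.2f", 13445/180.0573}')
echo "EC2: $EC2_TPS tps, RDS: $RDS_TPS tps"
# 104.50 / 74.67 is about 1.40, i.e. roughly 40% more transactions on EC2.
awk -v a="$EC2_TPS" -v b="$RDS_TPS" 'BEGIN{printf "ratio: %.2f\n", a/b}'
```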

Time to move on and increase the number of threads to 10

m1.small EC2 instance:

OLTP test statistics:
    queries performed:
        read: 264866
        write: 94545
        other: 37818
        total: 397229
    transactions: 18899 (104.97 per sec.)
    deadlocks: 20 (0.11 per sec.)
    read/write requests: 359411 (1996.22 per sec.)
    other operations: 37818 (210.05 per sec.)
Test execution summary:
    total time: 180.0462s
    total number of events: 18899
    total time taken by event execution: 1799.9289
    per-request statistics:
        min: 4.08ms
        avg: 95.24ms
        max: 2620.70ms
        approx. 95 percentile: 445.91ms

db.m1.small RDS instance:

OLTP test statistics:
    queries performed:
        read: 343812
        write: 122772
        other: 49109
        total: 515693
    transactions: 24551 (136.18 per sec.)
    deadlocks: 7 (0.04 per sec.)
    read/write requests: 466584 (2588.13 per sec.)
    other operations: 49109 (272.41 per sec.)
Test execution summary:
    total time: 180.2788s
    total number of events: 24551
    total time taken by event execution: 1801.8298
    per-request statistics:
        min: 13.41ms
        avg: 73.39ms
        max: 1126.02ms
        approx. 95 percentile: 143.83ms

In this test the small RDS instance is faster than the EC2 instance, 136 vs 105 transactions per second. I also benchmarked a large RDS instance (the next size up from db.m1.small) and it managed 185 transactions per second. Quite good, but the price is 4x higher.

The next test was performed against 10 million records with 16 threads. This time I only benchmarked the small and large RDS instances. The large instance managed 228 transactions per second, while the small one got a decent 127. One thing I noticed during this test is that the small instance started to use its swap, while the large one did not have this issue. This is probably because a 10M-record database is approximately 2.5GB and the small RDS instance only has 1.7GB of RAM.

So if you are planning to grow and want an easy way to do it, moving your database to its own RDS instance is one of the first things you should consider. One immediate effect is that CPU usage on the EC2 instance drops significantly, leaving more power for the web server. You can easily increase the size and capacity of the RDS instance with just a few clicks. Backups are done automatically, which is great considering how many times I have had to recover databases.

How to create a page for your company on Facebook

I decided to write this article after seeing the huge number of company pages created completely wrong, namely as personal pages. Just as in real life you have natural persons and legal entities, on Facebook you have personal pages and pages for companies (or artists, organizations, or anything else). Using a personal page to promote your company's image on Facebook is wrong for several reasons:

  • You can NOT be friends with a Nike shoe, but you can be a fan of Nike. Logical, right?
  • If you use a personal account you will have to accept every friend request, whereas with a page built for companies, interested people get a Like button and automatically appear in the fan list
  • Company pages are designed to display opening hours/website/etc. in a way that is easily accessible to visitors
  • Company pages can be promoted for free (via Suggest) or with paid ads
  • Company pages offer statistics about the number of visitors, etc.

The big problem stems from the fact that in order to have a company page you must have a personal Facebook account. Which is silly. Because of this, many people who want to create an account for their company mistakenly set up yet another personal account without realizing it.

Enough talk, let's get to work:

  1. you need a Facebook account and to be logged in
  2. click http://www.facebook.com/pages/create.php
  3. from the list shown, select the category that fits you best (Local Business if you have a shop or salon that sells something, Brand to promote a specific product, etc.)
  4. fill in the required fields, tick the “I agree to bla bla” box and press the “Get Started” button.

Now you can upload a picture (logo), suggest the newly created page to your friends, and do many other things you will discover on your own.
Welcome to Facebook!

PS.

mod_rewrite in action

After switching from Blogspot to my own WordPress blog I noticed a lot of 404s. They were caused by the URL change: on Blogspot the article URLs ended in an html extension, while on WordPress they don't. For example, a link that used to be http://blog.getasysadmin.com/some-article.html became http://blog.getasysadmin.com/some-article/.

The quick fix is mod_rewrite: simply edit the .htaccess in the root of the website and add this line after RewriteBase:

RewriteRule ^(.*)\.html$ $1/ [R=301,NC,L]

R=301 means redirect with a 301 code (moved permanently)
NC means no case, i.e. case-insensitive matching
L means last rule
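Outside Apache you can sanity-check what the rule does to a URL with sed, which performs an equivalent substitution (this is just an emulation of the rewrite, not how Apache applies it):

```shell
# Emulate the rewrite: strip a trailing .html and append a slash.
echo "http://blog.getasysadmin.com/some-article.html" | sed 's|\.html$|/|'
```

The output is http://blog.getasysadmin.com/some-article/, exactly the new WordPress-style URL.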

MySQL max_allowed_packet error

You are probably here because you tried to import a big database (several GB) and got the following error:

ERROR 1153 (08S01) at line 2533: Got a packet bigger than 'max_allowed_packet' bytes

If you have access to your MySQL server and SUPER privileges, things are easy: just log in to MySQL as the superuser and type this:

mysql> set global max_allowed_packet=64*1024*1024;

and then import the database normally, just adding "--max_allowed_packet=64M" to the parameter list. Example:

$ mysql --max_allowed_packet=64M database < database.sql

Everything is easy so far. But if you are using Amazon RDS you are out of luck. You set up a user when you create the instance, but of course it doesn't have the SUPER privilege, so the above command will fail. Not even "grant super on *.* to myuberuser" will help you, no no. So after some googling and reading a lot of crap I found this blog, which had the same error as mine. Yuppy! Thanks Henry!

The solution is to use DB Parameter Groups. Grab your mouse and start copy pasting fast.

Download Amazon RDS Command Line Toolkit
The latest version can be found here

wget http://s3.amazonaws.com/rds-downloads/RDSCli.zip
unzip RDSCli.zip
cd RDSCli-1.3.003 (the version will surely change, so make sure you cd to the right directory)
export AWS_RDS_HOME=`pwd`
export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk (this may vary depending on your Java location; you may not even need to set it)
cp credential-file-path.template credential-file
vi credential-file (set your AWS credentials there; use whatever text editor you like)
export AWS_CREDENTIAL_FILE=${AWS_RDS_HOME}/credential-file
cd bin
./rds --help

If everything went well you should get some output. On Henry's blog he suggests creating a parameter group. Well, the reality is you have to create one, since Amazon won't let you modify parameters inside the default group.

./rds-create-db-parameter-group mygroup -f MySQL5.1 -d "My group"
./rds-modify-db-parameter-group mygroup --parameters "name=max_allowed_packet,value=67108864,method=immediate"
./rds-modify-db-instance YOURINSTANCENAMEHERE --db-parameter-group-name mygroup
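The value 67108864 is simply the 64M from the SET GLOBAL command earlier, expressed in bytes, since the parameter group takes a raw byte count:

```shell
# 64M expressed in bytes, the value used in the parameter group above.
echo $((64*1024*1024))
```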

Go to the Amazon management console and check that the new parameter group is created and applied to your instance. You can now begin the import as you normally would; just add "--max_allowed_packet=64M" to your list of options.

Hope it helps!

Are you a pirate?

Most people have an aversion to risk, my college economics professor told me. Which means they have to be rewarded to take on that risk. The higher the risk, the higher the possible payout has to be for people to jump.

Once in a while I manage to find a good article, one I really like. Today I found it on TechCrunch. Michael Arrington makes an analogy between entrepreneurs and pirates 🙂 An interesting post, with nice comments too.

One more quote because I really enjoyed this one:

But at no point did I ever consider getting a “real job.” That felt like a black and white world, and I wanted technicolor.

Full article here : http://techcrunch.com/2010/10/31/are-you-a-pirate/.

Amazon announces Free Usage Tier

Starting from the 1st of November you will be able to run a micro instance on the Amazon cloud for free. A micro instance has 613 MB of memory, up to 2 ECU (for short bursts), a 32/64-bit platform, and access to a 10 GB EBS volume (also provided for free). You also get 5 GB of S3 (simple storage), which is great for saving images for example, and 30 GB of data transfer (15 GB in and 15 GB out).

I believe this is a great opportunity to see what AWS is about and to start developing applications for the cloud. You will probably want to use the EBS volume for MySQL storage and S3 for media files.

Make sure you read all about it on their site before you use it, and note that if you exceed the micro instance limits you are going to pay per use. Although the rates are low, it never hurts to double-check if you want to stay at $0/month.

Once the offer is available (starting from 1st November) I am going to start testing a lot of things, like how WordPress performs on this instance and how to install it, so make sure you visit my blog from time to time! If you are interested in installing certain software on the Amazon cloud platform, feel free to send me an email and I will see if I can help.