Cloud providers compared

Amazon offers EC2 (Elastic Compute Cloud) as IaaS (Infrastructure as a Service). Probably the best-known cloud provider, it offers a wide variety of instances, priced from 10 cents/hour up to 80 cents/hour.
Rackspace offers Cloud Servers, the equivalent of Amazon’s EC2. Prices start at 1.5 cents/hour for a 256 MB instance and go up to 96 cents/hour for their top instances.
GoGrid has, in its customers’ opinion, the best UI (User Interface). Its list prices seem a bit high, but with their prepaid plans prices drop to 10 cents/hour, similar to Amazon.

All three providers also offer file storage:
- Amazon offers S3 (15 cents/GB)
- Rackspace has Cloud Files (15 cents/GB)
- GoGrid has Cloud Storage, currently in beta (15 cents/GB)
Not much to say here: storage prices are identical.

But they also charge a fee for the requests you make against the files you store:
- S3 costs 1 cent per 1,000 PUT/POST/COPY/LIST requests and 1 cent per 10,000 GET requests (i.e. when a user requests an image). DELETE is free 🙂
- Rackspace costs 2 cents per 1,000 PUT/POST/LIST requests, and GET requests are FREE … which I believe is an advantage for mEgo.
- I couldn’t find any reliable information about GoGrid on this matter.

Another important aspect is the bandwidth cost:
- Amazon: 10c/17c for each GB uploaded/downloaded
- Rackspace: 8c/22c for each GB uploaded/downloaded
- GoGrid: with no plan you pay FREE/50c per GB, but for 200$ a month (the Transfer 1TB plan) downloads drop to 20c/GB.

Case study: two web servers, each with 2 GB of RAM, one database server with 8 GB of RAM, 500 GB of file storage, 400 GB inbound transfer and 1 TB outbound transfer.

Amazon:
2x c1.medium = 288$ (2 x 0.20$/h x 720h)
1x c1.xlarge = 576$ (this one has 7GB of RAM) (0.80$/h x 720h)
500GB file storage = 75$
400GB inbound = 40$
1TB outbound = 170$
Total: 1149$/month

Rackspace:
2x 2048MB = 172.8$ (2 x 0.12$/h x 720h)
1x 8192MB = 345.6$ (0.48$/h x 720h)
500 GB file storage = 75$
400GB inbound = 32$
1TB outbound = 220$
Total: 845.4$/month

GoGrid – for this one I will make use of their prepaid plans: Advanced Cloud (499$/month for 5,000 RAM-hours) and the Transfer 1TB plan (200$/month for 1TB outbound).
(2 x 2GB + 1 x 8GB) x 720h = 8,640 RAM-hours at a price of 0.10$/hour, which means 864$
500GB file storage = 75$
400GB inbound = 0$
1TB outbound = 200$
Total: 1139$/month
Later edit: see Michael’s comment for additional info regarding GoGrid.

More Apache benchmarks using EC2 instances

Back with more benchmarks … This time I’ve tested several EC2 instances, using the AMIs provided by Scalr for the application roles (app and app64). The site used for testing was mEgo.
c1.medium results:

Concurrency Level: 200
Time taken for tests: 311.715596 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 306810000 bytes
HTML transferred: 303610000 bytes
Requests per second: 32.08 [#/sec] (mean)
Time per request: 6234.312 [ms] (mean)
Time per request: 31.172 [ms] (mean, across all concurrent requests)
Transfer rate: 961.19 [Kbytes/sec] received

m1.large results:

Concurrency Level: 200
Time taken for tests: 420.241673 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 308313369 bytes
HTML transferred: 305097689 bytes
Requests per second: 23.80 [#/sec] (mean)
Time per request: 8404.834 [ms] (mean)
Time per request: 42.024 [ms] (mean, across all concurrent requests)
Transfer rate: 716.46 [Kbytes/sec] received

c1.xlarge results:

Concurrency Level: 200
Time taken for tests: 70.404865 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 306810000 bytes
HTML transferred: 303610000 bytes
Requests per second: 142.04 [#/sec] (mean)
Time per request: 1408.097 [ms] (mean)
Time per request: 7.040 [ms] (mean, across all concurrent requests)
Transfer rate: 4255.66 [Kbytes/sec] received

m1.xlarge results:

Concurrency Level: 200
Time taken for tests: 215.153753 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 308098602 bytes
HTML transferred: 304885162 bytes
Requests per second: 46.48 [#/sec] (mean)
Time per request: 4303.075 [ms] (mean)
Time per request: 21.515 [ms] (mean, across all concurrent requests)
Transfer rate: 1398.43 [Kbytes/sec] received

So, to sum things up:
c1.medium can serve about 32 requests per second for 0.20$ per hour.
m1.large serves less than c1.medium, around 24 requests per second, for 0.40$ per hour … not nice 🙁
m1.xlarge can serve 47 requests per second, but it will cost you 0.80$ per hour.
The champion is c1.xlarge: it can serve 142 requests per second at the same price as m1.xlarge, 0.80$ per hour.

I didn’t include m1.small benchmarks, but from a previous post I can tell you it only served 6 requests per second, not a worthy instance.

My advice would be to go with c1.medium instances: on raw requests per dollar c1.xlarge actually comes out slightly ahead (about 178 vs 160 requests/s per dollar-hour), but c1.medium lets you add capacity in much smaller steps. Do the math for yourself!

MySQL benchmarks using Amazon EC2 instances

Here are some tests I’ve run on Amazon using the AMIs provided by Scalr for the mysql role. I’ve used the benchmark scripts supplied by MySQL, located in /usr/share/mysql/sql-bench. I had to install a package before running the tests:

apt-get install libdbd-pg-perl

After that everything was simple:

root@ec2# mysql
mysql> create database test;
mysql> quit;
root@ec2# cd /usr/share/mysql/sql-bench
root@ec2# perl run-all-tests --dir='/root/'

For the EBS tests I’ve done the following:
-created a 1GB EBS volume in Scalr
-attached it to the instance I was testing
-noted the device name (/dev/sdb for example)

root@ec2# apt-get install xfsprogs
root@ec2# mkfs.xfs /dev/sdb
root@ec2# mkdir /mnt/storage
root@ec2# cp -R /var/lib/mysql /mnt/storage/
root@ec2# chown mysql:mysql -R /mnt/storage/mysql

-edit /etc/mysql/my.cnf and change datadir from “/var/lib/mysql” to “/mnt/storage/mysql”
-restart the MySQL server and start the tests:
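The datadir edit can also be done with a one-liner (a sketch assuming the stock Debian my.cnf, where datadir sits at the start of its line):

```shell
# Rewrite the datadir line to point MySQL at the EBS-backed copy
sed -i 's|^datadir.*|datadir = /mnt/storage/mysql|' /etc/mysql/my.cnf
grep '^datadir' /etc/mysql/my.cnf   # confirm the change
```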

root@ec2# /etc/init.d/mysql restart
root@ec2# mysql
mysql> drop database test;
mysql> create database test;
mysql> quit;
root@ec2# cd /usr/share/mysql/sql-bench
root@ec2# perl run-all-tests --dir='/root/'

Instance types used and their specs:

m1.small(0.10$/hour) – Small Instance (Default) 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of instance storage, 32-bit platform

m1.large(0.40$/hour) – Large Instance 7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB of instance storage, 64-bit platform

c1.medium(0.20$/hour) – High-CPU Medium Instance 1.7 GB of memory, 5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each), 350 GB of instance storage, 32-bit platform

c1.xlarge(0.80$/hour) – High-CPU Extra Large Instance 7 GB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform

EC2 Compute Unit (ECU) – One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.

instance       seconds  usr     sys    cpu     tests
m1.small       1823     196.54  28.66  225.2   3425950
m1.small+ebs   1646     197.18  29.61  226.79  3425950
m1.large       1072     157.06  26.97  184.03  3425950
m1.large+ebs   1088     154.23  25.23  179.46  3425950
c1.medium      902      131.18  25.63  156.81  3425950
c1.medium+ebs  901      130.76  28.84  159.6   3425950
c1.xlarge      704      123.31  32.8   156.11  3425950
c1.xlarge+ebs  781      121.02  29.52  150.54  3425950

Below you can see a nice chart of how much time it took each instance to finish the benchmark tests. Either I did something terribly wrong or EBS doesn’t improve MySQL performance.

Small benchmark using ab on ec2 instances

I’ve performed a few small benchmarks on EC2 recently, on m1.small and c1.medium, using ab (the Apache HTTP server benchmarking tool). The command used was:

ab -n 1000 -c 10 localhost/

-n is the number of requests
-c is the number of concurrent requests

I’ve used localhost to measure the performance of the instance without taking into consideration the bandwidth.
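When running lots of these by hand, it helps to grab just the throughput figure from ab’s report; a small sketch (the URL and request counts are just examples):

```shell
# Run ab and keep only the requests-per-second value from its report
rps=$(ab -n 1000 -c 10 http://localhost/ 2>/dev/null | awk '/Requests per second/ {print $4}')
echo "Requests/sec: $rps"
```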

The image used was ami-bac420d3 aka scalr app, 32 bit machine.

m1.small gave a very bad result: only 6-8 requests/second.
c1.medium gave a somewhat better result, but there is still a long way to go… 28-30 requests/second.
On a production server which already had traffic on it I get somewhere around 60 requests/second.

As you can see, m1.small is good only for playing around with the Amazon service, not for real work.

I know there are a lot of things that can be done to improve performance and so on, but just wanted to show you all some results.

Useful Firefox add-ons

This is a list of add-ons I use daily:
Firebug for web development
A must-have add-on for anyone doing web development. You can edit, debug, and monitor CSS, HTML, and JavaScript live in any web page…

Nagios Checker for server monitoring
A statusbar indicator for events from the Nagios network monitoring system. What else can be said?

GMail Notifier for email alerts
Since I use Gmail as the mail solution for my day-to-day work, this is a great add-on. It supports multiple accounts.

Delicious Bookmarks for keeping your bookmarks in one place
Since I use multiple desktops, laptops and other devices, I hate having separate bookmarks on each of them. Delicious is a great way to keep them in one place. You can keep bookmarks private if you want.

Zend Framework + SVN + ZF Tools on CentOS part 2

This is the 2nd part of my attempt to write a tutorial about using SVN and ZF to create a working environment for a small team of developers. It assumes you have followed the instructions provided in part 1.

The following notations will be used in this part:
project is the name of your project; wherever you see project written in italics, replace it with your actual project name. It should be one word.
developer is the name of a developer on the team working on this project, for example john. domain is the name of your domain; replace it with the real one.
A # in front of a line means you have to execute the command as root, while $ means you can be a normal user.

1. Create the repository for the project

# mkdir -pv /var/svn
# svnadmin create /var/svn/project

2. Create project layout

# cd /tmp
# mkdir project
# cd project
# mkdir branches tags trunk

If you want to create a standard zf project:

# cd trunk
# zf create project
# ls

ATTENTION: zf create project is a command; here project is part of the command, so do not replace the word.
You should have the standard structure now for a Zend Framework project.

3. Import the project files to repository

# svn import /tmp/project file:///var/svn/project -m "initial import"
# chown -R apache:apache /var/svn/project

4.1 Creating a user for the developer

# adduser -g users developer
# passwd developer

Repeat the above steps for each developer you want to add.
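With several developers this is quicker as a loop (the names below are hypothetical; passwd will still prompt interactively for each one):

```shell
# Create an account in the users group for every developer on the team
for dev in john jane mike; do
  adduser -g users "$dev"
  passwd "$dev"
done
```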

4.2 Creating a user for the project

# adduser project
# passwd project

5.1 Add a virtual host for each developer in the Apache conf file

You will have to figure out where your virtual hosts are defined in the Apache conf files. Most likely you can add the following lines to /etc/httpd/conf/httpd.conf:

# developer sandbox
<VirtualHost *:80>
ServerName developer.domain
DocumentRoot /home/developer/www
ErrorLog /home/developer/logs/error_log
CustomLog /home/developer/logs/access_log combined
<Directory "/home/developer/www/">
Options Indexes FollowSymLinks
AllowOverride All
Order allow,deny
Allow from all
</Directory>
</VirtualHost>

5.2 Add a virtual host for the project

# project sandbox
<VirtualHost *:80>
ServerName project.domain
DocumentRoot /home/project/www
ErrorLog /home/project/logs/error_log
CustomLog /home/project/logs/access_log combined
CustomLog /home/project/logs/svn_logfile "%t %u %{SVN-ACTION}e" env=SVN-ACTION
<Directory "/home/project/www/">
Options -Indexes FollowSymLinks
AllowOverride All
Order allow,deny
Allow from all
</Directory>
<Location /svn>
Options +Indexes
DAV svn
SVNParentPath /var/svn
SVNPathAuthz off
SVNIndexXSLT "/svnindex.xsl"
Require valid-user
AuthType Basic
AuthName "Subversion repository"
AuthUserFile /var/svn/project/conf/passwd
</Location>
</VirtualHost>

* Depending on your DNS settings you may have to manually add the records needed for the developer and project hostnames to work properly.

7.1 Checking out to dev boxes

# su - developer
$ mkdir svn
$ cd svn
$ svn checkout .
$ cd ..
$ rm -rf www
$ ln -s /home/developer/svn/public www

7.2 Exporting the latest version of the project

# su - project
$ mkdir svn
$ cd svn
$ svn export . --force
$ cd ..
$ rm -rf www
$ ln -s /home/project/svn/public www

See the project page at the project’s domain.

Next time you want to update the live copy, remove the svn directory and re-export it as above.

8.1 Working as a developer

To update your dev box to the latest version:

$ cd svn
$ svn up

Whenever you add a NEW file/directory to the project use:

$ svn add filename

Of course, replace filename with the real name of the file. The reverse of this is svn del.

When you are satisfied with your changes don’t forget to commit:

$ svn commit -m "something meaningful for that idiot project manager"

8.2 Working as a project manager(?)

$ su - project
$ rm -rf svn

Repeat the steps from 7.2
Check the logs for svn commits in /home/project/logs/svn_logfile.

Scalr errors after install

After installing Scalr and adding a client I’ve tried to add an application to test out my setup. But at the second step I got an alert saying:

Error Type: LoadXML
Description: Incorrect XML

A quick look at the apache log revealed the problem:

File does not exist: /var/scalr/app/www/farm_amis.xml

I thought that maybe I had missed a file, so I did an svn checkout of the Scalr repository and tried to find the specified file:

apt-get install subversion
svn checkout scalr-read-only
find scalr-read-only -name farm_amis.xml -type f

Nothing came up. Weird. After a bit of looking around I found a file called farm_amis_xml.php. So in fact farm_amis.xml was only a mod_rewrite alias.
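In other words, requests for names like farm_amis.xml get rewritten to the matching *_xml.php script. Something of roughly this shape would do it (my reconstruction of the idea, not Scalr’s actual rule):

```apache
# Map foo.xml requests onto foo_xml.php
RewriteEngine On
RewriteRule ^(.+)\.xml$ $1_xml.php [L]
```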

Time to fix it: enable mod_rewrite and .htaccess files for apache2.

Edit /etc/apache2/sites-available/000-default and change the lines containing

AllowOverride None

to

AllowOverride All

Go to /etc/apache2/mods-enabled and execute the following command:

ln -s ../mods-available/rewrite.load

Restart apache2 server and everything should be ok:

/etc/init.d/apache2 restart

How to install Scalr on Ubuntu 8.10 EC2 Instance


If Amazon EC2 doesn’t ring a bell, chances are you are looking at the wrong page for solutions to your problems. EC2 stands for Elastic Compute Cloud and it’s a service offered by Amazon. I will not go into details about the advantages of using it, since that is not the scope of this post; you can read more about it on Amazon’s site.

Scalr is a fully redundant, self-curing and self-scaling hosting environment built on Amazon’s EC2. You can basically build farms of Amazon instances that do load balancing using nginx, serve web pages using Apache 2, run MySQL master-slave servers, or fill roles you define yourself.

The beauty of this is that you don’t have to monitor the health of your server infrastructure; Scalr will do it for you. If a node type gets overloaded, Scalr will launch another instance to spread the load and the cluster will be reconfigured.


Generate a new key pair for the Scalr instance:

ec2-add-keypair scalr-keypair > id_rsa-scalr-keypair

Edit id_rsa-scalr-keypair so it begins with

-----BEGIN RSA PRIVATE KEY-----

and is terminated with

-----END RSA PRIVATE KEY-----

(ec2-add-keypair also prints a KEYPAIR header line; strip everything outside the BEGIN/END block.)
Make sure you have the correct permissions for this key:

chmod 600 id_rsa-scalr-keypair

If everything went OK you should see your new key when executing:

ec2-describe-keypairs
Choosing the right AMI:

For the instance we will be using ami-7806e211, an AMI containing a base install of the Ubuntu 8.10 Intrepid Ibex release.

Start the instance:

ec2-run-instances -z us-east-1a -k scalr-keypair ami-7806e211

You will get some output; look for the line that begins with INSTANCE and write down the id of the instance (i-XXXXXXXX) and the public address of the instance. The status of your instance should be pending.

Check in a couple of minutes the status of your instance:

ec2-describe-instances i-XXXXXXXX

When the status is running, your instance is ready for work. You should have at least the ssh and web ports open (22 and 80). If you are not sure, execute the following commands:

ec2-authorize default -p 22
ec2-authorize default -p 80

Now connect to your instance using ssh:

ssh -v -i id_rsa-scalr-keypair root@<your instance address>

The first time you connect you will be asked:

Are you sure you want to continue connecting (yes/no)?

Type yes and you should be the happy owner of a fresh Ubuntu Intrepid Ibex instance.

Update your system now:

apt-get update
apt-get upgrade

After the update is completed, log out and reboot your instance:

ec2-reboot-instances i-XXXXXXXX

Installing required software:

Reconnect to your instance and install MySQL server and php extensions:

apt-get install bind9 mysql-server mysql-client apache2 php5-cli libapache2-mod-php5 php5-mysql php5-mcrypt php5-mhash

When you install the MySQL server you will be prompted to set a password for the root account. Don’t forget it; you will need it. You will also have to restart the Apache2 server after you finish installing everything, like this:

/etc/init.d/apache2 restart

You could also download Scalr’s checkenvironment.php script, which checks whether your system has all the prerequisites, and put it in the web root:

mv checkenvironment.php /var/www/
chmod a+r /var/www/checkenvironment.php

Now point your browser to http://<your instance address>/checkenvironment.php and see if everything is OK.

Most likely you will get only these 2 errors:

• Cannot find SSH2 functions. Make sure that SSH2 Functions enabled.
• Cannot find SNMP functions. Make sure that SNMP Functions enabled.

Here is how to quickly fix them.
Adding SSH2 support to PHP5, better known as: why don’t we have php5-ssh2?

apt-get install php5-dev php-pear libssh2-1 libssh2-1-dev

Thought it would be easy? Not so quick. Try to install it with:

pecl install ssh2 channel://pecl.php.net/ssh2-0.10

I got an error saying:

ERROR: `make' failed

Great! Let’s fix that stupid error. Edit the file /tmp/pear/download/ssh2-0.10/ssh2.c and replace the line containing:

#if LIBSSH2_APINO < 200412301450

with

#if false

Go to the directory /tmp/pear/download/ssh2-0.10/ and compile the stuff manually:

make && make install
echo "extension=ssh2.so" >> /etc/php5/apache2/php.ini

I don’t get why they don’t fix this thing. A lot of people are having this problem and complaining!

Luckily for you and me, SNMP is a breeze; it is already in the repositories:

apt-get install php5-snmp

Restart the Apache server and check again that you have all the required extensions for Scalr. You should have them now.

Getting the latest version of Scalr:

At the time of writing this article the latest version was 1.0 RC2.
Go to the Scalr download page and copy the link to the latest release. Download it with wget and extract it:

tar zxvf scalr-1.0RC2.tar.gz

Create database for scalr and import the sql:

mysqladmin -p create scalr
mysql -p scalr < scalr/sql/database.sql

Put the Scalr application in /var/scalr and change permissions as suggested in Scalr’s spartan documentation:

mkdir /var/scalr && cp -R scalr/* /var/scalr/
chmod 777 -R  /var/scalr/app/cache /var/scalr/app/cron/ /var/scalr/app/etc/.passwd

ATTENTION: I’m not planning on using this instance for anything else except Scalr. Also, this is a clean install, so I don’t have anything of interest in /var/www. Read the following first line carefully:

rm -rf /var/www/
ln -sf /var/scalr/app/www /var/
chmod a+rX -R /var/www

Edit the file /var/scalr/app/etc/config.ini and update it to your values:

driver=mysql ;Actually mysql is the only option here - mysqli doesn't support pconnect(), which is essential for PCNTL (which is essential for cronjobs)
host = "localhost"
name = "scalr"
user = "root"
pass = "*YOUR PASS HERE*"

TO DO: make a mysql user for scalr.
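The TODO above, a dedicated MySQL account instead of root, would look something like this (user name and password are placeholders; update config.ini to match):

```shell
# Create a MySQL user with rights only on the scalr database
mysql -u root -p <<'SQL'
GRANT ALL PRIVILEGES ON scalr.* TO 'scalr'@'localhost' IDENTIFIED BY 'CHANGE_ME';
FLUSH PRIVILEGES;
SQL
```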

Put your EC2 access certificate into /var/scalr/app/etc/cert-XXXXXXXXXXXX.pem
Put your EC2 private key into /var/scalr/app/etc/pk-XXXXXXXXXXXX.pem
ATTENTION: This part is a bit tricky. If you don’t put in the right settings you will not be able to start instances. I warned you!
Login to Amazon AWS and go to Home->Your Account->Access Identifiers
Go to http://<your instance address>/ and log in with admin/admin.
Go to Settings->Core Settings and modify the following fields:
Password: duh!!! change it!
Email: your email address here
Account ID: the AWS Account Number (called Account Number in AWS, top right). Remove the ‘-‘ characters from the number.
Key Name: scroll down in AWS until you see Your X.509 Certificate: and copy everything between ‘cert-‘ and ‘.pem’. This XXXXXXXXXXXXXX string also appears in the filenames of cert-XXXXXXXXXXXXX.pem and pk-XXXXXXXXXXX.pem; if they don’t match you will have problems.
Access Key: look for Your Secret Access Key: in AWS and click Show, then copy the string.
Access Key ID: is Your Access Key ID: in AWS.

Hit save.

Setting up crontab:

Type crontab -e and add the following lines to cron:

* * * * * /usr/bin/php -q /var/scalr/app/cron/cron.php --Poller
1 1 * * * /usr/bin/php -q /var/scalr/app/cron/cron.php --RotateLogs
*/15 * * * * /usr/bin/php -q /var/scalr/app/cron/cron.php --MySQLMaintenance
*/6 * * * * /usr/bin/php -q /var/scalr/app/cron/cron.php --DNSMaintenance
*/3 * * * * /usr/bin/php -q /var/scalr/app/cron/cron.php --DNSZoneListUpdate
*/2 * * * * /usr/bin/php -q /var/scalr/app/cron/cron.php --DBQueueEvent
*/11 * * * * /usr/bin/php -q /var/scalr/app/cron/cron.php --Cleaner

You are done. I hope.

How to terminate the instance:

I thought I’d write down instructions on how to terminate an instance. You should already know how, but just in case, here is how to stop the instance forever and not pay for it anymore. ATTENTION: terminate really deletes the instance; there is no way to reconnect to it or recover it. Double-check which instance you terminate!

ec2-terminate-instances i-XXXXXXXX

Zend Framework + SVN + ZF Tools on CentOS

This first part focuses on installing svn + the ZF library + the ZF tools on your dev server. In the second part (coming soon) I will show you how to create an svn repository and import into it a simple ZF project created with the ZF tools.

You will need at least a working web server (Apache2) and PHP version 5.

I’ll be using the utterramblings repository to install subversion and the required packages for the Apache server.

Import the GPG key for the utterramblings repository:

rpm --import <utterramblings GPG key URL>

Add the repository to yum by creating a repo file /etc/yum.repos.d/utterramblings.repo that contains the following lines:

[utterramblings]
name=Jason’s Utter Ramblings Repo
baseurl=<utterramblings repository URL>
enabled=1
gpgcheck=1

Install subversion and mod_dav_svn from utterramblings:

yum install subversion --enablerepo=utterramblings
yum install mod_dav_svn --enablerepo=utterramblings

You should have everything you need to start working with svn on your server.

Now let’s install ZF tools to the server:

mkdir ZF_Tool
cd ZF_Tool/
svn checkout .

Now copy the directory ‘library/ZendL’ to a place that’s in the include path of your php. In my case it was ‘/usr/share/php’.

Copy ‘bin/zf.sh’ and ‘bin/zf.php’ to /bin and edit zf.sh, updating the ZF_BIN_PHP variable to point at your PHP binary:

ZF_BIN_PHP=/usr/bin/php
Don’t forget to change their permissions so anyone can use them:

chmod a+rx /bin/zf*

For the ZF library I’ve used the minimal package, since it contains most of the stuff I use anyway without being bloated. At the time of writing this article 1.6 was the latest version, which I downloaded from their site with wget and extracted:

tar zxvf ZendFramework-1.6.2-minimal.tar.gz

Copy the directory ‘library/Zend’ to the same place where you’ve put ZendL directory (‘/usr/share/php’ for me).

Now, if everything went OK, when you type ‘zf show version’ at the CLI you should get something like ‘Zend Framework Version: 1.6.2’.

Congratulations, you are done with this part. If you want, you can play around with ‘zf create project’ until I publish my next article showing how to use the ZF tools + SVN together to create the base of a project.

Updating to Ubuntu Intrepid Ibex

Last weekend I decided to update my netbook, an MSI Wind U100 clone labeled Advent, to the latest Ubuntu version (Intrepid Ibex). While in theory everything should have been simple, in reality I had a ton of problems.

So let’s start with updating from Ubuntu 8.04 to Ibex. First you should back up your home directory to a USB stick. Also copy xorg.conf to a safe place in case things go wrong.

Start a terminal in graphic mode (press alt+f2 and type xterm). Become root:

sudo su

Type your password and when you get the root prompt (#) type:

update-manager -d

You should get a new window offering the distribution upgrade.

Press “Upgrade” and answer the questions. That was the easy part; after update-manager finished running I should have been a happy user running Ubuntu Intrepid, right? Wrong!

I went out on the town to drink some beer, since it was the weekend, and left update-manager to do its job. When I got back home, surprise: I hadn’t plugged the netbook into a power source, the battery was empty, and the netbook was off. Nice … I pressed the power button and crossed my fingers.

After 3-4 minutes reality unveiled itself: the upgrade hadn’t finished OK, so the netbook wasn’t entering graphic mode and I got a ton of failed services and errors. A quick look showed that the file system was mounted read-only. Time to repair the mess, as I didn’t want to reinstall the whole system, because I’m a lazy person.

Recovering after disaster

Ctrl+Alt+F1 to get to the first text console. Log in with your username and password. Become root:

sudo su

As root, remount the file system read-write so it can be used:

mount -o remount,rw /

If you have more partitions, do the same for each of them (replace / with their mount points).
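If there are several partitions, they can all be remounted read-write in one go (a sketch that pulls mount points from /etc/fstab; eyeball the list before trusting it):

```shell
# Remount every non-comment, non-swap fstab entry read-write
awk '$1 !~ /^#/ && $3 != "swap" {print $2}' /etc/fstab | while read -r mnt; do
  mount -o remount,rw "$mnt"
done
```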
Switch to run level 3:

telinit 3

When you are running in run level 3 type:

dpkg --configure -a

This should restart/fix the upgrade process from where it stopped. It will ask whether you want to replace your custom config files with new ones; I answered No. When it’s done, restart the netbook (use telinit 6). Make it shut down even if you have to switch it off from the power button.

When it’s online again you should have a working graphic mode. If not, try replacing your xorg.conf with your backup and restart gdm with:

/etc/init.d/gdm restart

You probably won’t have wireless working, so plug in a network cable and manually reconfigure your NIC:

sudo ifconfig eth0 192.168.0.10 netmask 255.255.255.0
sudo route add default gw 192.168.0.1

In the above example 192.168.0.10 is the netbook’s IP address, 255.255.255.0 is the netmask (equivalent to a /24) and 192.168.0.1 is the gateway (the IP of the router).

Run the following commands as root to fix all the missing dependencies:

apt-get update
apt-get -f install
apt-get upgrade

Restart the system and everything should be ok now.