Category Archives: Debian

All things Debian.

Slow backup on Proxmox using vzdump

I was recently attempting to back up one of our Proxmox VEs using OpenVZ’s backup tool ‘vzdump’. In the past, a complete backup of a 100GB VE, for example, could be obtained in under an hour or so. This time, however, after leaving the process running and returning several hours later, the .tar file was a mere 2.3GB in size.

At first I thought that there might be an issue with one or more nodes in the shared storage cluster, so I decided to direct vzdump to store the .tar file on one of the server’s local partitions instead. Once again I started the backup and returned several hours later, only to find a file similar in size to the previous one.

Next I decided to ‘tar up’ the contents of the VE manually; combined with the ‘nohup’ command, this would allow me to find out at what point the whole process was stalling.
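The approach above can be sketched roughly as follows (the backup path and container ID here are hypothetical, not from the original post):

```shell
# Tar the VE's private area in the background; 'nohup' keeps the job
# alive if the session drops, and the log shows which file tar is on.
nohup tar -cvf /backup/ve100.tar /var/lib/vz/private/100 > /backup/ve100.log 2>&1 &

# Watch the log to see where the process stalls
tail -f /backup/ve100.log
```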

As it turns out, I had thousands of files in my ‘/var/spool/postfix/incoming/’ directory on that VE, and although almost every single file in that directory was small, and the overall directory size was not large at all, the result was that file operations inside that folder had come to a screeching halt.
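A quick way to check whether a directory has accumulated an unwieldy number of entries (the path is the one from this post; ‘find’ is used instead of ‘ls’ because it does not sort or stat every file first):

```shell
# Count regular files in the directory without listing them all
find /var/spool/postfix/incoming -maxdepth 1 -type f | wc -l
```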

Luckily for me, I knew for a fact that we did not need any of these particular email messages, so I was simply able to delete the ‘incoming’ folder and then recreate it once all the files had been removed. After that, vzdump was once again functioning as expected.

Connecting to MSSQL database servers using PHP on Linux

I recently had the pleasure(!) of trying to get PHP on Debian working correctly with a Microsoft SQL Server so that data could be migrated from an MSSQL instance into a MySQL one.

Prior to this attempt, the developers were using a Windows machine as a ‘broker’ between the two databases. This setup was much too slow for importing and exporting large amounts of data, so we decided to cut out the middleman (the Windows machine) and do all the processing on a single server.

First I needed to install a few prerequisite packages:

apt-get install unixodbc-dev libmysqlclient15-dev

Next we need to download and uncompress the FreeTDS source code:
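Something along these lines should work (the FreeTDS mirror URL and the use of the rolling ‘stable’ tarball are assumptions; substitute the release you actually need):

```shell
# Fetch and unpack the FreeTDS sources (URL/version are assumptions)
wget ftp://ftp.freetds.org/pub/freetds/stable/freetds-stable.tgz
tar -xzf freetds-stable.tgz
# Trailing slash makes the glob match only the extracted directory
cd freetds-*/
```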


Next we use configure and install FreeTDS with the following options:

./configure --enable-msdblib --prefix=/usr/local/freetds --with-tdsver=7.0 --with-unixodbc=/usr
make install

Next we need to download and uncompress the PHP source code:
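For example (the PHP version below is purely illustrative; download whichever release you are targeting):

```shell
# Fetch and unpack the PHP sources (version is an assumption)
wget http://museum.php.net/php5/php-5.2.17.tar.gz
tar -xzf php-5.2.17.tar.gz
cd php-5.2.17
```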


Next we use configure and install PHP with the following options:

./configure --with-mssql=/usr/local/freetds --with-mysql --with-mysqli
make install

Lastly, we will need to build and install the mssql module for PHP (running phpize first generates the module’s configure script):

cd ext/mssql
phpize
./configure --with-mssql=/usr/local/freetds
make install

Now you should be able to connect to any Microsoft SQL (and MySQL) server from PHP using the mssql_* functions found here.

Proxmox 2.0 feature list

Martin Maurer sent an email to the Proxmox users mailing list detailing some of the features that we can expect from the next iteration of Proxmox VE. Martin expects that the first public beta release of the 2.x branch will be ready for use sometime around the second quarter of this year.

Here are some of the highlights currently slated for this release:

  • Completely new GUI
    • based on the Ext JS 4 JavaScript framework
    • fast search-driven interface, capable of handling hundreds and probably thousands of VMs
    • secure VNC console, supporting external VNC viewers with SSL support
    • role-based permission management for all objects (VMs, storages, nodes, etc.)
    • support for multiple authentication sources (e.g. local, MS ADS, LDAP, …)
  • Based on Debian 6.0 Squeeze
    • long-term 2.6.32 kernel with KVM and OpenVZ as default
    • second kernel branch with 2.6.x, KVM only
  • New cluster communication based on corosync, including:
    • Proxmox Cluster file system (pmcfs): database-driven file system for storing configuration files, replicated in real time on all nodes using corosync
    • creates multi-master clusters (no single master anymore!)
    • cluster-wide logging
    • basis for HA setups with KVM guests
  • RESTful web API
    • Resource Oriented Architecture (ROA)
    • declarative API definition using JSON Schema
    • easy integration for third-party management tools
  • Planned technology previews (CLI only)
    • SPICE protocol (remote display system for virtualized desktops)
    • Sheepdog (distributed storage system)
  • Commitment to Free Software (FOSS): public code repository and bug tracker for the 2.x code base
  • Topics for future releases
    • better resource monitoring
    • I/O limits for VMs
    • expanded pre-built Virtual Appliance downloads, including KVM appliances

Native Linux ZFS kernel module goes GA

UPDATE: If you are interested in ZFS on Linux you have two options at this point:

I have been actively following the zfsonlinux project because, once it is stable and ready, it should offer superior performance by avoiding the extra overhead that the zfs-fuse project incurs by going through FUSE.

You can read more about using zfsonlinux in another one of my posts here.

Earlier this week KQInfotech released the latest build of their ZFS kernel modules for Linux. This version has been labeled GA and is ready for wider testing (and maybe ready for production).

KQStor has been set up as a place where you can go to sign up for an account, download the software, and get additional support.

The source code for the module can be found here:

Currently, mounting of the root filesystem is not supported; however, a post here describes a procedure that can be used to do it.

The user’s guide also hints at possible problems using ‘zfs rollback’ under certain circumstances. I have asked for more specific information on this issue, and I will pass along any other information I can uncover.

After looking around the various mailing lists, this looks like it might be an issue that exists in zfs-fuse, and thus in the current version of the kernel module as well, since they share a lot of the same code.

Installation and usage:

Installation of the module is fairly simple. I downloaded the pre-packaged .deb packages for Ubuntu 10.10 server:

root@server1:/root/Deb_Package_Ubuntu10.10_2.6.35-22-server# dpkg -i *.deb

If all goes well you should be able to list the loaded modules:

root@server1:/root/Deb_Package_Ubuntu10.10_2.6.35-22-server# lsmod |grep zfs
lzfs                   36377  3
zfs                   968234  1 lzfs
zcommon                42172  1 zfs
znvpair                47541  2 zfs,zcommon
zavl                    6915  1 zfs
zlib_deflate           21866  1 zfs
zunicode              323430  1 zfs
spl                   116684  6 lzfs,zfs,zcommon,znvpair,zavl,zunicode

Now I can create a test pool:

root@server1:/root# zpool create test-mirror mirror sdc sdd

Now check the status of the zpool:

root@server1:/root# zpool status
  pool: test-mirror
 state: ONLINE
  scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        test-mirror  ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            sdc1     ONLINE       0     0     0
            sdd1     ONLINE       0     0     0

Modifying ethernet interface order in Debian

Sometimes after a fresh Debian Etch install (I am not sure if this is fixed yet in Lenny or not), the order of your ethernet interfaces will be incorrect. You may also be in a position where you have more than two NIC cards and wish to swap eth0 and eth1 with eth2 and eth3, for consistency purposes for example.

In order to do so, you’ll need to make a change to the udev configuration file that controls which interfaces receive which names. You need to edit the following file:
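On Etch the file in question is typically /etc/udev/rules.d/z25_persistent-net.rules (the exact filename is an assumption and varies by release; later releases use 70-persistent-net.rules). Its entries look something like this, with made-up MAC addresses for illustration:

```shell
# /etc/udev/rules.d/z25_persistent-net.rules (illustrative; MACs are fake,
# and the match-key syntax varies with udev version, e.g. older udev used
# SYSFS{address} instead of ATTR{address})
SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:16:3e:aa:bb:01", NAME="eth0"
SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:16:3e:aa:bb:02", NAME="eth1"
```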


Simply make sure that you match each proposed interface name with the correct MAC address.

Then go ahead and restart the server, and the interfaces should come up with the correct labels.

Poor LSI SAS1068E Write Performance with Linux

While doing research into poor write performance with Oracle, I discovered that the server was using the LSI SAS1068E. We had a RAID1 setup with 300GB 10K RPM SAS drives. Google provided some possible insight into why the write performance was so bad (1, 2). The main problem with this card is that it has no battery-backed write cache, which means that the write cache is disabled by default. I was able to turn on the write cache using the LSI utility.

This change, however, did not seem to make any difference in performance. At this point I came to the conclusion that the card itself was to blame. I believe that this is an inexpensive RAID card that is fine for general RAID0 and RAID1 use; however, for anything where write throughput is important, it might be better to spring for something a little more expensive.
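For a rough before/after comparison of write throughput, a simple dd run that flushes the data to disk before reporting can be used (the file path and sizes here are arbitrary, not from the original tests):

```shell
# Write 64 MB and force it to disk before dd reports the throughput;
# run once with the write cache off and once with it on to compare.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
rm /tmp/ddtest
```

conv=fdatasync makes dd call fdatasync() before exiting, so the reported rate includes the time to get the data onto the disk rather than just into the page cache.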

When it was all said and done, we ended up replacing all of these LSI cards with Dell PERC 6/i cards. These cards did come battery-backed, which allowed us to enable the write cache; needless to say, the performance improved significantly.