Getting Pydio to access a Ceph S3 backend

I’ve been experimenting with Ceph lately and wanted to hook up a web-based front end. A quick Google search yielded ownCloud and Pydio. Since ownCloud advertises S3 backend support only for the Pro version, I decided to give Pydio a go.

Unfortunately this was a bit fraught with difficulties, so I just wanted to document the various errors here in case someone else runs into them.
Note that these are just some quick steps to get up and running; you should review file permissions and access rights when installing this on a production server!

The following steps were all executed on an Ubuntu 16.04.1 installation, which ships with PHP 7.0 and Apache 2.4.18.
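
If you are starting from a bare server, something along these lines should get the stack in place (the package list is my guess for 16.04; Pydio's diagnostic page will tell you if a PHP extension is still missing):

sudo apt-get update
sudo apt-get install apache2 php libapache2-mod-php php-xml php-mbstring php-gd php-curl php-intl php-sqlite3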

Installing Pydio

Since the community edition only has trusty packages as far as I could find, I downloaded the tar archive (6.4.2 was the current version at this point) and installed a new site into Apache:

wget https://download.pydio.com/pub/core/archives/pydio-core-6.4.2.tar.gz
tar -xzf pydio-core-6.4.2.tar.gz
sudo mkdir /var/www/pydio
sudo mv pydio-core-6.4.2/* /var/www/pydio
sudo chown -R root:root /var/www/pydio/
sudo chown -R www-data:www-data /var/www/pydio/data

Creating an apache config

sudo vim /etc/apache2/sites-available/pydio.conf

Put this as content:

Alias /pydio "/var/www/pydio/"
<Directory "/var/www/pydio">
  Options +FollowSymLinks
  AllowOverride All
  # On Apache 2.4 access has to be explicitly allowed; Ubuntu's default config
  # already grants this for /var/www, so this is mostly a safety net
  Require all granted

  SetEnv HOME /var/www/pydio
  SetEnv HTTP_HOME /var/www/pydio
</Directory>

Make the site available and restart Apache

cd /etc/apache2/sites-enabled
sudo ln -s ../sites-available/pydio.conf pydio.conf
sudo service apache2 restart

At this stage you should be able to access your Pydio install in a browser via http://serverip/pydio
Pydio has an install wizard which will guide you through setting up an admin user and the database backend (for testing you could just go with SQLite; otherwise you will have to set up a Postgres or MySQL database and an associated pydio user).
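
If you go the MySQL route, creating the database and user is the usual routine; something like the following should do (database name, user and password here are just examples):

sudo mysql -u root -p
mysql> CREATE DATABASE pydio;
mysql> CREATE USER 'pydio'@'localhost' IDENTIFIED BY 'secret';
mysql> GRANT ALL PRIVILEGES ON pydio.* TO 'pydio'@'localhost';
mysql> FLUSH PRIVILEGES;
mysql> \q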

Hooking up to the Ceph S3 backend

Pydio organizes files into workspaces and there is a plugin for an S3 backed workspace which ships out of the box.
So the next step is to log into Pydio as the admin user and make sure the access.s3 plugin is activated. You will probably see an error complaining about the AWS SDK not being installed, so that needs to happen first:

cd /var/www/pydio/plugins/access.s3
sudo -u www-data wget http://docs.aws.amazon.com/aws-sdk-php/v2/download/aws.phar

Since radosgw (the S3 interface into Ceph) only supports v2 signatures (at this time 10.2.2 Jewel was current) you cannot use the v3 SDK.
Now the plugin should be showing status OK. Double-click it and make sure it uses SDK Version 2.
The next step is to create a new workspace, selecting the S3 backend as the storage driver.

  • For Key and Secret Key use the ones created for your radosgw S3 user (see the example command after this list for creating one)
  • For Region use US Standard (not sure if it really matters)
  • Container is the bucket you want all the files for this workspace to be stored in. Pydio won’t create the bucket for you so you’ll have to create it with another S3 capable client
  • Signature version set to Version 2 and API Version to 2006-03-01
  • Custom Storage is where you can point to your local radosgw instance, Storage URL is the setting you need for that. You should put in the full URL including protocol, e.g. http://radosgw-server-ip:7480/ (assuming you’re running radosgw on the default port which is 7480 with Jewel release)
  • I’ve disabled the Virtual Host Syntax as well since I’m not sure yet how to make this work.
  • Everything else I’ve left on default settings.
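
Creating an S3 user on the radosgw side is a one-liner (the uid and display name below are just examples); the command output includes the access key and secret key to plug into the workspace settings:

radosgw-admin user create --uid=pydio --display-name="Pydio workspace"

The bucket itself has to be created with another S3-capable client; with s3cmd something like this should work (flags are from memory, adjust host and keys to your gateway):

s3cmd --access_key=<key> --secret_key=<secret> --host=radosgw-server-ip:7480 --host-bucket=radosgw-server-ip:7480 --no-ssl mb s3://pydio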

Now the fun begins. Here is the first error message I encountered when trying to access the new workspace:

Argument 1 passed to Aws\S3\S3Md5Listener::__construct() must implement interface Aws\Common\Signature\SignatureInterface, string given

Some quick googling suggested a client written for SDK v3 was trying to use SDK v2, so I started trialling all the combinations of plugin settings and SDKs, but mostly I just got HTTP 500 errors which left no trace in any of the log files I could find.
Another error I encountered during my experiments was:

Missing required client configuration options:   version: (string)
A "version" configuration value is required. Specifying a version constraint
ensures that your code will not be affected by a breaking change made to the
service. For example, when using Amazon S3, you can lock your API version to
"2006-03-01".
Your build of the SDK has the following version(s) of "s3": * "2006-03-01"
You may provide "latest" to the "version" configuration value to utilize the
most recent available API version that your client's API provider can find.
Note: Using 'latest' in a production application is not recommended.
A list of available API versions can be found on each client's API documentation
page: http://docs.aws.amazon.com/aws-sdk-php/v3/api/index.html.
If you are unable to load a specific API version, then you may need to update
your copy of the SDK

I downgraded to PHP 5.6 to rule out any weird 7.0 incompatibilities, which got me a little bit further, so for a while I thought that was the problem, but ultimately it boiled down to the way the backend configures the S3 client. In /var/www/pydio/plugins/access.s3/class.s3AccessWrapper.php, changing

if (!empty($signatureVersion)) {
    $options['signature'] = $signatureVersion;
}

to

if (!empty($signatureVersion)) {
    $options['signature_version'] = $signatureVersion;
}

kicked everything into life. Not sure if that’s due to a recent change in the v2 SDK (2.8.31 was current at this point) or something else. Looking through the Pydio forums it seems like they tested access to a Ceph S3 backend successfully, so who knows.

Next up: trying to make it connect to a gateway with a self-signed SSL certificate.

Migrating reviewboard from MySQL to PostgreSQL

This is for Ubuntu 12.04, it may vary slightly for other distributions.

  1. Install postgres and libpq-dev (required for django backend)
    sudo apt-get install postgresql libpq-dev
  2. Install psycopg2
    sudo easy_install psycopg2
  3. Create the reviewboard database in postgres and a user with access to it.
    sudo su postgres -c psql
    postgres# CREATE ROLE myuser WITH LOGIN SUPERUSER;
    postgres# CREATE DATABASE reviewboard WITH OWNER myuser;
    postgres# ALTER ROLE myuser WITH PASSWORD 'secret';
    postgres# \q
  4. Stop apache and any other service which might modify the original database
    sudo service apache2 stop
    sudo service mysql stop

    Note that stopping the mysql daemon might be a little bit drastic as it will affect all databases running on that server. In my case reviewboard was the only database, so I did it as a precaution.

  5. Dump the original reviewboard database (from MySQL)
    sudo rb-site manage /var/www/yourcodereviewsite dumpdb > reviewboard.dump

    Note that this can take several hours depending on the size.

  6. Edit your local reviewboard config to use Postgres instead of MySQL
    vim /var/www/yourcodereviewsite/conf/settings_local.py

    → change the Django database engine from mysql to postgresql_psycopg2 (see the example settings after this list)

  7. Create the reviewboard table structures in the Postgres db
    sudo rb-site manage /var/www/yourcodereviewsite syncdb
  8. Clean default data inserted by the rb-site command (will interfere with loaddb otherwise)
    sudo su postgres -c psql
    postgres# TRUNCATE django_content_type CASCADE;
    postgres# TRUNCATE scmtools_tool CASCADE;
    postgres# \q
  9. Load the MySQL database dump
    sudo rb-site manage /var/www/yourcodereviewsite loaddb reviewboard.dump
  10. Clean up some database metadata as per https://groups.google.com/forum/#!topic/reviewboard/Ehv0JwthROg:
    psql -t reviewboard -c "SELECT E'select setval(\'' || c.relname || E'\', (select max(id)+1 from ' || replace(c.relname, '_id_seq', '') || '), false);' FROM pg_class c WHERE c.relkind = 'S';" | psql reviewboard
  11. Restart apache
    sudo service apache2 start
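
To illustrate step 6, the DATABASES section of settings_local.py ends up looking roughly like this (a sketch; the names and credentials follow the examples above, adjust them to your site):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'reviewboard',
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
    },
}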

Migrating from subversion to mercurial

Note: The below was in draft for quite some time. We actually moved to git so I didn’t follow this through to its ultimate conclusion. I effectively aborted this after the conversion to hg consumed the 220GB of available disk space before it managed to completely convert the entire svn repo. I didn’t bother increasing the disk space since lugging around a 220+GB repo wasn’t in any way practical.

In any case some of the following may prove useful, so I’m publishing it as is and as far as I got.


We are currently looking at migrating our subversion repository to mercurial, including all the history, and this seemed harder than it turned out to be in the end. Maybe this post will help someone out, so here you go:

Our repository has close to 25,000 revisions and checked out is approx. 1GB in size. Most ways of converting it recommend creating a local copy of your repository with svnsync first, so this is what I did (on a Windows machine):

First I installed TortoiseHg from the mercurial website (the all in one 64bit installer) and TortoiseSvn with the command line tools.

Creating a local subversion clone:

cd C:\
mkdir repos
cd repos
svnadmin create software-mirror
echo 'exit 0' > software-mirror/hooks/pre-revprop-change.bat
svnsync init file:///c:/repos/software-mirror svn://myserver/software
svnsync sync file:///c:/repos/software-mirror

This took about 4h. Interestingly I also did this on a Linux machine running Ubuntu 12.10 and it only took half the time (same VM hardware specs, same network, same VM server).

Now we can go on to convert the repository. When searching the Internet the first way of doing it I came across was the convert extension. So, open TortoiseHg, enable the convert extension and run:

cd C:\repos
hg convert software-mirror

Now, after 48h of running it had managed to convert 2,000 revisions, the process was using 2.8GB of RAM (with peaks at 3.5GB) and had created close to 2 million(!) files – WTF? So convert: FAIL.

I did some more research on the web during that time and came across multiple posts saying that converting large repositories with convert might not be so good as it a) might do the wrong thing (i.e. wrong commits on the wrong branches) and b) might fail anyway with an out of memory exception (although on a 64bit system it seems like it might just go into swap hell at some point). The alternative suggested was hgsubversion.

hgsubversion needs to be installed separately as it is not bundled but that proved fairly painless even on Windows:

cd C:\
mkdir hgext
cd hgext
hg clone http://bitbucket.org/durin42/hgsubversion hgsubversion

And add the extension to your mercurial.ini. For Windows 7+ (probably even Vista+) this should be located under C:\Users\youruser\:

[extensions]
hgsubversion = C:\hgext\hgsubversion\hgsubversion

Now we should be able to clone a subversion repository as a mercurial repository. I combined a couple of suggestions I found and cloned just the first revision, then used pull to load the remainder of the revisions:

cd c:\repos
hg clone -r1 --config hgsubversion.defaulthost=mycompany.com file:///c:/repos/software-mirror software-hg
cd software-hg
hg pull

Our subversion usernames can easily be mapped to email addresses as username@mycompany.com, hence the defaulthost setting. The pull made good and fast progress (it took only 5min to pull the first 2,000 revisions compared to the 48h for the convert extension). Unfortunately after about 7,500 revisions the pull failed with “trying to open a deleted file”. Huh? The revision in question was a tag which was no different from all the other tags (our build machine automatically tags all builds). Now I don’t really care about that specific tag but unfortunately there is no way to instruct hg pull to skip this revision. So hgsubversion: FAIL.

Now, what other options do we have? I guess I could try to skip the revision in question when doing the svnsync in the first place but I decided to try something else: There is this fast-import format which seems to be emerging as a repository independent exchange format. So why not do it this way?

Unfortunately there does not seem to be a good tool around which creates fast-import dumps from a subversion repository. Here is what I looked at:

  1. svnadmin and svnrdump do not produce dump files in the correct format.
  2. There is a tool in the bazaar tool chain (bzr-fastimport) which supposedly can do this. Every piece of documentation claims that there is a frontend for subversion you should be able to use like this: bzr fast-export-from-svn. I could not get it to work; all I ever got was “there is no such command” (bzr help fast-export does show something meaningful, but according to the documentation that command generates fast-import streams from a bazaar repository). All the documentation says that the frontends are in the “exporters” subdirectory of the plugin, but there is no such subdirectory (try bzr branch lp:bzr-fastimport fastimport yourself and check). So in short: I could not get this to work.
  3. There is a tool for migrating from subversion to git. I switched to a Linux machine at this point as most instructions are for that and I could not get most of the tools working under Windows. Unfortunately it died with a segfault on importing the second revision.
  4. Another tool supposedly can do the job as well, but I haven’t tried that yet.

After not really getting anywhere I followed a hunch: on the initial svnsync the Windows 8 VM I used to test all of this went to sleep several times due to the default power settings of Windows 8. So I killed the software-mirror and ran svnsync again – this time on a Linux VM (as for some reason svn tools performance seems to be much better under Linux) – and made sure it ran uninterrupted. Then I used hgsubversion again and it got past the revision it choked on earlier – hmm, weird.

At some point I realized that Ubuntu 12.10 ships with Mercurial 2.2 while on Windows I used 2.5 with the latest hgsubversion clone. After upgrading to Mercurial 2.5 and checking out the latest hgsubversion from bitbucket I ran into the same “trying to open a deleted file” problem at the same revision again. Coincidentally a little while later someone posted a bug report of exactly this problem.

Anyway, I continued the pull with Mercurial 2.2 and everything seemed fine until it got to approx. 18,000 revisions (which took close to 20h and Mercurial ballooned to 5.5GB of memory usage) where it failed with an AssertionError in subvertpy:

** unknown exception encountered, please report by visiting
**  http://mercurial.selenic.com/wiki/BugTracker
** Python 2.7.3 (default, Sep 26 2012, 21:51:14) [GCC 4.7.2]
** Mercurial Distributed SCM (version 2.2.2)
** Extensions loaded: fastimport, hgsubversion
Traceback (most recent call last):
  File "/usr/bin/hg", line 38, in <module>
    mercurial.dispatch.run()
  File "/usr/lib/python2.7/dist-packages/mercurial/dispatch.py", line 27, in run
    sys.exit((dispatch(request(sys.argv[1:])) or 0) & 255)
  File "/usr/lib/python2.7/dist-packages/mercurial/dispatch.py", line 64, in dispatch
    return _runcatch(req)
  File "/usr/lib/python2.7/dist-packages/mercurial/dispatch.py", line 87, in _runcatch
    return _dispatch(req)
  File "/usr/lib/python2.7/dist-packages/mercurial/dispatch.py", line 696, in _dispatch
    cmdpats, cmdoptions)
  File "/usr/lib/python2.7/dist-packages/mercurial/dispatch.py", line 472, in runcommand
    ret = _runcommand(ui, options, cmd, d)
  File "/usr/lib/python2.7/dist-packages/mercurial/dispatch.py", line 786, in _runcommand
    return checkargs()
  File "/usr/lib/python2.7/dist-packages/mercurial/dispatch.py", line 757, in checkargs
    return cmdfunc()
  File "/usr/lib/python2.7/dist-packages/mercurial/dispatch.py", line 693, in <lambda>
    d = lambda: util.checksignature(func)(ui, *args, **cmdoptions)
  File "/usr/lib/python2.7/dist-packages/mercurial/util.py", line 463, in check
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/mercurial/extensions.py", line 139, in wrap
    util.checksignature(origfn), *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/mercurial/util.py", line 463, in check
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/hgext/hgsubversion/wrappers.py", line 538, in generic
    return orig(ui, repo, *args, **opts)
  File "/usr/lib/python2.7/dist-packages/mercurial/util.py", line 463, in check
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/mercurial/commands.py", line 4458, in pull
    modheads = repo.pull(other, heads=revs, force=opts.get('force'))
  File "/usr/lib/python2.7/dist-packages/hgext/hgsubversion/svnrepo.py", line 76, in wrapper
    return fn(self, *args, **opts)
  File "/usr/lib/python2.7/dist-packages/hgext/hgsubversion/svnrepo.py", line 99, in pull
    return wrappers.pull(self, remote, heads, force)
  File "/usr/lib/python2.7/dist-packages/hgext/hgsubversion/wrappers.py", line 358, in pull
    firstrun)
  File "/usr/lib/python2.7/dist-packages/hgext/hgsubversion/replay.py", line 67, in convert_rev
    svn.get_replay(r.revnum, editor, meta.revmap.oldest)
  File "/usr/lib/python2.7/dist-packages/hgext/hgsubversion/svnwrap/subvertpy_wrapper.py", line 422, in get_replay
    self.remote.replay(revision, oldestrev, AbstractEditor(editor))
  File "/usr/lib/python2.7/dist-packages/hgext/hgsubversion/editor.py", line 357, in txdelt_window
    handler(window)
  File "/usr/lib/python2.7/dist-packages/subvertpy/delta.py", line 84, in apply_window
    target_stream.write(apply_txdelta_window(sbuf, window))
  File "/usr/lib/python2.7/dist-packages/subvertpy/delta.py", line 57, in apply_txdelta_window
    raise AssertionError("%d != %d" % (len(tview), tview_len))
AssertionError: 473 != 474

Oh well, maybe a bug in an older version. As I was past the dreaded “trying to open a deleted file” revision I upgraded to Mercurial 2.5 and ran again – same problem. However this time there was a helpful message appended:

Your SVN repository may not be supplying correct replay deltas. It is strongly
advised that you repull the entire SVN repository using hg pull --stupid.
Alternatively, re-pull just this revision using --stupid and verify that the
changeset is correct.

Ok, let’s try

hg pull -r 17890 --stupid

And it broke:

ValueError: 20-byte hash required

After some research into the issue I came across this bug report on bitbucket which essentially says: “-r doesn’t work like that with svn repositories, try url#revision instead”.

Unfortunately

hg pull file://`pwd`/software-mirror#17890 --stupid

ran into the same problem – alright, let’s do it without a specific revision.

hg pull --stupid

This seemed to work, so I aborted it once it got past the bad revision and continued with a normal pull (without --stupid), and that got it going again.
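
In other words, the workaround boiled down to this sequence (interrupting the stupid pull by hand once it was past the problematic revision, around 17890 here):

cd c:\repos\software-hg
hg pull --stupid
hg pull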

Live resizing of an ext4 filesystem on Linux

Recently I was working on a Linux VM which was running out of disk space and I wanted to increase the available space. I didn’t want to just add another drive and mount it separately but to increase the size of the root partition.

Disclaimer: The following instructions can easily screw up your data if you make a mistake. I was doing this on a VM which I backed up before performing the following actions. If you lose your data because you didn’t back up, don’t come and complain.

The VM I was working on is a stock Ubuntu 12.10 Desktop install.

First: Increase the disk size.

In ESXi this is simple, just increase the size of the virtual disk. Now you have a bigger hard drive but you still need to a) increase the partition size and b) resize the filesystem.

Second: Increase the partition size.

You can use fdisk to change your partition table while running. The stock Ubuntu install has created 3 partitions: one primary (sda1), one extended (sda2) with a single logical partition (sda5) in it. The extended partition is simply used for swap, so I could easily move it without losing any data.

  1. Delete the primary partition
  2. Delete the extended partition
  3. Create a new primary partition starting at the same sector as the original one just with a bigger size (leave some for swap)
  4. Create a new extended partition with a logical partition in it to hold the swap space
me@ubuntu:~$ sudo fdisk /dev/sda

Command (m for help): p

Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e49fa

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048   192940031    96468992   83  Linux
/dev/sda2       192942078   209713151     8385537    5  Extended
/dev/sda5       192942080   209713151     8385536   82  Linux swap / Solaris

Command (m for help): d
Partition number (1-5): 1

Command (m for help): d
Partition number (1-5): 2

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
Using default value 1
First sector (2048-524287999, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-524287999, default 524287999): 507516925

Command (m for help): p

Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e49fa

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   507516925   253757439   83  Linux

Command (m for help): n
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p): e
Partition number (1-4, default 2): 2
First sector (507516926-524287999, default 507516926):
Using default value 507516926
Last sector, +sectors or +size{K,M,G} (507516926-524287999, default 524287999):
Using default value 524287999

Command (m for help): p

Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e49fa

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   507516925   253757439   83  Linux
/dev/sda2       507516926   524287999     8385537    5  Extended

Command (m for help): n
Partition type:
   p   primary (1 primary, 1 extended, 2 free)
   l   logical (numbered from 5)
Select (default p): l
Adding logical partition 5
First sector (507518974-524287999, default 507518974):
Using default value 507518974
Last sector, +sectors or +size{K,M,G} (507518974-524287999, default 524287999):
Using default value 524287999

Command (m for help): p

Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e49fa

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   507516925   253757439   83  Linux
/dev/sda2       507516926   524287999     8385537    5  Extended
/dev/sda5       507518974   524287999     8384513   83  Linux

Command (m for help): t
Partition number (1-5): 5

Hex code (type L to list codes): 82
Changed system type of partition 5 to 82 (Linux swap / Solaris)

Command (m for help): p

Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e49fa

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   507516925   253757439   83  Linux
/dev/sda2       507516926   524287999     8385537    5  Extended
/dev/sda5       507518974   524287999     8384513   82  Linux swap / Solaris

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

me@ubuntu:~$ sudo reboot

I noticed afterwards that I didn’t set the bootable flag but apparently you don’t really need it.

Third: Enlarge the filesystem.

You can do this with resize2fs online on a mounted partition.

me@ubuntu:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        91G   86G   12M 100% /
udev            3.9G  4.0K  3.9G   1% /dev
tmpfs           1.6G  696K  1.6G   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            3.9G  144K  3.9G   1% /run/shm
none            100M   16K  100M   1% /run/user

me@ubuntu:~$ sudo resize2fs /dev/sda1
resize2fs 1.42.5 (29-Jul-2012)
Filesystem at /dev/sda1 is mounted on /; on-line resizing required
old_desc_blocks = 6, new_desc_blocks = 16
The filesystem on /dev/sda1 is now 63439359 blocks long.

me@ubuntu:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       239G   86G  142G  38% /
udev            3.9G   12K  3.9G   1% /dev
tmpfs           1.6G  696K  1.6G   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            3.9G  152K  3.9G   1% /run/shm
none            100M   36K  100M   1% /run/user

Slight catch: after rebooting, the swap space wasn’t active. It turned out you need to run mkswap, adjust /etc/fstab to the new UUID and turn the swap back on:

me@ubuntu:~$ sudo mkswap /dev/sda5
Setting up swapspace version 1, size = 8384508 KiB
no label, UUID=141d401a-b49d-4a96-9b85-c130cb0de40a
me@ubuntu:~$ sudo swapon --all --verbose
swapon on /dev/sda5
swapon: /dev/sda5: found swap signature: version 1, page-size 4, same byte order
swapon: /dev/sda5: pagesize=4096, swapsize=8585740288, devsize=8585741312

Edit /etc/fstab to replace the UUID for the old swap partition with the new one from mkswap.
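
For reference, the swap entry in /etc/fstab then looks something like this (using the UUID printed by mkswap above):

UUID=141d401a-b49d-4a96-9b85-c130cb0de40a none swap sw 0 0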

Reviewboard on Linux and Windows Domain

At work we recently started to use ReviewBoard as our code review tool. I installed it on an Ubuntu 12.04 VM as the Windows support is riddled with problems (RB has abandoned official Windows support – so it might work or it might not). Following the instructions for installing it on Linux with MySQL as the database backend and using Apache as host was easy and worked pretty much out of the box. Our central repository is hosted in subversion.

Our network is controlled by a Windows Domain Controller and we wanted the ability to authenticate the ReviewBoard users via the domain login. In the following I will assume that the domain is called COMPANY.LOCAL

    • I pretty much followed these instructions, except I only installed likewise-open5 and likewise-open5-gui but not winbind (which gave a weird PAM error when I tried to install it)
    • When trying to join the domain as per the above linked page I got an error which led me to this bug report on launchpad. Following the instructions to change /etc/nsswitch.conf to look like this resolved the problem:
# /etc/nsswitch.conf
#
# Example configuration of GNU Name Service Switch functionality.
# If you have the `glibc-doc-reference' and `info' packages installed, try:
# `info libc "Name Service Switch"' for information about this file.

passwd: compat lsass

group: compat lsass
shadow: compat

# 04102010 Add line as per Likewise Open Admin Guide
hosts: files dns

# 04122010 Commenting out hosts below as per ubuntu bug 555525
#hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
networks: files

protocols: db files
services: db files
ethers: db files
rpc: db files

netgroup: nis
    • Reboot and then join the domain:
sudo -s domainjoin-cli --loglevel info --log . join COMPANY.LOCAL USER
    • Note that USER needs to be a domain user with sufficient rights to add a computer to the domain
    • For activating the support in ReviewBoard I installed python-ldap and pydns and then set the Authentication Method in the ReviewBoard admin section to Active Directory, using the following settings:
Domain: company.local
Domain controller: IpOfYourWindowsDomainController
OU: none
Group name: software
Custom search root: none
Recursion depth: -1

And that was it – now every user who is part of the software domain group can authenticate with their domain login.

Thoughts on RAID and NAS – Part 1

I’m currently looking into building my own NAS: basically a standard PC with a whole bunch of disks running Ubuntu or some other Linux distribution. The first thing which comes to mind: “Of course I’m going to run RAID 5 on there. A lot of main boards these days support it out of the box and I get redundancy.” Well, so I went on and started looking for hardware.

I like to keep things separate so my idea was to have a dedicated system drive, and I decided to try an SSD for it. A 60GB SSD from OCZ is about NZ$100, which is big enough as a system drive. Also I know the mantra that “RAID is no backup” so I thought I’d better put in another separate disk to mirror some of the more critical data to. Not an ideal backup solution (the backup medium resides in the same environment, connected to the same controller on the same mainboard and the same PSU) but oh well – can’t have everything, can we.

Ok, now we have 2 disks in the system already, let’s see how many data disks we can fit in. This is apparently constrained by the case (mounting slots), the mainboard (number of SATA connectors) and the PSU (number of power connectors). With the Coolermaster Elite 371 I found a nice case for about NZ$140 which offers six 3.5″ bays and three 5.25″. Assuming that I’ll fit in a DVD drive or something similar this leaves us with up to 8 slots where HDDs can be mounted.

Then let’s go on to the mainboard. I had a look at various Intel and AMD CPU/mainboard combinations and the Asus M5A97 Evo plus an Athlon II X2 270 seemed a nice combination. The Asus offers 6x 6Gb/s SATA ports plus integrated RAID 5 and the Athlon should be up to the task required by the NAS box. For cheaper Intel CPUs which are still slightly ahead of the AMD, the mainboards tend to offer fewer features, so the AMD package in total seemed the best. That’s about NZ$270 for board + CPU.

Sweet, so this leaves us with 4 spare ports on the board for data disks. Now, 4 disks at 2TB each gives you approx. 6TB available capacity in a RAID 5 which is what I was aiming for. All sorted then. As data disks I opted for the Western Digital Green Power 2TB model which are about NZ$120 each.

Together with 4GB RAM, some case fans, a CPU cooler, some decent wireless gear, a cold spare HDD and 5.25″ -> 3.5″ mounting brackets, the total price of the system clocked in at just under NZ$1900 – not bad. While an off-the-shelf 4 bay NAS would have been about NZ$400-500 cheaper, this solution gives me quite a bit more flexibility.

All sorted then – right? Hmm, not quite. A colleague at work mentioned the bad words “Unrecoverable Read Error” (short: URE) to me and I thought “Well, better check what that’s all about”. Now, as it turns out this means that for approximately every 12TB of data you read off a disk an “Unrecoverable Read Error” will be reported – in other words “a bad sector”. This will cause the disk to get dropped from the RAID, which then needs to be rebuilt after the bad sector has been mapped out. Does not sound so bad – right? Well, what happens when you actually have a full disk failure (let’s say a head crash), you replace the drive and then the array gets rebuilt? Now imagine you get a URE during the rebuild – not so nice. It will very likely end up in some data corruption. So I decided to ask the big gooracle and came across this article on ZDNet which gave me some things to think about (and led me to write this post).

The author makes the implicit assumption that, on a 7 disk RAID 5 array with 2TB per disk, a disk failure means you will have to read approx. 12TB of data from the other disks and thus encounter a URE with a probability close to 1 (based on an average 12TB URE rate). I think this is invalid because the URE rate is per disk, and you still only need to read 2TB from each disk. Hmm, let’s see if we can come up with some calculations here.

Let’s define a set of events called URE[x] which means “a URE is encountered after x TB have been read from a single disk”. Then we define the following probabilities:
P(URE[x]) = x/12 for 0 <= x <= 12
P(URE[x]) = 0 for x <= 0 (nothing read yet, extremely unlikely that we get a URE)
P(URE[x]) = 1 for x >= 12 (probability of encountering a URE after 12TB or more have been read)

This assumes that the probability of getting a URE is linear in the amount of data read, which is probably not the case but makes the calculations easier.
Let us further define:
n – total number of disks in the array
c – capacity per disk in TB
d – total amount of data read from the array at the point of the rebuild
FAIL – the event that we get a URE while trying to rebuild an array which had a total disk failure

P(FAIL) is simply the probability that at least one of the remaining (n - 1) disks hits a URE while rebuilding, which is one minus the probability that no drive does. Assuming the data read so far is spread evenly across all disks, a single surviving drive is at the point URE[d/n + c] once the rebuild is done, so:

P(URE[d/n + c]) = (d/n + c) / 12
P(!URE[d/n + c]) = 1 - (d/n + c) / 12

Assuming these events are independent across the (n - 1) surviving drives, the probability that none of them hits a URE is P(!URE[d/n + c])^(n-1), which means:

P(FAIL) = 1 - P(!URE[d/n + c])^(n-1) = 1 - (1 - (d/n + c) / 12)^(n-1)

Looks a bit dry, so let’s run it with some numbers. The ZDNet article stated that approximately 3% of all drives fail in the first 3 years. Let’s make some assumptions:

I plan to have 4 2TB disks in the array, prime it with about 3TB of data and then cause maybe 5GB/day of read/write traffic on the array. For simplicity’s sake we assume that writes affect the URE rate the same way as reads. That leaves us with:
n = 4
c = 2 (TB)
d = 3 (TB) + 3 * 365 * 5 / 1000 = 8.475 (TB)
Therefore P(FAIL) = 1 - (1 - (d/n + c) / 12)^(n-1) = 1 - (1 - ((2.12 + 2) / 12))^3 ≈ 71.7%

So, if I have a drive failure after 3 years with the above mentioned setup and usage, the probability of encountering a URE during the rebuild is approximately 72%. I have made a little spreadsheet to calculate the probabilities based on the main parameters: RAID 5 Probability Calculations. Playing around with the numbers shows: increasing the number of disks (like using 7 1.5TB disks) doesn’t help. Although P(URE[x]) decreases per disk (as the load is spread), overall P(FAIL) increases due to the larger number of disks.

Only when you move to enterprise drives with a URE rate of about 120TB do you drop down to roughly a 10% probability of a failure during a rebuild. However a 600GB enterprise SAS drive currently costs about NZ$350 and you would need 14 of those to make your 8TB array.

Let’s define an event CRASH which means “a drive has a major crash and is gone for good”. Assuming that CRASH is independent for all disks in an array (which it is not, but again let’s make that assumption for simplicity’s sake), the probability that at least one drive in the array fails is 1 minus the probability that no drive fails, which is 1 - P(!CRASH)^n (with n being the number of disks in the array). Assuming P(CRASH) = 0.03 then P(!CRASH) = 0.97 and for a 4 disk array 1 - 0.97^4 = 11.5%. Again assuming that CRASH and FAIL are independent, the probability of having a crash followed by a failed rebuild is 11.5% * 72% ≈ 8%. So with the above setup there is an 8% chance of some kind of data loss during the first 3 years.
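
If you’d rather play with the numbers without the spreadsheet, the whole model fits into a few lines of Python (my own sketch of the formulas above, nothing authoritative):

# Probability model from above: URE chance grows linearly with data read per disk.
def p_fail(n, c, d, ure=12.0):
    """URE during rebuild: n disks, c TB per disk, d TB read so far, URE rate in TB."""
    p_ure_single = min(max((d / n + c) / ure, 0.0), 1.0)   # P(URE[d/n + c]) for one disk
    return 1 - (1 - p_ure_single) ** (n - 1)               # at least one of the n-1 survivors

def p_any_crash(n, p_crash=0.03):
    """Probability that at least one of n disks fails outright."""
    return 1 - (1 - p_crash) ** n

d = 3 + 3 * 365 * 5 / 1000.0               # 3TB primed plus 5GB/day for 3 years = 8.475TB
print(p_fail(4, 2.0, d))                   # ~0.72
print(p_any_crash(4))                      # ~0.115
print(p_any_crash(4) * p_fail(4, 2.0, d))  # ~0.08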

Does that mean RAID 5 is useless? Well – not quite. Just because you have a URE during a rebuild doesn’t mean that all your data is gone. However it is very likely that some of your data is now corrupted, but that might be only 1 file instead of everything. It depends on your controller and OS how much pain it will be to recover from that and get your array rebuilt. I think it’s potentially more trouble than it’s worth, so I’ll be looking into other alternatives to see what the odds are there.

GetType() weirdness in .NET

Following up on this question on stackoverflow I stumbled across some weird issues regarding GetType().

1. GetType() cannot be overridden, but it can be hidden

While GetType() is not virtual for very good reasons and therefore one cannot override it, the following is possible:

class MyClass
{
    public new Type GetType()
    {
         return typeof(string);
    }
}

Not that this is a good idea but it compiles and runs:

var t1 = new MyClass().GetType();
var t2 = ((object)new MyClass()).GetType();
Console.WriteLine("t1 = {0} --- t2 = {1}", t1.Name, t2.Name);

results in the expected output:

t1 = String --- t2 = MyClass

Now if it is so important that GetType() does not violate its contract, then why hasn’t a rule been added to the specification saying that you are not allowed to hide GetType() with new? You could argue that GetType() is just a normal method like any other – however it isn’t really. There is a lot of code relying on the fact that it does what it does and is not changed on a whim, except it’s still possible to break it under certain circumstances – so why not prevent it altogether? Another argument I guess is that it would assign some special meaning for the compiler to a method implemented in the framework, which certainly is not a good idea, right? Well, there are at least two exceptions out there already. One is IDisposable, where an interface has a special language construct (using in C#) which relies on it. The other one is Nullable, which is the only value type you can assign null to. I admit that one should be careful in choosing what exceptions to the rule are allowed, however in the case of GetType() it might have been worth it. Now the latter of the two mentioned exceptions leads me to the second weirdness.

2. Nullable is only sometimes null

Coming from the linked question at the top, it is apparent that the following is a bit inconsistent:

int? i = null;
Console.WriteLine(i.GetHashCode()); // works
Console.WriteLine(i.ToString()); // works
Console.WriteLine(i.HasValue); // works
Console.WriteLine(i.GetType()); // NullReferenceException

The reason is that GetType() is not virtual and is not overridden, therefore calling it boxes i into object, which results in null. So a Nullable set to null does not behave like a reference type set to null when it comes to calling methods on it – except for GetType(). Why is that? We have already determined that you can hide GetType(), so Nullable could have done just that too and avoided the null reference problem.

Maybe someone can shed some light on why some of these decisions were made the way they were.