Live resizing of an ext4 filesystem on Linux

Recently I was working on a Linux VM that was running out of disk space, and I wanted to increase the available space. I didn’t want to just add another drive and mount it separately, but to increase the size of the root partition.

Disclaimer: The following instructions can easily screw up your data if you make a mistake. I was doing this on a VM which I backed up before performing the following actions. If you lose your data because you didn’t back up, don’t come and complain.

The VM I was working on is a stock Ubuntu 12.10 Desktop install.

First: Increase the disk size.

In ESXi this is simple, just increase the size of the virtual disk. Now you have a bigger hard drive but you still need to a) increase the partition size and b) resize the filesystem.

Second: Increase the partition size.

You can use fdisk to change the partition table while the system is running. The stock Ubuntu install had created three partitions: one primary (sda1) and one extended (sda2) with a single logical partition (sda5) in it. The logical partition is only used for swap, so I could easily recreate it without losing any data.

  1. Delete the primary partition
  2. Delete the extended partition
  3. Create a new primary partition starting at the same sector as the original one just with a bigger size (leave some for swap)
  4. Create a new extended partition with a logical partition in it to hold the swap space
me@ubuntu:~$ sudo fdisk /dev/sda

Command (m for help): p

Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e49fa

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048   192940031    96468992   83  Linux
/dev/sda2       192942078   209713151     8385537    5  Extended
/dev/sda5       192942080   209713151     8385536   82  Linux swap / Solaris

Command (m for help): d
Partition number (1-5): 1

Command (m for help): d
Partition number (1-5): 2

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
Using default value 1
First sector (2048-524287999, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-524287999, default 524287999): 507516925

Command (m for help): p

Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e49fa

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   507516925   253757439   83  Linux

Command (m for help): n
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p): e
Partition number (1-4, default 2): 2
First sector (507516926-524287999, default 507516926):
Using default value 507516926
Last sector, +sectors or +size{K,M,G} (507516926-524287999, default 524287999):
Using default value 524287999

Command (m for help): p

Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e49fa

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   507516925   253757439   83  Linux
/dev/sda2       507516926   524287999     8385537    5  Extended

Command (m for help): n
Partition type:
   p   primary (1 primary, 1 extended, 2 free)
   l   logical (numbered from 5)
Select (default p): l
Adding logical partition 5
First sector (507518974-524287999, default 507518974):
Using default value 507518974
Last sector, +sectors or +size{K,M,G} (507518974-524287999, default 524287999):
Using default value 524287999

Command (m for help): p

Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e49fa

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   507516925   253757439   83  Linux
/dev/sda2       507516926   524287999     8385537    5  Extended
/dev/sda5       507518974   524287999     8384513   83  Linux

Command (m for help): t
Partition number (1-5): 5

Hex code (type L to list codes): 82
Changed system type of partition 5 to 82 (Linux swap / Solaris)

Command (m for help): p

Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e49fa

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   507516925   253757439   83  Linux
/dev/sda2       507516926   524287999     8385537    5  Extended
/dev/sda5       507518974   524287999     8384513   82  Linux swap / Solaris

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

me@ubuntu:~$ sudo reboot

I noticed afterwards that I didn't set the bootable flag, but apparently you don't really need it.

Third: Enlarge the filesystem.

You can do this online with resize2fs on a mounted partition.

me@ubuntu:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        91G   86G   12M 100% /
udev            3.9G  4.0K  3.9G   1% /dev
tmpfs           1.6G  696K  1.6G   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            3.9G  144K  3.9G   1% /run/shm
none            100M   16K  100M   1% /run/user

me@ubuntu:~$ sudo resize2fs /dev/sda1
resize2fs 1.42.5 (29-Jul-2012)
Filesystem at /dev/sda1 is mounted on /; on-line resizing required
old_desc_blocks = 6, new_desc_blocks = 16
The filesystem on /dev/sda1 is now 63439359 blocks long.

me@ubuntu:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       239G   86G  142G  38% /
udev            3.9G   12K  3.9G   1% /dev
tmpfs           1.6G  696K  1.6G   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            3.9G  152K  3.9G   1% /run/shm
none            100M   36K  100M   1% /run/user

Slight catch: after rebooting, the swap space wasn't active. It turned out you need to run mkswap, adjust /etc/fstab to the new UUID, and turn the swap back on:

me@ubuntu:~$ sudo mkswap /dev/sda5
Setting up swapspace version 1, size = 8384508 KiB
no label, UUID=141d401a-b49d-4a96-9b85-c130cb0de40a
me@ubuntu:~$ sudo swapon --all --verbose
swapon on /dev/sda5
swapon: /dev/sda5: found swap signature: version 1, page-size 4, same byte order
swapon: /dev/sda5: pagesize=4096, swapsize=8585740288, devsize=8585741312

Edit /etc/fstab to replace the UUID for the old swap partition with the new one from mkswap.
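
For reference, the swap entry in /etc/fstab then looks something like this (using the UUID mkswap printed above; the options shown are the stock Ubuntu defaults, so adjust if yours differ):

UUID=141d401a-b49d-4a96-9b85-c130cb0de40a none swap sw 0 0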

Reviewboard on Linux and Windows Domain

At work we recently started to use ReviewBoard as a code review tool. I installed it on an Ubuntu 12.04 VM, as the Windows support is riddled with problems (ReviewBoard has abandoned official Windows support, so it might work or it might not). Following the instructions for installing it on Linux with MySQL as the database backend and Apache as the host was easy and worked pretty much out of the box. Our central repository is hosted in Subversion.

Our network is controlled by a Windows domain controller, and we wanted the ability to authenticate ReviewBoard users via their domain login. In the following I will assume that the domain is called COMPANY.LOCAL.

    • I pretty much followed these instructions, except I only installed likewise-open5 and likewise-open5-gui but not winbind (which gave a weird PAM error when I tried to install it).
    • When trying to join the domain as per the above linked page I got an error which led me to this bug report on Launchpad. Following the instructions to change /etc/nsswitch.conf to look like this resolved the problem:
# /etc/nsswitch.conf
#
# Example configuration of GNU Name Service Switch functionality.
# If you have the `glibc-doc-reference' and `info' packages installed, try:
# `info libc "Name Service Switch"' for information about this file.

passwd: compat lsass

group: compat lsass
shadow: compat

# 04102010 Add line as per Likewise Open Admin Guide
hosts: files dns

# 04122010 Commenting out hosts below as per ubuntu bug 555525
#hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
networks: files

protocols: db files
services: db files
ethers: db files
rpc: db files

netgroup: nis
    • Reboot and then join the domain: sudo -s domainjoin-cli --loglevel info --log . join COMPANY.LOCAL USER
    • Note that USER needs to be a domain user with sufficient rights to add a computer to the domain.
    • To activate the support in ReviewBoard I installed python_ldap and pydns and then set the Authentication Method in the ReviewBoard admin section to Active Directory, using the following settings:
Domain: company.local
Domain controller: IpOfYourWindowsDomainController
OU: none
Group name: software
Custom search root: none
Recursion depth: -1

And that was it - now every user who is part of the software domain group can authenticate with their domain login.

Thoughts on RAID and NAS – Part 1

I’m currently looking into building my own NAS: basically a standard PC with a whole bunch of disks running Ubuntu or some other Linux distribution. The first thing that comes to mind: “Of course I’m going to run RAID 5 on there. A lot of mainboards these days support it out of the box and I get redundancy.” Well, so I went on to start looking for hardware.

I like to keep things separate, so my idea was to have a dedicated system drive, and I decided to try an SSD for it. A 60GB SSD from OCZ is about NZ$100, which is big enough for a system drive. I also know the mantra that “RAID is no backup”, so I thought I’d better put in another separate disk onto which I could mirror some of the more critical data. Not an ideal backup solution (the backup medium resides in the same environment, connected to the same controller on the same mainboard and the same PSU) but oh well – can’t have everything, can we?

Ok, now we have 2 disks in the system already; let’s see how many data disks we can fit in. This is mainly constrained by the case (mounting slots), the mainboard (number of SATA connectors) and the PSU (number of power connectors). With the Coolermaster Elite 371 I found a nice case for about NZ$140 which offers six 3.5″ bays and three 5.25″ bays. Assuming that I’ll fit in a DVD drive or something similar, this leaves up to 8 slots where HDDs can be mounted.

Then let’s go on to the mainboard. I had a look at various Intel and AMD CPU/mainboard combinations, and the Asus M5A97 Evo plus an Athlon II X2 270 seemed a nice combination. The Asus offers 6x 6Gb/s SATA ports plus integrated RAID 5, and the Athlon should be up to the task required by the NAS box. For cheaper Intel CPUs, which are still slightly ahead of the AMD, the mainboards tend to offer fewer features, so the AMD package seemed the best overall. That’s about NZ$270 for board + CPU.

Sweet, so this leaves us with 4 spare ports on the board for data disks. Now, 4 disks at 2TB each give you approx. 6TB available capacity in a RAID 5, which is what I was aiming for. All sorted then. As data disks I opted for the Western Digital Green Power 2TB model at about NZ$120 each.

Together with 4GB RAM, some case fans, a CPU cooler, some decent wireless gear, a cold spare HDD and 5.25″ -> 3.5″ mounting brackets, the total price of the system clocked in at just under NZ$1900 – not bad. While an off-the-shelf 4-bay NAS would have been about NZ$400-500 cheaper, this solution gives me quite a bit more flexibility.

All sorted then – right? Hmm, not quite. A colleague at work mentioned the bad words “Unrecoverable Read Error” (URE for short) to me and I thought “Well, better check what that’s all about”. As it turns out, this means that for approximately every 12TB of data you read off a disk, an “Unrecoverable Read Error” will be reported – in other words, a bad sector. This will cause the disk to get dropped from the RAID, which then needs to be rebuilt after the bad sector has been mapped out. Does not sound so bad – right? Well, what happens when you actually have a full disk failure (let’s say a head crash), you replace the drive, and the array gets rebuilt? Now imagine you get a URE during the rebuild – not so nice. It will very likely end up in some data corruption. So I decided to ask the big gooracle and came across this article on ZDNet which gave me some things to think about (and led me to write this post).

The author makes one implicit assumption: that with a 7-disk RAID 5 array of 2TB disks, in case of a disk failure you will have to read approx. 12TB of data from the other disks and thus encounter a URE with a probability close to 1 (based on an average 12TB URE rate). I think this is invalid because the URE rate is per disk, and you still only need to read 2TB from each disk. Hmm, let’s see if we can come up with some calculations here.

Let’s define a set of events called URE[x] which means “a URE is encountered after x TB have been read from a single disk”. Then we define the following probabilities:
P(URE[x]) = x/12 for 0 <= x <= 12
P(URE[x]) = 0 for x <= 0 (nothing read yet, extremely unlikely that we get a URE)
P(URE[x]) = 1 for x >= 12 (probability of encountering a URE after 12TB or more have been read)

This assumes that the probability of getting a URE is linear in the amount of data read, which is probably not the case but makes some calculations easier.
Further, let:
n – total number of disks on the array
c – capacity per disk in TB
d – total amount of data read from the array at the point of rebuild
FAIL – the event that we get a URE while we are trying to rebuild an array which had a total disk failure

P(FAIL) is then the probability that at least one of the remaining (n - 1) disks has a URE while rebuilding. This is equal to one minus the probability that no drive has a URE. The event that a single drive has a URE at that point is URE[d/n + c] (assuming the data read so far is distributed equally across all disks, and that the full capacity of each remaining disk has to be read during the rebuild). Therefore P(URE[d/n + c]) = ((d/n) + c) / 12, and the probability that a single drive won't have a URE is P(!URE[d/n + c]) = 1 - ((d/n) + c) / 12. Assuming those events are independent, the probability that none of the (n - 1) drives has a URE is P(!URE[d/n + c])^(n-1), which means P(FAIL) = 1 - P(!URE[d/n + c])^(n-1) = 1 - (1 - ((d/n) + c) / 12)^(n-1).

Looks a bit dry, so let’s run it with some numbers. The ZDNet article stated that approximately 3% of all drives fail in the first 3 years. Let’s make some assumptions:

I plan to have 4 2TB disks in the array, prime it with about 3TB of data and then cause maybe 5GB/day of read/write traffic on the array. For simplicity’s sake we assume that writes affect the URE the same way as reads. So that leaves us with:
n = 4
c = 2 (TB)
d = 3 (TB) + 3 * 365 * 5 / 1000 = 8.475 (TB)
Therefore P(FAIL) = 1 - (1 - (d/n + c) / 12)^(n-1) = 1 - (1 - ((2.12 + 2) / 12))^3 ≈ 71.7%
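
If you want to play with the formula yourself, here is a small sketch (purely illustrative; the names simply follow the definitions above) that evaluates it:

// P(FAIL) = 1 - (1 - ((d/n) + c) / ure)^(n - 1)
// n = disks in the array, c = capacity per disk (TB),
// d = total data read from the array (TB), ure = average data read per URE (TB)
static double PFail(int n, double c, double d, double ure)
{
    var perDiskUre = ((d / n) + c) / ure;         // P(URE) for a single remaining disk
    return 1 - Math.Pow(1 - perDiskUre, n - 1);   // at least one of the n - 1 disks hits a URE
}

// PFail(4, 2, 8.475, 12) ≈ 0.72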

So, if I have a drive failure after 3 years with the above-mentioned setup and usage, the probability of encountering a URE during the rebuild is roughly 72%. I have made a little spreadsheet to calculate the probabilities based on the main parameters: RAID 5 Probability Calculations. Playing around with the numbers shows: increasing the number of disks (like using 7 1.5TB disks) doesn’t help. Although P(URE[x]) decreases per disk (as the load is spread), overall P(FAIL) increases due to the larger number of disks.

Only when you start going to enterprise drives with a URE rate of about 120TB do you start dropping down to a 10% probability of a failure during a rebuild. However, a 600GB enterprise SAS drive currently costs about NZ$350 and you would need 14 of those to make your 8TB array.

Let’s define an event CRASH which means “a drive has a major crash and is gone for good”. Assuming that CRASH is independent for all disks in an array (which it is not, but again let’s make that assumption for simplicity’s sake), the probability that at least one drive in the array fails is one minus the probability that no drive fails, which is 1 - P(!CRASH)^n (with n being the number of disks in the array). Assuming P(CRASH) = 0.03, then P(!CRASH) = 0.97, and for a 4-disk array 1 - 0.97^4 = 11.5%. Again assuming that CRASH and FAIL are independent, the probability of having a CRASH and then a FAIL during the rebuild is 11.5% * P(FAIL) ≈ 8%. So with the above setup there is an 8% chance of some kind of data loss during the first 3 years.
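
The combined estimate in code form, reusing the PFail sketch from above (same independence assumptions, same caveats):

var pCrash = 0.03;                                   // per-disk failure rate over 3 years
var pAnyCrash = 1 - Math.Pow(1 - pCrash, 4);         // ~0.115 for a 4-disk array
var pDataLoss = pAnyCrash * PFail(4, 2, 8.475, 12);  // ~0.08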

Does that mean RAID 5 is useless? Well – not quite. Just because you get a URE during a rebuild doesn’t mean that all your data is gone. However, it is very likely that some of your data is now corrupted, but that might be only one file instead of everything. It depends on your controller and OS how much pain it will be to recover from that and get your array rebuilt. I think it’s potentially more trouble than it’s worth, so I’ll be looking into other alternatives to see what the odds are there.

GetType() weirdness in .NET

Following up on this question on stackoverflow I stumbled across some weird issues regarding GetType().

1. GetType() cannot be overridden, but it can be hidden

While GetType() is not virtual for very good reasons, and therefore one cannot override it, the following is possible:

class MyClass
{
    public new Type GetType()
    {
         return typeof(string);
    }
}

Not that this is a good idea but it compiles and runs:

var t1 = new MyClass().GetType();
var t2 = ((object)new MyClass()).GetType();
Console.WriteLine("t1 = {0} --- t2 = {1}", t1.Name, t2.Name);

results in the expected output:

t1 = String --- t2 = MyClass

Now, if it is so important that GetType() does not violate its contract, then why hasn't a rule been added to the specification saying that you are not allowed to hide GetType() with new? You could argue that GetType() is just a normal method like any other - however, it isn't really. There is a lot of code relying on the fact that it does what it does and is not changed on a whim, yet it is still possible to break it under certain circumstances - so why not prevent it altogether? Another argument, I guess, is that it would assign some special meaning for the compiler to a method implemented in the framework, which certainly is not a good idea, right? Well, there are at least two exceptions out there already. One is IDisposable, where an interface has a special language construct (using in C#) which relies on it. The other one is Nullable, which is the only value type you can assign null to. I admit that one should be careful about which exceptions to the rule are chosen, but in the case of GetType() it might have been worth it. Now, the latter of the two mentioned exceptions leads me to the second weirdness.

2. Nullable is only sometimes null

Coming from the linked question at the top, it is apparent that the following is a bit inconsistent:

int? i = null;
Console.WriteLine(i.GetHashCode()); // works
Console.WriteLine(i.ToString()); // works
Console.WriteLine(i.HasValue); // works
Console.WriteLine(i.GetType()); // NullReferenceException

The reason is that GetType() is not virtual and is only declared on object, so calling it boxes i, and boxing a Nullable<T> that has no value results in a null reference. So a Nullable set to null does not behave like a reference type set to null when it comes to calling methods on it - except for GetType(). Why is that? We have already determined that you can hide GetType(), so Nullable could have done just that too and avoided the null reference problem.
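
A quick way to see the boxing at work (just for illustration):

int? i = null;
object boxed = i;                   // boxing a Nullable<T> without a value yields a null reference
Console.WriteLine(boxed == null);   // prints True
// GetType() is only declared on object, so i.GetType() boxes i first and then
// dereferences that null reference, hence the NullReferenceException.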

Maybe someone can shed some light on why some of these decisions were made the way they were.

ListView with dynamic columns

Quite a while ago I needed a ListView with columns to be determined at runtime. Using my favourite search engine I came across an article on codeproject offering a simple enough solution. I also found a thread on stackoverflow to which I posted some remarks. Now someone asked for the whole thing – so I thought it might be a good idea to write it down.

The first thing I changed was the DataMatrix class so that rows are dictionaries mapping column names to cell objects instead of relying on the order. I'm also not quite sure what the purpose of the GenericEnumerator class is - why not use the one which the collection already implements? So I got rid of that as well. The last step was to extract an interface so I didn't have to tie the ListView to a concrete implementation.

This is the result:

public interface IDataMatrix : IEnumerable
{
    List<MatrixColumn> Columns { get; set; }
}

public class DataMatrix : IDataMatrix
{
    public List<MatrixColumn> Columns { get; set; }
    public Dictionary<string, Dictionary<string, object>> Rows { get; set; }

    public DataMatrix()
    {
        Columns = new List<MatrixColumn>();
        Rows = new Dictionary<string, Dictionary<string, object>>();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
       return Rows.Values.GetEnumerator();
    }
}

public class MatrixColumn
{
    public string Name { get; set; }
}
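
As a quick illustration of how the matrix is meant to be populated (the column and row names here are made up):

var matrix = new DataMatrix();
matrix.Columns.Add(new MatrixColumn { Name = "Name" });
matrix.Columns.Add(new MatrixColumn { Name = "Size" });
matrix.Rows.Add("row1", new Dictionary<string, object> { { "Name", "foo.txt" }, { "Size", 42 } });
matrix.Rows.Add("row2", new Dictionary<string, object> { { "Name", "bar.txt" }, { "Size", 1337 } });
// Each row maps column names to cell values, which is what the "[ColumnName]"
// bindings created by the ListView below will pick up.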

In the codeproject article Tawani uses attached properties to add the binding functionality, which is quite nice as it makes it a bit more independent. However, we already have an ExtendedListView class which incorporates some other changes, so I decided to integrate the matrix binding as well. The main changes are that the ColumnHeaderTemplate is copied so you can style the headers, and that the display binding is to the column name instead of the index.

    public class ExtendedListView : ListView
    {
        static ExtendedListView()
        {
            ViewProperty.OverrideMetadata(typeof(ExtendedListView), new PropertyMetadata(new PropertyChangedCallback(OnViewPropertyChanged)));
        }

        private static void OnViewPropertyChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            UpdateGridView(d as ExtendedListView, (IDataMatrix)d.GetValue(MatrixSourceProperty));
        }

        public static readonly DependencyProperty MatrixSourceProperty =
            DependencyProperty.Register("MatrixSource",
                                                typeof(IDataMatrix), typeof(ExtendedListView),
                                                new FrameworkPropertyMetadata(null,
                                                                              new PropertyChangedCallback(
                                                                                  OnMatrixSourceChanged)));

        public IDataMatrix MatrixSource
        {
            get { return (IDataMatrix)GetValue(MatrixSourceProperty); }
            set { SetValue(MatrixSourceProperty, value); }
        }

        private static void OnMatrixSourceChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            var listView = d as ExtendedListView;
            var dataMatrix = e.NewValue as IDataMatrix;

            UpdateGridView(listView, dataMatrix);
        }

        private static void UpdateGridView(ExtendedListView listView, IDataMatrix dataMatrix)
        {
            if (listView == null || listView.View == null || !(listView.View is GridView) || dataMatrix == null)
                return;

            listView.ItemsSource = dataMatrix;
            var gridView = listView.View as GridView;
            gridView.Columns.Clear();
            foreach (var col in dataMatrix.Columns)
            {
                var column = new GridViewColumn
                {
                    Header = col.Name,
                    HeaderTemplate = gridView.ColumnHeaderTemplate,
                    DisplayMemberBinding = new Binding(string.Format("[{0}]", col.Name))
                };
                gridView.Columns.Add(column);
            }
        }
    }

Almost done - you can bind to a DataMatrix and the columns will be automatically generated. The next thing to do was to add customizable cell templates. That proved a little bit tricky because somehow the data context of the cell always ended up being the whole matrix instead of an individual element. After searching a while for a solution on the web and finding nothing, I decided to cheat a little and had a look at what's happening under the hood with Reflector. That basically showed that the ContentPresenter is setting the DataContext of the template to its own content (which is the matrix). So I added a wrapper to set the content of the presenter to the actual row object instead of the whole matrix and then let the presenter do its magic to pass it on to the template. It's a bit ugly and relies on undocumented behaviour, so it might break in the future, but so far (up to .NET 4.0) it still works.

        public class DataMatrixCellTemplateSelectorWrapper : DataTemplateSelector
        {
            private readonly DataTemplateSelector _ActualSelector;
            private readonly string _ColumnName;
            private Dictionary<string, object> _OriginalRow;

            public DataMatrixCellTemplateSelectorWrapper(DataTemplateSelector actualSelector, string columnName)
            {
                _ActualSelector = actualSelector;
                _ColumnName = columnName;
            }

            public override DataTemplate SelectTemplate(object item, DependencyObject container)
            {
                // remember old data context
                if (item is Dictionary<string, object>)
                {
                    _OriginalRow = item as Dictionary<string, object>;
                }

                if (_OriginalRow == null)
                    return null;

                // get the actual cell object
                var obj = _OriginalRow[_ColumnName];

                // select the template based on the cell object
                var template = _ActualSelector.SelectTemplate(obj, container);

                // find the presenter and change the content to the cell object so that it will become
                // the data context of the template
                // Utils.GetFirstParentForChild<T> is a helper (not shown here) that walks up
                // the visual tree to the first parent of the given type
                var presenter = Utils.GetFirstParentForChild<ContentPresenter>(container);
                if (presenter != null)
                {
                    presenter.Content = obj;
                }

                return template;
            }
        }

The only bit missing is to add a CellTemplateSelector to the list view and we are done.

    public class ExtendedListView : ListView
    {
        static ExtendedListView()
        {
            ViewProperty.OverrideMetadata(typeof(ExtendedListView), new PropertyMetadata(new PropertyChangedCallback(OnViewPropertyChanged)));
        }

        private static void OnViewPropertyChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            UpdateGridView(d as ExtendedListView, (IDataMatrix)d.GetValue(MatrixSourceProperty));
        }

        public static readonly DependencyProperty MatrixSourceProperty =
            DependencyProperty.Register("MatrixSource",
                                                typeof(IDataMatrix), typeof(ExtendedListView),
                                                new FrameworkPropertyMetadata(null,
                                                                              new PropertyChangedCallback(
                                                                                  OnMatrixSourceChanged)));

        public IDataMatrix MatrixSource
        {
            get { return (IDataMatrix)GetValue(MatrixSourceProperty); }
            set { SetValue(MatrixSourceProperty, value); }
        }

        private static void OnMatrixSourceChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            var listView = d as ExtendedListView;
            var dataMatrix = e.NewValue as IDataMatrix;

            UpdateGridView(listView, dataMatrix);
        }

        public static readonly DependencyProperty CellTemplateSelectorProperty =
           DependencyProperty.Register("CellTemplateSelector",
                                               typeof(DataTemplateSelector), typeof(ExtendedListView),
                                               new FrameworkPropertyMetadata(null,
                                                                             new PropertyChangedCallback(
                                                                                 OnCellTemplateSelectorChanged)));

        public DataTemplateSelector CellTemplateSelector
        {
            get { return (DataTemplateSelector)GetValue(CellTemplateSelectorProperty); }
            set { SetValue(CellTemplateSelectorProperty, value); }
        }

        private static void OnCellTemplateSelectorChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            var listView = d as ExtendedListView;
            if (listView != null)
            {
                UpdateGridView(listView, listView.MatrixSource);
            }
        }

        private static void UpdateGridView(ExtendedListView listView, IDataMatrix dataMatrix)
        {
            if (listView == null || listView.View == null || !(listView.View is GridView) || dataMatrix == null)
                return;

            listView.ItemsSource = dataMatrix;
            var gridView = listView.View as GridView;
            gridView.Columns.Clear();
            foreach (var col in dataMatrix.Columns)
            {
                var column = new GridViewColumn
                {
                    Header = col.Name,
                    HeaderTemplate = gridView.ColumnHeaderTemplate
                };
                if (listView.CellTemplateSelector != null)
                {
                    column.CellTemplateSelector = new DataMatrixCellTemplateSelectorWrapper(listView.CellTemplateSelector, col.Name);
                }
                else
                {
                    column.DisplayMemberBinding = new Binding(string.Format("[{0}]", col.Name));
                }
                gridView.Columns.Add(column);
            }
        }
    }

The code is by no means perfect - there are always things which could be improved:

  • Encapsulate the DataMatrix better and give it a nicer interface
  • Instead of having to use a CellTemplateSelector it would be nice if the templates could be selected by DataType
  • Make the matrix columns observable and react to dynamic changes
  • Have a bit more intelligent update mechanism than rebinding the whole matrix

You can download the whole solution here: DataGridListView.zip

Value types and null in C#

Recently I came across one of the edge cases in C#. I needed a method to combine a list of items into a string, so I wrote an extension method for that:

public static string StringJoin<T>(this IEnumerable<T> list, string separator, Func<T, string> converter)
{
    return string.Join(separator, list.Select(converter).ToArray());
}

Pretty simple: it takes a separator and a delegate to convert objects of type T into strings. The next thing was to add a convenience method using ToString as the default converter.

public static string StringJoin<T>(this IEnumerable<T> list, string separator)
{
    return list.StringJoin(separator, x => x != null ? x.ToString() : null);
}

The null check is there to make sure I don't get unexpected null reference exceptions. However, ReSharper gave me a warning about a possible comparison of a value type with null. Furthermore, it offered the suggestion to replace the check with default(T), and indeed, when I let it do its magic, it converted the code to

public static string StringJoin<T>(this IEnumerable<T> list, string separator)
{
    return list.StringJoin(separator, x => x != default(T)? x.ToString() : null);
}

which is wrong - it won't compile.

So I got curious why the ReSharper guys went through the motions to not only warn about it (well - it's a compiler warning, so I guess it's fair enough) but to add a suggestion which leads to code that doesn't compile. I asked my favourite search engine about it and came across a few articles about that "problem", all basically stating that's how it is. And someone came up with a workaround here: http://devnet.jetbrains.net/thread/293148?tstart=0
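
For completeness, a comparison against default(T) that does compile for an unconstrained T looks roughly like this (not necessarily the exact code from the linked thread, but the same idea):

// Works for any T, at the cost of going through EqualityComparer<T>
static bool IsDefault<T>(T value)
{
    return EqualityComparer<T>.Default.Equals(value, default(T));
}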

After writing a small benchmark it turned out that in release mode the workaround was about 5 times slower than the simple comparison.

What do we learn from this?

  1. Don't optimize prematurely.
  2. Don't try to fix what is not broken.
  3. Don't try to outsmart the compiler - it might cost you in optimization potential.