Resetting NTFS ownership and attributes after a Windows reinstallation

Let's say you had to reinstall Windows 7, because Microsoft screwed up so badly with its automatic update installer that it was the only option left. Now, you performed a semi-clean install, in that Windows installed a brand new copy, but moved the previous installation's system directory into C:\Windows.old.

The usual problem, if you're using multiple NTFS drives or partitions, is that you may have files on these additional partitions that are owned by your previous account, which now has a completely different SID than your new account. This means you'll find you have all the trouble in the world getting full access to files you rightfully own.

The solution?

In an elevated prompt, go to the additional drive and issue:
takeown /F * /R
icacls * /grant <your_user_name>:F /T

This will take a while, but it should reset ownerships and all these other pesky attributes that are a major annoyance to GETTING ANY WORK DONE!

Note that you can also try the following beforehand, if you want to reset all the access rights:
icacls * /T /Q /C /RESET

Securely erasing a drive in Linux

Now ain't that useful. From time to time you have to part with an old disk, but of course, you'd rather make sure it is properly erased of all its data before handing it off.

Well, what do you know, since 2001, nearly every HDD under the sun comes with a Secure Erase feature, as it is part of the ATA standard.

The even better news is that hdparm fully supports it (is there anything hdparm can't do?). Thus, if you're on Linux and you need to securely erase all the data from a drive, say /dev/sdb, all you need to do is:
# hdparm --user-master u --security-set-pass p /dev/sdb

 Issuing SECURITY_SET_PASS command, password="p", user=user, mode=high

# hdparm --user-master u --security-erase p /dev/sdb

 Issuing SECURITY_ERASE command, password="p", user=user
After a while, you should find that your drive has been securely erased. Neat!

VERY IMPORTANT NOTE: If you want to reuse the drive after the secure erase is complete, you MUST issue the following command to remove the lock.
# hdparm --security-disable p /dev/sdb

 Issuing SECURITY_DISABLE command, password="p", user=user
This is because, if you don't disable security, the drive will be kept locked, which will produce ATA/SATA interface errors and prevent any write access!

Note that if you want to find out whether the security erase/enhanced erase feature is supported at all, as well as how long that erasing is going to take, you probably want to issue the following beforehand:
# hdparm -I /dev/sdb


ATA device, with non-removable media
        Model Number:       SAMSUNG HD322GJ
        Serial Number:      XXXXXXXXXXXXXX
        Firmware Revision:  XXXXXXXX
        Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6
        Master password revision code = 65534
        not     enabled
        not     locked
        not     frozen
        not     expired: security count
                supported: enhanced erase
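The security state lines can be pulled out of that output programmatically; a minimal sketch, assuming the device is /dev/sdb as above:

```shell
# Sketch: extract the security state flags (enabled/locked/frozen) from
# `hdparm -I` output. The device path below is an assumption; adjust it.
DISK=/dev/sdb

security_state() {
    # Reads `hdparm -I` output on stdin, prints the state flags one per line.
    grep -E '^[[:space:]]+(not[[:space:]]+)?(enabled|locked|frozen)' \
        | sed 's/^[[:space:]]*//'
}

# Only query the drive if it actually exists.
if [ -b "$DISK" ]; then
    hdparm -I "$DISK" | security_state
fi
```

If the drive reports frozen, the SECURITY_SET_PASS command will fail; suspending and resuming the machine, or hot-replugging the drive, usually clears the frozen state.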


Setting passwords in /etc/shadow

If you ever need to edit /etc/shadow to add an MD5 password manually (yes, this can happen for very legitimate reasons):
# openssl passwd -1 -salt abcd1234
Password: hunter1
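If you need the hash non-interactively (e.g. from a script), the password can also be passed as an argument. A sketch; the salt and password mirror the example above, and someuser in the commented line is a placeholder (also beware that the password ends up in your shell history this way):

```shell
# Sketch: generate an MD5-crypt hash non-interactively with openssl.
HASH=$(openssl passwd -1 -salt abcd1234 hunter1)
echo "$HASH"

# The hash goes into the second field of the user's /etc/shadow line,
# or can be applied directly with:
# usermod -p "$HASH" someuser
```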


(Re)installing OpenWRT on a WRT54G

Every once in a while, I find that I want to upgrade my WRT54G in a clean fashion. And every once in a while, it's a massive struggle to make it behave as I want to, as I find that the defaults of OpenWRT are very restrictive.

First of all, here's the network configuration I want to use OpenWRT in:
  • Internet gateway + firewall
  • WRT54G as a wireless + wired router
  • LAN SOHO server (DHCP + DNS, Samba, etc)
As stated, I want the WRT acting as a mere router, where any wireless or wired connection has complete and transparent access to the LAN, with no firewalling (the internet gateway does it) and no DHCPing. Should be simple, but unless you've done it before, it's usually a PITA to configure.

OK, so first let's start by installing/resetting our firmware.
At the time of this post, the latest version of OpenWRT is Backfire 10.03.1. With a WRT54G, the download directory you're interested in is brcm47xx/. Despite the confusing content of that directory and its numerous files, the only one you are interested in is openwrt-brcm47xx-squashfs.trx (the .trx). The .bin files are only there for users of the original Linksys firmware. The admin console should let you go through the upgrade nicely; otherwise you'll find various upgrade/install tutorials in the OpenWRT HOWTOs section.

Now, assuming you have the latest firmware installed, you may also want to reset the settings to their defaults. There are multiple ways to do just that, as indicated in the OpenWRT failsafe guide. Since I tend to use the serial connection to ensure that I can access the WRT no matter what, my preferred way is simply to enter failsafe mode through the serial console with f+Enter when prompted, and then issue:
reboot -f

We will now assume that the router has been reset to its initial boot parameters. In this configuration, the default address is so you'll probably want to configure a network interface with a static address of and connect it to one of the 4 Ethernet ports of the router (but not the 5th "internet" port, as this one is firewalled by default and you won't be able to access the console from it).

OK, with the web interface accessible at, we'll do the following:
  1. In Network → Firewall, delete the LAN and WAN firewall zones and set all the defaults in general settings to "accept". Click save and apply.
  2. In Network → Static Routes add a route with the following parameters:
    • Interface: lan
    • Target:
    • IPv4-Netmask:
    • IPv4-Gateway:
    • Click save and apply
  3. In Network → Switch:
    • Delete VLAN #1
    • Mark all ports of VLAN #0 as untagged
    • Click save and apply
  4. In Network → Interfaces:
    • Delete the WAN network
    • Edit the LAN network and in General Setup, make sure the Protocol is set to "Static address" and change it to
    • Add as a gateway and as custom DNS
    • Also make sure to check the "Disable DHCP for this interface" option
    • Save (but don't apply)
  5. In Network → Interfaces → Physical Settings:
    • Add "VLAN Interface: "eth0.1""
    • Make sure "creates bridge" is selected and enable STP if desired
    • Click save and apply.
    After a while, you should be able to reconnect to the router using
From that stage you should have full access to the network and you should be able to configure the other options such as WLAN and additional packages.

You can also fine tune your network config by editing /etc/config/network. Don't forget to issue
/etc/init.d/network reload
when you're done.
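For reference, a "dumb AP" style LAN section in /etc/config/network looks roughly like the sketch below. Every address is a made-up example (this post's actual addresses are not reproduced here); substitute your own:

```
config interface 'lan'
    option type     'bridge'
    option proto    'static'
    option ipaddr   ''
    option netmask  ''
    option gateway  ''
    option dns      ''
```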

Finally, you may want to note that the power supply that Linksys provides with the WRT54G sure is a piece of crap (at least the early ones - I can only hope they have improved on that): even when disconnected and therefore not supplying any power to the router, the PSU consumes 3 Watts (!), or about half of what the device actually uses when active. Talk about wasting watts for nothing...


Help, my RAID array does not complete synchronization!

Let us suppose the following situation: you have a Linux server with a software RAID1 array (md) and, for one reason or another (mostly because you are a lazy admin, admit it!), both disks are reporting unreadable sectors, either through SMART or through actual failed readout attempts.

So you installed a third, good disk, set it as a spare, then failed one of the two bad ones to initiate synchronisation onto the good new disk. However, all hell breaks loose as you find out your synchronisation doesn't complete (/proc/mdstat reports U_ or _U): instead of ignoring the unreadable sectors as it should, md decides that it cannot continue.

Worse, if you look at your dmesg, you find out that it is being polluted by a continuous stream of:
RAID1 conf printout:
--- wd:1 rd:2
disk 0, wo:0, o:1, dev:sda1
disk 1, wo:1, o:1, dev:sdb1

OK, first of all, since this information is quite hard to find, especially if you are in a hurry, here are what the abbreviations above mean:
  • wd: working disks
  • rd: raid disks
  • wo: write-only (if set to 1, this usually indicates a problem, and that data duplication does not occur for this device)
  • o: online
Obviously, wd:1, as well as wo:1 for the second disk, is not something we want to see. Why can't our good spare disk be added as R/W to the gorram array? Heck, if the problematic disk, which single-handedly contains our up-to-date data now, fails, we will be in big trouble. What's the point of providing redundancy, really, if md fails to synchronize as soon as there's one measly sector it cannot read?

It's a bird! It's a plane! No, it's hdparm!

Well, the sad truth of md on Linux (which may have improved with newer versions) is that it isn't resilient at all when it comes to unreadable sectors during sync. I guess the developers decided that, since the point of redundancy is to always have at least one good set of data, they didn't need to focus on situations where the "good" set of data may also have some corruption, and therefore never planned for anything but trying to re-read an unreadable sector forever, until the disk magically repairs itself (right... fat chance!).

Now (and for the rest of this post I will mostly be following the excellent information provided by Bas on his blog) to compensate for that oversight, the trick is to have md read the problematic sectors one way or another, so that the synchronisation can complete. May sound easier said than done but most of the time it shouldn't be an issue, as recent disks with SMART are engineered with a set of spare sectors, to be allocated in replacement of unreadable or unwritable ones for exactly this kind of situation. The issue however is that reallocation of sectors only occurs on write access.

What this means then is that, while the disk has the technology to "fix" itself, as long as you are only attempting to read the problematic sectors, reallocation will not be triggered and you will continue to get read errors. Thus, you must manually issue a write to the problematic sector(s) to trigger the "recovery" mechanism (NB: I'm using "fix" and "recovery" loosely, as you of course cannot recover the data from these sectors if they are reallocated, and will therefore end up with some corrupted data).

This can be confirmed by checking the Offline_Uncorrectable (#198) and Reallocated_Sector_Ct (#5) reports from SMART:
# smartctl -A /dev/sda
smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   100   100   051    Pre-fail  Always       -       105
  2 Throughput_Performance  0x0026   054   054   000    Old_age   Always       -       2759
  3 Spin_Up_Time            0x0023   084   084   025    Pre-fail  Always       -       4989
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       10
  5 Reallocated_Sector_Ct   0x0033   252   252   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   252   252   051    Old_age   Always       -       0
  8 Seek_Time_Performance   0x0024   252   252   015    Old_age   Offline      -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       11496
 10 Spin_Retry_Count        0x0032   252   252   051    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   252   252   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       10
191 G-Sense_Error_Rate      0x0022   252   252   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0022   252   252   000    Old_age   Always       -       0
194 Temperature_Celsius     0x0002   064   060   000    Old_age   Always       -       32 (Lifetime Min/Max 20/40)
195 Hardware_ECC_Recovered  0x003a   100   100   000    Old_age   Always       -       0
196 Reallocated_Event_Count 0x0032   252   252   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   252   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   252   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0036   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x002a   100   100   000    Old_age   Always       -       2
223 Load_Retry_Count        0x0032   252   252   000    Old_age   Always       -       0
225 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       10
If you see a zero at the end of these attributes but the disk still reports that it has trouble reading sectors, it indicates that the sector reallocation process hasn't kicked in yet, and needs to be triggered manually.

The first order of the day then is to find the address of the sector(s) we should trigger a write to. This is fairly easy, as all you need to do is run a SMART test, with something like smartctl -t long /dev/sda and write down the first sector address where a read error is reported:
# smartctl -a /dev/sda
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       60%     10864         293039329
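That first-error LBA can also be extracted programmatically from the self-test log; a minimal sketch, relying on the log line format shown above (the device path is an assumption):

```shell
# Sketch: extract the first failing LBA from the smartctl self-test log.
# Assumes the "Completed: read failure ... LBA" line format shown above.
first_bad_lba() {
    smartctl -a "$1" | awk '/read failure/ { print $NF; exit }'
}

DISK=/dev/sda   # assumption; adjust to the disk under test
if [ -b "$DISK" ]; then
    first_bad_lba "$DISK"
fi
```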
Once we have that address, we could of course use dd, but an even simpler approach is to use a recent version of hdparm, as it adds easy support for reading/writing a single sector.

First thing to try with hdparm then, is confirm that we have a problem accessing that sector:
# hdparm --read-sector 293039329 /dev/sda

/dev/sda: Input/Output error
This confirms what the SMART test reported. You can try a few more read attempts to validate that the sector is busted, and then issue a write so that the disk finally realizes it should reallocate that sector. Note that, because the operation obviously means destroying existing data, hdparm requires you to add a --yes-i-know-what-i-am-doing flag to issue the write, hence:
# hdparm --yes-i-know-what-i-am-doing --write-sector 293039329 /dev/sda

/dev/sda: re-writing sector 293039329: succeeded
You can then issue a read again, which will confirm that the sector has been reallocated:
# hdparm --read-sector 293039329 /dev/sda

reading sector 293039329: succeeded
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
If you issue smartctl -A again, you should also see that the sector has been reallocated:
198 Offline_Uncorrectable   0x0030   100   100   000    Old_age   Offline      -        1
It's usually a good idea to use hdparm to read adjacent sectors as well, and correct them as needed, then repeat the operations above until the SMART self test completes without error and you have smoked out all the problematic sectors. At this stage, if you issue a resync of the array with the new disk, it should complete successfully and redundancy will be restored. Time to order another replacement and check your data for corruption. But at least, you are redundant again.
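Probing the adjacent sectors one by one gets tedious quickly; here's a minimal sketch that scans a window around a known-bad LBA and lists the unreadable ones (the device path, LBA and window size are assumptions to adjust):

```shell
# Sketch: probe sectors around a known-bad LBA and report unreadable ones.
# DISK, BAD and RANGE are assumptions; adjust them to your situation.
probe_range() {    # probe_range <device> <lba> <half-window>
    dev=$1; bad=$2; range=$3
    for s in $(seq $((bad - range)) $((bad + range))); do
        # --read-sector exits non-zero when the sector cannot be read
        hdparm --read-sector "$s" "$dev" >/dev/null 2>&1 \
            || echo "unreadable: $s"
    done
}

DISK=/dev/sda; BAD=293039329; RANGE=64
if [ -b "$DISK" ]; then
    probe_range "$DISK" "$BAD" "$RANGE"
fi
```

Each sector the loop flags can then be rewritten with --write-sector as described earlier.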

  • To get details of your md array, you can use mdadm --detail. Eg.
    # mdadm --detail /dev/md2
            Version : 0.90
      Creation Time : Tue May  6 18:43:16 2008
         Raid Level : raid1
         Array Size : 130030016 (124.01 GiB 133.15 GB)
      Used Dev Size : 130030016 (124.01 GiB 133.15 GB)
       Raid Devices : 2
      Total Devices : 3
    Preferred Minor : 2
        Persistence : Superblock is persistent
        Update Time : Tue Jan 10 13:42:29 2012
              State : clean
     Active Devices : 2
    Working Devices : 3
     Failed Devices : 0
      Spare Devices : 1
               UUID : 0be47c81:ede086ae:0c460403:d81de298
             Events : 0.3658859
        Number   Major   Minor   RaidDevice State
           0       8        3        0      active sync   /dev/sda3
           1       8       19        1      active sync   /dev/sdb3
           2       8       35        -      spare   /dev/sdc3
  • You are strongly encouraged to check your syslog or messages for reports of I/O issues, especially if you want to locate the data that may have been affected.
  • This method is not guaranteed to work! Sometimes a SMART test will report a read error but a readout of the sector using hdparm will work fine, so you won't be able to get the disk to reallocate it. However, this shouldn't matter too much for the md resync, which is what we are interested in here.
  • If your disk has a lot of unreadable sectors, it is possible that you may run out of spare sectors for reallocation. It's hard to say how many spare sectors are made available by hard drive manufacturers, but I assume it isn't that many.
  • You may have a problem recompiling a recent version of hdparm on some older Linux systems:
    fallocate.c: In function ‘do_fallocate_syscall’:
    fallocate.c:39: error: ‘__NR_fallocate’ undeclared (first use in this function)
    fallocate.c:39: error: (Each undeclared identifier is reported only once
    fallocate.c:39: error: for each function it appears in.)
    make: *** [fallocate.o] Error 1
    If that is the case, just add:
    #define __NR_fallocate 285
    in fallocate.c
  • Some disks seem to be smart enough (no pun intended) to do further correction, once they have registered Offline_Uncorrectable sectors, so you may actually find out that, after a few hours, the value of Offline_Uncorrectable falls back to zero, and still the sectors can be read or written with extended SMART tests not reporting any issue. Pretty neat, but I still wouldn't entirely trust the disk...


Using LILO to boot disks by UUID

If you're plugging USB drives in and out and using LILO to boot a Linux distro (eg. Slackware), you may have ended up with a kernel panic because your /dev/sd# were shuffled around and the kernel was no longer able to find its root partition on the expected device. Of course, having Linux fail to boot just because you happened to plug in an extra drive sucks big time, so we want to fix that.

The well-known solution of course is to use UUIDs or labels, since these are fixed. However, while recent versions of LILO are supposed to support root partitions identified by UUID/label, in practice this doesn't work UNLESS you are using an initrd. I'm not sure which of LILO or the kernel is responsible for this new layer of "suck" (I'd assume the kernel, since the expectation is that LILO uses the dev mappings fed to it by the kernel), but I can only say that there really are some areas of Linux that could still benefit from long-awaited improvements...

Thus, to be able to use UUIDs or labels for your root partition in LILO, you must boot using an initrd. Worse, as previously documented, you will most likely need to compile a new kernel that embeds the initrd, unless you want to run into the following issue while running LILO:
Warning: The initial RAM disk is too big to fit between
the kernel and the 15M-16M memory hole.

In practice (as also illustrated by this post), this means you will need to:
  1. Create an initrd cpio image that can be embedded into a kernel with:
    cd /boot
    mkinitrd -c
    cd initrd-tree
    find . | cpio -H newc -o > ../initrd.cpio
  2. Recompile a kernel, making sure that you have General Setup → Initial RAM filesystem and RAM disk (initramfs/initrd) support selected, and then set General Setup → Initramfs source file(s) to /boot/initrd.cpio

  3. Edit your /etc/lilo.conf and add an append = "root=UUID=<YOUR-DISK-GUID>" to your Linux boot entry. An example of a working lilo.conf is provided below. Note that you probably also want to use a fixed ID for boot=, so that running LILO is also not dependent on the current /dev/sd# organization.

  4. Run LILO, plug drives around and watch in amazement as your system still boots the Linux partition regardless of how the drives are assigned
Example lilo.conf:
# Start LILO global section
boot = /dev/disk/by-id/ata-ST3320620AS_ABCD1234
# LILO doesn't like same volume IDs of RAID 1
disk = /dev/sdb
  inaccessible
default = Windows
bitmap = /boot/slack.bmp
bmp-colors = 255,0,255,0,255,0
bmp-table = 60,6,1,16
bmp-timer = 65,27,0,255
# Append any additional kernel parameters:
append=" vt.default_utf8=1"
timeout = 35
# End LILO global section

image = /boot/vmlinuz
  append = "root=UUID=2cc11aaf-f838-4474-9d9a-f3881569f97c"
  label = Linux
image = /boot/vmlinuz.rescue
  append = "root=UUID=2cc11aaf-f838-4474-9d9a-f3881569f97c"
  label = Rescue
other = /dev/sda
  # Windows doesn't go to S3 sleep and has issues with backup,
  # unless it sees its disk as first in BIOS...
  boot-as = 0x80
  label = Windows
other = /dev/disk/by-id/ata-ST3320620AS_ABCD1234-part4
  label = OSX
Oh, and of course, don't forget to edit your /etc/fstab as required, if you still use /dev/sdX# entries there.
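To find the UUID to put in lilo.conf and fstab, blkid does the job; a minimal sketch (the partition path is an assumption, and the fstab line reuses the example UUID from the lilo.conf above):

```shell
# Sketch: print just the UUID of a partition, for use in lilo.conf/fstab.
get_uuid() { blkid -o value -s UUID "$1"; }

PART=/dev/sda3   # assumption; adjust to your root partition
if [ -b "$PART" ]; then
    get_uuid "$PART"
fi

# A matching fstab entry would then look like:
# UUID=2cc11aaf-f838-4474-9d9a-f3881569f97c  /  ext4  defaults  1 1
```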


Enabling serial console on Linux Slackware

I'm doing this frequently enough to warrant a post.
  1. Make sure you use a kernel with serial console support enabled (the 8250/16550 serial driver and serial console support)

  2. Confirm that your serial tty's are detected with dmesg | grep tty:
    [    0.000000] console [tty0] enabled
    [ 1.323949] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
    [ 1.568561] serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
    [ 1.592267] 00:0c: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
    [ 1.614293] 00:10: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A

  3. Edit /etc/securetty and uncomment the lines matching the ttyS entries you got from dmesg, else you won't be able to log on as root.

  4. Edit your /etc/inittab and uncomment the lines:
    s1:12345:respawn:/sbin/agetty -L ttyS0 9600 vt100
    s2:12345:respawn:/sbin/agetty -L ttyS1 9600 vt100
    While you're at it, you probably want to change the 9600 vt100 to 115200 linux, if you use putty in serial mode to connect for instance.

  5. If you want the boot messages on serial as well (recommended!), make sure you append a console= parameter to your kernel command line. For instance, if using LILO at 115200 baud, you would add the line:
    append = "console=ttyS0,115200"


Extracting a single DTS/AC3 channel from an MKV file as a PCM WAV

Well, since audio and video software editors are still not there yet, a quick recipe.

Say you have an H.264/DTS MKV video file and you want to extract the center channel, as WAV, for easy editing.
  1. Download tsMuxeR and extract the audio stream as a .dts (eg. "multichannel.dts"), using the GUI
  2. Download eac3to and run the following command to convert to a multichannel PCM WAV: eac3to.exe multichannel.dts multichannel.wav
  3. Download wavosaur and open the multichannel WAV. Then kill the channels you don't need, edit the file, etc.
None of the programs above use an installer - they can be extracted and run directly.


Slackware 13.37 and minicom

Doesn't work by default. Wanna know why? /etc/minirc.dfl is missing a line feed. Yes, really: all you need to do is add an extra blank line there.


Installing OSX (Snow Leopard) + Linux (Slackware 13.37 x86_64) on a GUID/GPT disk, with Software RAID enabled (ICH8R) - Part 5

Focusing on Linux + LILO setup, to (hopefully) conclude this series.

At this stage, you are supposed to have a standalone GPT/GUID disk, with a bootable OSX (Chameleon, Chimera), as well as free space for a Linux installation.

More disk partitioning

First order of the day is to pick up the latest Slackware distro (13.37 at the time of this post) and fire it up. Earlier versions of Slackware cannot handle GPT disks, and I don't believe they include gdisk either, so make sure you pick the very latest.

Now, since we seeded the partitioning from OSX and left some free space, when firing up gdisk (using sdc here as sda+sdb are used by Windows 7 in ICH8R RAID1) you'll be greeted with the following:
# gdisk /dev/sdc
GPT fdisk (gdisk) version 0.6.14

Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): p
Disk /dev/sdc: 234441648 sectors, 111.8 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 592CD063-7F0A-4F52-81EC-A58C5D3F859C
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 234441614
Partitions will be aligned on 8-sector boundaries
Total free space is 175418693 sectors (83.6 GiB)

Number  Start (sector)  End (sector)  Size       Code  Name
     1              40        409639  200.0 MiB  EF00  EFI System Partition
     2          409640      59022927  27.9 GiB   AF00  Snow Leopard

Command (? for help): n
Partition number (3-128, default 3):
First sector (34-234441614, default = 59022928) or {+-}size{KMGTP}:
Last sector (59022928-234441614, default = 59022928) or {+-}size{KMGTP}: +30G
Current type is 'Linux/Windows data'
Hex code or GUID (L to show codes, Enter = 0700):
Changed type of partition to 'Linux/Windows data'

Command (? for help): p
Disk /dev/sdc: 234441648 sectors, 111.8 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 592CD063-7F0A-4F52-81EC-A58C5D3F859C
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 234441614
Partitions will be aligned on 8-sector boundaries
Total free space is 112504133 sectors (53.6 GiB)

Number  Start (sector)  End (sector)  Size       Code  Name
     1              40        409639  200.0 MiB  EF00  EFI System Partition
     2          409640      59022927  27.9 GiB   AF00  Snow Leopard
     3        59022928     121937487  30.0 GiB   0700  Linux/Windows data

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING PARTITIONS!!

Do you want to proceed, possibly destroying your data? (Y/N): y
OK; writing new GUID partition table (GPT).
The operation has completed successfully.
  • Note 1: You'll notice that there's a 200 MB EFI System Partition. If you want OSX to be happy, even if the OSX bootloader shouldn't need it, you'd better leave it alone.

  • Note 2: You'll have noticed that when firing up gdisk, it mentioned something about a "protective MBR". This is because GPT and MBR can coexist on the same disk, and an MBR-only aware utility could therefore potentially destroy data. We'll put the MBR+GPT coexistence to good use later in this post.

  • Note 3: If you let OSX's Disk Utility create the partition for the Linux system and try to change the type in gdisk, the installer might not see your Linux system partition as a valid target. If that occurs, you should delete the non OSX/EFI System partitions in gdisk (d command) and recreate them.

Slackware and LILO setup

The Slackware install is as straightforward as usual, so I'm not going to comment on it. At the end, just install LILO on the MBR of your standalone OSX/Slackware disk (here /dev/sdc) as you'd normally do. Reboot, and you oughta be able to access your newly installed Linux system.

So how about we add OSX boot to our /etc/lilo.conf and be done with this whole exercise then? The OSX bootloader (Chameleon, etc.) should be able to take over after the LILO handoff. And while we're at it, we'll also add Windows boot from /dev/sda.
Off we go then and add the following at the end of /etc/lilo.conf:
other = /dev/sda
label = Windows
other = /dev/sdc2
label = OSX
And now the fun begins...

LILO troubles
# lilo

Reference: disk "/dev/sdb" (8,16) 0810

LILO wants to assign a new Volume ID to this disk drive. However, changing
the Volume ID of a Windows NT, 2000, or XP boot disk is a fatal Windows error.
This caution does not apply to Windows 95 or 98, or to NT data disks.

Is the above disk an NT boot disk? [Y/n]
Hell to the no! What on earth is going on here? Well, a little googling around tells you that LILO is unhappy because our mirrored disks (/dev/sda and /dev/sdb) bear the same volume ID. Two solutions there:
  • Solution 1: add a section:
    disk = /dev/sdb
    inaccessible
    to lilo.conf to make it ignore the second mirrored disk altogether. In this case LILO will issue the warning Warning: bypassing VolumeID scan of drive flagged INACCESSIBLE: /dev/sdb but proceed.

  • Solution 2 (better): enable dmraid access to your drive. The thing is that the Slackware installer actually saw our mirrored volume alright (it listed our device automatically as /dev/md126*) but when booting our newly installed Slackware, the dm volume was no longer there. In case you wonder, this is because the Slackware installer rc.S script issues a /sbin/mdadm -A -s after fuse has been launched, which enables autodetection of mirrored drives.
    Therefore, if you just run that same command, the RAID 1 array will be detected and LILO won't complain about Volume IDs.
    # /sbin/mdadm -A -s
    mdadm: Container /dev/md/imsm0 has been assembled with 2 drives
    mdadm: Started /dev/md/WindowsRAID1_0 with 2 devices
    Probably a good idea to add that line in your system's rc.S script as well.

Now, with the Windows RAID issue being sorted, let's try again, shall we:
# lilo
Added Linux *
Added Windows
Warning: Device 0x0820: Inconsistent partition table, 2nd entry
CHS address in PT: 0:0:0 --> LBA (-1)
LBA address in PT: 0 --> CHS (0:0:1)
Fatal: Either FIX-TABLE or IGNORE-TABLE must be specified
If not sure, first try IGNORE-TABLE (-P ignore)
Son of a "£$%^&!! What's going on there?

Booting GPT partitions using MBR tools

I'll cut to the chase: our actual problem here is that LILO (like GRUB) is an MBR bootloader, not a GPT one. It therefore uses the MBR partition table, and if you look at that table, you'll see that as far as MBR is concerned, /dev/sdc2 is nowhere to be found:
# fdisk -l /dev/sdc

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdc: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x41b82465

Device Boot Start End Blocks Id System
/dev/sdc1 1 234441647 117220823+ ee GPT
Wait a minute, if that's the case, how come LILO was able to boot our Linux GPT partition? Isn't that one GPT too?
Aha! Well, the trick here is that, when booting the system, LILO doesn't need to know how the partition or filesystem is set up. Since the kernel file is always available to the Linux system you want to run LILO from (duh!), LILO can point to the actual kernel image sectors directly, so it actually bypasses the whole partition table (as well as the filesystem) and just provides a list of raw blocks to read. Haven't you learned something interesting today? And that is why even a non-GPT-aware bootloader like LILO is able to boot a kernel residing on a GPT Linux system partition. But of course, as soon as you leave the system partition, LILO needs an MBR reference to know where the start of the target partition is located, which it cannot find.

gptsync to the rescue!

At first glance it looks like we may have to abandon LILO. As far as I know, GRUB is unlikely to perform any better (it's also an MBR-based bootloader). There do exist projects targeted at GPT, such as eLILO, but these may require an EFI platform, which would render them useless for our usage scenario.

Now, remember what I said about protective MBR tables for GPT disks and how both MBR and GPT could coexist? How about we create an MBR table that contains the same partition information as our GPT one, then? Well, that's exactly what rEFIt's gptsync tool is all about. It takes a GPT partition table and creates an exact MBR clone of it, so that MBR-aware tools get the relevant information about the partitions.
Of course, this can only work if the GPT partitioning doesn't deviate too much from MBR compatibility requirements, which in most cases (i.e. if your HDD is smaller than 2 TiB, the hard addressing limit for MBR) shouldn't be a problem.
At this stage you have two options: pick up the binary gptsync executable for OSX from the Mac disk image, or recompile your own from source in Linux. If you want a precompiled gptsync for OSX you can find one here (direct mirror), but considering that we're in Linux right now, we'll go the recompilation route. After downloading and extracting the latest rEFIt package, just navigate to the gptsync directory and run make -f Makefile.unix. As these utilities are fairly simple, this should produce gptsync and showpart executables in the same directory. Might be a good idea to see what showpart reports before going further, right?
# ./showpart /dev/sdc

Current GPT partition table:
Warning: Unknown GPT spec revision 0x00010000
Floating point exception
Dammit! Does no single tool work as expected out of the box these days, or is it just that I'm always one step ahead in trying cutting-edge stuff? Long story short, you get this error on 64-bit Linux environments because the tools typedef their 32-bit unsigned integers as unsigned long, and unsigned long is 64 bits wide on x86_64. The easiest way I found to fix this is to include <stdint.h> in gptsync.h and then change the UINT8, UINT16, UINT32 and UINT64 typedefs to use uint8_t, uint16_t, uint32_t and uint64_t. Once this is done:
# ./showpart /dev/sdc

Current GPT partition table:
 #       Start LBA      End LBA   Type
 1              40       409639   EFI System (FAT)
 2          409640     59022927   Mac OS X HFS+
 3        59022928    121937487   Basic Data

Current MBR partition table:
 #  A    Start LBA      End LBA   Type
 1               1    234441647   ee  EFI Protective

MBR contents:
Boot Code: LILO

Partition at LBA 40:
Boot Code: None (Non-system disk message)
File System: FAT32
Listed in GPT as partition 1, type EFI System (FAT)

Partition at LBA 409640:
Boot Code: None
File System: Unknown
Listed in GPT as partition 2, type Mac OS X HFS+

Partition at LBA 59022928:
Boot Code: None
File System: ext4
Listed in GPT as partition 3, type Basic Data
Much better! Off we go then:
# ./gptsync /dev/sdc

Current GPT partition table:
 #       Start LBA      End LBA   Type
 1              40       409639   EFI System (FAT)
 2          409640     59022927   Mac OS X HFS+
 3        59022928    121937487   Basic Data

Current MBR partition table:
 #  A    Start LBA      End LBA   Type
 1               1    234441647   ee  EFI Protective

Status: MBR table must be updated.

Proposed new MBR partition table:
 #  A    Start LBA      End LBA   Type
 1               1       409639   ee  EFI Protective
 2          409640     59022927   af  Mac OS X HFS+
 3  *     59022928    121937487   83  Linux

May I update the MBR as printed above? [y/N] y

Writing new MBR...
MBR updated successfully!

Final LILO tuning

Alrighty. Time for a new run at LILO:
# lilo
Added Linux *
Added Windows
Warning: Device 0x0820: Inconsistent partition table, 2nd entry
CHS address in PT: 1023:254:63 --> LBA (16450559)
LBA address in PT: 409640 --> CHS (25:127:15)
Fatal: Either FIX-TABLE or IGNORE-TABLE must be specified
If not sure, first try IGNORE-TABLE (-P ignore)
Well, actually, this is not entirely unexpected, considering that GPT does away with the MBR requirement of having partitions start and end on a cylinder boundary. So of course, if you don't fine-tune your GPT partitions, you're likely to end up with an MBR partition table that doesn't meet the MBR specs. In this case, however, we can safely ignore the issue by making sure we have the following at the top of our /etc/lilo.conf:
lba32        # LBA addressing should be default anyway
ignore-table # ignore CHS vs LBA conflicts in the MBR
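For context, here is roughly how those two lines might fit into a complete /etc/lilo.conf for this triple-boot setup. The labels match the "Added ..." lines in lilo's output, but the device names are illustrative guesses (your Windows RAID array device and partition numbers will differ), so adjust them to your own layout:

```
lba32        # LBA addressing should be default anyway
ignore-table # ignore CHS vs LBA conflicts in the MBR
boot = /dev/sdc             # the GPT disk we just gptsync'ed
prompt
timeout = 50

image = /boot/vmlinuz       # Linux on the GPT disk
  root = /dev/sdc3
  label = Linux
  read-only

other = /dev/mapper/isw_vol1  # Windows on the ICH8R array (name illustrative)
  label = Windows

other = /dev/sdc2           # OSX HFS+ partition (needs its own boot sector)
  label = OSX
```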
This time:
# lilo
Added Linux *
Added Windows
Warning: Device 0x0820: Inconsistent partition table, 2nd entry
CHS address in PT: 1023:254:63 --> LBA (16450559)
LBA address in PT: 409640 --> CHS (25:127:15)
Warning: The partition table is *NOT* being adjusted.
Added OSX
2 warnings were issued.
Yay, at long last, our bootloader installed!
NB: if you get an "invalid block" error on your OSX partition in LILO, this means that your OSX bootloader (Chameleon, Chimera) has not been properly installed, so you will need to fix that first.

With LILO in place, you should find that you now have multiboot between Linux, Windows and OSX, and you didn't have to forfeit your ICH8R Windows partition in the slightest.

As is obvious from the length of these posts, this whole process took some fairly heavy lifting, but I hope that was worth it!

Installing OSX (Snow Leopard) + Linux (Slackware 13.37 x86_64) on a GUID/GPT disk, with Software RAID enabled (ICH8R) - Part 4

Tuning the OSX installation

Just a placeholder for now, as I still need to figure out a few things. This will encompass removing the need to re-apply the AHCI-for-RAID patch every time the system performs an update, making OSX bootable on its own, installing additional drivers, etc.

None of the above really needs to be performed before the Linux installation, but it looks tidier to keep all the OSX-related posts together... Stay tuned.


Installing OSX (Snow Leopard) + Linux (Slackware 13.37 x86_64) on a GUID/GPT disk, with Software RAID enabled (ICH8R) - Part 3

Alright, on to the actual OSX installation. This assumes that you have sorted out the matter of getting your SATA controllers detected, and have booted your USB installation media with iBoot.

Disk partitioning

From there on, this is the regular OSX installation process. After selecting your language and graciously ignoring the EULA, you should be greeted by a list of all the disks that the installation process is able to handle.
Unless you did something wrong in the previous process, you should have all your disks (though some of those may not appear until you run the disk utility - see next step). If you can't see them here at all, then you either screwed up your kext modification or your kext cache rebuilding. Shame on you!

In our case, we're going to use a new drive for the OSX + Linux installation, so the first order of business is to partition our HDD in GPT/GUID mode, using the OSX disk utility, which we prefer over Linux's gdisk, as OSX has some installation quirks that gdisk alone cannot address. I found that if you use gdisk to modify the GPT partitioning, the OSX installer is likely to complain that it cannot boot from it.

To partition the disk, simply launch "Disk Utility" from the Utilities menu at the top. Then select the destination disk and click the "Partition" button.

Now, if anyone knows how to make the OSX Disk Utility accept user-specified sizes, I'd like to hear from you, as every attempt I made to create a single 40 GB partition for OSX, while leaving the rest of the disk unused, has failed. Gotta wonder what the point of providing a "Size" field is when it is blatantly ignored. The trick I used was to go through the various "n-partitions" options and pick the one that creates a first partition (for OSX) as close as possible to the size I actually want, then go through all the extra partitions and change their Format to "Free Space".

The points you want to pay attention to are:
  1. Label your OSX partition with something descriptive - don't keep "Untitled 1" (though you can change that later)

  2. Select "Mac OS Extended (Case-sensitive, Journaled)" for the Format (it's a UNIX-based OS, it IS case-sensitive dammit!). Now, if you ever plan to modify your OS files from Linux, you may want to select a non-journaled filesystem, as the current Linux HFS driver does not offer write access to journaled volumes.

  3. MAKE SURE "GUID Partition Table" (a.k.a. GPT) is used by clicking the "Options" button. Though you should be able to get away with MBR, the whole point of this exercise is to have a GPT disk to play with, so that's what we're gonna use.

Once your partitioning is set, double-check that you are partitioning the right disk, click Apply, and quit the Disk Utility program. The newly created partition should now appear as a valid destination for the installation process, so just select it and click "Install" (or click Customize first, as it may be a good idea to remove the additional languages or printer support if you have no use for them). You can now go grab a cup of your favorite beverage and come back when the installation process has completed.

Dude, where's my boot?

So the install process completed successfully, the machine rebooted (more or less, as the reboot process can be a bit flaky), and iBoot sees the new OSX installation alright. "VICT..., oh wait, this doesn't boot at ALL! AGHHH!!!".

Of course it doesn't, you silly! To reiterate what I pointed out at the end of the previous post, the OSX installation process installs a clean, unmodified set of kexts, so all our good work on supporting the RAID controller on the USB key has been ignored. For the time being, we'll just fix that as we did before, but instead of patching the USB key, we'll do it on the OSX target drive:
  1. Boot once more into your USB installation media with iBoot

  2. go to Utilities→Terminal

  3. issue the command "mount". It should list your OSX partition, indicate that it has write access, and tell you where it is mounted (should be in /Volumes/Something)

  4. cd /Volumes/<your_osx_partition>/System/Library/Extensions/AppleAHCIPort/Contents

  5. Issue nano Info.plist (you do have nano on your installation key, right? If not, see the first post in this series).

  6. Apply the same change as we did in previous post, namely change the <string>0x01060100&0xffffff00</string> part to <string>0x01060100&0xffffff00 0x01040000&0xffff0000</string> and save the file.

  7. Now, as previously, we need to rebuild the kext cache. The only difference is that unlike the installation media, the cache for Snow Leopard is located at System/Library/Caches/com.apple.kext.caches/Startup, so the command you want to issue is:
    kextcache -v 1 -t -l -a i386 -a x86_64 -m /Volumes/<your_osx_partition>/System/Library/Caches/com.apple.kext.caches/Startup/Extensions.mkext /Volumes/<your_osx_partition>/System/Library/Extensions

  8. Quit the installer and restart the machine

  9. Remove the USB key and boot into iBoot. Then select your OSX partition. This time it should boot alright and finalize the installation process. Willkommen, bienvenue, welcome to Cabaret indeed! But keep that USB installation media handy, as you are likely to need it again very soon.

Installing OSX (Snow Leopard) + Linux (Slackware 13.37 x86_64) on a GUID/GPT disk, with Software RAID enabled (ICH8R) - Part 2

With both the iBoot CD and the OSX installation USB key raring to go (see previous post), it's time to insert them both and reboot, while making sure the CD/DVD drive is selected as your boot device. After a while, the iBoot boot selection screen should pop up and provide you with the option to select "Mac OSX Install DVD" to boot from (which is actually our USB key) to start the installation process. NB: the name may be truncated, but if this is your first OSX install, just make sure you pick the icon with the Apple logo, as there should be only one.

Now, if your SATA controller was set to AHCI, or if you were using a JMicron SATA port, your target hard drive would likely be detected by the installer and you could actually proceed with the installation.
But of course, that wouldn't teach you how to add support for custom hardware on the fly, such as a RAID SATA controller, would it? Where's the fun in that? Plus, if you're reading this guide, you're trying to get an ICH controller in RAID mode supported.

In our case, if we try to iBoot our USB key, the installer won't see our target disk at all, so we need to address that on the USB key. The nice thing about our USB key procedure is that we can add AHCI support on the fly, and the procedure highlighted below should actually work for any SATA RAID controller (ICH9R, nVidia, whatever) that supports AHCI passthrough, not just ICH8R.

What's the big deal with AHCI vs RAID anyway?

As it turns out, not much. The thing is, even in RAID mode, an ICH controller such as ours does AHCI passthrough. So why does the OSX installer see the disk in AHCI mode, but not in RAID mode?
Glad you asked. This is best illustrated by looking at the output from Linux's lspci both in BIOS AHCI mode and in BIOS RAID mode.

For the P5B Deluxe, here is what lspci -nn reports for our SATA ICH8R controller in AHCI mode:
00:1f.2 SATA controller [0106]: Intel Corporation 82801HR/HO/HH (ICH8R/DO/DH) 6 port SATA AHCI Controller [8086:2821] (rev 02)
And here is the same, but in RAID mode:
00:1f.2 RAID bus controller [0104]: Intel Corporation 82801 SATA RAID Controller [8086:2822] (rev 02)
The important thing to notice is that both the class ID and the controller VID:PID have changed, from (0106, 8086:2821) to (0104, 8086:2822) respectively.
As it turns out, the 0x0106 class ID is what the Apple AHCI driver uses to find out if a SATA controller is one it should support through AHCI (well, it actually uses 0x01060100 with a 0xffffff00 mask as we will see shortly), so that's the reason why, as soon as our class ID changes to 0x0104, OSX says "Hey, I don't know how to access this type of controller!" and gives up.

That's all very well, but that doesn't tell me how to get my RAID SATA controller recognized

Patience my friend... The key to achieving anything is understanding how it actually works, which we've done, so now is the time to get to the good part.
As you may already be aware, getting hardware recognized on OSX is done through the addition or modification of kexts. A kext, which stands for Kernel Extension, is pretty much the OSX equivalent of a driver or a Linux module. Therefore, to get our RAID controller recognized by Apple's AHCI driver, we will need to modify the kexts on our USB installation media to trick the installer into recognizing our officially unsupported hardware.

The kext of interest to us is AppleAHCIPort.kext (makes sense), which resides in /System/Library/Extensions/ (often abbreviated as S/L/E on hackintosh forums). More specifically, what we want to edit is the Info.plist file found in /System/Library/Extensions/AppleAHCIPort.kext/Contents.

If you look at this file, you'll see that it contains a section along these lines:
<key>Chipset Name</key>
<string>AHCI Standard Controller</string>
<key>IOPCIClassMatch</key>
<string>0x01060100&0xffffff00</string>
<key>Vendor Name</key>
Well, well, well, is that our 0x0106 generic AHCI class right there?

Great, then we just need to change the <string>0x01060100&0xffffff00</string> part to <string>0x01060100&0xffffff00 0x01040000&0xffff0000</string> to add AHCI passthrough support to any RAID controller (if you do have an actual dedicated non SATA RAID controller on your platform, you probably shouldn't use such a blanket approach, but instead duplicate one of the later sections), save the file, reboot, and be done with it, right?

Rebuilding the kext cache

Not so fast! If you just edit and save the Info.plist on the USB key, your RAID controller still will not be detected.
The reason for that is that kexts are executed at the kernel level, so Apple doesn't simply let you load a modified kext like this. Instead, an additional step is needed, which is the rebuilding of what is known as the kext cache. This cache is the Extensions.mkext file located in your /System/Library/ directory (where the Extensions/ directory also resides).

Note that the procedure I am highlighting below can be done from an existing Snow Leopard OS, rather than on the USB key itself, but if you do so, be mindful that there's a whole business going on with regards to permissions and ownership, which, if you mount your USB key as anybody but root (not a problem when using the OSX from the key), may result in an error message such as:
AppleAHCIPort.kext is not authentic; omitting from mkext.
Authentication Failures:
File owner/permissions are incorrect (must be root:wheel, nonwritable by group/other):
/Volumes/Mac OS X Install DVD/System/Library/Extensions/AppleAHCIPort.kext/Contents/Info.plist
In our case, we'll run everything from the key, and the default execution level is that of the root user, so it should be fairly simple:
  1. Open a terminal window by selecting Utilities→Terminal in the top menu (notice that we haven't had to accept any EULA to do so, so you are free to edit your OSX installation media from the embedded Terminal in any way you like)

  2. Remount the USB key as read/write (by default it is mounted read only) by issuing the command:
    mount -u -o rw /
    Note: Make sure you don't close your terminal session after you do that, as Terminal will not launch if the system is already r/w (which is also the reason why we do it manually). You should be able to check that the USB file system is now mounted r/w by issuing mount

  3. If you haven't done so already, copy the nano text editor (which you should have placed in the root directory of the key) to /usr/bin with the command:
    cp /nano /usr/bin
    Or you can just leave it in root and issue /nano whenever you need to edit a file.

  4. Navigate to the Info.plist file we want to edit with cd /System/Library/Extensions/AppleAHCIPort.kext/Contents/ and run nano Info.plist

  5. Change the <string>0x01060100&0xffffff00</string> line in the <key>GenericAHCI</key> section to <string>0x01060100&0xffffff00 0x01040000&0xffff0000</string>

  6. Save the file (Ctrl-X then 'y' to save it)

  7. navigate to the /System/Library/ directory with cd /System/Library/. This is the directory that contains the Extensions.mkext file

  8. Rebuild the kext cache with:
    kextcache -v 1 -t -l -a i386 -a x86_64 -m Extensions.mkext Extensions
    It should complete without errors (or a benign warning about the JMicronATA.kext) but if there are any issues reported about the AppleAHCIPort.kext, they should be explicitly listed.

  9. Close terminal and leave the installer (Mac OS X Installer→Quit Mac OS X Installer) then select Restart. You must restart the installation process completely (i.e. reboot) for our changes to be applied.

With all the above completed, on the second iBoot/USB installer run, you should now see all hard drives recognized, including the ones connected to a SATA port in RAID mode, with the ability to use them as installation destinations. Neat!

Before shouting "victory" though, you should be aware that:
  1. When OSX installs, it will extract and install its own unmodified AppleAHCIPort kext (from the Essentials.pkg found in the System/Installation/Packages/ directory on the installation media), so what we have done will only work during installation. The installed OSX will not actually be able to see its disk, and we'll have to work around that - bummer!

  2. iBoot offers the option to ignore the cache during bootup (press the down key on the OS you want to boot, and then select "Boot Ignore Caches"), so we probably could have gotten away with simply modifying the Info.plist file and used that iBoot option instead of going through this whole cache rebuilding exercise...

Anyway, in the next installment, we'll go through the actual OSX installation process. Stay tuned...


Installing OSX (Snow Leopard) + Linux (Slackware 13.37 x86_64) on a GUID/GPT disk, with Software RAID enabled (ICH8R) - Part 1

Now that's quite a mouthful of a title. And considering that there is much to discuss, I will break this whole thing into a multipart post, so here is the first part of our series.

Introduction & Requirements

First of all, I gotta start with the disappointing news that, despite what you might anticipate from the title, this is not a guide to setting up motherboard RAID (a.k.a. fakeRAID) support in either OSX or Linux (though the latter is no biggie - see previous posts here for various options). The mention of RAID here is to indicate that we want an OSX and Linux install that coexists nicely with a previous fakeRAID installation, by NOT requiring you to disable RAID settings in the BIOS if they are already enabled.

In practice here, the goal is to add OSX and Linux on a separate non-RAID disk to an existing Windows 7 installation that was set to use fakeRAID (RAID 1 from ICH8R), without touching the BIOS settings.

Basically, what happened is that I recently upgraded my good old Asus P5B Deluxe based rig to dual Samsung F3 1TB drives, installed Windows 7 on top of the ICH8R RAID 1 array I created in the BIOS, and since that left me with a bunch of unused SATA drives, I thought I would put one of them to good use by installing both OSX and Slackware on it (that Slackware 13.37 sure IS nice!), and use it to boot the whole lot. Sounds easy, but it actually presents a nice set of challenges, some of which include:
  • Adding extra drivers to a vanilla OSX installation media, so that it recognizes our AHCI controller even in RAID mode
  • Allow LILO (or GRUB) to boot OSes other than the main Linux system on a GPT/GUID partitioned disk (with a BIOS that is NOT EFI)
  • Installing OSX Snow Leopard from scratch, including updates and the necessary drivers for your rig, but WITHOUT having to spend one's life endlessly browsing the OSX86 forums. Yes, the effort is much appreciated guys, but I have better things to do than become an expert on kexts, Chameleon, DSDT, iBoot and whatnot.
My specs:
  • Asus P5B Deluxe with 2x 1TB SATA HDDs in RAID mode (ICH8R) and 1x 320GB HDD (non RAID)
  • nVidia GPU (7950GT)
  • PS/2 keyboard, USB mouse + IDE DVD (the DVD is only going to be used for iBoot, so SATA or IDE won't matter one bit)
  • OSX Snow Leopard official installation DVD or .dmg image (build 10A421A)
  • One 8 GB (or bigger) USB key
  • An existing Snow Leopard installation. Yeah, I know it sucks, but hey.
That last item means you either need to create a VMWare image from the DVD media, or borrow a friend's Mac, but since we want to create a USB OSX installation medium which we can customize, there's little way to bypass that requirement. If you really don't have access to a preexisting OSX system, it's probably possible to create a USB key from the OSX installation DVD in Linux, but I haven't tried it so I can't comment on that. Besides, a VMWare image with Snow Leopard shouldn't be that hard to find.

My kingdom for AHCI? Surely you are jesting!

Question: What do all the hackintosh installation guides you find on the internet have in common?
Answer: They all begin with "You MUST change your BIOS settings and set your SATA controller to AHCI before you begin the installation"

Of course, this is utter nonsense. If a guide begins with "You MUST change your BIOS settings to something potentially incompatible with what you want, and risk destroying existing data as a result" (by disabling RAID mode on your ICHR), then it's a lousy guide and you should stop right there. Having been a fakeRAID/motherboard RAID user for some years now, with both Intel and nVidia solutions, I'll tell you frankly: all those who say "Don't use fakeRAID: buy a dedicated RAID controller instead" are idiots. First of all, money to buy a separate controller doesn't grow on trees, and secondly, with all these CPU cores lying around, the idea that one should spend extra when all they want is to add fault tolerance to consumer-grade hardware is ludicrous. We're not building an enterprise-grade server here dammit! Besides, most of these solutions are rather well supported in Linux now, which means that if you screw up your fakeRAID array, you may just be able to do something about it. With a proprietary, non-reverse-engineered hardware RAID solution on the other hand, you might not be so lucky...

So, to hell with changing your BIOS settings to AHCI! If Linux doesn't bat an eyelid about accessing SATA HDDs with an ICH controller in RAID mode, why should OSX be the exception? We'll keep our ICH BIOS settings on RAID, thank you very much, and ignore all this AHCI brainwashing.

Now, since the OSX installation media was not designed to support ICH8 in RAID mode, if you try something like iBoot with an unmodified OSX installation media, then unless your HDD is plugged into a JMicron SATA controller (which is an actual possibility on the P5B Deluxe), the OSX installer will not see it. This basically means we're gonna have to modify the OSX installation system so that it sees our ICH8R controller as an AHCI ICH8, and that's also where doing the install from an 8GB (or more) USB key, which we can modify on the fly, will come in real handy.

But first, we gotta create the base vanilla installer on our key.

Creating a vanilla OSX installation USB key

To do that, you need to be in your existing OSX system (VMWare, soon-to-be-ex-friend's machine, etc.). I'm going to assume that you have the vanilla OSX disk as a .dmg. The steps to create the key are then as follows:
  1. Plug in your USB key. Don't worry if it automounts.
  2. Open the OSX disk utility (Applications -> Utilities -> Disk Utility). You should see something like "8.02 GB Generic STORAGE DEVICE Media" as your USB volume
  3. Click on the "Partition" button on the left, to access the partition tab, then click on the "Current" Volume Scheme button so that you get a dropdown, and select 1 partition
  4. In the options, under the partition diagram, make sure you select "GUID Partition Table". For the Format, select "Mac OS Extended" (though it shouldn't matter)
  5. Click Apply to partition & format the USB key
  6. Once formatting is complete, select the Restore button
  7. Now double click on your OSX .dmg image to mount it (or insert your OSX DVD). A "Mac OS X Install DVD" icon should appear on your desktop
  8. Drag that icon onto the Source field of the Restore parameters in Disk Utility until a green '+' sign appears, and then drop it there
  9. For the Destination field, just drag and drop the "Untitled 1" partition, or whatever you called it, from your newly formatted USB stick
  10. Click the "Restore" button. You will be prompted for the system's administrative password.
For those who want to see it for themselves, the whole process of creating an OSX installation USB key is demonstrated by the first part of the video below (credits go to stellarola - you can ignore the Stella Magic part that comes afterwards):

The restore process will take a few minutes (15-30 mins), but, once completed, will give you the ability to install OSX from USB using iBoot.

Installing a text editor

Because we're going to do it the hard way, a must-have for our USB key is a text editor, so that we can tune the USB key to our needs during the installation process (more about this in the next post). As a UNIX veteran, I swear by vi, but I do understand that not everybody is comfortable with vi sequences, so to ease the editing process, we'll go with the more intuitive GNU nano.

One might wonder why there is no default editor on the vanilla OSX installation media, but the answer is that Apple obviously considered an installer to be a read-only affair, so it made little sense adding vi or nano to the installation utilities. On the other hand, we will very much require the ability to edit files on the USB key, so we need a text editor for OSX. Of course, since GNU nano is GPL, and anybody can therefore legally publish a nano OSX binary to copy onto a USB key, I'm not going to think twice about putting the one I conveniently recompiled from source (using Xcode 3.2) online here.

Once downloaded, just drag and drop the "nano" file onto your USB stick folder (where you'll see the OSX installation paraphernalia). As long as it's on the key, we're good - no need to worry about placing it into bin/ or usr/bin/ just yet. Once it's copied, you can eject the USB, as at this stage, we won't need to access another OSX system: we'll just do everything we need from USB.


Finally, the last piece we need for the installation is the latest iBoot CD (bugmenot). And while on the iBoot download page, you might as well download MultiBeast, as it will come handy to finalize our installation.

The iBoot zip basically contains an ISO which you should burn to a CD. Note that if you can't use a CD, you should be able to use that ISO image to produce an alternate iBoot loader (e.g. using a second USB key or TFTP), but I'm not going to detail that.


Why encrypted firmware is bad

A simple example will suffice.

Let's say I own a digital camera, or a device that contains a digital camera, that uses encrypted firmware.

Now let's say that this firmware has been written in such a way that, at regular intervals, and without any form of notification to the user (like a blinking LED or a tell-tale shutter noise), it takes low-resolution pictures of what it sees, and stores these unrequested pictures in its embedded storage (which too would be encrypted).

Then, when the user takes a normal high res picture using the digital camera, the firmware adds "noise" to it that contains encrypted image(s) from the hidden low-res pictures it has in memory.

With hi-res image files in the MB range, or even better, digital video, it has become exceedingly easy to add "meaningful noise" and play a little steganography with digital photography. There are also ways to ensure that a useful data payload can still be recovered even if the user downsizes the images.

Blissfully unaware of this, the user then uploads some of these high-res pictures to a public website, as one does. Malicious entities/government agencies can then just run a search on the EXIF data (if the user didn't remove it, but even then, with the proper resources, parsing images all day to check for a known steganographic payload is not that big a feat) and spy on you at length, without your knowledge, for as long as you keep sharing pictures with friends...

Of course, this kind of far fetched scenario would never happen... just like printer manufacturers would never add hidden marks to every page printed, that would uniquely identify what printer (and by extension who) printed some data.


Formatting a PSP Memory Stick for use with a Pandora battery in Linux

Always a pain to do, and nobody seems to provide the files I want, so I'll just provide my own files (using ipl_ms.bin => no frills, just normal boot) and a short script.

Once extracted, just run something like:
root@sheeva:~/pandora# ./format_ms.sh /dev/sda
+ dd if=/dev/zero of=/dev/sda bs=512 count=32
32+0 records in
32+0 records out
16384 bytes (16 kB) copied, 0.00699201 s, 2.3 MB/s
+ parted /dev/sda
GNU Parted 1.8.8
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel msdos
(parted) mkpartfs primary fat16 32s -1s
(parted) set 1 boot on
(parted) set 1 lba off
(parted) u s
(parted) p
Model: Generic STORAGE DEVICE (scsi)
Disk /dev/sda: 3995648s
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start  End       Size      Type     File system  Flags
 1      32s    3995647s  3995616s  primary  fat16        boot

(parted) q
Information: You may need to update /etc/fstab.

+ dd if=ipl_ms.bin of=/dev/sda bs=512 seek=16
4+1 records in
4+1 records out
2288 bytes (2.3 kB) copied, 0.00179176 s, 1.3 MB/s
+ mount /dev/sda1 tmp_mnt
+ tar -C tmp_mnt -xvf ms.tar
+ umount tmp_mnt
+ sync
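For reference, here is a sketch of what a format_ms.sh along those lines might contain. This is my reconstruction from the transcript above, not the exact script, so double-check it before pointing it at a real device; the work is wrapped in a function so nothing destructive runs without an explicit argument:

```shell
#!/bin/sh
# Hypothetical reconstruction of format_ms.sh from the transcript above.
# WARNING: this wipes the target device. Triple-check the argument!
format_ms() {
    dev="$1"
    # wipe the first 32 sectors (old partition table and IPL area)
    dd if=/dev/zero of="$dev" bs=512 count=32
    # single bootable FAT16 partition starting at sector 32, LBA flag off
    parted "$dev" <<'EOF'
mklabel msdos
mkpartfs primary fat16 32s -1s
set 1 boot on
set 1 lba off
q
EOF
    # write the IPL at sector 16, in the gap before the partition
    dd if=ipl_ms.bin of="$dev" bs=512 seek=16
    # populate the filesystem from the tarball
    mkdir -p tmp_mnt
    mount "${dev}1" tmp_mnt
    tar -C tmp_mnt -xvf ms.tar
    umount tmp_mnt
    sync
}

if [ -n "$1" ]; then
    format_ms "$1"
else
    echo "usage: format_ms.sh /dev/sdX"
fi
```

Note that mkpartfs is a GNU Parted 1.8.x command (matching the transcript); on recent parted you'd mkpart and mkfs.vfat instead.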


That darn iBFT iSCSI Windows installation error

If you ended up here, it's probably because you too tried to install Windows 7 or Vista on an iSCSI bootable disk using gPXE and, even though Windows setup could see your disk alright, you got one of the following errors:

"Windows cannot be installed to this disk. Setup does not support configuration of or installation to disks connected through a USB or IEEE 1394 port" (Vista)

"Windows cannot be installed to Disk <#> Partition <#>. (Show details)" -> "Windows cannot be installed to this disk. iSCSI deployment is disabled since no NICs referenced in the iBFT can be resolved to actual NT-visible devices. Windows cannot be installed to this disk. This computer's hardware may not support booting to this disk. Even if you're probably smart enough to know what you're doing, and we could definitely let you install to this disk to sort booting later, we're going to be asses about it and prevent you from overriding the idiotic setup decisions we made (by the way, did we mention the Windows recovery partition yet?). Why? Because we're Microsoft and screw you, that's why." (Windows 7)

OK. Let's forget about Microsoft's stupid decisions for a while, and attempt to work within them, to figure out how we can address the issue.

First of all, if you see the messages above, then I can guarantee that, no matter how properly you think you set up your iSCSI PXE boot, you got something wrong and your iSCSI boot sequence is broken.
And yes, getting an iSCSI boot error on a blank disk is to be expected, but NO, not all the errors you see during gPXE iSCSI boot can be safely ignored (if you manage to see them at all, but we'll come to that). There's good error and there's bad error.

The first thing I'll point out, if you're like me and thought you could get your dhcp/tftp server to:
  1. Supply the iSCSI disk boot parameter (that dhcp-option=net:gpxe,17,"iscsi:" line or similar, along with the keep-san option)
  2. Attempt to boot from it and
  3. If that fails, fall back to executing pxelinux to launch a WinPE installation image
is that such a scheme just won't work. Unless you're fiddling with the gPXE scripting options (and even then), you can only have either
  • Boot from iSCSI, then fail and hand things back over to the BIOS, or,
  • If a boot image is specified, ignore the iSCSI options provided by the dhcp server altogether and just boot from that image.
You can't just have the dhcp/tftp server alone tell gPXE: "try to boot to iSCSI and if that fails, boot from something else while keeping the iSCSI boot options", to boot WinPE for installation onto a blank iSCSI disk for instance. No siree. If you tried that, well, that was your first mistake. Not to say that this can't be achieved at all (we'll see how to do just that from the commandline below, and, in a next post, I'll try to show you how to do it automatically as well), but that can only be achieved outside of the dhcp/tftp options.

In short, if you're using dnsmasq with something like:
dhcp-match=gpxe,175   # tags the request with net:gpxe if gPXE was supplied
dhcp-option=175,8:1:1 # turn on the keep-san option (allows installation)
dhcp-boot=net:#gpxe,pxelinux.0 # if NOT (#) gPXE, use pxelinux.0
dhcp-boot=net:gpxe,Boot/startrom.n12 # if gPXE, use WinPE
Then, when WinPE boots, it will not have any of the options that you think gPXE should have fed it with regards to the iSCSI boot disk. In particular, the "dhcp-option=net:gpxe,17," option will be completely ignored. Yeah, that makes as much sense to me as to anybody else, but that's how gPXE works for now.

And that's also the reason why, in most of the guides you see, they'll tell you to first try to boot from an unbootable iSCSI disk with gPXE, let it fail and then use BIOS fallback to boot from an installation CD or DVD. Again, simply chaining WinPE in there from PXE does not work without additional effort that none of these guides provide.

Also (and this is the most important part if you want the Windows installer to accept your iSCSI disk as bootable), as long as you do not see the following lines during boot:
Booting from root path "<your iSCSI path>"

Registered as BIOS drive 0x80
Booting from BIOS drive 0x80
Boot failed
Preserving connection to SAN disk
Then it's game over, plain and simple.

Granted, those lines may be hard to spot during boot, when gPXE hands things back over to the BIOS on failure (which it should do, if you followed what I said above), as those darn BIOS makers forgot that the Pause key on our keyboards could be put to some good use. But if you try a few times and you don't see any mention of a BIOS drive 0x80, Windows will simply not see your iSCSI drive as bootable, simple as that.

For your reference, here's a screenshot from a VMWare diskless machine that illustrates what you should see when gPXE executes:

As long as you see the lines I highlighted above after the iSCSI boot attempt, whatever error is thrown out comes from the iSCSI disk itself rather than from your boot process, so you can ignore it. If you don't see the "Registered as BIOS drive" line from gPXE, however, you should pay very close attention to the iSCSI error you get.

So, of course, now your question is: "I'm not seeing these lines (or they're too fast for me to see). How then can I validate that my iSCSI target is good, and that it can be used for installation with gPXE?"

Well, duh, through the gPXE commandline of course, which you can enter with Ctrl-B at boot time. Gotta wish proprietary PXE was as easy to troubleshoot for power users as gPXE is. But you're in a hurry and don't want to learn about the whole gPXE/DHCP/TFTP internals, so I'll cut to the chase. The sequence of commands you are after:
dhcp net0
set keep-san 1
sanboot iscsi:<iscsi server ip>::<iscsi port>:<iscsi lun>:<iscsi target id>
# and if the above line works and you want to boot to WinPE for instance, you could
chain tftp://<server ip>/Boot/startrom.n12
  • dhcp net0 initializes DHCP and allows you to communicate with the server (for tftp, etc)
  • then the keep-san option is to ensure that Windows can see the iSCSI disk as bootable, which of course is the feature you're after
  • finally the sanboot line is the one that will tell you if something is wrong with your iSCSI access.
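To make the field ordering of that sanboot line explicit, here is a purely illustrative helper (the server address and target name below are made up). The gPXE iSCSI root path is iscsi:&lt;server&gt;:&lt;protocol&gt;:&lt;port&gt;:&lt;LUN&gt;:&lt;target&gt;, where any empty field falls back to a default (TCP, port 3260 and, crucially, LUN 0):

```python
# Illustrative helper for building a gPXE iSCSI root path.
# Field order: iscsi:<server>:<protocol>:<port>:<LUN>:<target name>
# Empty fields mean "use the default" (TCP, port 3260, LUN 0).
def iscsi_root_path(server, target, protocol="", port="", lun=""):
    return "iscsi:{}:{}:{}:{}:{}".format(server, protocol, port, lun, target)

# Hypothetical server/target, explicit port 3260 and LUN 1 (remember:
# with tgtd, LUN 0 is the controller, so your first disk is LUN 1)
print(iscsi_root_path("10.0.0.1", "iqn.2010-04.local.nas:disks", port="3260", lun="1"))
# Leaving everything to defaults produces the infamous '::::' form:
print(iscsi_root_path("10.0.0.1", "iqn.2010-04.local.nas:disks"))
```

The second form is exactly the one that bites tgtd users, as we'll see below.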
But first, let's see an example of what happens when everything works as expected (for an uninitialized disk):

Here, we have the Registered as BIOS drive and the Preserving connection lines, so we're good. You might also want to note that I'm explicitly specifying port 3260 (the default for iSCSI) and that my device is on LUN 2 (very much not the default).

Now, let's see some common errors:

0x2c0d603b is usually an indication that your iSCSI path is wrong. In the case above, I used the non-existing disk0 instead of disk1 for the target part.

Ah, 0x1d704039 (and now, aren't you glad you found this page)...
Yes, this is an error you should not get, even with a non-bootable iSCSI disk. And yes, I agree that an I/O error is precisely what you'd expect from a non-bootable disk, but that I/O error is actually unrelated to whether the disk is bootable. It has very much to do with trying to use an iSCSI device that cannot be used as a disk, which, if you are using Linux tgtd/tgtadm, is exactly what you get if you leave the sequence of four colons as is (::::), because that means LUN 0 will be used, and LUN 0 is reserved by tgtd for the virtual controller.
In short, if you're using tgtd, your actual device LUNs start at 1, and if you keep the options part as '::::', the default of LUN 0 will be used, so you're not actually accessing your disk!
This is why I'm explicitly specifying a LUN of 2 in the line that works: I'm trying to access the second disk I created on that specific target. Even if it was the very first disk I created with tgtadm, I would still have to use 1 for the LUN in the line above, because the default of 0 is not a disk.

Then, other errors you might get are 0x0b8080a0 (Operation cancelled) or 0x2e852001 (Exec format error), but these should occur after you get the "Registered as BIOS drive" line, so you should be able to safely ignore them. For other errors, Google is your friend.

So, to summarize, if Windows doesn't like your iSCSI boot device, it's probably because, despite what you think, gPXE didn't find anything it could use as a bootable disk, and to find out why, you should try to boot from it using the gPXE low level commands.

In the next installment, we'll see how we can create a nice iSCSI-aware WinPE image that we can launch from PXE for all of our installation needs, how we can solve the problem of automating WinPE fallback from a non-bootable iSCSI disk, and how we can use pxelinux to boot from multiple iSCSI disks.


chrooted ssh & sftp on Slackware

Since the SheevaPlug makes a nice server for sharing files online securely, while letting you see exactly who is accessing them, today's exercise is to set up sshd so that selected people can SFTP into it, while only seeing what you want them to see.

Now, you'll find plenty of articles on how to do that, some of them very well made, but what they won't tell you is how to sort things out when stuff's not exactly working as it should.

First of all, this is what you want to have in your sshd_config:
# Use a non obvious port for outside connections.
# You could also do port translation on your gateway or something
# but, just so you see how it's done:
Port 22
Port 1234
Default options should be super limiting (no X11/TCP forwarding, no password auth, no root logon, etc.) and then you use the very convenient Match tag to set up the allowed remote users, as well as less restrictive options for your own network. Thus:
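As a sketch, those restrictive global defaults might look like the following (the Subsystem path is an example, adjust it to whatever your distribution actually ships):

```
# Global defaults: lock everything down, then relax per Match block below
PermitRootLogin no
PasswordAuthentication no
AllowTcpForwarding no
X11Forwarding no
Subsystem sftp /usr/libexec/sftp-server
```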
# only "remote_user" can logon from the outside network, in a chrooted env
Match User remote_user
PasswordAuthentication yes
ChrootDirectory /home/chroot
# connections from inside are OK
Match Address <your_lan_subnet>
PasswordAuthentication yes
PermitRootLogin yes
AllowTCPForwarding yes
X11Forwarding yes
Then I'll assume that you pretty much follow the link I gave above to set up your /home/chroot directory and create the "remote_user" guy.

First bad surprise:
"cannot run command `/bin/bash': No such file or directory"
WTF? But I did copy bash, and the script took care of my libraries.
Unfortunately, nope: the script did not take care of all the libs, and instead of reporting "library not found", the error message is anything but helpful.

Now, if you go:
# ldd /bin/bash
libtermcap.so.2 => /lib/libtermcap.so.2 (0x4004b000)
libdl.so.2 => /lib/libdl.so.2 (0x40056000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x40061000)
libc.so.6 => /lib/libc.so.6 (0x40075000)
/lib/ld-linux.so.3 (0x40000000)
and check your libraries again, you'll probably find that ld-linux.so.3 (the ELF interpreter) was not copied. Copy it over, and you should find that you can log on to the chrooted environment at last. Yay!

But then you want to add sftp. To do that, you *must* have the sftp-server and the lib dependencies in your chrooted environment as well.

For Slackware, that means you need to copy sftp-server to /home/chroot/usr/libexec/sftp-server (if you don't know which sftp-server to pick, check the "Subsystem sftp" line in your sshd_config) and, once again, copy all the library files (or use the script).
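If you'd rather not hunt the libraries down by hand, here's a hedged sketch of the kind of helper script in question; it simply parses ldd output, which conveniently also lists the ELF interpreter that tripped us up above (the chroot location and sftp-server path are examples):

```shell
#!/bin/sh
# Copy a binary plus every shared library ldd reports (including the
# ELF interpreter, e.g. /lib/ld-linux.so.3) into the chroot tree.
CHROOT="${CHROOT:-/home/chroot}"
copy_with_libs() {
    bin="$1"
    mkdir -p "$CHROOT$(dirname "$bin")"
    cp "$bin" "$CHROOT$bin"
    # ldd lines look like "libfoo.so => /lib/libfoo.so (0x...)" or just
    # "/lib/ld-linux.so.3 (0x...)"; grab every absolute path
    for lib in $(ldd "$bin" | grep -o '/[^ ]*'); do
        mkdir -p "$CHROOT$(dirname "$lib")"
        cp "$lib" "$CHROOT$lib"
    done
}

copy_with_libs /bin/bash
# and likewise for the sftp server, e.g.:
# copy_with_libs /usr/libexec/sftp-server
```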

But then, disaster strikes: whenever you try to logon with sftp, you get your connection closed. If you look at the sftp debug log, you'll see something like:
debug1: Sending subsystem: sftp
debug2: channel 0: request subsystem confirm 1
debug2: callback done
debug2: channel 0: open confirm rwindow 0 rmax 32768
debug2: channel 0: rcvd adjust 131072
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
debug2: channel 0: rcvd eof
debug2: channel 0: output open -> drain
debug2: channel 0: obuf empty
debug2: channel 0: close_write
debug2: channel 0: output drain -> closed
debug2: channel 0: rcvd close
debug2: channel 0: close_read
debug2: channel 0: input open -> closed
debug3: channel 0: will not send data after close
debug2: channel 0: almost dead
debug2: channel 0: gc: notify user
debug2: channel 0: gc: user detached
debug2: channel 0: send close
debug2: channel 0: is dead
debug2: channel 0: garbage collecting
debug1: channel 0: free: client-session, nchannels 1
debug3: channel 0: status: The following connections are open:
#0 client-session (t4 r0 i3/0 o3/0 fd -1/-1 cfd -1)

debug3: channel 0: close_fds r -1 w -1 e 6 c -1
debug1: fd 0 clearing O_NONBLOCK
debug3: fd 1 is not O_NONBLOCK
debug1: Transferred: stdin 0, stdout 0, stderr 0 bytes in 0.3 seconds
debug1: Bytes per second: stdin 0.0, stdout 0.0, stderr 0.0
debug1: Exit status 1
Connection closed
Well, to cut a long story short, if you're seeing this, the problem is likely that you don't have rw permission on /dev/null (and possibly /dev/zero). Just make sure you issue a chmod 777 /home/chroot/dev/null and that should be the end of it.


Resync'ing a RAID 1 array

Can come in handy when attempting to correct unreadable sectors on a RAID disk. E.g., if your RAID 1 array is built from /dev/sda3 and /dev/sdb3, and SMART reported unreadable sectors on /dev/sda (Current_Pending_Sector) but the extended SMART self-test returned no errors, you might want to rebuild the /dev/sda partitions from /dev/sdb as follows:

mdadm /dev/md2 --fail /dev/sda3
mdadm /dev/md2 --remove /dev/sda3
mdadm /dev/md2 --add /dev/sda3


Changing the NTP update interval in Windows

Default is 604800 seconds (7 days), which is way way too long. On my machine, I'm getting clock skews of close to a minute as a result, and that's not acceptable when you're swapping files around for compilation for instance.

To set this value to a more reasonable interval, you need to update the following key:
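That would be the SpecialPollInterval value (a REG_DWORD, in seconds) of the standard NtpClient time provider:

```
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient\SpecialPollInterval
```

Set it to something like 3600 (1 hour) or 86400 (1 day), then restart the Windows Time service (net stop w32time && net start w32time) for the change to take effect.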


Windows 7, I hardly knew ye...

Blah, blah, blah, Windows 7 is great, blah, blah...

Oh, you were actually expecting something in that range? Not on this blog I'm afraid.
When Windows 7 consistently freezes within 4 hours of a fresh install (it happened on Vista too, so I don't think it's a Win 7-only issue; past that initial freeze, the system is stable enough), needs a full reinstall less than one month after that, and putting the computer to sleep or rebooting is akin to Russian roulette (will it boot again, or will it just remain frozen forever and require not only a hard reset but a complete PSU unplug? Heck, I have a hackintosh running OS X better than that, on hardware that was never meant to be supported by Apple. Go figure!), there isn't much left to praise.

And now, once again, it's time for a clean Windows 7 reinstall, which means reinstalling all the apps, and having to do that every few months to keep the hardware in running order (because I actually DO something with my machine you know, like installing drivers by the truckload for development purposes - it's not just for internet and multimedia) is getting a bit old.

Today's tip, then, is how to avoid reinstalling the WinDDK when you still have the files (in E:\WinDDK\7600.16385.0\ for instance) and all you're interested in are the build environment shortcuts.
Windows 7 x64 free build environment shortcut:
C:\Windows\System32\cmd.exe /k E:\WinDDK\7600.16385.0\bin\setenv.bat E:\WinDDK\7600.16385.0\ fre x64 WIN7 no_oacr
Same thing for x86:
C:\Windows\System32\cmd.exe /k E:\WinDDK\7600.16385.0\bin\setenv.bat E:\WinDDK\7600.16385.0\ fre x86 WIN7 no_oacr
And for reference, setenv usage:
Usage: "setenv <directory> [fre|chk] [64|x64] [WIN7|WLH|WXP|WNET] [bscmake] [no_oacr] [separate_object_root]"



HDMI-CEC: future proof it ain't

Well, like many, I was convinced that HDMI-CEC was the wave of the future with regards to AV automation, until I actually read the specs of the thing (in "Supplement 1 Consumer Electronics Control (CEC)" of the HDMI Specification, version 1.3a).
The interesting bits are to be found in "CEC 5 Signaling and Bit Timings" and "CEC 6 Frame Description" where, if you do the maths, you find out that the shortest time it takes to transfer a single byte of payload (start bit + 10-bit header block + 10-bit data block) is 52.5 ms. And if you were to transfer 32 bits of data, you get close to 125 ms or, in other words, no more than eight 32-bit data frames per second.
Heck, I can mash a remote button faster than that!
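If you want to check the arithmetic for yourself, here is the back-of-the-envelope version, using the nominal timings from the spec (4.5 ms start bit, 2.4 ms per data bit, 10-bit blocks of 8 data bits plus EOM and ACK):

```python
# Back-of-the-envelope HDMI-CEC frame timing, nominal values from the spec:
# 4.5 ms start bit, 2.4 ms per data bit, 10-bit blocks (8 data + EOM + ACK)
START_MS = 4.5
BIT_MS = 2.4
BLOCK_BITS = 10

def frame_ms(payload_bytes):
    """Time to send one header block plus payload_bytes data blocks."""
    blocks = 1 + payload_bytes  # header block + data blocks
    return START_MS + blocks * BLOCK_BITS * BIT_MS

print(round(frame_ms(1), 1))   # one data byte: 52.5 ms
print(round(frame_ms(4), 1))   # 32 bits of payload: 124.5 ms, ~8 frames/s
# best case, amortized: one 10-bit block (1 byte) every 24 ms
print(round(1000 / (BLOCK_BITS * BIT_MS), 1))  # ~41.7 bytes/s
```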

What this really means is that, no matter how cool HDMI-CEC might look, with a max data rate of only 41 bytes per second, future proof it ain't.
Thinking of transferring a GUI menu content from one device to another using CEC? Won't happen.
Thinking of transferring a reasonable amount of text in a short time? Nuh-huh.

For a format designed in 2002 to be more than 20 times slower than what has been the de facto lowest speed for any form of serial data communication (9600 baud) for more than 30 years, you have really got to be kidding us.
And it's not like either the cable or devices connected can't support high transmission speed (it's HDMI - both the cable, and the devices it connects were designed for high speed!).

So there you have it. On one hand, a handful of transmission lines that can transfer at least half a gigabyte of data per second, and, in the same cable, a puny line that will barely transfer 1/1,000,000th of that, because of our regular corporate overlords' lack of vision (guys: if you're going to borrow what SCART has been doing for more than 2 decades, at least bring it up to speed).

Oh the irony!


Creating a Visual Studio 2008 application that uses Cairo

One day or another, you'll want your code to produce quick 2D output for data visualization, and after looking through hordes of libraries that do not satisfy your needs at all (listen, it's not that hard: I don't want graphing, I don't want 3D, and I certainly don't want proprietary, closed-source, paying software! I just want something that gives me a simple canvas to draw elementary stuff like text fields or simple graphics using 2D coordinates), you'll settle on Cairo as the best trade-off for quick and easy generic 2D output. If it's used by Firefox for SVG rendering, it should certainly satisfy our needs.

Unfortunately, or should I say, as usual, whenever you want a nice step-by-step tutorial on getting started with Cairo on Windows, you'll find nothing but a handful of incomplete snippets here and there. This short step-by-step tutorial attempts to remedy that.

For this exercise, we'll just create a C console application that outputs "Hello, World" to a PNG file using Cairo and Visual Studio 2008.

  1. Create a new Win32 Console Application in Visual Studio. Let's call it cairo_test. And since we don't wanna get bogged down by Microsoft's crap on a simple hello world app, in Application Settings -> Additional Options, make sure you check "Empty Project".

  2. Create a new source file - let's call it main.c - and fill it with the following content (which I picked up from here):

    #include <cairo/cairo.h>

    int main(int argc, char** argv)
    {
        cairo_surface_t *surface;
        cairo_t *cr;

        surface = cairo_image_surface_create (CAIRO_FORMAT_ARGB32, 240, 80);
        cr = cairo_create (surface);

        cairo_select_font_face (cr, "serif", CAIRO_FONT_SLANT_NORMAL, CAIRO_FONT_WEIGHT_BOLD);
        cairo_set_font_size (cr, 32.0);
        cairo_set_source_rgb (cr, 0.0, 0.0, 1.0);
        cairo_move_to (cr, 10.0, 50.0);
        cairo_show_text (cr, "Hello, World");
        cairo_destroy (cr);
        cairo_surface_write_to_png (surface, "hello.png");
        cairo_surface_destroy (surface);

        return 0;
    }
    Don't worry about the "#include <cairo/cairo.h>" path for now, we'll sort it out in a second

  3. Download the latest Cairo Dev package files for Windows from http://www.gtk.org/download.html by picking up either Windows 32 or 64. At the time of this article, the latest Dev is cairo 1.8.8.
    Extract the package directories "lib" and "include" at the root of your Visual Studio project. You can safely ignore the other directories from the archive.

  4. Change the Active configuration if needed and right click on your project in the Solution Explorer panel to access the properties page.
    • In Configuration Properties -> C/C++ -> Additional Include Directories, create a new entry and point it to "<your project root>\include". Be mindful that there is a cairo subdirectory there, which is why we used cairo/cairo.h in our source. Just make sure the source and your include paths match.
    • In Configuration Properties -> Linker -> Input -> Additional Dependencies, type "cairo.lib"
    • In Configuration Properties -> Linker -> General -> Additional Library Directories, create a new entry and point it to the "<your project root>\lib" directory you extracted above. (Oh, and why oh why are Additional Dependencies and Additional Library Directories on 2 different pages, Microsoft?!? Where's the twisted UI logic behind that?)

  5. Try to compile your project. It should complete without errors. Note that if you picked up the Windows 64 libraries, you MUST create a new x64 configuration, or the process will fail with "fatal error LNK1112: module machine type 'x64' conflicts with target machine type 'X86'". Just follow these guidelines if you don't know how to do that.

  6. Bet you didn't wait and already tried to run your executable. And of course, you got "DLL not found" errors. Why of course: now you need to install the bunch of DLLs Cairo needs to be happy. Basically, what you should do is pick up the DLLs from ALL the binary packages below (still provided by the GTK+ Windows 32 or Windows 64 Project binaries) and extract them into the Release or Debug directory that contains your executable. For all of the archives, you just need to extract the DLL - all the other files are irrelevant:
    • cairo Binaries (yes you need the binaries package too, as the Dev one doesn't contain the DLL) -> libcairo-2.dll
    • zlib Binaries -> zlib1.dll
    • libpng Binaries -> libpng12-0.dll
    • Freetype Binaries -> freetype6.dll
    • FontConfig Binaries -> libfontconfig-1.dll
    • expat Binaries -> libexpat-1.dll
    If you're gonna produce JPEG or TIFF images with your application, you probably want to install those DLLs too, but you already guessed that.

  7. Now you can actually run your test program. It should produce a "hello.png" file that looks like the one below:

Alrighty then. Now you can get going and visualize the hell out of whatever ground-breaking application you've been thinking of!