One of our customers using Ruby on Rails with the PostgreSQL database backend notified us of a long-standing issue with the database driver: Lighthouse ticket #3721
The driver does something that at first glance looks smart, but apparently it's not that smart: it tries to detect the bit-string notation (binary vs. hexadecimal) based on the content of the string passed to it. Let's take a look at the code:
elsif value.kind_of?(String) && column && column.sql_type =~ /^bit/
  case value
  when /^[01]*$/
    "B'#{value}'" # Bit-string notation
  when /^[0-9A-F]*$/i
    "X'#{value}'" # Hexadecimal notation
  end
So what's so bad about it? The problem is that with this approach you just cannot reliably use hexadecimal notation to store arbitrary bit-strings: if the data you are storing happens to contain only zeros and ones in its hexadecimal form, the not-so-smart driver will take it for binary notation and choose the wrong path. This is especially troublesome with varying bit-strings, because instead of a database error (which a fixed-width column would at least raise on the length mismatch) you'll get your data silently corrupted.
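To make the failure concrete, here is a minimal sketch of the detection above (the regexes are the driver's; the surrounding code is illustrative):

# The hex string "10" (one byte, bits 00010000) also consists of
# only zeros and ones, so it matches the binary regex first and is
# quoted as B'10', a two-bit value, instead of the intended X'10'.
value = "10"
quoted =
  case value
  when /^[01]*$/      then "B'#{value}'" # taken for binary notation
  when /^[0-9A-F]*$/i then "X'#{value}'" # never reached for "10"
  end
puts quoted # => B'10'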
So how do we fix that?
The issue reporter already suggested a few options: drop bit-string support altogether, support only one notation, or accept only strings tagged with a base prefix. The first one doesn't fix the problem so much as remove the feature, so let's have a closer look at the other two.
In Ruby there is a literal notation for both binary and hexadecimal numbers: 0b[01]+ for binary, and the usual 0x[0-9A-F]+ for hex (with the letters in either case). So my thought was that it would be really nice to support the same prefix tags to determine the bit-string base. Unfortunately, this is not going to work because of that 0b prefix: since b is itself a valid hexadecimal digit, a prefixed binary string has a huge potential to be mistaken for a hexadecimal number by either older driver code or a human being.
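A quick illustration of the ambiguity (the patterns here are hypothetical prefix-tag regexes, not driver code):

# "0b10" satisfies both a prefixed-binary pattern and a plain-hex
# pattern, so prefix tagging cannot tell the two apart.
value = "0b10"
puts "prefixed binary (bits 10)" if value =~ /\A0b[01]+\z/i
puts "plain hexadecimal (bits 0000101100010000)" if value =~ /\A[0-9A-F]+\z/i
# Both lines print: the same string parses both ways.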
Now to supporting only one notation: which one do we keep? Well, since the bug went unfixed for a long time, most people must be unaware of it and unaffected. This means most people must be using binary notation, which doesn't have the potential for data corruption. Another point is that even though you can store a bit-string in either notation, the fetched value is always a binary bit-string.
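That last point is easy to verify (this sketch assumes the pg gem and a reachable PostgreSQL server; the connection parameters are placeholders):

require "pg"

# Whichever notation goes in, PostgreSQL hands the value back
# in binary form.
conn = PG.connect(dbname: "test")
row  = conn.exec("SELECT B'1010' AS b, X'A' AS x").first
puts row["b"] # => "1010"
puts row["x"] # => "1010" (hex in, binary out)
conn.close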
Unfortunately, any approach to fixing this problem potentially breaks some code, so I had to choose the one that does it in the least inappropriate way. To my mind, keeping only binary notation is clearly the winner here: it doesn't change the behavior of unaffected code in any way, yet any affected code will fail prominently, signalling a problem.
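For the curious, here is a minimal sketch of what binary-only quoting could look like (an assumed shape for illustration only; the actual patch is attached to the ticket):

# Only binary notation is recognized. Hex strings no longer match,
# fall through to plain quoting, and make PostgreSQL reject the
# value loudly instead of silently misinterpreting it.
def quote_bit_string(value)
  case value
  when /\A[01]*\z/ then "B'#{value}'" # bit-string notation only
  else "'#{value}'"                   # let the database raise the error
  end
end

puts quote_bit_string("0110") # => B'0110'
puts quote_bit_string("CAFE") # => 'CAFE' (invalid for a bit column)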
After that, the patch was fairly easy to create (I'm not posting it here: it's attached to the ticket, so anyone interested may have a look). That's it, see you next time. :)