
Color encoding query

Posted: Tue May 15, 2012 1:17 pm
by Simon Carter
Hi,

I'm implementing some code to write OME metadata to tiff files and have a question about colour encoding.
http://git.openmicroscopy.org/src/devel ... nnel_Color says that 'The default value "-2147483648" is #FFFFFFFF so solid white (it is a signed 32 bit value)'

On Intel platforms (int32)-2147483648 is stored as 0x80000000, not 0xFFFFFFFF.

What encoding should I be using? I note that previous versions of this schema had an unsigned default value.

Re: Color encoding query

Posted: Tue May 15, 2012 1:27 pm
by rleigh
Hi Simon,

You most likely want to be using a big-endian int32 here, because the native Java int type used for writing the spec is big-endian. You can convert these yourself using simple bit shifting and masking, or use the standard library functions for this. If you're using Linux, have a look at the endian(3) functions or, more portably (though less comprehensive), the byteorder(3) functions.

It's anticipated that we will be moving to a string representation in the future, which will avoid this type of platform dependency and portability issue.


Regards,
Roger

Re: Color encoding query

Posted: Tue May 15, 2012 1:43 pm
by Simon Carter
Thanks - though I think I must be being dense.

If Intel stores -2147483648 in memory as 00 00 00 80 - interpreted as 0x80000000 - then this still bears no relation to 0xFFFFFFFF, regardless of endianness.
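
A quick check along these lines (C# on a little-endian machine; just an illustrative snippet) prints 00-00-00-80:

Code: Select all
using System;

class Check
{
    static void Main()
    {
        // int.MinValue == -2147483648, i.e. the bit pattern 0x80000000.
        byte[] bytes = BitConverter.GetBytes(int.MinValue);

        // On little-endian Intel hardware this prints 00-00-00-80,
        // which is 0x80000000, not 0xFFFFFFFF.
        Console.WriteLine(BitConverter.ToString(bytes));
    }
}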

What have I misunderstood?

(I'm using C# on 64-bit Windows, btw).

Re: Color encoding query

Posted: Tue May 15, 2012 11:35 pm
by rleigh
Hmm, I've tried to get this to work as documented, but I'm also struggling. I've tried both Java and C++ on big-endian and little-endian architectures, and have not been able to get the documented values unless some additional transformation is applied:

Java:

class test {
    public static void main(String[] args) {
        // 0xFFFFFFFF as a signed Java int.
        int n = 0xFFFFFFFF & 0xFFFFFFFF;
        System.out.println(n);
    }
}

==> -1

class test {
    public static void main(String[] args) {
        // 0x80000000 as a signed Java int.
        int n = 0x80000000 & 0xFFFFFFFF;
        System.out.println(n);
    }
}

==> -2147483648

I get exactly the same results with C++. I'll have to double-check the spec and the colour handling and get back with more information tomorrow.

Re: Color encoding query

Posted: Wed May 16, 2012 10:49 am
by rleigh
I have opened a bug about this issue. See http://trac.openmicroscopy.org.uk/ome/ticket/8807

To follow up from yesterday:

The comment and default in the XML schema, which state that 'The default value "-2147483648" is #FFFFFFFF so solid white (it is a signed 32 bit value)', are incorrect. This is due to a historical bug in the schema, which is explained further in the bug report above.

The correct signed value for 0xFFFFFFFF is -1. The schema documentation should hopefully be corrected soon.

So, to compose a correct colour value from the r, g, b and a channel values:

Code: Select all
/* r, g, b, a are 8-bit channel values (0-255); requires <stdint.h> */
uint32_t composed = ((uint32_t) r << 24) | ((uint32_t) g << 16) | ((uint32_t) b << 8) | ((uint32_t) a << 0);
int32_t *value = (int32_t *) &composed;  /* same bit pattern, viewed as a signed 32-bit int */


I'm not sure how you do that in C#, I'm afraid; the bottom line is just reinterpreting the uint32 bit pattern as an int32, with no change to the bits. It looks like you were already doing this correctly, so it's just a matter of shifting and ORing the values together.
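
Presumably something like the following would do it (an untested sketch; ComposeColor is just an illustrative name, and r, g, b, a are assumed to be byte channel values, 0-255):

Code: Select all
using System;

class ColorTest
{
    // Pack the RGBA channels into an unsigned 32-bit value, then
    // reinterpret the same bit pattern as the signed int the schema stores.
    static int ComposeColor(byte r, byte g, byte b, byte a)
    {
        uint composed = ((uint)r << 24) | ((uint)g << 16) | ((uint)b << 8) | a;
        return unchecked((int)composed);
    }

    static void Main()
    {
        // Solid white, fully opaque (255, 255, 255, 255): prints -1 (0xFFFFFFFF).
        Console.WriteLine(ComposeColor(255, 255, 255, 255));
    }
}

For solid white this gives -1, which matches the corrected reading of the documentation above.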

Just to clarify my earlier comment: the use of a signed integer is a known deficiency in the model, and it should be corrected in a future version of the schema. While I mentioned that this might become a string representation, the actual form of the replacement for the signed integer format has yet to be decided; it might, for example, be separate numbers for the RGBA values.


Regards,
Roger

Re: Color encoding query

Posted: Wed May 16, 2012 11:03 am
by Simon Carter
Thanks for the clarification. There are countless ways of doing the bit-twiddling and casting in C#.

Simon