Edit: this post and my questions within were poorly formulated, mostly because I assumed there was a correlation between common word sizes in CPU architectures and the fact that I couldn't find any decimal-to-signed-binary converters online that let me set the "word size"/number of bits I want to work with.
I am a complete beginner in the field of computers.
I am reading Code: The Hidden Language of Computer Hardware and Software by Charles Petzold (2009), and I just learned how we electronically express the logic of subtraction without using a minus sign or an extra bit to indicate positive/negative: we use two's complement (yes, I realize that the most significant bit incidentally acts as the sign bit, but we don't need an extra bit). Anyway, I experimented with converting both decimal and binary values into their signed counterparts, just as an exercise. To be sure that I wasn't doing anything wrong, I wanted to double-check my calculations with some "decimal to signed binary calculators" on the Internet.
I was trying to express -255 in signed binary using 10 bits. I wanted to use only 10 bits to save on resources: to express the 1,000 possible values between -500 and 499, I only need 10 bits, which cover 0 to 1023 unsigned (or -512 to 511 signed). I calculated -255 to be 1100000001 in 10-bit signed binary (because 255 is 0011111111, which you invert to get the one's complement, and then you add 1).
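Here is a small Python sketch (my own, not from the book) that reproduces this invert-and-add-one calculation for any bit width I choose:

```python
def to_twos_complement(value, bits):
    """Return the two's-complement bit string of value using the given bit width."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1  # representable signed range
    if not lo <= value <= hi:
        raise ValueError(f"{value} does not fit in {bits} signed bits")
    # Masking with 2**bits - 1 wraps negative numbers around, which gives the
    # same result as inverting the bits of |value| and adding 1.
    return format(value & ((1 << bits) - 1), f"0{bits}b")

print(to_twos_complement(-255, 10))  # 1100000001
print(to_twos_complement(255, 10))   # 0011111111
```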
I couldn't find any converters on the Internet that allow me to set the maximum value/length, in this case 10 bits. I found a few that are 8-bit and a few that are 16-bit, which made me think of our gaming consoles, which to my knowledge evolved through 8, 16, 32, and 64 bits.
I understand that we use binary to express Boolean logic and arithmetic in electronics because regulating voltage so that transistors sit in one of two states matches the true/false values of Boolean logic, and because of the technical difficulty of maintaining stable voltages for ternary and higher bases.
But why didn't I find any converters online that allow me to set the bit length? Why did gaming consoles' maximum bit length evolve in those specific increments? Are there no processor architectures with word sizes other than these?


Thanks! I have no idea what endianness is, except for hearing "big endian" in some CS-related presentation a while back… I'll read up on it!
As for my questions and your answer, would it then be correct to say that it's about scalability? That one byte being eight bits scales efficiently in binary?
Kind of, but in this case it's all about human scalability. 8 bits turns out to be a convenient chunk to encode characters in. ASCII is 7 bits, but it is really only useful for the Latin alphabet. System designers decided that it was worth retaining the 8th bit (even if it was unused in flat text files). There is an "extended" 8-bit ASCII standard, but the 7-bit standard was always more widespread. Why aren't all of our bytes 7 bits, then? I stand by my personal theory that it is because it is very easy to represent the full range of 8 bits in hex.
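To make the hex point concrete, here's a tiny Python sketch (my own illustration): one byte splits exactly into two 4-bit hex digits, whereas 7 bits wouldn't line up with a whole number of hex digits.

```python
# Every 8-bit value maps to exactly two hex digits (4 bits per digit).
for value in (0, 65, 127, 255):
    print(f"{value:3d} -> {value:08b} -> 0x{value:02X}")
# e.g. 255 -> 11111111 -> 0xFF, while 65 -> 01000001 -> 0x41
```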
Later on, the Unicode folks brought some utility to that 8th bit. UTF-8 is an encoding that mirrors ASCII in the lower 7 bits, but can be extended into multi-byte characters to represent other scripts too. An overwhelming amount of Internet content is actually encoded in UTF-8. These files will render correctly in an editor that only understands 7-bit ASCII, except for things like the Euro symbol, which are multi-byte constructs that require that 8th bit in order to be recognized.
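A quick Python sketch of that behaviour (my own example, not from any spec): plain ASCII characters encode to single bytes with the high bit clear, while the Euro sign becomes a multi-byte sequence in which every byte has the high bit set.

```python
# Show how UTF-8 uses the 8th bit: ASCII stays single-byte, other
# characters become multi-byte sequences with the high bit set.
for ch in ("A", "~", "€"):
    encoded = ch.encode("utf-8")
    bits = " ".join(f"{byte:08b}" for byte in encoded)
    print(f"{ch!r}: {len(encoded)} byte(s) -> {bits}")
# 'A': 1 byte(s) -> 01000001
# '~': 1 byte(s) -> 01111110
# '€': 3 byte(s) -> 11100010 10000010 10101100
```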
So maybe in addition to looking into endianness, you should spend some time reading up on Unicode and its history to get to the answer you are looking for.
Amazing! Seems I posted in the right “sub”. I’ll check out Unicode tonight, perhaps as a “prelude” to endianness. :)