Edit: this post and my questions within were poorly formulated, mostly because I assumed there is a correlation between common word sizes in CPU architectures and the reason I couldn’t find decimal-to-signed-binary converters online that let me set the “word size”/number of bits I want to work with.

I am a complete beginner in the field of computers.

I am reading Code: The Hidden Language of Computer Hardware and Software by Charles Petzold (2009) and I just learned how we electronically express the logic of subtraction without a minus sign or an extra bit to indicate positive/negative: we use two’s complement (yes, I realize the most significant bit incidentally acts as the sign bit, but we don’t need an extra bit). Anyway, I experimented with converting both decimal and binary values into their signed counterparts, just as an exercise. To be sure I wasn’t doing anything wrong, I wanted to double-check my calculations against some “decimal to signed binary calculators” on the Internet.

I was trying to express -255 in signed binary using 10 bits. I chose 10 bits because I wanted to save on resources: to cover the 1000 possible values from -500 to 499, 10 bits are enough, since 10 bits span 0 to 1023 unsigned (or -512 to 511 in two’s complement). I calculated -255 to be 1100000001 in 10-bit signed binary: 255 is 0011111111, invert it to get the one’s complement 1100000000, then add 1.
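
To double-check this kind of conversion without an online tool, a small width-parameterized converter is easy to write in any language. Here is a minimal Python sketch along those lines; the function names are mine, purely illustrative:

    def to_twos_complement(value, bits=10):
        """Encode a signed integer as a two's-complement bit string of the given width."""
        lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
        if not lo <= value <= hi:
            raise ValueError(f"{value} does not fit in {bits} signed bits ({lo}..{hi})")
        # Masking with 2**bits - 1 keeps only the low `bits` bits, which for a
        # negative number is exactly the invert-and-add-one result.
        return format(value & ((1 << bits) - 1), f"0{bits}b")

    def from_twos_complement(bit_string):
        """Decode a two's-complement bit string back into a signed integer."""
        bits = len(bit_string)
        value = int(bit_string, 2)
        # A set most significant bit means the value is negative.
        return value - (1 << bits) if value & (1 << (bits - 1)) else value

    print(to_twos_complement(-255, 10))        # 1100000001
    print(from_twos_complement("1100000001"))  # -255

For -255 at 10 bits this prints 1100000001, matching the hand calculation above.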

I couldn’t find any converters on the Internet that allow me to set the bit length (and thus the maximum value), in this case 10 bits. I found a few that are 8-bit and a few that are 16-bit, which made me think of our gaming consoles, which to my knowledge evolved in increments of 8, 16, 32, 64.
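
(As a stopgap for the missing converters: any language with arbitrary-precision integers can do the check at whatever width you pick. In Python, for example, the same masking trick as in the sketch above works as a one-liner; the 10 and "010b" below are just the chosen word size.)

    >>> format(-255 & ((1 << 10) - 1), "010b")
    '1100000001'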

I understand that we use binary to express Boolean logic and arithmetic in electronics: keeping a transistor in one of two voltage states maps neatly onto the true/false values of Boolean logic, and maintaining stable intermediate voltages for ternary or higher bases is technically difficult.

But why didn’t I find any converters online that let me set the bit length? Why did the gaming consoles’ maximum bit length evolve in those specific increments? Are there no processor architectures with word sizes other than these?

  • emotional_soup_88@programming.dev (OP) · 8 days ago

    Thanks! Your answer led me to this, which kind of explains it:

    https://en.wikipedia.org/wiki/Word_(computer_architecture)

    Character size was in the past (pre-variable-sized character encoding) one of the influences on unit of address resolution and the choice of word size. Before the mid-1960s, characters were most often stored in six bits; this allowed no more than 64 characters, so the alphabet was limited to upper case. Since it is efficient in time and space to have the word size be a multiple of the character size, word sizes in this period were usually multiples of 6 bits (in binary machines). A common choice then was the 36-bit word, which is also a good size for the numeric properties of a floating point format.

    After the introduction of the IBM System/360 design, which uses eight-bit characters and supports lower-case letters, the standard size of a character (or more accurately, a byte) becomes eight bits. Word sizes thereafter are naturally multiples of eight bits, with 16, 32, and 64 bits being commonly used.

    So it has to do with character size: six bits in the past, and one byte/eight bits today.