Signed Integers

We have seen that we can count numbers using our fingers, various types of robot hands, and finally using switches to represent numbers in binary (base 2). So far, we have only been counting whole numbers, which start at zero and count upward. In this article, we’re going to look at how we represent negative numbers. We’re only going to look at integers for now, which are whole numbers and their opposites.

Video Lecture


Watch at Internet Archive

The Sign Bit

The trick to enabling negative numbers in binary is to use the first (leftmost) bit, corresponding to the first digital switch, as the sign bit. If the sign bit is 1, then the number is negative. Otherwise, if the sign bit is 0, the number is positive. This approach leaves the remaining bits to determine the number, so it reduces the maximum value that can be counted.

For example, assume that we have 8 bits, with all of them turned on: (1111 1111)₂. We know from previous counting exercises that we can count to 255. The bit farthest to the right counts units of 2⁰, or 1. The next bit counts 2¹, or 2. Following in succession, we have 2², 2³, 2⁴, 2⁵, 2⁶, and finally 2⁷, or 128 (when no sign bit is used). If the 2⁷ bit is zero, but all the other bits are 1, the resulting number will be (0111 1111)₂:

2⁶ + 2⁵ + 2⁴ + 2³ + 2² + 2¹ + 2⁰ =
64 + 32 + 16 + 8 + 4 + 2 + 1 = 127

Therefore, if we didn’t use the first bit, we could count as high as 127. But what happens if we want the first bit to be the sign bit? Let’s start with the easy case, with the sign bit set to 0. The result is exactly as we just saw: the value of the remaining bits is 127, so the number represented is 127.
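As a quick sanity check, we can evaluate this bit pattern by hand or by machine. The Python snippet below (Python is just a convenient choice for illustration here, not part of the lecture) parses the pattern as ordinary base-2 digits:

```python
# Interpret the bit pattern (0111 1111)2 as an ordinary unsigned binary number.
bits = "01111111"      # sign bit 0, all seven remaining bits turned on
value = int(bits, 2)   # int() can parse a string of digits in any base
print(value)           # 127
```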

Now we might be tempted to think that turning on the sign bit would just make the number negative, such that ([1]111 1111)₂ (using brackets to denote the sign bit) would be -127. This approach is called signed magnitude, and it is essentially what we do in customary base 10 notation. We don’t normally use this approach inside computers for several reasons. First, the circuitry to perform arithmetic on signed magnitude values is relatively complicated. Second, we end up with both a positive zero, ([0]000 0000)₂, and a negative zero, ([1]000 0000)₂.
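A short Python sketch makes the negative-zero problem concrete. The helper name `signed_magnitude` is our own invention for illustration:

```python
def signed_magnitude(bits):
    """Decode an 8-bit string using the signed magnitude convention."""
    magnitude = int(bits[1:], 2)   # value of the seven non-sign bits
    return -magnitude if bits[0] == "1" else magnitude

print(signed_magnitude("01111111"))   # 127
print(signed_magnitude("11111111"))   # -127
# The drawback: two distinct bit patterns both decode to zero.
print(signed_magnitude("00000000"))   # 0  ("positive zero")
print(signed_magnitude("10000000"))   # 0  ("negative zero")
```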

As we’ll see later when we perform arithmetic, the actual representation we use is called the two’s complement representation.

Positive Integers

In two’s complement representation, we can ignore a sign bit of zero when converting to decimal. Therefore, we use the same procedure as we previously used when counting in binary. Here are a few examples:

([0]000 1010)₂ =
0*2⁶ + 0*2⁵ + 0*2⁴ + 1*2³ + 0*2² + 1*2¹ + 0*2⁰ =
0 + 0 + 0 + 8 + 0 + 2 + 0 = 10

([0]101 1100)₂ =
1*2⁶ + 0*2⁵ + 1*2⁴ + 1*2³ + 1*2² + 0*2¹ + 0*2⁰ =
64 + 0 + 16 + 8 + 4 + 0 + 0 = 92

Essentially, we can just pretend that the sign bit isn’t there whenever the sign bit is zero.
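In code, ignoring a zero sign bit is exactly as simple as it sounds. The helper below is a hypothetical Python illustration of this procedure:

```python
def positive_value(bits):
    """Decode an 8-bit two's complement string whose sign bit is 0."""
    assert bits[0] == "0", "only handles non-negative values"
    # Pretend the sign bit isn't there: evaluate the rest as plain binary.
    return int(bits[1:], 2)

print(positive_value("00001010"))  # 10
print(positive_value("01011100"))  # 92
```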

Negative Integers

With negative integers in two’s complement, the sign bit carries the same magnitude as an ordinary bit in that position would, except that it is negative. Thus, ([1]000 0000)₂ equals -128, since the sign bit is in the 2⁷ position, or position 7, counting from the right and starting at zero. If any bits to the right of the sign bit are turned on, we add the value of those bits back to the most negative value contributed by the sign bit. Let’s look at a few examples:

([1]000 0001)₂ =
-1*2⁷ + 0*2⁶ + 0*2⁵ + 0*2⁴ + 0*2³ + 0*2² + 0*2¹ + 1*2⁰ =
-128 + 0 + 0 + 0 + 0 + 0 + 0 + 1 = -127

([1]000 0010)₂ =
-1*2⁷ + 0*2⁶ + 0*2⁵ + 0*2⁴ + 0*2³ + 0*2² + 1*2¹ + 0*2⁰ =
-128 + 0 + 0 + 0 + 0 + 0 + 2 + 0 = -126

([1]000 0011)₂ =
-1*2⁷ + 0*2⁶ + 0*2⁵ + 0*2⁴ + 0*2³ + 0*2² + 1*2¹ + 1*2⁰ =
-128 + 0 + 0 + 0 + 0 + 0 + 2 + 1 = -125

([1]111 1111)₂ =
-1*2⁷ + 1*2⁶ + 1*2⁵ + 1*2⁴ + 1*2³ + 1*2² + 1*2¹ + 1*2⁰ =
-128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = -1

Using the two’s complement approach greatly simplifies the low-level circuitry needed for a computer to perform mathematical operations (such as addition and subtraction). At the same time, there is only one representation for zero, which eliminates the need for special detection circuits or extra software steps to recognize that -0 and +0 are the same value (since zero is neither positive nor negative in mathematics).

An easy way to remember how to do sign bit conversion in two’s complement is to remember that if the sign bit is turned on by itself (and all the rest of the bits are 0), the number is as negative as it can go. When any of the remaining bits are turned on, add the values of those bits to the most negative value to get the result. Since the sum of the values of all the bits to the right of the sign bit is always one less than the magnitude of the sign bit, turning on all the bits including the sign bit is always -1.
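The whole conversion rule can be sketched as a small Python function (our own illustration, not a standard library routine). Subtracting 2ⁿ from the unsigned value of an n-bit pattern whose sign bit is on is the same as giving the sign bit a negative weight:

```python
def twos_complement(bits):
    """Decode a bit string as a two's complement integer of len(bits) bits."""
    n = len(bits)
    value = int(bits, 2)   # unsigned value of all the bits, sign bit included
    if bits[0] == "1":
        value -= 1 << n    # same as weighting the sign bit as -2^(n-1)
    return value

print(twos_complement("10000001"))  # -127
print(twos_complement("10000011"))  # -125
print(twos_complement("11111111"))  # -1
print(twos_complement("01111111"))  # 127
```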

Longer Integers

In the above examples, we’ve seen that signed integers with 8 total bits (including the sign bit) can represent values from -128 through +127. Before we had the sign bit, we were using whole numbers, which are called unsigned integers in computing terminology. With unsigned integers, the first bit is just another bit, so the values that can be represented are 0 through +255. Notice that we can always represent 256 distinct values with an 8-bit integer; whether or not the integer is signed simply determines the range of those values (-128 through +127 versus 0 through +255).

Inside the computer, we group collections of 8 bits together into a single byte, and our units of storage are based on groups of bytes. However, we normally still refer to the size of a storage location for numbers using the number of bits. This number will be a multiple of 8.

If we need to represent a number with a greater magnitude (signed or unsigned) than we can fit into a single byte, then we normally double the number of bytes until we get a storage area big enough to hold the value. For example, if we needed to store a value between -32,768 and +32,767 (inclusive), we could use two bytes side-by-side. The result would be a 16-bit integer:

([1]000 0000 0000 0000)₂ = -32,768
([0]111 1111 1111 1111)₂ = +32,767
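Python’s standard struct module can show these two byte patterns directly; '>h' is the format code for a big-endian signed 16-bit integer:

```python
import struct

# '>h' packs a value into two bytes as a big-endian signed 16-bit integer.
print(struct.pack(">h", -32768).hex())  # 8000: sign bit on, all other bits off
print(struct.pack(">h",  32767).hex())  # 7fff: sign bit off, all other bits on
# struct.pack(">h", 32768) would raise struct.error: that value needs 17 bits.
```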

Typical sizes of integers we see in computers are 8, 16, 32, and 64 bits. Table 1 shows the ranges of common integer sizes.

Table 1: Common integer sizes.

Size in bits | Signed Range                                              | Unsigned Range
8            | [-128, +127]                                              | [0, +255]
16           | [-32,768, +32,767]                                        | [0, +65,535]
32           | [-2,147,483,648, +2,147,483,647]                          | [0, +4,294,967,295]
64           | [-9,223,372,036,854,775,808, +9,223,372,036,854,775,807] | [0, +18,446,744,073,709,551,615]
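The pattern behind Table 1 is simple enough to compute: an n-bit signed integer spans [-2ⁿ⁻¹, 2ⁿ⁻¹ - 1], and an unsigned one spans [0, 2ⁿ - 1]. A small Python sketch (the helper name is ours):

```python
def ranges(bits):
    """Return (signed min, signed max, unsigned max) for a `bits`-bit integer."""
    return -(1 << (bits - 1)), (1 << (bits - 1)) - 1, (1 << bits) - 1

for size in (8, 16, 32, 64):
    lo, hi, uhi = ranges(size)
    print(f"{size:2} bits: signed [{lo:,}, +{hi:,}], unsigned [0, +{uhi:,}]")
```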

Some newer computer systems and programming languages support integers larger than 64 bits, either as fixed 128-bit types or as arbitrary-precision integers, allowing extremely large numbers to be represented.

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.