Computational Logic
Computers are really giant sets of switches that can be in one of two states: off (0) or on (1). Since the early 20th century, we have created computer systems using ever-improving types of switches, from electromechanical relays to vacuum tubes to transistors. These advances have made the resulting computer systems more powerful and more reliable, but one thing that has remained unchanged since the days of punched paper tape is that we represent information by encoding it into sets of switches.
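To make that encoding concrete, here is a minimal Python sketch (the message string is just an arbitrary example) showing that each character of text is stored as a pattern of eight on/off switches, or bits:

```python
# Each character of a string is ultimately a pattern of eight
# on/off switches (bits) inside the machine.
message = "Hi"
for ch in message:
    code = ord(ch)              # the character's numeric code point
    bits = format(code, "08b")  # the same number as eight switch states
    print(f"{ch!r} -> {code} -> {bits}")

# Output:
# 'H' -> 72 -> 01001000
# 'i' -> 105 -> 01101001
```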
The foundation of any computer system is the logic gate: special circuitry that implements a logical operation on combinations of switches. At a low level, these gates are the basic units from which the computer's more complex circuitry is built. Common gates, such as NOT, AND, OR, XOR, NAND, NOR, and XNOR, are the building blocks of logical circuits. We can trace the operation of circuits built from these gates using truth tables. When we need to simplify circuits to save money, reduce power consumption, and improve performance, we can use principles like De Morgan's Laws and techniques like Karnaugh maps to assist us. By recognizing logical tautologies and fallacies, we can avoid building unnecessary circuits.
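A small sketch can tie these ideas together. The following Python functions (the names simply mirror the gates listed above) model each gate on single bits, trace a truth table the way we would by hand, and check one of De Morgan's Laws exhaustively across all inputs:

```python
# Logic gates modeled as functions on single bits (0 or 1).
def NOT(a):      return 1 - a
def AND(a, b):   return a & b
def OR(a, b):    return a | b
def XOR(a, b):   return a ^ b
def NAND(a, b):  return NOT(AND(a, b))
def NOR(a, b):   return NOT(OR(a, b))
def XNOR(a, b):  return NOT(XOR(a, b))

# Trace the XOR gate with a truth table.
print("a b | XOR")
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b} |  {XOR(a, b)}")

# De Morgan's Law: NOT(a AND b) == (NOT a) OR (NOT b).
# With only two inputs, we can simply check every case.
for a in (0, 1):
    for b in (0, 1):
        assert NAND(a, b) == OR(NOT(a), NOT(b))
print("De Morgan's Law holds for all inputs.")
```

Checking every input combination works here because a two-input gate has only four cases; truth tables and Karnaugh maps rely on the same exhaustiveness.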
We do not normally think of our computers as big collections of switches or logic gates, nor do we stop to think about how each gate inside the computer is operating while we surf the Internet. Instead, we think of the computer as a machine in its own right. We even think of the Internet as a single thing, though it really just consists of many computers wired together, and each of those computers, in turn, contains a giant collection of gates and an even larger pile of switches. Our ability to zoom in and out, seeing things either as individual components or as a single system, is part of the human capacity for abstraction.