Binary

A number system with only two digits — 0 and 1 — that forms the foundation of all digital computing because it maps directly to the two states of an electrical switch: off and on.


What is it?

You count in base 10 — ten digits, 0 through 9. When you run out, you add a column: 10, 11, 12. Binary works the same way, but with only two digits: 0 and 1. When you run out, you add a column: 10 (which means “two” in binary), 11 (three), 100 (four).
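The two counting systems can be compared directly with Python's built-in conversions (a quick illustration, not part of the definition):

```python
# Decimal to binary and back, using Python's built-ins.
for n in range(5):
    print(n, "->", bin(n)[2:])   # bin(2) == '0b10'; strip the '0b' prefix

# '100' in binary means four, exactly as described above.
print(int("100", 2))   # 4
```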

Why would anyone use a number system with only two digits? Because computers are built from transistors — tiny electronic switches that are either on or off. There is no “half on.” There is no dial. Just two states. Binary maps perfectly to this physical reality: 0 = off, 1 = on.1

Every piece of data a computer processes — numbers, text, images, video, sound — is stored and manipulated as patterns of 0s and 1s. A single binary digit is called a bit. Eight bits make a byte, which can represent 256 different values (2^8). That is enough to encode any character in the English alphabet, any digit, and common symbols.2
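The byte arithmetic above can be checked in one line each (a sketch; the variable names are just for illustration):

```python
# A bit holds one of two values; each extra bit doubles the range.
bits_per_byte = 8
values_per_byte = 2 ** bits_per_byte
print(values_per_byte)    # 256 distinct values, 0 through 255

# The same rule explains the familiar 32-bit limit.
values_32_bit = 2 ** 32
print(values_32_bit)      # 4,294,967,296 -- roughly 4 billion
```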

In plain terms

Binary is the alphabet of electricity. Just as English uses 26 letters to build every word, computers use 2 digits to build every number, every letter, every pixel, and every sound. The simplicity is the strength — two states are easy to engineer reliably at billions of operations per second.



How does it work?

Counting in binary

Each position in a binary number represents a power of 2, just as each position in decimal represents a power of 10:

| Binary | Calculation | Decimal |
| --- | --- | --- |
| 1 | 1 | 1 |
| 10 | 2 + 0 | 2 |
| 11 | 2 + 1 | 3 |
| 100 | 4 + 0 + 0 | 4 |
| 1000 | 8 + 0 + 0 + 0 | 8 |
| 1010 | 8 + 0 + 2 + 0 | 10 |
| 11111111 | 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 | 255 |
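The table's arithmetic can be reproduced by summing one power of 2 per digit (`binary_to_decimal` is a hypothetical helper for illustration):

```python
def binary_to_decimal(bits: str) -> int:
    """Add up the power of 2 for each position that holds a 1."""
    total = 0
    for position, digit in enumerate(reversed(bits)):
        if digit == "1":
            total += 2 ** position
    return total

print(binary_to_decimal("1010"))      # 8 + 0 + 2 + 0 = 10
print(binary_to_decimal("11111111"))  # 128 + 64 + ... + 2 + 1 = 255
```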

Think of it like...

Light switches in a row. Each switch is either up (1) or down (0). The first switch is worth 1, the second 2, the third 4, the fourth 8. To represent “10,” flip the switches worth 8 and 2: 1010.

Encoding text

To store text, each character is assigned a number. The letter “A” is 65. The letter “a” is 97. A space is 32. The original encoding — ASCII — used 7 bits to represent 128 characters, covering English letters, digits, and punctuation.3
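Python exposes these character codes directly through `ord()` and `chr()`:

```python
# The codes named above: 'A' is 65, 'a' is 97, a space is 32.
print(ord("A"), ord("a"), ord(" "))   # 65 97 32
print(chr(65))                        # A

# Every ASCII code fits in 7 bits, i.e. is below 128.
print(all(ord(c) < 128 for c in "Hello, World!"))   # True
```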

Modern computing uses Unicode, which extends this to over 150,000 characters covering every writing system on Earth, plus emoji. Unicode characters use 8 to 32 bits depending on the encoding (UTF-8, UTF-16).4
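The variable width is easy to observe: encoding a character as UTF-8 yields between 1 and 4 bytes (8 to 32 bits). A quick sketch:

```python
# UTF-8 spends more bytes on characters further from plain ASCII.
for ch in ["A", "é", "€", "🙂"]:
    print(ch, "->", len(ch.encode("utf-8")), "byte(s)")
# A -> 1, é -> 2, € -> 3, 🙂 -> 4
```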

Encoding images and sound

An image is a grid of pixels. Each pixel is a colour. Each colour is a combination of red, green, and blue intensity — each represented by a byte (0-255). A single pixel in full colour needs 3 bytes (24 bits).5
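A single pixel can be sketched as three bytes, which is also where hex colour codes come from:

```python
# One full-colour pixel: three bytes, one per channel (0-255 each).
red, green, blue = 255, 128, 0             # an orange pixel
pixel = bytes([red, green, blue])
print(len(pixel) * 8)                      # 24 bits per pixel
print(f"#{red:02x}{green:02x}{blue:02x}")  # #ff8000, the familiar hex code
```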

Sound is encoded by sampling the audio wave thousands of times per second and storing each sample as a number. CD-quality audio samples 44,100 times per second, with each sample stored as a 16-bit number.6
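Those figures multiply out to a concrete data rate (assuming stereo, i.e. two channels, as on a CD):

```python
# One second of CD-quality stereo audio, in raw bytes.
sample_rate = 44_100         # samples per second
bits_per_sample = 16
channels = 2                 # stereo
bytes_per_second = sample_rate * (bits_per_sample // 8) * channels
print(bytes_per_second)      # 176,400 bytes -- about 172 KiB per second
```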

Logic gates

Binary is not just for storage. The CPU performs calculations using logic gates — circuits that apply Boolean logic to binary inputs. An AND gate outputs 1 only when both inputs are 1. An OR gate outputs 1 when either input is 1. A NOT gate flips the input.7
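The three gates can be modelled on single bits, and combined into something useful — here a half adder, which adds two bits (a sketch, not how a CPU is literally programmed):

```python
# The three basic gates, on single bits (0 or 1).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", "AND:", AND(a, b), "OR:", OR(a, b))

# Combining gates: XOR, then a half adder that adds two bits.
def XOR(a, b): return OR(a, b) & NOT(AND(a, b))
def half_adder(a, b):
    return AND(a, b), XOR(a, b)   # (carry, sum)

print(half_adder(1, 1))           # (1, 0): 1 + 1 = binary 10
```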

These three operations, combined in billions of configurations, produce everything a computer can do — from addition to video rendering to running language models.


Why do we use it?

Key reasons

1. Physical simplicity. Two states (on/off) are trivial to engineer reliably. Building circuits that distinguish between 10 different voltage levels would be far more error-prone.

2. Noise resistance. With only two states, the system can tolerate significant electrical noise before misreading a signal. A “1” is anything above a threshold; a “0” is anything below.

3. Mathematical elegance. Boolean algebra — the mathematics of true/false logic — maps directly to binary and provides a complete foundation for computation.8


When do we use it?

  • Every time a computer does anything — binary is the substrate, whether you see it or not
  • When debugging low-level issues (network packets, file formats, memory layout)
  • When understanding why computers have specific limits (256 values per byte, 4 billion values per 32-bit integer)
  • When working with colour codes, character encodings, or file headers

Rule of thumb

You rarely need to read binary directly. But understanding that everything reduces to bits helps you grasp why computers have the constraints and capabilities they do.


How can I think about it?

The light switch row

Imagine a row of light switches on a wall. Each switch is either up (1) or down (0). The first switch from the right is worth 1. The next is worth 2. Then 4, 8, 16, 32, and so on — each double the previous.

To represent any number, you flip the right combination of switches. The number 13? Flip the switches worth 8, 4, and 1: 1101. The number 255? All eight switches up: 11111111.

This is exactly how a byte works in a computer — eight transistors, each on or off, representing a number from 0 to 255.
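The switch row can be modelled directly as a list of 0s and 1s paired with the place values they control (`switches_to_number` is a hypothetical name, used only for this sketch):

```python
# Eight switches: leftmost worth 128, rightmost worth 1.
place_values = [128, 64, 32, 16, 8, 4, 2, 1]

def switches_to_number(switches):
    """Sum the place values of every switch that is up (1)."""
    return sum(v for v, s in zip(place_values, switches) if s)

thirteen = [0, 0, 0, 0, 1, 1, 0, 1]   # switches worth 8, 4 and 1 are up
print(switches_to_number(thirteen))    # 13
print(switches_to_number([1] * 8))     # 255, all eight switches up
```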

The Morse code parallel

Binary is to computers what Morse code is to telegraph operators. Morse uses two signals — dot and dash — to encode every letter. Binary uses two digits — 0 and 1 — to encode every piece of data.

The principle is the same: with just two symbols and agreed-upon patterns, you can represent anything. The encoding scheme (ASCII, Unicode, RGB) is the “codebook” that says which pattern means what — just as Morse code has a codebook mapping dots and dashes to letters.


Concepts to explore next

| Concept | What it covers | Status |
| --- | --- | --- |
| machine-code | Binary patterns the CPU reads as instructions | complete |
| abstraction-layers | Why we built layers above binary to make computing accessible | complete |

Some cards don't exist yet

A broken link is a placeholder for future learning, not an error.



Where this concept fits

Position in the knowledge graph

graph TD
    SD[Software Development] --> BIN[Binary]
    SD --> AL[Abstraction Layers]
    BIN --> MC[Machine Code]
    BIN -.->|foundation for| AL

    style BIN fill:#4a9ede,color:#fff

Related concepts:

  • machine-code — binary patterns that the CPU interprets as instructions to execute
  • abstraction-layers — the layers built above binary to make computing accessible to humans


Footnotes

  1. Petzold, C. (2000). Code: The Hidden Language of Computer Hardware and Software. Microsoft Press.

  2. Hennessy, J. & Patterson, D. (2017). Computer Architecture: A Quantitative Approach. 6th ed. Morgan Kaufmann.

  3. Wikipedia. (2026). ASCII. The American Standard Code for Information Interchange, first published in 1963.

  4. Unicode Consortium. (2026). The Unicode Standard. Unicode Consortium.

  5. RGB colour model uses 8 bits per channel (red, green, blue), yielding 16.7 million possible colours per pixel. Wikipedia. (2026). Color depth.

  6. Pohlmann, K. (2010). Principles of Digital Audio. 6th ed. McGraw-Hill.

  7. Boole, G. (1854). An Investigation of the Laws of Thought. Walton and Maberly.

  8. Shannon, C. (1938). “A Symbolic Analysis of Relay and Switching Circuits.” Transactions of the American Institute of Electrical Engineers, 57(12). Shannon’s master’s thesis proved that Boolean algebra could be used to design digital circuits.