Binary
A number system with only two digits — 0 and 1 — that forms the foundation of all digital computing because it maps directly to the two states of an electrical switch: off and on.
What is it?
You count in base 10 — ten digits, 0 through 9. When you run out, you add a column: 10, 11, 12. Binary works the same way, but with only two digits: 0 and 1. When you run out, you add a column: 10 (which means “two” in binary), 11 (three), 100 (four).
Why would anyone use a number system with only two digits? Because computers are built from transistors — tiny electronic switches that are either on or off. There is no “half on.” There is no dial. Just two states. Binary maps perfectly to this physical reality: 0 = off, 1 = on.[^1]
Every piece of data a computer processes — numbers, text, images, video, sound — is stored and manipulated as patterns of 0s and 1s. A single binary digit is called a bit. Eight bits make a byte, which can represent 256 different values (2^8). That is enough to encode any character in the English alphabet, any digit, and common symbols.[^2]
In plain terms
Binary is the alphabet of electricity. Just as English uses 26 letters to build every word, computers use 2 digits to build every number, every letter, every pixel, and every sound. The simplicity is the strength — two states are easy to engineer reliably at billions of operations per second.
At a glance
How binary maps to data
```mermaid
graph TD
    T[Transistors<br/>on or off] -->|represent| B[Bits<br/>0 or 1]
    B -->|8 bits =| BY[Byte<br/>256 values]
    BY -->|encode| N[Numbers]
    BY -->|encode| TX[Text]
    BY -->|encode| IMG[Images]
    BY -->|encode| SND[Sound]
    style B fill:#4a9ede,color:#fff
    style BY fill:#e8b84b,color:#fff
```

Key: Transistors produce bits. Bits group into bytes. Bytes encode all forms of data through agreed-upon encoding schemes.
How does it work?
Counting in binary
Each position in a binary number represents a power of 2, just as each position in decimal represents a power of 10:
| Binary | Calculation | Decimal |
|---|---|---|
| 1 | 1 | 1 |
| 10 | 2 + 0 | 2 |
| 11 | 2 + 1 | 3 |
| 100 | 4 + 0 + 0 | 4 |
| 1000 | 8 + 0 + 0 + 0 | 8 |
| 1010 | 8 + 0 + 2 + 0 | 10 |
| 11111111 | 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 | 255 |
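The positional arithmetic in the table can be sketched in a few lines of Python. This is a minimal illustration using repeated division by 2; Python's built-in `bin()` does the same job.

```python
def to_binary(n: int) -> str:
    """Return the binary representation of a non-negative integer."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder is the next bit, least significant first
        n //= 2
    return "".join(reversed(digits))

print(to_binary(10))   # 1010  (8 + 0 + 2 + 0)
print(to_binary(255))  # 11111111  (128+64+32+16+8+4+2+1)
```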
Think of it like...
Light switches in a row. Each switch is either up (1) or down (0). The first switch is worth 1, the second 2, the third 4, the fourth 8. To represent “10,” flip the switches worth 8 and 2: 1010.
Encoding text
To store text, each character is assigned a number. The letter “A” is 65. The letter “a” is 97. A space is 32. The original encoding — ASCII — used 7 bits to represent 128 characters, covering English letters, digits, and punctuation.[^3]
Modern computing uses Unicode, which extends this to over 150,000 characters covering every writing system on Earth, plus emoji. Unicode characters use 8 to 32 bits depending on the encoding (UTF-8, UTF-16).[^4]
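You can check these character codes directly in Python: `ord()` returns a character's code point, which matches ASCII for basic Latin characters, and `str.encode()` shows how many bytes UTF-8 spends per character.

```python
# ASCII values quoted above
print(ord("A"))   # 65
print(ord("a"))   # 97
print(ord(" "))   # 32

# UTF-8 uses 1 byte for ASCII characters, more for others
print(len("A".encode("utf-8")))   # 1
print(len("€".encode("utf-8")))   # 3
```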
Encoding images and sound
An image is a grid of pixels. Each pixel is a colour. Each colour is a combination of red, green, and blue intensity — each represented by a byte (0-255). A single pixel in full colour needs 3 bytes (24 bits).[^5]
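A quick sketch of one way those 3 bytes are laid out: packing red, green, and blue into a single 24-bit integer gives the familiar `#RRGGBB` web colour code. The specific colour chosen here is just an example.

```python
# One full-colour pixel: three bytes, each 0-255
red, green, blue = 255, 165, 0  # orange

# Shift each channel into its 8-bit slot of a 24-bit value
pixel = (red << 16) | (green << 8) | blue
print(f"#{pixel:06x}")  # #ffa500 — the web colour code for this pixel
```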
Sound is encoded by sampling the audio wave thousands of times per second and storing each sample as a number. CD-quality audio samples 44,100 times per second, with each sample stored as a 16-bit number.[^6]
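The figures above lead to a simple back-of-envelope data rate. The stereo (two-channel) assumption is standard for CD audio but is an addition here, not stated in the text.

```python
# CD-quality audio: 44,100 samples/second, 16 bits/sample, 2 channels (stereo)
samples_per_second = 44_100
bits_per_sample = 16
channels = 2

bits_per_second = samples_per_second * bits_per_sample * channels
bytes_per_second = bits_per_second // 8
print(bytes_per_second)       # 176400 bytes every second
print(bytes_per_second * 60)  # 10584000 bytes (~10 MB) per uncompressed minute
```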
How "Hello" becomes binary
| Character | ASCII code | Binary |
|---|---|---|
| H | 72 | 01001000 |
| e | 101 | 01100101 |
| l | 108 | 01101100 |
| l | 108 | 01101100 |
| o | 111 | 01101111 |

The word “Hello” in memory:

01001000 01100101 01101100 01101100 01101111 — five bytes, forty bits.
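The same encoding can be reproduced in one line of Python: take each character's ASCII code with `ord()` and format it as an 8-bit binary string.

```python
word = "Hello"
# format(n, "08b") renders n in binary, zero-padded to 8 digits
bits = " ".join(format(ord(c), "08b") for c in word)
print(bits)  # 01001000 01100101 01101100 01101100 01101111
```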
Logic gates
Binary is not just for storage. The CPU performs calculations using logic gates — circuits that apply Boolean logic to binary inputs. An AND gate outputs 1 only when both inputs are 1. An OR gate outputs 1 when either input is 1. A NOT gate flips the input.[^7]
These three operations, combined in billions of configurations, produce everything a computer can do — from addition to video rendering to running language models.
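A small sketch of that composition: a half adder — the simplest arithmetic circuit — built from nothing but AND, OR, and NOT. It adds two bits, producing a sum bit and a carry bit.

```python
# The three basic gates, modelled on single bits (0 or 1)
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def half_adder(a, b):
    """Add two bits; return (sum, carry)."""
    # XOR expressed with the three gates: (a OR b) AND NOT (a AND b)
    s = AND(OR(a, b), NOT(AND(a, b)))
    carry = AND(a, b)
    return s, carry

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```

Chaining half adders (via full adders) yields multi-bit addition — the first step on the road from three gates to everything a CPU does.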
Why do we use it?
Key reasons
1. Physical simplicity. Two states (on/off) are trivial to engineer reliably. Building circuits that distinguish between 10 different voltage levels would be far more error-prone.
2. Noise resistance. With only two states, the system can tolerate significant electrical noise before misreading a signal. A “1” is anything above a threshold; a “0” is anything below.
3. Mathematical elegance. Boolean algebra — the mathematics of true/false logic — maps directly to binary and provides a complete foundation for computation.[^8]
When do we use it?
- Every time a computer does anything — binary is the substrate, whether you see it or not
- When debugging low-level issues (network packets, file formats, memory layout)
- When understanding why computers have specific limits (256 values per byte, 4 billion values per 32-bit integer)
- When working with colour codes, character encodings, or file headers
Rule of thumb
You rarely need to read binary directly. But understanding that everything reduces to bits helps you grasp why computers have the constraints and capabilities they do.
How can I think about it?
The light switch row
Imagine a row of light switches on a wall. Each switch is either up (1) or down (0). The first switch from the right is worth 1. The next is worth 2. Then 4, 8, 16, 32, and so on — each double the previous.
To represent any number, you flip the right combination of switches. The number 13? Flip the switches worth 8, 4, and 1: 1101. The number 255? All eight switches up: 11111111. This is exactly how a byte works in a computer — eight transistors, each on or off, representing a number from 0 to 255.
The Morse code parallel
Binary is to computers what Morse code is to telegraph operators. Morse uses two signals — dot and dash — to encode every letter. Binary uses two digits — 0 and 1 — to encode every piece of data.
The principle is the same: with just two symbols and agreed-upon patterns, you can represent anything. The encoding scheme (ASCII, Unicode, RGB) is the “codebook” that says which pattern means what — just as Morse code has a codebook mapping dots and dashes to letters.
Concepts to explore next
| Concept | What it covers | Status |
|---|---|---|
| machine-code | Binary patterns the CPU reads as instructions | complete |
| abstraction-layers | Why we built layers above binary to make computing accessible | complete |
Some cards don't exist yet
A broken link is a placeholder for future learning, not an error.
Check your understanding
Test yourself
- Explain why computers use binary instead of decimal. What physical property makes two states ideal?
- Describe how the word “Hi” would be encoded in binary using ASCII (H=72, i=105).
- Distinguish between a bit and a byte. How many different values can each represent?
- Interpret this: a digital photo is 4000 x 3000 pixels, each pixel using 3 bytes. How many bytes does the image require before compression?
- Connect binary to abstraction-layers: why don’t programmers write in binary today?
Where this concept fits
Position in the knowledge graph
```mermaid
graph TD
    SD[Software Development] --> BIN[Binary]
    SD --> AL[Abstraction Layers]
    BIN --> MC[Machine Code]
    BIN -.->|foundation for| AL
    style BIN fill:#4a9ede,color:#fff
```

Related concepts:
- machine-code — binary patterns that the CPU interprets as instructions to execute
- abstraction-layers — the layers built above binary to make computing accessible to humans
Sources
Further reading
Resources
- Code: The Hidden Language — Charles Petzold builds from first principles to a working computer, starting with nothing but flashlights and Morse code
- Binary and Data Representation — Khan Academy’s visual introduction to binary, encoding, and data representation
- Crash Course Computer Science: Binary — 10-minute video covering binary counting, logic gates, and how they build up to computation
Footnotes
[^1]: Petzold, C. (2000). Code: The Hidden Language of Computer Hardware and Software. Microsoft Press.

[^2]: Hennessy, J. & Patterson, D. (2017). Computer Architecture: A Quantitative Approach. 6th ed. Morgan Kaufmann.

[^3]: Wikipedia. (2026). ASCII. The American Standard Code for Information Interchange, first published in 1963.

[^4]: Unicode Consortium. (2026). The Unicode Standard. Unicode Consortium.

[^5]: The RGB colour model uses 8 bits per channel (red, green, blue), yielding 16.7 million possible colours per pixel. Wikipedia. (2026). Color depth.

[^6]: Pohlmann, K. (2010). Principles of Digital Audio. 6th ed. McGraw-Hill.

[^7]: Boole, G. (1854). An Investigation of the Laws of Thought. Walton and Maberly.

[^8]: Shannon, C. (1938). “A Symbolic Analysis of Relay and Switching Circuits.” Transactions of the American Institute of Electrical Engineers, 57(12). Shannon’s master’s thesis proved that Boolean algebra could be used to design digital circuits.
