Ontology

A formal, explicit specification of a shared conceptualisation --- defining not just what things exist, but how they relate to each other and what rules govern those relationships.


What is it?

In computer science and AI, an ontology is a structured description of the concepts in a domain and the relationships between them. It goes beyond a simple list or hierarchy --- an ontology defines types of things (classes), the properties those things can have, the relationships between them, and the rules (axioms) that constrain what is possible.1

The term is borrowed from philosophy, where ontology is the study of what exists. In information science, the meaning is more precise: an ontology is a formal, explicit specification of a shared conceptualisation.2 “Formal” means machine-readable. “Explicit” means the assumptions are stated, not implied. “Shared” means a community agrees on it. “Conceptualisation” means it models a specific view of the world.

The critical difference between an ontology and a taxonomy is that a taxonomy captures hierarchy only --- “a Golden Retriever is a type of Dog, which is a type of Mammal.” An ontology captures hierarchy plus relationships and constraints: “a Dog is owned by a Person. A Person lives at an Address. Every Dog must have exactly one breed.”3 The taxonomy tells you what things are. The ontology tells you how things connect and what rules they follow.
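The distinction can be sketched in plain Python. Below, a taxonomy holds nothing but "is a type of" links, while the ontology adds typed relationships and the "exactly one breed" rule from the example above (all data here is a hypothetical toy, not a real ontology format):

```python
# A taxonomy: nothing but "is a type of" links.
taxonomy = {"GoldenRetriever": "Dog", "Dog": "Mammal", "Mammal": "Animal"}

# An ontology: the same hierarchy, plus typed relationships and a rule.
ontology = {
    "classes": {"GoldenRetriever": "Dog", "Dog": "Mammal", "Mammal": "Animal"},
    "relations": [
        ("Dog", "ownedBy", "Person"),
        ("Person", "livesAt", "Address"),
    ],
    # Axiom: every Dog must have exactly one breed.
    "constraints": {"Dog": {"breed": {"min": 1, "max": 1}}},
}

def check_breed_axiom(individual: dict) -> bool:
    """Return True if a Dog individual satisfies the 'exactly one breed' rule."""
    rule = ontology["constraints"]["Dog"]["breed"]
    n = len(individual.get("breed", []))
    return rule["min"] <= n <= rule["max"]

print(check_breed_axiom({"breed": ["Golden Retriever"]}))  # True
print(check_breed_axiom({"breed": []}))                    # False
```

The taxonomy can only answer "what is this a kind of?"; the ontology can also answer "is this individual valid?"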

Ontologies power some of the most important systems in computing. The Semantic Web uses ontologies (expressed in OWL and RDF Schema) to let machines understand web content.1 Medical systems use ontologies like SNOMED CT to ensure that “heart attack” and “myocardial infarction” are understood as the same condition. And increasingly, AI systems grounded in ontologies hallucinate less, because the ontology constrains what the system can assert: it cannot generate a statement that violates the defined relationships.4

In plain terms

An ontology is like the rulebook for a board game. The rulebook defines the types of pieces (classes), what each piece can do (properties), how pieces interact (relationships), and what moves are illegal (constraints). Without the rulebook, you just have a collection of tokens on a board. With it, meaningful play becomes possible.




How does it work?

An ontology is built from four interrelated components. Each adds a layer of expressiveness that simpler structures like lists and taxonomies lack.

1. Classes (types of things)

A class defines a category or type. In an ontology about geography, the classes might include Country, City, River, and Mountain. Classes can be arranged in hierarchies (a Capital is a subclass of City), but unlike a taxonomy, the hierarchy is just one of several structural dimensions.1

Think of it like...

Classes are like the moulds in a factory. Each mould defines a type of product that can be manufactured. The mould for “Cup” defines the general shape --- specific cups (ceramic, plastic, large, small) are instances produced from it.

2. Properties (attributes and relationships)

Properties describe what a class can have or how classes relate to each other. There are two kinds:1

  • Data properties link an individual to a value: Marie Curie --[birthYear]--> 1867
  • Object properties link two individuals: Marie Curie --[won]--> Nobel Prize

Properties are where ontologies gain their power over taxonomies. A taxonomy can tell you that Marie Curie is a Scientist. An ontology can also tell you that she won two awards, worked at the University of Paris, and discovered Polonium --- and that each of these is a defined, typed relationship.
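Both kinds of property can be written as subject-predicate-object statements. A minimal sketch, using the Marie Curie facts above as toy data and a hypothetical `values_of` helper:

```python
# Both data properties and object properties as (subject, predicate, object) triples.
triples = [
    ("MarieCurie", "birthYear", 1867),            # data property: links to a value
    ("MarieCurie", "won", "NobelPrizePhysics"),   # object property: links two individuals
    ("MarieCurie", "won", "NobelPrizeChemistry"),
    ("MarieCurie", "workedAt", "UniversityOfParis"),
    ("MarieCurie", "discovered", "Polonium"),
]

def values_of(subject: str, predicate: str) -> list:
    """All objects linked to `subject` by `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(values_of("MarieCurie", "won"))  # the two awards
```

Each triple is one typed relationship; a taxonomy has no place to put any of them except the hierarchy itself.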

Concept to explore

See semantic-triples for how properties combine with subjects and objects to form the atomic statements of an ontology.

3. Individuals (specific instances)

An individual (also called an instance) is a concrete member of a class. Marie Curie is an individual of the class Scientist. Paris is an individual of the class City. Individuals carry actual data and participate in the relationships defined by properties.1

4. Axioms (rules and constraints)

Axioms are logical statements that define what is and is not possible in the domain. They are what make an ontology formal rather than merely descriptive.2

For example: “Every Student is enrolled in at least one Course.” “No individual can be both a Student and a Professor.” “The teaches relationship only holds between a Professor and a Course.” These constraints allow machines to reason --- to infer new facts and detect contradictions automatically.
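The three example axioms can be turned into executable checks. This is a toy consistency checker, not how an OWL reasoner is implemented, but it shows how axioms detect contradictions in data:

```python
# Toy individuals and their asserted classes / relationships.
types = {"alice": {"Student"}, "bob": {"Professor"}}
enrolled = {"alice": {"Logic101"}}
teaches = [("bob", "Logic101")]
courses = {"Logic101"}

def violations() -> list:
    found = []
    # Axiom 1: every Student is enrolled in at least one Course.
    for ind, ts in types.items():
        if "Student" in ts and not enrolled.get(ind):
            found.append(f"{ind}: Student with no enrolment")
    # Axiom 2: no individual can be both a Student and a Professor.
    for ind, ts in types.items():
        if {"Student", "Professor"} <= ts:
            found.append(f"{ind}: both Student and Professor")
    # Axiom 3: `teaches` only holds between a Professor and a Course.
    for subj, obj in teaches:
        if "Professor" not in types.get(subj, set()) or obj not in courses:
            found.append(f"teaches({subj}, {obj}): domain/range violated")
    return found

print(violations())  # [] -- the toy data satisfies all three axioms
```

Adding `"Professor"` to alice's types would make `violations()` report the disjointness conflict automatically.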

Think of it like...

Axioms are the grammar rules of a language. Vocabulary (classes and properties) tells you what words exist. Grammar tells you how they can be combined. “The cat sat on the mat” is grammatical. “The mat sat on the cat” may be grammatically valid but semantically odd --- axioms in an ontology can enforce semantic validity too.

Concept to explore

Description logics provide the mathematical foundation for ontology axioms. They define precisely what can be expressed and what can be computed.


Why do we use it?

Key reasons

1. Shared understanding across systems and teams. An ontology provides a common vocabulary and set of rules that multiple systems, organisations, or research groups can agree on. When two hospital systems use the same medical ontology, they can exchange patient data without translation errors.3

2. Machine reasoning. Because axioms are formal logic, machines can infer facts that are not explicitly stated. If the ontology says “every Mammal is an Animal” and “a Dog is a Mammal,” the system can infer that a Dog is an Animal without being told directly. This is automated reasoning, and it scales to millions of facts.1
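The Mammal/Animal inference above amounts to walking subclass links that were stated one hop at a time. A minimal sketch, with the hierarchy as toy data:

```python
# Stated facts: direct subclass links only.
subclass_of = {"Dog": "Mammal", "Mammal": "Animal"}

def is_a(cls: str, ancestor: str) -> bool:
    """Infer membership by following the subclass chain upward."""
    while cls in subclass_of:
        cls = subclass_of[cls]
        if cls == ancestor:
            return True
    return False

print(is_a("Dog", "Animal"))  # True -- inferred, never stated directly
```

Real reasoners do far more (they handle the full logic of the axioms, not just hierarchies), but the principle is the same: new facts fall out of stated ones.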

3. Reducing AI hallucination. When an AI system is grounded in an ontology, it is constrained by the defined relationships and rules. It cannot assert that “Paris is the capital of Germany” if the ontology specifies that Paris is-capital-of France and that is-capital-of is a functional property (each city is capital of at most one country).4
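The functional-property constraint can be sketched as a gate in front of assertions. This is a hypothetical illustration of the idea, not any particular system's API:

```python
# The ontology asserts one fact and marks the property as functional:
facts = {("Paris", "isCapitalOf"): "France"}
functional = {"isCapitalOf"}  # each subject has at most one value for these

def can_assert(subject: str, predicate: str, obj: str) -> bool:
    """Reject any statement that contradicts a functional property."""
    if predicate in functional:
        existing = facts.get((subject, predicate))
        if existing is not None and existing != obj:
            return False
    return True

print(can_assert("Paris", "isCapitalOf", "Germany"))  # False: contradicts the ontology
print(can_assert("Paris", "isCapitalOf", "France"))   # True: consistent restatement
```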

4. Enabling interoperability. Ontologies are the backbone of the Semantic Web vision --- making data on the web machine-understandable so that systems built by different teams can work together without custom integration.1


When do we use it?

  • When building a knowledge graph that needs more structure than a taxonomy provides
  • When multiple systems or teams need to share a common data model with defined semantics
  • When you need a machine to reason about relationships --- not just retrieve data, but draw inferences
  • When designing an AI system that must be constrained to only make valid assertions
  • When working with linked data on the web, where different datasets must interoperate

Rule of thumb

If you need to say “X is a type of Y” --- use a taxonomy. If you also need to say “X relates to Z in this specific way, and here are the rules governing that relationship” --- use an ontology.


How can I think about it?

The building code analogy

A taxonomy of buildings might classify them as Residential > House > Detached House. An ontology is like the building code for the city. It defines the types of structures (classes: house, apartment, office), the relationships between them (a house is on a lot, a lot is in a zone), the properties they must have (every house must have an address, a number of bedrooms, a year built), and the constraints (a residential zone cannot contain an industrial building, every building must have exactly one postal code).

  • Classes = the categories of buildings
  • Properties = the measurable attributes and connections
  • Individuals = specific buildings at specific addresses
  • Axioms = the zoning laws and building codes that constrain what is valid

Without the building code, you have a list of buildings. With it, you can reason: “Is this proposed factory legal on this lot?” The ontology makes that question answerable.

The human schema analogy

Cognitive scientists describe how people understand the world through schemas --- internal mental models that define types, relationships, and expectations. Your “restaurant schema” includes classes (waiter, menu, table, bill), relationships (a waiter serves a table, a table has a menu), and constraints (you pay after eating, not before).2

An ontology is an externalised, machine-readable version of a human schema. When you walk into a restaurant in a foreign country, your schema lets you navigate the situation even without speaking the language. When a machine encounters a new dataset, an ontology lets it interpret the data even without bespoke programming.

  • Schema = what you carry in your head
  • Ontology = the same structure, written down formally so a machine can use it

Concepts to explore next

| Concept | What it covers | Status |
| --- | --- | --- |
| semantic-triples | The subject-predicate-object atoms that ontology statements are built from | complete |
| description-logics | The mathematical foundations for ontology reasoning | stub |
| owl | The Web Ontology Language --- the standard for expressing ontologies on the web | stub |
| rdf-schema | The schema layer for RDF that defines classes and properties | stub |

Some cards don't exist yet

A broken link is a placeholder for future learning, not an error.




Where this concept fits

Position in the knowledge graph

graph TD
    KE[Knowledge Engineering] --> O[Ontology]
    KE --> KG[Knowledge Graphs]
    KE --> MRF[Machine-Readable Formats]
    O --> ST[Semantic Triples]
    style O fill:#4a9ede,color:#fff

Related concepts:

  • nodes-and-edges --- ontology classes and individuals map to nodes; properties map to edges in a graph
  • structured-data-vs-prose --- ontologies represent the most structured end of the spectrum, where every relationship is formally defined
  • schema-theory --- human cognitive schemas are the mental parallel to formal ontologies

Sources



Footnotes

  1. Wikipedia contributors. (n.d.). Ontology (information science). Wikipedia.

  2. Knowledge Systems Authority. (2026). Knowledge Ontologies and Taxonomies. Knowledge Systems Authority.

  3. Sowa, J. (2000). What Is Knowledge Representation? In Knowledge Representation: Logical, Philosophical, and Computational Foundations. Brooks/Cole.

  4. Dong, Y. et al. (2025). Ontology-Grounded AI Reduces Hallucination in Knowledge-Intensive Tasks. PubMed.