Examples exploring the use of classes/type hierarchy to represent statistical classifications
Some explorations and noodlings on adding more explicit semantics to
statistical classifications as a way to help manage the implications
of relating classifications.
We've been representing statistical classifications in CSV and then
converting to SKOS Concept Schemes by way of table2qb and CSV2RDF. The
RDF Data Cube vocabulary uses SKOS concepts as the values of
dimensions of observations in a data cube. SKOS, by design, doesn't
provide much in the way of semantics, leaving it to an application to
decide what skos:Concepts and relations between them logically mean.
The issue is that these semantics (and their logical implications) are
directly coded into applications, rather than being explicit, separate
logical rules. As such, it's hard to reason about what the
implications are when we want to relate classifications to each other.
Since a statistical classification divides a statistical population
into subsets, normally MECE (mutually exclusive and collectively
exhaustive), it makes sense
to model the classification (and hierarchy) as disjoint subsets (of
subsets, etc.). OWL gives us the tools to model with sets and
relations between them and to reason about the consequences of any
restrictions.
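As a rough sketch of this modelling style (in Turtle, with hypothetical URIs rather than the ones the repository actually uses), a parent set with two disjoint subsets might look like:

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/classification/> .  # hypothetical namespace

ex:BritishIsles a owl:Class .

# Two subsets of the parent set...
ex:GreatBritain a owl:Class ;
    rdfs:subClassOf ex:BritishIsles .
ex:Ireland a owl:Class ;
    rdfs:subClassOf ex:BritishIsles .

# ...that do not overlap (the "mutually exclusive" half of MECE)
ex:GreatBritain owl:disjointWith ex:Ireland .
```

A reasoner can then detect a contradiction if, say, one observation's area is asserted to fall into both subsets.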
By way of example, we've taken two overlapping breakdowns of
geography, the British Isles and the British Islands, and created
simple datasets about the populations of their various parts.
This directory contains the following:
population-british-isles.csv contains
the observations as Tidy Data in the style acceptable to table2qb.
population-british-islands.csv
contains the observations as Tidy Data in a simplified style.
population-british-isles.csv-metadata.json
gives the CSVW needed to convert the data into an RDF data cube using
the W3C standard csv2rdf.
population-british-islands.csv-metadata.json is similar, with some changes to cope with the simpler representation.
owl_classification.py takes a typical CSV
file as above, representing a statistical classification, expected to
be MECE, and creates a hierarchy of classes (just sets really) of
disjoint subclasses to represent the classification. Each class is
defined "intensionally" as having instances those qb:Observations
whose dimension property has the corresponding SKOS concept as its
value.
```
usage: owl_classification.py [-h] codelist classification codes property

Create statistical classification as OWL

positional arguments:
  codelist        Codelist CSV file.
  classification  Base URI for this classification.
  codes           Base URI for the codelist.
  property        Defining property.
```
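For instance, the generated class for one code might look something like this (a sketch in Turtle; the URIs and the choice of sdmx-dim:refArea as the dimension are illustrative assumptions, not necessarily what the script emits):

```turtle
@prefix owl:      <http://www.w3.org/2002/07/owl#> .
@prefix sdmx-dim: <http://purl.org/linked-data/sdmx/2009/dimension#> .
@prefix ex:       <http://example.org/classification/> .  # hypothetical

# The class of observations about Ireland: exactly those whose
# area dimension takes the corresponding SKOS concept as its value.
ex:Ireland a owl:Class ;
    owl:equivalentClass [
        a owl:Restriction ;
        owl:onProperty sdmx-dim:refArea ;
        owl:hasValue <http://example.org/codes/ireland>
    ] .
```

The owl:equivalentClass (rather than rdfs:subClassOf) link is what makes the definition "intensional": membership follows automatically from an observation's dimension value.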
codelists/british-islands.csv and
codelists/british-isles.csv provide
separate breakdowns of the two overlapping hierarchies in a table2qb
style.
codelists-metadata.json,
columns.csv and the blank
components.csv are configuration files used by table2qb.
prefixes.ttl is used to make the Turtle files more readable.
skos.rdf is a copy of the SKOS ontology with some small
changes to remove the "lints" that might break reasoners.
The Makefile records the various steps used to create these files.