
Plugins

rdflib plugin "architecture"

Many parts of RDFLib are extensible with plugins; see setuptools’ ‘Creating and discovering plugins’ for background. These pages list the plugins included in RDFLib core.

Plugin stores

Built In

The following Stores are contained within the rdflib core package:

| Name | Class |
| ---- | ----- |
| Auditable | AuditableStore |
| Concurrent | ConcurrentStore |
| SimpleMemory | SimpleMemory |
| Memory | Memory |
| SPARQLStore | SPARQLStore |
| SPARQLUpdateStore | SPARQLUpdateStore |
| BerkeleyDB | BerkeleyDB |
| default | Memory |

External

The following Stores are defined externally to rdflib’s core package, so look to their documentation elsewhere for specific details of use.

| Name | Repository | Notes |
| ---- | ---------- | ----- |
| SQLAlchemy | github.com/RDFLib/rdflib-sqlalchemy | An SQLAlchemy-backed, formula-aware RDFLib Store. Tested dialects: SQLite, MySQL & PostgreSQL |
| leveldb | github.com/RDFLib/rdflib-leveldb | An adaptation of the RDFLib BerkeleyDB Store’s key-value approach, using LevelDB as a back-end |
| Kyoto Cabinet | github.com/RDFLib/rdflib-kyotocabinet | An adaptation of the RDFLib BerkeleyDB Store’s key-value approach, using Kyoto Cabinet as a back-end |
| HDT | github.com/RDFLib/rdflib-hdt | A Store back-end for rdflib that allows reading and querying HDT documents |
| Oxigraph | github.com/oxigraph/oxrdflib | Works with the Pyoxigraph Python graph database library |
| pycottas | github.com/arenas-guerrero-julian/pycottas | A Store back-end for querying compressed COTTAS files |

If you have, or know of, a Store implementation and would like it listed here, please submit a Pull Request!

Use

You can use these stores like this:

```python
from rdflib import Graph

# use the default Memory store
graph = Graph()

# use the BerkeleyDB store
graph = Graph(store="BerkeleyDB")
```

In some cases, you must explicitly open and close a store, for example:

```python
from rdflib import Graph

# use the BerkeleyDB store
graph = Graph(store="BerkeleyDB")
graph.open("/some/folder/location")
# do things ...
graph.close()
```

Plugin parsers

These parsers are available in default RDFLib; you can use them by passing the name to a graph’s parse() method:

```python
graph.parse(my_url, format="n3")
```

The html parser will auto-detect RDFa, HTurtle or Microdata.

It is also possible to pass a mime-type for the format parameter:

```python
graph.parse(my_url, format="application/rdf+xml")
```

If you are not sure what format your file will be, you can use guess_format(), which guesses based on the file extension.

| Name | Class |
| ---- | ----- |
| json-ld | JsonLDParser |
| hext | HextuplesParser |
| n3 | N3Parser |
| nquads | NQuadsParser |
| patch | RDFPatchParser |
| nt | NTParser |
| trix | TriXParser |
| turtle | TurtleParser |
| xml | RDFXMLParser |

Multi-graph IDs

Note that to parse multi-graph data, e.g. TriG, HexTuples, etc., correctly into a Dataset, as opposed to a context-unaware Graph, you will need to set the publicID of the Dataset to the identifier of its default_context (default graph), for example:

```python
from rdflib import Dataset

d = Dataset()
d.parse(
    data=""" ... """,
    format="trig",
    publicID=d.default_context.identifier
)
```

(from the file tests/test_serializer_hext.py)

Plugin serializers

These serializers are available in default RDFLib; you can use them by passing the name to a graph’s serialize() method:

```python
print(graph.serialize(format="n3"))
```

It is also possible to pass a mime-type for the format parameter:

```python
graph.serialize(my_url, format="application/rdf+xml")
```
| Name | Class |
| ---- | ----- |
| json-ld | JsonLDSerializer |
| n3 | N3Serializer |
| nquads | NQuadsSerializer |
| nt | NTSerializer |
| hext | HextuplesSerializer |
| patch | PatchSerializer |
| pretty-xml | PrettyXMLSerializer |
| trig | TrigSerializer |
| trix | TriXSerializer |
| turtle | TurtleSerializer |
| longturtle | LongTurtleSerializer |
| xml | XMLSerializer |

JSON-LD

JSON-LD - ‘json-ld’ - has been incorporated into RDFLib since v6.0.0.

RDF Patch

The RDF Patch Serializer - ‘patch’ - uses the RDF Patch format defined at https://afs.github.io/rdf-patch/. It supports serializing context-aware stores as either addition or deletion patches, and also serializing the difference between two context-aware stores as a patch of additions and deletions.

HexTuples

The HexTuples Serializer - ‘hext’ - uses the HexTuples format defined at https://github.com/ontola/hextuples.

For serialization of non-context-aware data sources, e.g. a single Graph, the ‘graph’ field (6th variable in the Hextuple) will be an empty string.

For context-aware (multi-graph) serialization, the ‘graph’ field of the default graph will be an empty string and the values for other graphs will be Blank Node IDs or IRIs.

Longturtle

Longturtle is just the Turtle format with newlines preferred over compactness (no multiple nodes on the same line), to make the format friendlier to text-file version control (think Git), and with the more modern form of prefix marker, PREFIX instead of @prefix, to make it as similar to SPARQL as possible.

Longturtle is Turtle 1.1 compliant and will work wherever ordinary Turtle works; however, some very old parsers don’t understand PREFIX, only @prefix.

Plugin query results

Plugins for reading and writing (SPARQL) query Result objects; pass the name to either parse() or serialize().

Parsers

| Name | Class |
| ---- | ----- |
| csv | CSVResultParser |
| json | JSONResultParser |
| tsv | TSVResultParser |
| xml | XMLResultParser |

Serializers

| Name | Class |
| ---- | ----- |
| csv | CSVResultSerializer |
| json | JSONResultSerializer |
| txt | TXTResultSerializer |
| xml | XMLResultSerializer |