sparqlstore

This is an RDFLib store around Ivan Herman et al.'s SPARQL service wrapper. It was first done in layer-cake and then ported to RDFLib.

Classes:

  • SPARQLStore

    An RDFLib store around a SPARQL endpoint

  • SPARQLUpdateStore

    A store using SPARQL queries for reading and SPARQL Update for changes.

BNODE_IDENT_PATTERN module-attribute

BNODE_IDENT_PATTERN = compile('(?P<label>_\\:[^\\s]+)')

LIMIT module-attribute

LIMIT = 'LIMIT'

OFFSET module-attribute

OFFSET = 'OFFSET'

ORDERBY module-attribute

ORDERBY = 'ORDER BY'

__all__ module-attribute

__all__ = ['SPARQLUpdateStore', 'SPARQLStore']

SPARQLStore

SPARQLStore(query_endpoint: Optional[str] = None, sparql11: bool = True, context_aware: bool = True, node_to_sparql: _NodeToSparql = _node_to_sparql, returnFormat: Optional[str] = 'xml', auth: Optional[Tuple[str, str]] = None, **sparqlconnector_kwargs)

Bases: SPARQLConnector, Store

An RDFLib store around a SPARQL endpoint

This is context-aware and should work as expected when a context is specified.

For ConjunctiveGraphs, reading is done from the "default graph". Exactly what this means depends on your endpoint, because SPARQL does not offer a simple way to query the union of all graphs, as would be expected for a ConjunctiveGraph. This is why we recommend using Dataset instead, which is motivated by SPARQL 1.1.

Fuseki/TDB has a flag for specifying that the default graph is the union of all graphs (tdb:unionDefaultGraph in the Fuseki config).
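
If you want the Dataset-based setup recommended above, a minimal sketch looks like this (the endpoint URL is a placeholder):

```python
from rdflib import Dataset
from rdflib.plugins.stores.sparqlstore import SPARQLStore

# Placeholder endpoint URL
store = SPARQLStore(query_endpoint="https://example.org/sparql")
ds = Dataset(store=store)

# Queries are sent to the endpoint; what the "default graph" contains
# depends on the endpoint configuration, as noted above.
for row in ds.query("SELECT * WHERE { ?s ?p ?o } LIMIT 10"):
    print(row.s, row.p, row.o)
```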

!!! warning "Blank nodes"

By default the SPARQL Store does not support blank-nodes!

As blank-nodes act as variables in SPARQL queries,
there is no way to query for a particular blank node without
using non-standard SPARQL extensions.

See http://www.w3.org/TR/sparql11-query/#BGPsparqlBNodes

You can make use of such extensions through the node_to_sparql argument. For example, if you want to transform BNode('0001') into <bnode:b0001>, you can use a function like this:

>>> def my_bnode_ext(node):
...    if isinstance(node, BNode):
...        return '<bnode:b%s>' % node
...    return _node_to_sparql(node)
>>> store = SPARQLStore('http://dbpedia.org/sparql',
...                     node_to_sparql=my_bnode_ext)

You can request a particular result serialization with the returnFormat parameter. This is a string that must have a matching plugin registered. Built-in support exists for xml, json, csv, tsv and application/rdf+xml.

The underlying SPARQLConnector uses the urllib library. Any extra kwargs passed to the SPARQLStore connector are passed to urllib when doing HTTP calls. I.e. you have full control of cookies/auth/headers.

For example:

>>> store = SPARQLStore('...my endpoint ...', auth=('user','pass'))

will use HTTP basic auth.
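
Other connector options can be combined in the same way. A hedged sketch (the endpoint is a placeholder, and forwarding of a headers dict to urllib is an assumption based on the description above):

```python
store = SPARQLStore(
    "https://example.org/sparql",         # placeholder endpoint
    returnFormat="json",                  # must match a registered result-parser plugin
    headers={"User-Agent": "my-app/0.1"}, # assumption: extra kwarg forwarded to urllib
)
```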

Methods:

Attributes:

Source code in rdflib/plugins/stores/sparqlstore.py
def __init__(
    self,
    query_endpoint: Optional[str] = None,
    sparql11: bool = True,
    context_aware: bool = True,
    node_to_sparql: _NodeToSparql = _node_to_sparql,
    returnFormat: Optional[str] = "xml",  # noqa: N803
    auth: Optional[Tuple[str, str]] = None,
    **sparqlconnector_kwargs,
):
    super(SPARQLStore, self).__init__(
        query_endpoint=query_endpoint,
        returnFormat=returnFormat,
        auth=auth,
        **sparqlconnector_kwargs,
    )

    self.node_to_sparql = node_to_sparql
    self.nsBindings: Dict[str, Any] = {}
    self.sparql11 = sparql11
    self.context_aware = context_aware
    self.graph_aware = context_aware
    self._queries = 0

context_aware instance-attribute

context_aware = context_aware

formula_aware class-attribute instance-attribute

formula_aware = False

graph_aware class-attribute instance-attribute

graph_aware = context_aware

node_to_sparql instance-attribute

node_to_sparql = node_to_sparql

nsBindings instance-attribute

nsBindings: Dict[str, Any] = {}

regex_matching class-attribute instance-attribute

regex_matching = NATIVE_REGEX

sparql11 instance-attribute

sparql11 = sparql11

transaction_aware class-attribute instance-attribute

transaction_aware = False

__len__

__len__(context: Optional[_ContextType] = None) -> int
Source code in rdflib/plugins/stores/sparqlstore.py
def __len__(self, context: Optional[_ContextType] = None) -> int:
    if not self.sparql11:
        raise NotImplementedError(
            "For performance reasons, this is not"
            + "supported for sparql1.0 endpoints"
        )
    else:
        q = "SELECT (count(*) as ?c) WHERE {?s ?p ?o .}"

        result = self._query(
            q,
            # type error: Item "None" of "Optional[Graph]" has no attribute "identifier"
            default_graph=(
                context.identifier  # type: ignore[union-attr]
                if self._is_contextual(context)
                else None
            ),
        )
        # type error: Item "Tuple[Node, ...]" of "Union[Tuple[Node, Node, Node], bool, ResultRow]" has no attribute "c"
        return int(next(iter(result)).c)  # type: ignore[union-attr]

add

add(_: _TripleType, context: _ContextType = None, quoted: bool = False) -> None
Source code in rdflib/plugins/stores/sparqlstore.py
def add(
    self, _: _TripleType, context: _ContextType = None, quoted: bool = False
) -> None:
    raise TypeError("The SPARQL store is read only")

addN

addN(quads: Iterable[_QuadType]) -> None
Source code in rdflib/plugins/stores/sparqlstore.py
def addN(self, quads: Iterable[_QuadType]) -> None:  # noqa: N802
    raise TypeError("The SPARQL store is read only")

add_graph

add_graph(graph: Graph) -> None
Source code in rdflib/plugins/stores/sparqlstore.py
def add_graph(self, graph: Graph) -> None:
    raise TypeError("The SPARQL store is read only")

bind

bind(prefix: str, namespace: URIRef, override: bool = True) -> None
Source code in rdflib/plugins/stores/sparqlstore.py
def bind(self, prefix: str, namespace: URIRef, override: bool = True) -> None:
    bound_prefix = self.prefix(namespace)
    if override and bound_prefix:
        del self.nsBindings[bound_prefix]
    self.nsBindings[prefix] = namespace

commit

commit() -> None
Source code in rdflib/plugins/stores/sparqlstore.py
def commit(self) -> None:
    raise TypeError("The SPARQL store is read only")

contexts

contexts(triple: Optional[_TripleType] = None) -> Generator[_ContextIdentifierType, None, None]

Iterates over results to SELECT ?NAME { GRAPH ?NAME { ?s ?p ?o } } or SELECT ?NAME { GRAPH ?NAME {} } if triple is None.

Returns instances of this store with the SPARQL wrapper object updated via addNamedGraph(?NAME).

This causes a named-graph-uri key / value pair to be sent over the protocol.

Please note that some SPARQL endpoints are not able to find empty named graphs.
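
A short usage sketch, assuming store is a SPARQLStore opened against some endpoint (the subject IRI is a placeholder):

```python
from rdflib import URIRef

# All named graphs known to the endpoint
for name in store.contexts():
    print(name)

# Only graphs containing triples with a given subject
for name in store.contexts(triple=(URIRef("http://example.org/s"), None, None)):
    print(name)
```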

Source code in rdflib/plugins/stores/sparqlstore.py
def contexts(  # type: ignore[override]
    self, triple: Optional[_TripleType] = None
) -> Generator[_ContextIdentifierType, None, None]:
    """
    Iterates over results to `SELECT ?NAME { GRAPH ?NAME { ?s ?p ?o } }`
    or `SELECT ?NAME { GRAPH ?NAME {} }` if triple is `None`.

    Returns instances of this store with the SPARQL wrapper
    object updated via addNamedGraph(?NAME).

    This causes a named-graph-uri key / value  pair to be sent over
    the protocol.

    Please note that some SPARQL endpoints are not able to find empty named
    graphs.
    """

    if triple:
        nts = self.node_to_sparql
        s, p, o = triple
        params = (
            nts(s if s else Variable("s")),
            nts(p if p else Variable("p")),
            nts(o if o else Variable("o")),
        )
        q = "SELECT ?name WHERE { GRAPH ?name { %s %s %s }}" % params
    else:
        q = "SELECT ?name WHERE { GRAPH ?name {} }"

    result = self._query(q)
    # type error: Item "bool" of "Union[Tuple[Node, Node, Node], bool, ResultRow]" has no attribute "name"
    # error: Generator has incompatible item type "Union[Any, Identifier]"; expected "IdentifiedNode"
    return (row.name for row in result)  # type: ignore[union-attr,misc]

create

create(configuration: str) -> None
Source code in rdflib/plugins/stores/sparqlstore.py
def create(self, configuration: str) -> None:
    raise TypeError(
        "The SPARQL Store is read only. Try SPARQLUpdateStore for read/write."
    )

destroy

destroy(configuration: str) -> None
Source code in rdflib/plugins/stores/sparqlstore.py
def destroy(self, configuration: str) -> None:
    raise TypeError("The SPARQL store is read only")

namespace

namespace(prefix: str) -> Optional[URIRef]
Source code in rdflib/plugins/stores/sparqlstore.py
def namespace(self, prefix: str) -> Optional[URIRef]:
    return self.nsBindings.get(prefix)

namespaces

namespaces() -> Iterator[Tuple[str, URIRef]]
Source code in rdflib/plugins/stores/sparqlstore.py
def namespaces(self) -> Iterator[Tuple[str, URIRef]]:
    for prefix, ns in self.nsBindings.items():
        yield prefix, ns

objects

objects(subject: Optional[_SubjectType] = None, predicate: Optional[_PredicateType] = None) -> Generator[_ObjectType, None, None]

A generator of objects with the given subject and predicate

Source code in rdflib/plugins/stores/sparqlstore.py
def objects(
    self,
    subject: Optional[_SubjectType] = None,
    predicate: Optional[_PredicateType] = None,
) -> Generator[_ObjectType, None, None]:
    """A generator of objects with the given subject and predicate"""
    for t, c in self.triples((subject, predicate, None)):
        yield t[2]

open

open(configuration: Union[str, tuple[str, str]], create: bool = False) -> Optional[int]

This method is included so that calls to this Store via Graph, e.g. Graph(“SPARQLStore”), can set the required parameters
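
For example, a minimal sketch (the endpoint URL is a placeholder):

```python
from rdflib import Graph

g = Graph("SPARQLStore")
g.open("https://example.org/sparql")  # sets query_endpoint on the underlying store
print(len(g))  # runs SELECT (count(*) as ?c) WHERE {?s ?p ?o .} against the endpoint
```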

Source code in rdflib/plugins/stores/sparqlstore.py
def open(self, configuration: Union[str, tuple[str, str]], create: bool = False) -> Optional[int]:  # type: ignore[return]
    """This method is included so that calls to this Store via Graph, e.g. Graph("SPARQLStore"),
    can set the required parameters
    """
    if type(configuration) is str:
        self.query_endpoint = configuration
    else:
        raise Exception(
            "configuration must be a string (a single query endpoint URI)"
        )

predicate_objects

predicate_objects(subject: Optional[_SubjectType] = None) -> Generator[Tuple[_PredicateType, _ObjectType], None, None]

A generator of (predicate, object) tuples for the given subject

Source code in rdflib/plugins/stores/sparqlstore.py
def predicate_objects(
    self, subject: Optional[_SubjectType] = None
) -> Generator[Tuple[_PredicateType, _ObjectType], None, None]:
    """A generator of (predicate, object) tuples for the given subject"""
    for t, c in self.triples((subject, None, None)):
        yield t[1], t[2]

predicates

predicates(subject: Optional[_SubjectType] = None, object: Optional[_ObjectType] = None) -> Generator[_PredicateType, None, None]

A generator of predicates with the given subject and object

Source code in rdflib/plugins/stores/sparqlstore.py
def predicates(
    self,
    subject: Optional[_SubjectType] = None,
    object: Optional[_ObjectType] = None,
) -> Generator[_PredicateType, None, None]:
    """A generator of predicates with the given subject and object"""
    for t, c in self.triples((subject, None, object)):
        yield t[1]

prefix

prefix(namespace: URIRef) -> Optional[str]
Source code in rdflib/plugins/stores/sparqlstore.py
def prefix(self, namespace: URIRef) -> Optional[str]:
    """ """
    return dict([(v, k) for k, v in self.nsBindings.items()]).get(namespace)

query

query(query: Union[Query, str], initNs: Optional[Mapping[str, Any]] = None, initBindings: Optional[Mapping[str, Identifier]] = None, queryGraph: Optional[str] = None, DEBUG: bool = False) -> Result
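
A usage sketch, assuming store is an opened SPARQLStore (the IRI is a placeholder); initBindings are appended as a VALUES block, which requires a SPARQL 1.1 endpoint:

```python
from rdflib import URIRef

result = store.query(
    "SELECT ?p ?o WHERE { ?s ?p ?o }",
    initBindings={"s": URIRef("http://example.org/thing")},
)
for row in result:
    print(row.p, row.o)
```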
Source code in rdflib/plugins/stores/sparqlstore.py
def query(  # type: ignore[override]
    self,
    query: Union[Query, str],
    initNs: Optional[Mapping[str, Any]] = None,  # noqa: N803
    initBindings: Optional[Mapping[str, Identifier]] = None,  # noqa: N803
    queryGraph: Optional[str] = None,  # noqa: N803
    DEBUG: bool = False,  # noqa: N803
) -> Result:
    self.debug = DEBUG
    assert isinstance(query, str)

    if initNs is not None and len(initNs) > 0:
        query = self._inject_prefixes(query, initNs)

    if initBindings:
        if not self.sparql11:
            raise Exception("initBindings not supported for SPARQL 1.0 Endpoints.")
        v = list(initBindings)

        # VALUES was added to SPARQL 1.1 on 2012/07/24
        query += "\nVALUES ( %s )\n{ ( %s ) }\n" % (
            " ".join("?" + str(x) for x in v),
            " ".join(self.node_to_sparql(initBindings[x]) for x in v),
        )

    return self._query(
        query, default_graph=queryGraph if self._is_contextual(queryGraph) else None
    )

remove

remove(_: _TriplePatternType, context: Optional[_ContextType]) -> None
Source code in rdflib/plugins/stores/sparqlstore.py
def remove(  # type: ignore[override]
    self, _: _TriplePatternType, context: Optional[_ContextType]
) -> None:
    raise TypeError("The SPARQL store is read only")

remove_graph

remove_graph(graph: Graph) -> None
Source code in rdflib/plugins/stores/sparqlstore.py
def remove_graph(self, graph: Graph) -> None:
    raise TypeError("The SPARQL store is read only")

rollback

rollback() -> None
Source code in rdflib/plugins/stores/sparqlstore.py
def rollback(self) -> None:
    raise TypeError("The SPARQL store is read only")

subject_objects

subject_objects(predicate: Optional[_PredicateType] = None) -> Generator[Tuple[_SubjectType, _ObjectType], None, None]

A generator of (subject, object) tuples for the given predicate

Source code in rdflib/plugins/stores/sparqlstore.py
def subject_objects(
    self, predicate: Optional[_PredicateType] = None
) -> Generator[Tuple[_SubjectType, _ObjectType], None, None]:
    """A generator of (subject, object) tuples for the given predicate"""
    for t, c in self.triples((None, predicate, None)):
        yield t[0], t[2]

subject_predicates

subject_predicates(object: Optional[_ObjectType] = None) -> Generator[Tuple[_SubjectType, _PredicateType], None, None]

A generator of (subject, predicate) tuples for the given object

Source code in rdflib/plugins/stores/sparqlstore.py
def subject_predicates(
    self, object: Optional[_ObjectType] = None
) -> Generator[Tuple[_SubjectType, _PredicateType], None, None]:
    """A generator of (subject, predicate) tuples for the given object"""
    for t, c in self.triples((None, None, object)):
        yield t[0], t[1]

subjects

subjects(predicate: Optional[_PredicateType] = None, object: Optional[_ObjectType] = None) -> Generator[_SubjectType, None, None]

A generator of subjects with the given predicate and object

Source code in rdflib/plugins/stores/sparqlstore.py
def subjects(
    self,
    predicate: Optional[_PredicateType] = None,
    object: Optional[_ObjectType] = None,
) -> Generator[_SubjectType, None, None]:
    """A generator of subjects with the given predicate and object"""
    for t, c in self.triples((None, predicate, object)):
        yield t[0]

triples

triples(spo: _TriplePatternType, context: Optional[_ContextType] = None) -> Iterator[Tuple[_TripleType, None]]
  • spo: the (s, p, o) triple pattern used as a filter for the SPARQL SELECT. (None, None, None) means anything.
  • context: the graph effectively calling this method.

Returns tuples of triples, executing essentially a SPARQL query like SELECT ?subj ?pred ?obj WHERE { ?subj ?pred ?obj }.

context may carry three attributes to refine the underlying query:

  • LIMIT: an integer to limit the number of results
  • OFFSET: an integer to enable paging of results
  • ORDERBY: an instance of Variable('s'), Variable('o') or Variable('p') or, by default, the first 'None' from the given triple

!!! warning "Limit and offset"

- Using LIMIT or OFFSET automatically adds an ORDER BY, because otherwise the
  results would be retrieved in a non-deterministic order (it depends on the
  walking path over the graph)
- Using OFFSET without defining LIMIT will discard the first OFFSET - 1 results

a_graph.LIMIT = limit
a_graph.OFFSET = offset
triple_generator = a_graph.triples(mytriple)
# do something
# Remove LIMIT and OFFSET if not required for the next triples() calls
del a_graph.LIMIT
del a_graph.OFFSET
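
A paging sketch using a Graph backed by this store (endpoint and graph IRI are placeholders); LIMIT and OFFSET are read from attributes set on the context graph, as described above:

```python
from rdflib import Graph

g = Graph("SPARQLStore", identifier="http://example.org/g")
g.open("https://example.org/sparql")

g.LIMIT = 100   # page size
g.OFFSET = 0    # first page
for s, p, o in g.triples((None, None, None)):
    print(s, p, o)

# Remove the paging attributes before further triples() calls
del g.LIMIT
del g.OFFSET
```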
Source code in rdflib/plugins/stores/sparqlstore.py
def triples(  # type: ignore[override]
    self, spo: _TriplePatternType, context: Optional[_ContextType] = None
) -> Iterator[Tuple[_TripleType, None]]:
    """
    - tuple **(s, o, p)**
      the triple used as filter for the SPARQL select.
      (None, None, None) means anything.
    - context **context**
      the graph effectively calling this method.

    Returns a tuple of triples executing essentially a SPARQL like
    SELECT ?subj ?pred ?obj WHERE { ?subj ?pred ?obj }

    **context** may include three parameter
    to refine the underlying query:

    * LIMIT: an integer to limit the number of results
    * OFFSET: an integer to enable paging of results
    * ORDERBY: an instance of Variable('s'), Variable('o') or Variable('p') or, by default, the first 'None' from the given triple

    !!! warning "Limit and offset

        - Using LIMIT or OFFSET automatically include ORDERBY otherwise this is
          because the results are retrieved in a not deterministic way (depends on
          the walking path on the graph)
        - Using OFFSET without defining LIMIT will discard the first OFFSET - 1 results

    ```python
    a_graph.LIMIT = limit
    a_graph.OFFSET = offset
    triple_generator = a_graph.triples(mytriple):
    # do something
    # Removes LIMIT and OFFSET if not required for the next triple() calls
    del a_graph.LIMIT
    del a_graph.OFFSET
    ```
    """

    s, p, o = spo

    vars = []
    if not s:
        s = Variable("s")
        vars.append(s)

    if not p:
        p = Variable("p")
        vars.append(p)
    if not o:
        o = Variable("o")
        vars.append(o)

    if vars:
        v = " ".join([term.n3() for term in vars])
        verb = "SELECT %s " % v
    else:
        verb = "ASK"

    nts = self.node_to_sparql
    query = "%s { %s %s %s }" % (verb, nts(s), nts(p), nts(o))

    # The ORDER BY is necessary
    if (
        hasattr(context, LIMIT)
        or hasattr(context, OFFSET)
        or hasattr(context, ORDERBY)
    ):
        var = None
        if isinstance(s, Variable):
            var = s
        elif isinstance(p, Variable):
            var = p
        elif isinstance(o, Variable):
            var = o
        elif hasattr(context, ORDERBY) and isinstance(
            getattr(context, ORDERBY), Variable
        ):
            var = getattr(context, ORDERBY)
        # type error: Item "None" of "Optional[Variable]" has no attribute "n3"
        query = query + " %s %s" % (ORDERBY, var.n3())  # type: ignore[union-attr]

    try:
        query = query + " LIMIT %s" % int(getattr(context, LIMIT))
    except (ValueError, TypeError, AttributeError):
        pass
    try:
        query = query + " OFFSET %s" % int(getattr(context, OFFSET))
    except (ValueError, TypeError, AttributeError):
        pass

    result = self._query(
        query,
        # type error: Item "None" of "Optional[Graph]" has no attribute "identifier"
        default_graph=context.identifier if self._is_contextual(context) else None,  # type: ignore[union-attr]
    )

    if vars:
        if type(result) is tuple:
            if result[0] == 401:
                raise ValueError(
                    "It looks like you need to authenticate with this SPARQL Store. HTTP unauthorized"
                )
        for row in result:
            if TYPE_CHECKING:
                # This will be a ResultRow because if vars is truthish then
                # the query will be a SELECT query.
                assert isinstance(row, ResultRow)
            yield (
                # type error: No overload variant of "get" of "ResultRow" matches argument types "Node", "Node"
                row.get(s, s),  # type: ignore[call-overload]
                row.get(p, p),  # type: ignore[call-overload]
                row.get(o, o),  # type: ignore[call-overload]
            ), None  # why is the context here not the passed in graph 'context'?
    else:
        if result.askAnswer:
            yield (s, p, o), None

triples_choices

triples_choices(_: _TripleChoiceType, context: Optional[_ContextType] = None) -> Generator[Tuple[Tuple[_SubjectType, _PredicateType, _ObjectType], Iterator[Optional[_ContextType]]], None, None]

A variant of triples that can take a list of terms instead of a single term in any slot. Stores can implement this to optimize the response time over the default 'fallback' implementation, which will iterate over each term in the list and dispatch to triples.

Source code in rdflib/plugins/stores/sparqlstore.py
def triples_choices(
    self,
    _: _TripleChoiceType,
    context: Optional[_ContextType] = None,
) -> Generator[
    Tuple[
        Tuple[_SubjectType, _PredicateType, _ObjectType],
        Iterator[Optional[_ContextType]],
    ],
    None,
    None,
]:
    """
    A variant of triples that can take a list of terms instead of a
    single term in any slot.  Stores can implement this to optimize
    the response time from the import default 'fallback' implementation,
    which will iterate over each term in the list and dispatch to
    triples.
    """
    raise NotImplementedError("Triples choices currently not supported")

update

update(query: Union[Update, str], initNs: Dict[str, Any] = {}, initBindings: Dict[str, Identifier] = {}, queryGraph: Identifier = None, DEBUG: bool = False) -> None
Source code in rdflib/plugins/stores/sparqlstore.py
def update(  # type: ignore[override]
    self,
    query: Union[Update, str],
    initNs: Dict[str, Any] = {},  # noqa: N803
    initBindings: Dict[str, Identifier] = {},  # noqa: N803
    queryGraph: Identifier = None,  # noqa: N803
    DEBUG: bool = False,  # noqa: N803
) -> None:
    raise TypeError("The SPARQL store is read only")

SPARQLUpdateStore

SPARQLUpdateStore(query_endpoint: Optional[str] = None, update_endpoint: Optional[str] = None, sparql11: bool = True, context_aware: bool = True, postAsEncoded: bool = True, autocommit: bool = True, dirty_reads: bool = False, **kwds)

Bases: SPARQLStore

A store using SPARQL queries for reading and SPARQL Update for changes.

This can be context-aware, if so, any changes will be to the given named graph only.

In favor of the SPARQL 1.1-motivated Dataset, we advise against using this with ConjunctiveGraphs, as it reads from and writes to the "default graph". Exactly what this means depends on the endpoint and can result in confusion.

For Graph objects, everything works as expected.

See the SPARQLStore base class for more information.

Parameters:

  • autocommit

    (bool, default: True ) –

    if set, the store will commit after every writing operation. If False, we only make queries on the server once commit is called.

  • dirty_reads

    (bool, default: False ) –

    if set, we do not commit before reading, so you cannot read what you wrote before manually calling commit.
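
A construction sketch (both endpoint URLs and the graph IRI are placeholders); with the default autocommit=True every write is sent to the endpoint immediately:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.plugins.stores.sparqlstore import SPARQLUpdateStore

store = SPARQLUpdateStore()
store.open(("https://example.org/sparql", "https://example.org/update"))

g = Graph(store, identifier=URIRef("http://example.org/g"))
g.add((
    URIRef("http://example.org/s"),
    URIRef("http://example.org/p"),
    Literal("hello"),
))  # issues INSERT DATA { GRAPH <http://example.org/g> { ... } }
```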

Methods:

  • __len__
  • add

    Add a triple to the store of triples.

  • addN

    Add a list of quads to the store.

  • add_graph
  • commit

    add(), addN(), and remove() are transactional to reduce overhead of many small edits.

  • contexts
  • objects

    A generator of objects with the given subject and predicate

  • open

    Sets the endpoint URLs for this SPARQLStore

  • predicate_objects

    A generator of (predicate, object) tuples for the given subject

  • predicates

    A generator of predicates with the given subject and object

  • query
  • remove

    Remove a triple from the store

  • remove_graph
  • rollback
  • setTimeout
  • subject_objects

    A generator of (subject, object) tuples for the given predicate

  • subject_predicates

    A generator of (subject, predicate) tuples for the given object

  • subjects

    A generator of subjects with the given predicate and object

  • triples
  • update

    Perform a SPARQL Update Query against the endpoint, INSERT, LOAD, DELETE etc.

Attributes:

Source code in rdflib/plugins/stores/sparqlstore.py
def __init__(
    self,
    query_endpoint: Optional[str] = None,
    update_endpoint: Optional[str] = None,
    sparql11: bool = True,
    context_aware: bool = True,
    postAsEncoded: bool = True,  # noqa: N803
    autocommit: bool = True,
    dirty_reads: bool = False,
    **kwds,
):
    """
    Args:
        autocommit: if set, the store will commit after every
            writing operations. If False, we only make queries on the
            server once commit is called.
        dirty_reads if set, we do not commit before reading. So you
            cannot read what you wrote before manually calling commit.
    """

    SPARQLStore.__init__(
        self,
        query_endpoint,
        sparql11,
        context_aware,
        update_endpoint=update_endpoint,
        **kwds,
    )

    self.postAsEncoded = postAsEncoded
    self.autocommit = autocommit
    self.dirty_reads = dirty_reads
    self._edits: Optional[List[str]] = None
    self._updates = 0

BLOCK_END class-attribute instance-attribute

BLOCK_END = '}'

BLOCK_FINDING_PATTERN class-attribute instance-attribute

BLOCK_FINDING_PATTERN = compile(BlockFinding)

BLOCK_START class-attribute instance-attribute

BLOCK_START = '{'

BlockContent class-attribute instance-attribute

BlockContent = '(%s)|(%s)|(%s)|(%s)' % (String, IRIREF, COMMENT, ESCAPED)

BlockFinding class-attribute instance-attribute

BlockFinding = '(?P<block_start>%s)|(?P<block_end>%s)|(?P<block_content>%s)' % (BLOCK_START, BLOCK_END, BlockContent)

COMMENT class-attribute instance-attribute

COMMENT = '#[^\\x0D\\x0A]*([\\x0D\\x0A]|\\Z)'

ESCAPED class-attribute instance-attribute

ESCAPED = '\\\\.'

IRIREF class-attribute instance-attribute

IRIREF = '<([^<>"{}|^`\\]\\\\[\\x00-\\x20])*>'

STRING_LITERAL1 class-attribute instance-attribute

STRING_LITERAL1 = "'([^'\\\\]|\\\\.)*'"

STRING_LITERAL2 class-attribute instance-attribute

STRING_LITERAL2 = '"([^"\\\\]|\\\\.)*"'

STRING_LITERAL_LONG1 class-attribute instance-attribute

STRING_LITERAL_LONG1 = "'''(('|'')?([^'\\\\]|\\\\.))*'''"

STRING_LITERAL_LONG2 class-attribute instance-attribute

STRING_LITERAL_LONG2 = '"""(("|"")?([^"\\\\]|\\\\.))*"""'

String class-attribute instance-attribute

autocommit instance-attribute

autocommit = autocommit

dirty_reads instance-attribute

dirty_reads = dirty_reads

postAsEncoded instance-attribute

postAsEncoded = postAsEncoded

where_pattern class-attribute instance-attribute

where_pattern = compile('(?P<where>WHERE\\s*\\{)', IGNORECASE)

__len__

__len__(*args: Any, **kwargs: Any) -> int
Source code in rdflib/plugins/stores/sparqlstore.py
def __len__(self, *args: Any, **kwargs: Any) -> int:
    if not self.autocommit and not self.dirty_reads:
        self.commit()
    return SPARQLStore.__len__(self, *args, **kwargs)

add

add(spo: _TripleType, context: Optional[_ContextType] = None, quoted: bool = False) -> None

Add a triple to the store of triples.

Source code in rdflib/plugins/stores/sparqlstore.py
def add(
    self,
    spo: _TripleType,
    context: Optional[_ContextType] = None,
    quoted: bool = False,
) -> None:
    """Add a triple to the store of triples."""

    if not self.update_endpoint:
        raise Exception("UpdateEndpoint is not set")

    assert not quoted
    (subject, predicate, obj) = spo

    nts = self.node_to_sparql
    triple = "%s %s %s ." % (nts(subject), nts(predicate), nts(obj))
    if self._is_contextual(context):
        if TYPE_CHECKING:
            # _is_contextual will never return true if context is None
            assert context is not None
        q = "INSERT DATA { GRAPH %s { %s } }" % (nts(context.identifier), triple)
    else:
        q = "INSERT DATA { %s }" % triple
    self._transaction().append(q)
    if self.autocommit:
        self.commit()

addN

addN(quads: Iterable[_QuadType]) -> None

Add a list of quads to the store.

Source code in rdflib/plugins/stores/sparqlstore.py
def addN(self, quads: Iterable[_QuadType]) -> None:  # noqa: N802
    """Add a list of quads to the store."""
    if not self.update_endpoint:
        raise Exception("UpdateEndpoint is not set - call 'open'")

    contexts = collections.defaultdict(list)
    for subject, predicate, obj, context in quads:
        contexts[context].append((subject, predicate, obj))
    data: List[str] = []
    nts = self.node_to_sparql
    for context in contexts:
        triples = [
            "%s %s %s ." % (nts(subject), nts(predicate), nts(obj))
            for subject, predicate, obj in contexts[context]
        ]
        data.append(
            "INSERT DATA { GRAPH %s { %s } }\n"
            % (nts(context.identifier), "\n".join(triples))
        )
    self._transaction().extend(data)
    if self.autocommit:
        self.commit()

add_graph

add_graph(graph: Graph) -> None
Source code in rdflib/plugins/stores/sparqlstore.py
def add_graph(self, graph: Graph) -> None:
    if not self.graph_aware:
        Store.add_graph(self, graph)
    elif graph.identifier != DATASET_DEFAULT_GRAPH_ID:
        self.update("CREATE GRAPH %s" % self.node_to_sparql(graph.identifier))

commit

commit() -> None

add(), addN(), and remove() are transactional to reduce overhead of many small edits. Read and update() calls will automatically commit any outstanding edits. This should behave as expected most of the time, except that alternating writes and reads can degenerate to the original call-per-triple situation that originally existed.
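
A batching sketch (endpoints and IRIs are placeholders): with autocommit=False the edits are queued locally and sent as a single update request when commit() is called:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.plugins.stores.sparqlstore import SPARQLUpdateStore

store = SPARQLUpdateStore(autocommit=False)
store.open(("https://example.org/sparql", "https://example.org/update"))

g = Graph(store, identifier=URIRef("http://example.org/g"))
g.add((URIRef("http://example.org/a"), URIRef("http://example.org/p"), Literal(1)))
g.add((URIRef("http://example.org/b"), URIRef("http://example.org/p"), Literal(2)))

store.commit()  # both INSERT DATA operations are joined and sent in one request
```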

Source code in rdflib/plugins/stores/sparqlstore.py
def commit(self) -> None:
    """`add()`, `addN()`, and `remove()` are transactional to reduce overhead of many small edits.
    Read and update() calls will automatically commit any outstanding edits.
    This should behave as expected most of the time, except that alternating writes
    and reads can degenerate to the original call-per-triple situation that originally existed.
    """
    if self._edits and len(self._edits) > 0:
        self._update("\n;\n".join(self._edits))
        self._edits = None

contexts

contexts(*args: Any, **kwargs: Any) -> Generator[_ContextIdentifierType, None, None]
Source code in rdflib/plugins/stores/sparqlstore.py
def contexts(  # type: ignore[override]
    self, *args: Any, **kwargs: Any
) -> Generator[_ContextIdentifierType, None, None]:
    if not self.autocommit and not self.dirty_reads:
        self.commit()
    return SPARQLStore.contexts(self, *args, **kwargs)

objects

objects(subject: Optional[_SubjectType] = None, predicate: Optional[_PredicateType] = None) -> Generator[_ObjectType, None, None]

A generator of objects with the given subject and predicate

Source code in rdflib/plugins/stores/sparqlstore.py
def objects(
    self,
    subject: Optional[_SubjectType] = None,
    predicate: Optional[_PredicateType] = None,
) -> Generator[_ObjectType, None, None]:
    """A generator of objects with the given subject and predicate"""
    for t, c in self.triples((subject, predicate, None)):
        yield t[2]

open

open(configuration: Union[str, Tuple[str, str]], create: bool = False) -> None

Sets the endpoint URLs for this SPARQLStore

Parameters:

  • configuration

    (Union[str, Tuple[str, str]]) –

either a tuple of (query_endpoint, update_endpoint), or a single string used as both the query and the update endpoint

  • create

    (bool, default: False ) –

    if True an exception is thrown.

Source code in rdflib/plugins/stores/sparqlstore.py
def open(
    self, configuration: Union[str, Tuple[str, str]], create: bool = False
) -> None:
    """Sets the endpoint URLs for this `SPARQLStore`

    Args:
        configuration: either a tuple of (query_endpoint, update_endpoint),
            or a string with the endpoint which is configured as query and update endpoint
        create: if True an exception is thrown.
    """

    if create:
        raise Exception("Cannot create a SPARQL Endpoint")

    if isinstance(configuration, tuple):
        self.query_endpoint = configuration[0]
        if len(configuration) > 1:
            self.update_endpoint = configuration[1]
    else:
        self.query_endpoint = configuration
        self.update_endpoint = configuration

predicate_objects

predicate_objects(subject: Optional[_SubjectType] = None) -> Generator[Tuple[_PredicateType, _ObjectType], None, None]

A generator of (predicate, object) tuples for the given subject

Source code in rdflib/plugins/stores/sparqlstore.py
def predicate_objects(
    self, subject: Optional[_SubjectType] = None
) -> Generator[Tuple[_PredicateType, _ObjectType], None, None]:
    """A generator of (predicate, object) tuples for the given subject"""
    for t, c in self.triples((subject, None, None)):
        yield t[1], t[2]

predicates

predicates(subject: Optional[_SubjectType] = None, object: Optional[_ObjectType] = None) -> Generator[_PredicateType, None, None]

A generator of predicates with the given subject and object

Source code in rdflib/plugins/stores/sparqlstore.py
def predicates(
    self,
    subject: Optional[_SubjectType] = None,
    object: Optional[_ObjectType] = None,
) -> Generator[_PredicateType, None, None]:
    """A generator of predicates with the given subject and object"""
    for t, c in self.triples((subject, None, object)):
        yield t[1]

query

query(*args: Any, **kwargs: Any) -> Result
Source code in rdflib/plugins/stores/sparqlstore.py
def query(self, *args: Any, **kwargs: Any) -> Result:
    if not self.autocommit and not self.dirty_reads:
        self.commit()
    return SPARQLStore.query(self, *args, **kwargs)

remove

remove(spo: _TriplePatternType, context: Optional[_ContextType]) -> None

Remove a triple from the store

Source code in rdflib/plugins/stores/sparqlstore.py
def remove(  # type: ignore[override]
    self, spo: _TriplePatternType, context: Optional[_ContextType]
) -> None:
    """Remove a triple from the store"""
    if not self.update_endpoint:
        raise Exception("UpdateEndpoint is not set - call 'open'")

    (subject, predicate, obj) = spo
    if not subject:
        subject = Variable("S")
    if not predicate:
        predicate = Variable("P")
    if not obj:
        obj = Variable("O")

    nts = self.node_to_sparql
    triple = "%s %s %s ." % (nts(subject), nts(predicate), nts(obj))
    if self._is_contextual(context):
        if TYPE_CHECKING:
            # _is_contextual will never return true if context is None
            assert context is not None
        cid = nts(context.identifier)
        q = "WITH %(graph)s DELETE { %(triple)s } WHERE { %(triple)s }" % {
            "graph": cid,
            "triple": triple,
        }
    else:
        q = "DELETE { %s } WHERE { %s } " % (triple, triple)
    self._transaction().append(q)
    if self.autocommit:
        self.commit()

remove_graph

remove_graph(graph: Graph) -> None
Source code in rdflib/plugins/stores/sparqlstore.py
def remove_graph(self, graph: Graph) -> None:
    if not self.graph_aware:
        Store.remove_graph(self, graph)
    elif graph.identifier == DATASET_DEFAULT_GRAPH_ID:
        self.update("DROP DEFAULT")
    else:
        self.update("DROP GRAPH %s" % self.node_to_sparql(graph.identifier))

rollback

rollback() -> None
Source code in rdflib/plugins/stores/sparqlstore.py
def rollback(self) -> None:
    self._edits = None

setTimeout

setTimeout(timeout) -> None
Source code in rdflib/plugins/stores/sparqlstore.py
def setTimeout(self, timeout) -> None:  # noqa: N802
    self._timeout = int(timeout)

subject_objects

subject_objects(predicate: Optional[_PredicateType] = None) -> Generator[Tuple[_SubjectType, _ObjectType], None, None]

A generator of (subject, object) tuples for the given predicate

Source code in rdflib/plugins/stores/sparqlstore.py
def subject_objects(
    self, predicate: Optional[_PredicateType] = None
) -> Generator[Tuple[_SubjectType, _ObjectType], None, None]:
    """A generator of (subject, object) tuples for the given predicate"""
    for t, c in self.triples((None, predicate, None)):
        yield t[0], t[2]

subject_predicates

subject_predicates(object: Optional[_ObjectType] = None) -> Generator[Tuple[_SubjectType, _PredicateType], None, None]

A generator of (subject, predicate) tuples for the given object

Source code in rdflib/plugins/stores/sparqlstore.py
def subject_predicates(
    self, object: Optional[_ObjectType] = None
) -> Generator[Tuple[_SubjectType, _PredicateType], None, None]:
    """A generator of (subject, predicate) tuples for the given object"""
    for t, c in self.triples((None, None, object)):
        yield t[0], t[1]

subjects

subjects(predicate: Optional[_PredicateType] = None, object: Optional[_ObjectType] = None) -> Generator[_SubjectType, None, None]

A generator of subjects with the given predicate and object

Source code in rdflib/plugins/stores/sparqlstore.py
def subjects(
    self,
    predicate: Optional[_PredicateType] = None,
    object: Optional[_ObjectType] = None,
) -> Generator[_SubjectType, None, None]:
    """A generator of subjects with the given predicate and object"""
    for t, c in self.triples((None, predicate, object)):
        yield t[0]

triples

triples(*args: Any, **kwargs: Any) -> Iterator[Tuple[_TripleType, None]]
Source code in rdflib/plugins/stores/sparqlstore.py
def triples(  # type: ignore[override]
    self, *args: Any, **kwargs: Any
) -> Iterator[Tuple[_TripleType, None]]:
    if not self.autocommit and not self.dirty_reads:
        self.commit()
    return SPARQLStore.triples(self, *args, **kwargs)

update

update(query: Union[Update, str], initNs: Dict[str, Any] = {}, initBindings: Dict[str, Identifier] = {}, queryGraph: Optional[str] = None, DEBUG: bool = False)

Perform a SPARQL Update Query against the endpoint, INSERT, LOAD, DELETE etc.

Setting initNs adds PREFIX declarations to the beginning of the update. Setting initBindings adds inline VALUEs to the beginning of every WHERE clause. By the SPARQL grammar, all operations that support variables (namely INSERT and DELETE) require a WHERE clause. Important: initBindings fails if the update contains the substring ‘WHERE {’ which does not denote a WHERE clause, e.g. if it is part of a literal.

Context-aware query rewriting

  • When: If context-awareness is enabled and the graph is not the default graph of the store.
  • Why: To ensure consistency with the Memory store. The graph must accept “local” SPARQL requests (requests with no GRAPH keyword) as if it was the default graph.
  • What is done: These “local” queries are rewritten by this store. The content of each block of a SPARQL Update operation is wrapped in a GRAPH block except if the block is empty. This basically causes INSERT, INSERT DATA, DELETE, DELETE DATA and WHERE to operate only on the context.
  • Example: "INSERT DATA { <urn:michel> <urn:likes> <urn:pizza> }" is converted into "INSERT DATA { GRAPH <urn:graph> { <urn:michel> <urn:likes> <urn:pizza> } }".
  • Warning: Queries are presumed to be “local” but this assumption is not checked. For instance, if the query already contains GRAPH blocks, the latter will be wrapped in new GRAPH blocks.
  • Warning: A simplified grammar is used that should tolerate extensions of the SPARQL grammar. Still, the process may fail in uncommon situations and produce invalid output.
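
A usage sketch, assuming store is an opened SPARQLUpdateStore (the IRI is a placeholder); the initBindings VALUES block is injected right after the WHERE { keyword, so it only makes sense for operations that have a WHERE clause:

```python
from rdflib import URIRef

store.update(
    "DELETE { ?s ?p ?o } WHERE { ?s ?p ?o }",
    initBindings={"s": URIRef("http://example.org/obsolete")},
)
```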
Source code in rdflib/plugins/stores/sparqlstore.py
def update(  # type: ignore[override]
    self,
    query: Union[Update, str],
    initNs: Dict[str, Any] = {},  # noqa: N803
    initBindings: Dict[str, Identifier] = {},  # noqa: N803
    queryGraph: Optional[str] = None,  # noqa: N803
    DEBUG: bool = False,  # noqa: N803
):
    """Perform a SPARQL Update Query against the endpoint, INSERT, LOAD, DELETE etc.

    Setting initNs adds PREFIX declarations to the beginning of
    the update. Setting initBindings adds inline VALUEs to the
    beginning of every WHERE clause. By the SPARQL grammar, all
    operations that support variables (namely INSERT and DELETE)
    require a WHERE clause.
    Important: initBindings fails if the update contains the
    substring 'WHERE {' which does not denote a WHERE clause, e.g.
    if it is part of a literal.

    !!! info "Context-aware query rewriting"

        - **When:**  If context-awareness is enabled and the graph is not the default graph of the store.
        - **Why:** To ensure consistency with the [`Memory`][rdflib.plugins.stores.memory.Memory] store.
            The graph must accept "local" SPARQL requests (requests with no GRAPH keyword)
            as if it was the default graph.
        - **What is done:** These "local" queries are rewritten by this store.
            The content of each block of a SPARQL Update operation is wrapped in a GRAPH block
            except if the block is empty.
            This basically causes INSERT, INSERT DATA, DELETE, DELETE DATA and WHERE to operate
            only on the context.
        - **Example:** `"INSERT DATA { <urn:michel> <urn:likes> <urn:pizza> }"` is converted into
            `"INSERT DATA { GRAPH <urn:graph> { <urn:michel> <urn:likes> <urn:pizza> } }"`.
        - **Warning:** Queries are presumed to be "local" but this assumption is **not checked**.
            For instance, if the query already contains GRAPH blocks, the latter will be wrapped in new GRAPH blocks.
        - **Warning:** A simplified grammar is used that should tolerate
            extensions of the SPARQL grammar. Still, the process may fail in
            uncommon situations and produce invalid output.
    """
    if not self.update_endpoint:
        raise Exception("Update endpoint is not set!")

    self.debug = DEBUG
    assert isinstance(query, str)
    query = self._inject_prefixes(query, initNs)

    if self._is_contextual(queryGraph):
        if TYPE_CHECKING:
            # _is_contextual will never return true if context is None
            assert queryGraph is not None
        query = self._insert_named_graph(query, queryGraph)

    if initBindings:
        # For INSERT and DELETE the WHERE clause is obligatory
        # (http://www.w3.org/TR/2013/REC-sparql11-query-20130321/#rModify)
        # Other query types do not allow variables and don't
        # have a WHERE clause.  This also works for updates with
        # more than one INSERT/DELETE.
        v = list(initBindings)
        values = "\nVALUES ( %s )\n{ ( %s ) }\n" % (
            " ".join("?" + str(x) for x in v),
            " ".join(self.node_to_sparql(initBindings[x]) for x in v),
        )

        query = self.where_pattern.sub("WHERE { " + values, query)

    self._transaction().append(query)
    if self.autocommit:
        self.commit()