sparqlstore
This is an RDFLib store around Ivan Herman et al.’s SPARQL service wrapper. This was first done in layer-cake, and then ported to RDFLib.
Classes:

- SPARQLStore – An RDFLib store around a SPARQL endpoint
- SPARQLUpdateStore – A store using SPARQL queries for reading and SPARQL Update for changes.
SPARQLStore
SPARQLStore(query_endpoint: Optional[str] = None, sparql11: bool = True, context_aware: bool = True, node_to_sparql: _NodeToSparql = _node_to_sparql, returnFormat: Optional[str] = 'xml', auth: Optional[Tuple[str, str]] = None, **sparqlconnector_kwargs)
Bases: SPARQLConnector, Store
An RDFLib store around a SPARQL endpoint
This is context-aware and should work as expected when a context is specified.
For ConjunctiveGraphs, reading is done from the “default graph”. Exactly what this means depends on your endpoint, because SPARQL does not offer a simple way to query the union of all graphs, as would be expected for a ConjunctiveGraph. This is why we recommend using Dataset instead, which is motivated by SPARQL 1.1.
Fuseki/TDB has a flag for specifying that the default graph is the union of all graphs (tdb:unionDefaultGraph in the Fuseki config).
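A minimal read-only sketch along these lines, using DBpedia purely as a well-known public endpoint (its default graph is named http://dbpedia.org on that server):

from rdflib import Graph

g = Graph("SPARQLStore", identifier="http://dbpedia.org")
g.open("http://dbpedia.org/sparql")
# Each query is answered by the remote endpoint, not a local store.
for row in g.query("SELECT ?s WHERE { ?s ?p ?o } LIMIT 3"):
    print(row.s)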
!!! warning "Blank nodes"
    By default the SPARQL Store does not support blank nodes!
    As blank nodes act as variables in SPARQL queries,
    there is no way to query for a particular blank node without
    using non-standard SPARQL extensions.
    See http://www.w3.org/TR/sparql11-query/#BGPsparqlBNodes
You can make use of such extensions through the node_to_sparql
argument. For example, if you want to transform BNode('0001') into
"<bnode:b0001>", you can use a function like this:
>>> def my_bnode_ext(node):
...     if isinstance(node, BNode):
...         return '<bnode:b%s>' % node
...     return _node_to_sparql(node)
>>> store = SPARQLStore('http://dbpedia.org/sparql',
...                     node_to_sparql=my_bnode_ext)
You can request a particular result serialization with the
returnFormat parameter. This is a string that must have a
matching plugin registered. Built-in support is provided for xml,
json, csv, tsv and application/rdf+xml.
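For instance, a sketch requesting SPARQL JSON results (the endpoint is illustrative):

>>> store = SPARQLStore('http://dbpedia.org/sparql', returnFormat='json')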
The underlying SPARQLConnector uses the urllib library. Any extra kwargs passed to the SPARQLStore connector are passed on to urllib when making HTTP calls, so you have full control over cookies, auth, and headers.
For example (the endpoint and credentials below are placeholders):
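>>> store = SPARQLStore('http://example.org/sparql',
...                     auth=('user', 'password'))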
will use HTTP basic auth.
Methods:

- __len__
- add
- addN
- add_graph
- bind
- commit
- contexts – Iterates over results to SELECT ?NAME { GRAPH ?NAME { ?s ?p ?o } }
- create
- destroy
- namespace
- namespaces
- objects – A generator of objects with the given subject and predicate
- open – This method is included so that calls to this Store via Graph, e.g. Graph("SPARQLStore"), can set the required parameters
- predicate_objects – A generator of (predicate, object) tuples for the given subject
- predicates – A generator of predicates with the given subject and object
- prefix
- query
- remove
- remove_graph
- rollback
- subject_objects – A generator of (subject, object) tuples for the given predicate
- subject_predicates – A generator of (subject, predicate) tuples for the given object
- subjects – A generator of subjects with the given predicate and object
- triples – Returns tuples of triples matching the given (s, p, o) filter pattern
- triples_choices – A variant of triples that can take a list of terms instead of a single term in any slot
- update
Attributes:

- context_aware
- formula_aware
- graph_aware
- node_to_sparql
- nsBindings (Dict[str, Any])
- regex_matching
- sparql11
- transaction_aware
__len__
add
addN
add_graph
add_graph(graph: Graph) -> None
bind
bind(prefix: str, namespace: URIRef, override: bool = True) -> None
commit
contexts
Iterates over results to SELECT ?NAME { GRAPH ?NAME { ?s ?p ?o } }
or SELECT ?NAME { GRAPH ?NAME {} } if triple is None.
Returns instances of this store with the SPARQL wrapper object updated via addNamedGraph(?NAME).
This causes a named-graph-uri key / value pair to be sent over the protocol.
Please note that some SPARQL endpoints are not able to find empty named graphs.
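For instance, a sketch that lists the named graphs an endpoint exposes (the endpoint URL is illustrative):

from rdflib.plugins.stores.sparqlstore import SPARQLStore

store = SPARQLStore(query_endpoint='http://dbpedia.org/sparql')
# Each iteration yields one graph name found via GRAPH ?NAME { ?s ?p ?o }.
for ctx in store.contexts():
    print(ctx)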
create
destroy
namespace
namespace(prefix: str) -> Optional[URIRef]
namespaces
namespaces() -> Iterator[Tuple[str, URIRef]]
objects
objects(subject: Optional[_SubjectType] = None, predicate: Optional[_PredicateType] = None) -> Generator[_ObjectType, None, None]
A generator of objects with the given subject and predicate
open
This method is included so that calls to this Store via Graph, e.g. Graph("SPARQLStore"), can set the required parameters
predicate_objects
predicate_objects(subject: Optional[_SubjectType] = None) -> Generator[Tuple[_PredicateType, _ObjectType], None, None]
A generator of (predicate, object) tuples for the given subject
predicates
predicates(subject: Optional[_SubjectType] = None, object: Optional[_ObjectType] = None) -> Generator[_PredicateType, None, None]
A generator of predicates with the given subject and object
prefix
prefix(namespace: URIRef) -> Optional[str]
query
query(query: Union[Query, str], initNs: Optional[Mapping[str, Any]] = None, initBindings: Optional[Mapping[str, Identifier]] = None, queryGraph: Optional[str] = None, DEBUG: bool = False) -> Result
remove
remove_graph
remove_graph(graph: Graph) -> None
rollback
subject_objects
subject_objects(predicate: Optional[_PredicateType] = None) -> Generator[Tuple[_SubjectType, _ObjectType], None, None]
A generator of (subject, object) tuples for the given predicate
subject_predicates
subject_predicates(object: Optional[_ObjectType] = None) -> Generator[Tuple[_SubjectType, _PredicateType], None, None]
A generator of (subject, predicate) tuples for the given object
subjects
subjects(predicate: Optional[_PredicateType] = None, object: Optional[_ObjectType] = None) -> Generator[_SubjectType, None, None]
A generator of subjects with the given predicate and object
triples
triples(spo: _TriplePatternType, context: Optional[_ContextType] = None) -> Iterator[Tuple[_TripleType, None]]
- spo – the triple (s, p, o) used as a filter for the SPARQL select; (None, None, None) means anything.
- context – the graph effectively calling this method.

Returns tuples of triples, essentially executing a SPARQL query like SELECT ?subj ?pred ?obj WHERE { ?subj ?pred ?obj }.

context may include three parameters to refine the underlying query:

- LIMIT: an integer to limit the number of results
- OFFSET: an integer to enable paging of results
- ORDERBY: an instance of Variable('s'), Variable('o') or Variable('p') or, by default, the first 'None' from the given triple
!!! warning "Limit and offset"
    - Using LIMIT or OFFSET automatically includes an ORDERBY; otherwise the
      results would be retrieved in a non-deterministic order (it depends on
      the walking path through the graph).
    - Using OFFSET without defining LIMIT will discard the first OFFSET - 1
      results.
a_graph.LIMIT = limit
a_graph.OFFSET = offset
triple_generator = a_graph.triples(mytriple)
# do something
# Remove LIMIT and OFFSET if not required for the next triples() calls
del a_graph.LIMIT
del a_graph.OFFSET
triples_choices
triples_choices(_: _TripleChoiceType, context: Optional[_ContextType] = None) -> Generator[Tuple[Tuple[_SubjectType, _PredicateType, _ObjectType], Iterator[Optional[_ContextType]]], None, None]
A variant of triples that can take a list of terms instead of a single term in any slot. Stores can implement this to optimize the response time over the default 'fallback' implementation, which will iterate over each term in the list and dispatch to triples.
update
update(query: Union[Update, str], initNs: Dict[str, Any] = {}, initBindings: Dict[str, Identifier] = {}, queryGraph: Identifier = None, DEBUG: bool = False) -> None
SPARQLUpdateStore
SPARQLUpdateStore(query_endpoint: Optional[str] = None, update_endpoint: Optional[str] = None, sparql11: bool = True, context_aware: bool = True, postAsEncoded: bool = True, autocommit: bool = True, dirty_reads: bool = False, **kwds)
Bases: SPARQLStore
A store using SPARQL queries for reading and SPARQL Update for changes.
This can be context-aware; if so, any changes will be made to the given named graph only.
In favor of the SPARQL 1.1 motivated Dataset, we advise against using this with ConjunctiveGraphs, as it reads and writes from and to the “default graph”. Exactly what this means depends on the endpoint and can result in confusion.
For Graph objects, everything works as expected.
See the SPARQLStore base class for more information.
Parameters:

- autocommit – if set, the store commits after every write operation. If False, queries are only made against the server once commit is called.
- dirty_reads – if set, we do not commit before reading, so you cannot read what you wrote before manually calling commit.
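A sketch of batching edits with autocommit=False; the Fuseki-style endpoint URLs and the graph identifier are placeholders:

from rdflib import Graph, URIRef
from rdflib.plugins.stores.sparqlstore import SPARQLUpdateStore

store = SPARQLUpdateStore(
    query_endpoint='http://localhost:3030/ds/query',
    update_endpoint='http://localhost:3030/ds/update',
    autocommit=False,  # batch edits locally instead of one request per change
)
g = Graph(store, identifier=URIRef('http://example.org/g'))
g.add((URIRef('urn:michel'), URIRef('urn:likes'), URIRef('urn:pizza')))
store.commit()  # the batched edits are sent to the server here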
Methods:

- __len__
- add – Add a triple to the store of triples.
- addN – Add a list of quads to the store.
- add_graph
- commit – add(), addN(), and remove() are transactional to reduce overhead of many small edits.
- contexts
- objects – A generator of objects with the given subject and predicate
- open – Sets the endpoint URLs for this SPARQLStore
- predicate_objects – A generator of (predicate, object) tuples for the given subject
- predicates – A generator of predicates with the given subject and object
- query
- remove – Remove a triple from the store
- remove_graph
- rollback
- setTimeout
- subject_objects – A generator of (subject, object) tuples for the given predicate
- subject_predicates – A generator of (subject, predicate) tuples for the given object
- subjects – A generator of subjects with the given predicate and object
- triples
- update – Perform a SPARQL Update query against the endpoint: INSERT, LOAD, DELETE, etc.
Attributes:

- BLOCK_END
- BLOCK_FINDING_PATTERN
- BLOCK_START
- BlockContent
- BlockFinding
- COMMENT
- ESCAPED
- IRIREF
- STRING_LITERAL1
- STRING_LITERAL2
- STRING_LITERAL_LONG1
- STRING_LITERAL_LONG2
- String
- autocommit
- dirty_reads
- postAsEncoded
- where_pattern
BLOCK_FINDING_PATTERN (class attribute)
BLOCK_FINDING_PATTERN = compile(BlockFinding)
BlockContent (class attribute)
BlockFinding (class attribute)
BlockFinding = '(?P<block_start>%s)|(?P<block_end>%s)|(?P<block_content>%s)' % (BLOCK_START, BLOCK_END, BlockContent)
STRING_LITERAL_LONG1 (class attribute)
STRING_LITERAL_LONG2 (class attribute)
String (class attribute)
String = '(%s)|(%s)|(%s)|(%s)' % (STRING_LITERAL1, STRING_LITERAL2, STRING_LITERAL_LONG1, STRING_LITERAL_LONG2)
where_pattern (class attribute)
__len__
add
Add a triple to the store of triples.
addN
Add a list of quads to the store.
add_graph
add_graph(graph: Graph) -> None
commit
add(), addN(), and remove() are transactional to reduce overhead of many small edits.
Read and update() calls will automatically commit any outstanding edits.
This should behave as expected most of the time, except that alternating writes
and reads can degenerate to the original call-per-triple situation.
contexts
objects
objects(subject: Optional[_SubjectType] = None, predicate: Optional[_PredicateType] = None) -> Generator[_ObjectType, None, None]
A generator of objects with the given subject and predicate
open
open(configuration: Union[str, Tuple[str, str]], create: bool = False) -> None
Sets the endpoint URLs for this SPARQLStore
Parameters:

- configuration (Union[str, Tuple[str, str]]) – either a tuple of (query_endpoint, update_endpoint), or a string with the endpoint which is configured as both query and update endpoint
- create (bool, default: False) – if True, an exception is thrown.
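For example, a sketch with placeholder URLs:

from rdflib.plugins.stores.sparqlstore import SPARQLUpdateStore

store = SPARQLUpdateStore()
# A (query_endpoint, update_endpoint) tuple; a single string would be
# used as both the query and the update endpoint.
store.open(('http://localhost:3030/ds/query',
            'http://localhost:3030/ds/update'))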
predicate_objects
predicate_objects(subject: Optional[_SubjectType] = None) -> Generator[Tuple[_PredicateType, _ObjectType], None, None]
A generator of (predicate, object) tuples for the given subject
predicates
predicates(subject: Optional[_SubjectType] = None, object: Optional[_ObjectType] = None) -> Generator[_PredicateType, None, None]
A generator of predicates with the given subject and object
query
query(*args: Any, **kwargs: Any) -> Result
remove
Remove a triple from the store
rollback
setTimeout
subject_objects
subject_objects(predicate: Optional[_PredicateType] = None) -> Generator[Tuple[_SubjectType, _ObjectType], None, None]
A generator of (subject, object) tuples for the given predicate
subject_predicates
subject_predicates(object: Optional[_ObjectType] = None) -> Generator[Tuple[_SubjectType, _PredicateType], None, None]
A generator of (subject, predicate) tuples for the given object
subjects
subjects(predicate: Optional[_PredicateType] = None, object: Optional[_ObjectType] = None) -> Generator[_SubjectType, None, None]
A generator of subjects with the given predicate and object
triples
update
update(query: Union[Update, str], initNs: Dict[str, Any] = {}, initBindings: Dict[str, Identifier] = {}, queryGraph: Optional[str] = None, DEBUG: bool = False)
Perform a SPARQL Update query against the endpoint (INSERT, LOAD, DELETE, etc.).
Setting initNs adds PREFIX declarations to the beginning of the update. Setting initBindings adds inline VALUEs to the beginning of every WHERE clause. By the SPARQL grammar, all operations that support variables (namely INSERT and DELETE) require a WHERE clause. Important: initBindings fails if the update contains the substring 'WHERE {' which does not denote a WHERE clause, e.g. if it is part of a literal.
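A sketch of both parameters in use; the endpoint URLs, the ex: prefix and the bound resource are placeholders:

from rdflib import URIRef
from rdflib.plugins.stores.sparqlstore import SPARQLUpdateStore

store = SPARQLUpdateStore(
    query_endpoint='http://localhost:3030/ds/query',
    update_endpoint='http://localhost:3030/ds/update',
)
# initNs becomes a PREFIX declaration; initBindings pins ?s via an
# inline VALUES block at the start of the WHERE clause.
store.update(
    'INSERT { ?s ex:reviewed true } WHERE { ?s ?p ?o }',
    initNs={'ex': 'http://example.org/ns#'},
    initBindings={'s': URIRef('http://example.org/doc1')},
)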
Context-aware query rewriting

- When: If context-awareness is enabled and the graph is not the default graph of the store.
- Why: To ensure consistency with the Memory store. The graph must accept “local” SPARQL requests (requests with no GRAPH keyword) as if it was the default graph.
- What is done: These “local” queries are rewritten by this store. The content of each block of a SPARQL Update operation is wrapped in a GRAPH block, except if the block is empty. This basically causes INSERT, INSERT DATA, DELETE, DELETE DATA and WHERE to operate only on the context.
- Example: "INSERT DATA { <urn:michel> <urn:likes> <urn:pizza> }" is converted into "INSERT DATA { GRAPH <urn:graph> { <urn:michel> <urn:likes> <urn:pizza> } }" (see the sketch after this list).
- Warning: Queries are presumed to be “local” but this assumption is not checked. For instance, if the query already contains GRAPH blocks, the latter will be wrapped in new GRAPH blocks.
- Warning: A simplified grammar is used that should tolerate extensions of the SPARQL grammar. Still, the process may fail in uncommon situations and produce invalid output.
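A sketch of this rewriting from the API side; the endpoint URLs and the graph IRI are placeholders:

from rdflib import Graph, URIRef
from rdflib.plugins.stores.sparqlstore import SPARQLUpdateStore

store = SPARQLUpdateStore(
    query_endpoint='http://localhost:3030/ds/query',
    update_endpoint='http://localhost:3030/ds/update',
)
g = Graph(store, identifier=URIRef('urn:graph'))
# This "local" update is rewritten by the store to
# INSERT DATA { GRAPH <urn:graph> { ... } } before being sent.
g.update('INSERT DATA { <urn:michel> <urn:likes> <urn:pizza> }')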