base ::= '@base' IRIREF '.' Note: The '.' is consumed by n3Doc.
blankNode ::= BLANK_NODE_LABEL | anon
blankNodePropertyList ::= '[' predicateObjectList ']'
collection ::= '(' object* ')'
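For example, a collection groups its objects into an RDF list (the `:` terms below are hypothetical):

```n3
:alice :favoriteNumbers ( 3 7 21 ) .
```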
An array of recognition exceptions that occurred during parsing. This can be used to identify and handle any syntax errors in the input document.
expression ::= path
formula ::= '{' formulaContent? '}'
formulaContent ::= n3Statement ('.' formulaContent?)? | sparqlDirective formulaContent?
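A formula quotes a graph so it can be used as a term in another statement, e.g. (hypothetical vocabulary):

```n3
:alice :believes { :sky :color :blue } .
```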
iri ::= IRIREF | prefixedName
literal ::= stringLiteral | numericLiteral | booleanLiteral
n3Directive ::= prefixID | base | forAll | forSome
n3Doc ::= (n3Statement '.' | sparqlDirective)* EOF
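A minimal complete document under this production (hypothetical namespace):

```n3
@prefix ex: <http://example.org/> .
ex:alice a ex:Person .
ex:alice ex:knows ex:bob .
```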
n3Statement ::= n3Directive | triples
A map of prefixes to their namespace IRIs.
object ::= expression
objectList ::= object (',' object)*
path ::= pathItem ('!' path | '^' path)?
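Paths abbreviate traversals: `!` follows a predicate forward, `^` backward. For example (hypothetical terms), `:joe!:mother` denotes the object of a `:mother` triple whose subject is `:joe`:

```n3
:joe!:mother :age 62 .
# Equivalent to:  :joe :mother _:m .  _:m :age 62 .
```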
pathItem ::= iri | blankNode | quickVar | collection | blankNodePropertyList | literal | formula
predicate ::= expression | '<-' expression
predicateObjectList ::= verb objectList (';' (verb objectList)?)*
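Here `;` repeats the subject and `,` repeats both subject and verb, e.g. (hypothetical terms):

```n3
:alice :name "Alice" ;
       :age 30 ;
       :knows :bob , :carol .
```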
prefixID ::= '@prefix' PNAME_NS IRIREF '.' Note: The '.' is consumed by n3Doc.
prefixedName ::= PNAME_LN | PNAME_NS
N3 is more lenient than Turtle: it allows the empty prefix `:` to be used without explicit declaration, implicitly resolving to <#>.
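So, unlike in Turtle, the following is a valid standalone N3 document (hypothetical terms):

```n3
:alice :knows :bob .
# :alice resolves to <#alice> against the document's base IRI.
```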
quickVar ::= '?' PN_CHARS_U PN_CHARS*
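Quick variables typically appear inside rule formulas, e.g. (hypothetical vocabulary):

```n3
{ ?person :age 30 } => { ?person a :Thirtysomething } .
```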
Flag indicating that the parser is in the recording phase. Can be used to implement methods similar to BaseParser.ACTION, or any other logic that requires knowledge of the recording phase.
Semantic errors collected during parsing (e.g., UndefinedNamespacePrefixError).
sparqlBase ::= 'BASE' IRIREF
sparqlDirective ::= sparqlPrefix | sparqlBase
sparqlPrefix ::= 'PREFIX' PNAME_NS IRIREF
subject ::= expression
triples ::= subject predicateObjectList?
In N3, subjects with zero predicates are valid (e.g., :a .).
verb ::= predicate | 'a' | 'has' expression | 'is' expression 'of' | '=' | '<=' | '=>'
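The keyword verbs read as follows: 'a' abbreviates rdf:type, 'has' and 'is … of' express a forward and an inverse predicate respectively, and '=>' expresses implication between formulas (hypothetical terms):

```n3
:alice a :Person .
:alice has :child :bob .      # same as  :alice :child :bob .
:bob is :child of :alice .    # inverse:  :alice :child :bob .
{ :sky :color :blue } => { :alice :mood :happy } .
```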
An array of tokens that were created by the lexer and used as input for the parser. This can be used to inspect the tokens that were processed during parsing, and to identify any issues with the tokenization process.
Parses a set of tokens created by the lexer into a concrete syntax tree (CST) representing the parsed document.
A set of tokens created by the lexer.
Whether to throw an error if any parsing errors are detected. Defaults to true.
A concrete syntax tree (CST) object.
Resets the parser state. Should be overridden by custom parsers that "carry" additional state. When overriding, remember to also invoke the super implementation!
A W3C-compliant parser for the N3 (Notation3) syntax, based on the N3 grammar: https://w3c.github.io/N3/spec/