@faubulous/mentor-rdf-parsers

    Class NQuadsParser

A W3C-compliant parser for the N-Quads syntax. https://www.w3.org/TR/n-quads

    Hierarchy

    • NTriplesParserBase
      • NQuadsParser

    Properties

datatype: ParserMethod<[], CstNode> = ...
errors: IRecognitionException[] = []

An array of recognition exceptions that occurred during parsing. This can be used to identify and handle any syntax errors in the input document.

    graphLabel: ParserMethod<[], CstNode> = ...
    literal: ParserMethod<[], CstNode> = ...
    nquadsDoc: ParserMethod<[], CstNode> = ...
    object: ParserMethod<[], CstNode> = ...
    predicate: ParserMethod<[], CstNode> = ...
    RECORDING_PHASE: boolean

Flag indicating that the parser is in the recording phase. Can be used to implement methods similar to BaseParser.ACTION, or any other logic that requires knowledge of the recording phase.

    semanticErrors: IRecognitionException[] = []

    An array of recognition exceptions that occurred during semantic analysis. This can be used to identify and handle any semantic errors in the input document, such as undefined prefixes or invalid IRIs.

    statement: ParserMethod<[], CstNode> = ...
    subject: ParserMethod<[], CstNode> = ...
    tripleTerm: ParserMethod<[], CstNode> = ...

    https://www.w3.org/TR/rdf12-n-triples/#grammar-production-tripleTerm tripleTerm ::= '<<(' subject predicate object ')>>'
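The delimiter shape of this production can be illustrated with a short, self-contained check. The `looksLikeTripleTerm` helper below is a hypothetical sketch that only inspects the `'<<('` and `')>>'` delimiters; it is not the parser's actual grammar rule.

```typescript
// Naive shape check for the RDF 1.2 tripleTerm production:
//   tripleTerm ::= '<<(' subject predicate object ')>>'
// It only verifies the delimiters, not the inner subject/predicate/object.
function looksLikeTripleTerm(term: string): boolean {
  const trimmed = term.trim();
  return trimmed.startsWith("<<(") && trimmed.endsWith(")>>");
}

const sample = '<<( <http://example.org/s> <http://example.org/p> "o" )>>';
console.log(looksLikeTripleTerm(sample)); // true
console.log(looksLikeTripleTerm("<http://example.org/s>")); // false
```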

    versionDirective: ParserMethod<[], CstNode> = ...

    https://www.w3.org/TR/rdf12-n-quads/#grammar-production-versionDirective versionDirective ::= 'VERSION' versionSpecifier
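A minimal sketch of recognizing this directive, assuming the version specifier is a double-quoted string literal as in the RDF 1.2 drafts; the `parseVersionDirective` helper is hypothetical and not part of this package.

```typescript
// Rough shape of the RDF 1.2 versionDirective production:
//   versionDirective ::= 'VERSION' versionSpecifier
// assuming a double-quoted string literal as the version specifier.
function parseVersionDirective(line: string): string | null {
  const match = line.trim().match(/^VERSION\s+"([^"]*)"$/);
  return match ? match[1] : null;
}

console.log(parseVersionDirective('VERSION "1.2"')); // "1.2"
console.log(parseVersionDirective('<s> <p> "o" <g> .')); // null
```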

    Accessors

    • get input(): IToken[]

      An array of tokens that were created by the lexer and used as input for the parser. This can be used to inspect the tokens that were processed during parsing, and to identify any issues with the tokenization process.

      Returns IToken[]

    • set input(value: IToken[]): void

      An array of tokens that were created by the lexer and used as input for the parser. This can be used to inspect the tokens that were processed during parsing, and to identify any issues with the tokenization process.

      Parameters

      • value: IToken[]

      Returns void

    Methods

• getBaseCstVisitorConstructor

  Type Parameters

  • IN = any
  • OUT = any

  Returns new (...args: any[]) => ICstVisitor<IN, OUT>

• getBaseCstVisitorConstructorWithDefaults

  Type Parameters

  • IN = any
  • OUT = any

  Returns new (...args: any[]) => ICstVisitor<IN, OUT>

• getGAstProductions

  Returns Record<string, Rule>

• getSerializedGastProductions

  Returns ISerializedGast[]

• Parses a set of tokens created by the lexer into a concrete syntax tree (CST) representing the parsed document.

  Parameters

  • tokens: IToken[]

    A set of tokens created by the lexer.

  • throwOnErrors: boolean = true

    Whether to throw an error if any parsing errors are detected. Defaults to true.

  Returns CstNode

  A concrete syntax tree (CST) object.
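The documented contract — feed lexer tokens in, get a CST back, collect recognition errors, and optionally throw — can be sketched with a minimal stand-in. The stub class and the simplified IToken/CstNode shapes below are illustrative assumptions, not the package's real types or implementation:

```typescript
// Simplified stand-ins for the chevrotain token/CST shapes (illustrative).
interface IToken { image: string; }
interface CstNode { name: string; children: Record<string, unknown>; }
interface IRecognitionException { message: string; }

// Stub modeling the documented behavior of parse(tokens, throwOnErrors).
class StubNQuadsParser {
  errors: IRecognitionException[] = [];

  parse(tokens: IToken[], throwOnErrors: boolean = true): CstNode {
    this.errors = [];
    if (tokens.length === 0) {
      // A real parser would record chevrotain recognition exceptions here.
      this.errors.push({ message: "Empty token stream." });
      if (throwOnErrors) throw new Error(this.errors[0].message);
    }
    return { name: "nquadsDoc", children: { tokens } };
  }
}

const stub = new StubNQuadsParser();
// With throwOnErrors disabled, errors are collected instead of thrown.
const cst = stub.parse([], false);
console.log(cst.name, stub.errors.length); // nquadsDoc 1
```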

• reset

  Resets the parser state; should be overridden for custom parsers that "carry" additional state. When overriding, remember to also invoke the super implementation!

      Returns void