Discussion:
SPIN support
Boris Marcelo Villazon Terrazas
2016-12-08 22:59:56 UTC
Hi all

I would like to know if there is a plan to integrate SPIN into Fuseki.
If yes, what is the status? Can external people contribute to this?

Thank you in advance and regards
Boris
Rob Vesse
2016-12-09 09:47:29 UTC
There is no plan that I know of

While SPIN may advertise itself as an open standard, it is primarily the product of a single vendor. Apache Jena focuses on implementing standards so that we do not tie users into any specific vendor.

It is probably possible to do this integration yourself, although you may run into version incompatibility issues because SPIN has often lagged behind the latest Jena releases.

Rob

Will
2016-12-09 09:53:15 UTC
"It is probably possible to do this integration yourself, although you may run into version incompatibility issues because SPIN has often lagged behind the latest Jena releases."


SPIN is definitely a useful complement to the suite of reasoning profiles, but in my experience integration has always resulted in dependency hell.


Holger Knublauch
2016-12-09 09:55:19 UTC
Which aspect of SPIN is of interest here: rules or constraints?

Sent from my iPad
Boris Marcelo Villazon Terrazas
2016-12-09 21:30:53 UTC
Thank you all for your replies.
@Holger, initially I would be interested in rules.
Any ideas about that?
Andy Seaborne
2016-12-30 16:24:14 UTC
Post by Boris Marcelo Villazon Terrazas
Thank you all for your replies.
@Holger, initially I would be interested on rules.
Any ideas about that?
First, some code:

https://github.com/spinrdf/spinrdf
Post by Boris Marcelo Villazon Terrazas
can external people contribute to this?
Yes. This codebase is an independent open-source GitHub project. It is
Apache-licensed; it is not an Apache project. It needs sorting out, so
all contributions will be gratefully received.


For integration with Fuseki there are a couple of things needed:

When to apply the rules. Presumably, the rules should be applied after
any change, and inside the transaction that is making the change.

A general trigger mechanism in the write-lifecycle is the right place.
SPIN would be one possible trigger. Maybe in HttpAction.commit, maybe in
HttpAction.endWrite.

Then there is the configuration - assembler additions for attaching SPIN
to a dataset, or maybe to a service, so you can have both SPIN and
non-SPIN updates? There are some details to think about here. The SPIN
assembler additions would be best in the spinrdf/spinrdf project.
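The trigger idea above can be sketched in plain Java. This is a minimal sketch, not Jena API: the names TransactionalStore, RuleTriggerStore and the Runnable hook are all illustrative assumptions. The point is only the shape - the hook (where SPIN rule application would go) runs inside the transaction, just before the wrapped commit.

```java
// Illustrative sketch of a write-lifecycle trigger (names are invented,
// not Jena classes): a wrapper runs a pre-commit hook inside the
// transaction, so rules see the pending changes before they are committed.
interface TransactionalStore {
    void begin();
    void commit();
}

class RuleTriggerStore implements TransactionalStore {
    private final TransactionalStore inner;
    private final Runnable preCommitHook;   // e.g. apply SPIN rules here

    RuleTriggerStore(TransactionalStore inner, Runnable preCommitHook) {
        this.inner = inner;
        this.preCommitHook = preCommitHook;
    }

    @Override public void begin() { inner.begin(); }

    @Override public void commit() {
        preCommitHook.run();   // rules run inside the transaction
        inner.commit();        // then the transaction completes
    }
}
```

In Fuseki terms, the hook would be invoked from the write lifecycle (the HttpAction.commit / HttpAction.endWrite area mentioned above), with SPIN as one possible trigger among others.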

Andy
james anderson
2016-12-30 16:35:30 UTC
good evening;
[...]
When to apply the rules. Presumably, the rules should be applied after any change and inside the transaction that is making the change.
A general trigger mechanism in the write-lifecycle is the right place. SPIN would be one possible trigger. Maybe in HttpAction.commit, maybe in HttpAction.endWrite.
beyond the significance of a second run-time context, this part is of particular value as the answers to questions surrounding interoperable invocation mechanisms were notoriously murky.

best regards, from berlin,


---
james anderson | ***@dydra.com | http://dydra.com
Adrian Gschwend
2017-09-04 18:19:25 UTC
On 30.12.16 17:24, Andy Seaborne wrote:

Hi Andy,
it's been a while; did you make any progress on SPIN API support in
Fuseki? Some of the features in SPIN would be useful.

regards

Adrian
Holger Knublauch
2017-09-04 22:07:01 UTC
Hi Adrian,

out of interest: which features of SPIN would be of interest, and which
of these are not covered by SHACL?

See also http://spinrdf.org/spin-shacl.html

Holger
Adrian Walker
2017-09-05 04:58:01 UTC
Permalink
Hi Holger

I think your note was intended for Adrian Gschwend [spelling?], not me.

HTH.

Adrian Walker
Reengineering LLC
San Jose, CA, USA
860 830 2085
www.executable-english.com
Adrian Gschwend
2017-09-05 07:03:10 UTC
On 05.09.17 00:07, Holger Knublauch wrote:

Hi Holger,
Post by Holger Knublauch
out of interest: which features of SPIN would be of interest, and which
of these are not covered by SHACL?
See also http://spinrdf.org/spin-shacl.html
A simple one, actually :) I saw that there is a camelcase function in
SPIN, and that's something which would be very useful for cleanup jobs.
As I mainly do those within Fuseki, I would like to have the function
available in there.
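For illustration, the core string transform behind such a camelcase function could look like this plain-Java sketch. The splitting rule (whitespace, hyphens, underscores) is an assumption, and the class name is invented; a real SPARQL custom function would wrap logic like this, for example via ARQ's FunctionBase1.

```java
// Illustrative camel-case transform, stdlib only. The splitting rule is
// an assumption: break the input on whitespace, hyphens and underscores.
public class CamelCase {
    /** Uppercase the first letter of each part, lowercase the rest. */
    public static String toCamelCase(String input) {
        StringBuilder out = new StringBuilder();
        for (String part : input.trim().split("[\\s_-]+")) {
            if (part.isEmpty()) continue;
            out.append(Character.toUpperCase(part.charAt(0)))
               .append(part.substring(1).toLowerCase());
        }
        return out.toString();
    }
}
```

For example, toCamelCase("first name") gives "FirstName".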

regards

Adrian
Andy Seaborne
2017-09-05 09:58:40 UTC
Post by Adrian Gschwend
it's been a while; did you make any progress on SPIN API support in
Fuseki? Some of the features in SPIN would be useful.
Hi Adrian,

No progress from me - my open-source time has been going elsewhere (TDB2
- see ***@jena email yesterday).

It needs more contributors to engage and see through a design process for
Fuseki (such as assembler and server config integration).
Post by Adrian Gschwend
A simple one, actually :) I saw that there is a camelcase function in
SPIN
This is a restricted case where maybe there is another way?

Would a custom function do the same?

It would be interesting to have custom functions in JavaScript
(non-compiled Java) that could be loaded into a server via the "server"
section of the Fuseki configuration file.

Andy
Adrian Gschwend
2017-09-05 13:42:05 UTC
On 05.09.17 11:58, Andy Seaborne wrote:

Hi Andy,
Post by Andy Seaborne
No progress from me - my open-source time has been going elsewhere (TDB2
oh great, will check that out
Post by Andy Seaborne
This is a restricted case where maybe there is another way?
Would a custom function do the same?
yes, absolutely
Post by Andy Seaborne
It would be interesting to have custom functions in javascript
(non-compiled java) that could be loaded into a server via the "server"
section of the Fuseki configuration file.
Wow, that would be very cool. This way I see a chance to be able to
provide some custom functions myself! With Java I'm out.

regards

Adrian
Holger Knublauch
2017-09-05 22:21:41 UTC
Post by Andy Seaborne
It would be interesting to have custom functions in javascript
(non-compiled java) that could be loaded into a server via the "server"
section of the Fuseki configuration file.
wow that would be very cool. Like this I see a chance to be able to
provide some custom functions myself! With Java I'm out.
Is this about SPARQL functions or functions called from Java? For SPARQL
functions written in JavaScript, see

https://w3c.github.io/data-shapes/shacl-js/#js-functions

Holger
Adrian Gschwend
2017-09-06 09:45:54 UTC
On 06.09.17 00:21, Holger Knublauch wrote:

Hi Holger,
Post by Holger Knublauch
Is this about SPARQL functions or functions called from Java? For SPARQL
functions written in JavaScript, see
https://w3c.github.io/data-shapes/shacl-js/#js-functions
OK, but that still expects the SPARQL engine to support that, right?

regards

Adrian
Holger Knublauch
2017-09-06 21:37:35 UTC
Post by Adrian Gschwend
OK, but that still expects the SPARQL engine to support that, right?
This is implemented for Jena's SPARQL function registry here:

https://github.com/TopQuadrant/shacl/blob/master/src/main/java/org/topbraid/shacl/arq/SHACLJSARQFunction.java

Holger
Andy Seaborne
2017-09-07 13:20:31 UTC
The nice thing about JS functions is that they allow extension without
Java programming - without writing the custom function in Java or having
to rebuild Fuseki to get the Java code in. War files and jar+dependencies
(run with -jar) are sealed. And it isn't as hard as embedded Java like
JSPs.

Just restricting the thoughts to SPARQL functions in JS - list of args
in, single value out. (So not property functions, not modifying the
graph data itself.)

I came up with 3 different designs of the function calling model based
on what is passed in.

1/ Pass in JS values - strings, numbers, booleans, null.

Convert the function arguments as passed in from SPARQL into JS-native
things. Also, convert the return value to an XSD value by inspection.
The JS writer is insulated from RDF details.

Works really nicely for strings.
Everything could be a string, and the dynamic nature of JS will work
(caveat the overheads for simple functions called many, many times).
(Enough said about numbers in JS!)

Other items (URIs, bNodes) can be some kind of object. If they have a
"toString()", then it works in unaware JS code.

I've mocked this up and got it working.
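A rough sketch of design 1's value mapping, in plain Java rather than against Jena's real Node/NodeValue classes: the tiny Term record and the datatype strings below are illustrative assumptions, standing in for whatever the real conversion layer would inspect.

```java
// Illustrative stand-in for an RDF term (not Jena's Node class).
record Term(String lexicalForm, String datatype) {}

class JsValueMapper {
    // Design 1: map an RDF term to a native value for the script engine --
    // strings, numbers and booleans convert; everything else (URIs, bNodes)
    // falls through to null in this simplified sketch.
    static Object toNative(Term t) {
        switch (t.datatype()) {
            case "xsd:string":  return t.lexicalForm();
            case "xsd:integer": return Long.parseLong(t.lexicalForm());
            case "xsd:double":  return Double.parseDouble(t.lexicalForm());
            case "xsd:boolean": return Boolean.parseBoolean(t.lexicalForm());
            default:            return null;
        }
    }
}
```

Designs 2 and 3 below differ precisely in what replaces that null branch: a JS-ish RDF term object, or the raw Jena object.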

2/ Pass in JS-ish RDF terms - e.g. [A]

This exposes the fact that the arguments are RDF terms, with datatypes,
and differentiates between URIs and literals. The function writer is more
aware of RDF, such as URIs (NamedNodes in the language of [A]).

For custom functions, I think there is less usefulness per se, because
the function is not manipulating the RDF data; it is working on values.
On the other hand, having one consistent way to handle RDF terms in JS
would be better.

3/ Pass in Jena Nodes or NodeValues.

This is the raw Jena-java-RDF view. The JS function writer is exposed
to Jena details but has full power. Probably not meeting the goal of
ease of use for a non-Java-writing person, though NodeValue.toString
means the JS writer can be semi-unaware of this.


Another design point is whether the JS function can call back into Jena
at all. (Well, it can't be stopped in Nashorn, but that does not make
it a good idea. The result of a good function is entirely defined by its
inputs: no side effects, no state.)

For Fuseki:

We need a library of functions to be loaded and ideally compiled once.

We could get the JS scripts into the Context by reading from a URL, or
from a literal string in the file. There is a Context that is
server-wide, in the server section of a configuration file, and the one
used for execution can be added to with dataset-specific Context settings.

Andy

[A] https://github.com/rdfjs/representation-task-force
Bruno P. Kinoshita
2017-09-07 23:04:48 UTC
Maybe Groovy could be an option as well? I like the idea of being able to customize Jena with Groovy + Grapes.
Whenever I use JavaScript I always rely on a few dependencies (e.g. moment.js). If we allowed users to grab extra dependencies with npm, that would work as well, I think.

In Jenkins, you can customize the server behaviour and automate pretty much everything with Groovy. There is a Groovy console, and a few extension points where you can plug in Groovy code. The main advantage is that there is no translation between Groovy and Java objects: you simply call the Java objects from within Groovy, which means we could even call utility classes and methods, I think.

Bruno

Andy Seaborne
2017-09-08 17:49:20 UTC
Once the machinery for one language is there, adding others is easy if
the language has a javax.script.ScriptEngineManager (JSR 223)
implementation. Groovy does. That means the custom functions can be
loaded and run without the static compile/load steps otherwise needed
to get custom code into the server jar. Just ensure the language is on
the classpath.

You can call Java from JavaScript in Nashorn; it does some nice
idiomatic stuff, like "obj.getProp()" becoming "obj.prop". But being
highly dynamic, with reflection, it can be a bit difficult to trace
mistakes!

It is only a bit more complicated to create Java objects, but you can
do that too.

Andy
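The classpath-discovery mechanism described above can be seen with a few lines of standard-library Java: JSR 223 engines register themselves with ScriptEngineManager, so adding, say, the Groovy jar to the classpath makes it appear with no other wiring. (Which engines are listed depends on the JDK and classpath; Nashorn, for instance, ships with the JDK only up to Java 14.)

```java
import javax.script.ScriptEngineFactory;
import javax.script.ScriptEngineManager;

public class ListEngines {
    public static void main(String[] args) {
        // Every JSR 223 engine visible on the classpath registers itself
        // here; no static compile/load step is needed per language.
        for (ScriptEngineFactory f : new ScriptEngineManager().getEngineFactories()) {
            System.out.println(f.getEngineName() + " : " + f.getNames());
        }
    }
}
```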

Adam Soroka
2017-09-08 18:20:08 UTC
Could this be a thing to support in Fuseki? IOW, we don't want to package every possible scripting language with Fuseki, but people will want to use this kind of facility with it, so we might want to have some instructions available on how to add your JSR 223 language of choice.


ajs6f