Discussion:
SPARQL with UNION returning TDBTransactionException
George News
2017-02-27 18:18:59 UTC
Hi,

I have a SELECT SPARQL query similar to the one below:

PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT (count(distinct ?o) as ?count_o) (count(distinct ?device) as ?count_devices)
WHERE {
  { ?o rdf:type/rdfs:subClassOf CLASS1 . }
  UNION
  { ?device rdf:type/rdfs:subClassOf CLASS2 . }
}

If I execute it as it is, I get

org.apache.jena.tdb.transaction.TDBTransactionException: Not in a transaction

However if I execute it like

PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT (count(distinct ?o) as ?count_observations)
WHERE {
  { ?o rdf:type/rdfs:subClassOf CLASS1 . }
}

everything works fine. It doesn't matter which part of the UNION I use;
each part works fine on its own, but not when they are combined.

Any idea? Any help is more than welcome.

Below is the relevant part of the code:

public SparqlResult executeSparql(String sparql) throws SparqlExecutionException {
    String queryString = sparql;
    Query query = QueryFactory.create(queryString);
    dataset.begin(ReadWrite.READ);
    try {
        QueryExecution qExec = QueryExecutionFactory.create(query, getModel());
        SparqlResult result;
        if (query.isSelectType()) {
            result = new SparqlResult(qExec.execSelect(), qExec);
        } else if (query.isDescribeType()) {
            result = new SparqlResult(qExec.execDescribe(), qExec);
        } else if (query.isAskType()) {
            result = new SparqlResult(qExec.execAsk(), qExec);
        } else if (query.isConstructType()) {
            result = new SparqlResult(qExec.execConstruct(), qExec);
        } else {
            throw new SparqlExecutionException("Unsupported query type: " + query.getQueryType());
        }
        return result;
    } finally {
        dataset.end();   // ends the read transaction before the caller has consumed the ResultSet
    }
}

private Map<String, Integer> getSummaryStatistics() {
    String queryString = "THE_ONE";
    Map<String, Integer> statistics = new HashMap<>();
    GlobalTripleStore gts = new GlobalTripleStore();
    try (SparqlResult result = gts.executeSparql(queryString)) {
        ResultSet resultSet = (ResultSet) result.getResult();
        while (resultSet.hasNext()) {   // iteration happens after executeSparql has already called dataset.end()
            QuerySolution sol = resultSet.next();
            int devices = sol.get("count_devices").asLiteral().getInt();
            int observations = sol.get("count_o").asLiteral().getInt();
            statistics.put("devices", devices);   // the original had an undefined 'resources' here
            statistics.put("o", observations);
        }
    } catch (SparqlExecutionException e) {
        e.printStackTrace();
    }
    return statistics;
}

George News
2017-02-27 18:23:05 UTC
Sorry, I forgot to include the full exception:

Caused by: org.apache.jena.tdb.transaction.TDBTransactionException: Not in a transaction
at org.apache.jena.tdb.transaction.DatasetGraphTransaction.get(DatasetGraphTransaction.java:117)
at org.apache.jena.tdb.transaction.DatasetGraphTransaction.get(DatasetGraphTransaction.java:50)
at org.apache.jena.sparql.core.DatasetGraphWrapper.getR(DatasetGraphWrapper.java:61)
at org.apache.jena.sparql.core.DatasetGraphWrapper.find(DatasetGraphWrapper.java:146)
at org.apache.jena.sparql.core.GraphView.graphBaseFind(GraphView.java:121)
at org.apache.jena.graph.impl.GraphBase.find(GraphBase.java:255)
at org.apache.jena.sparql.path.eval.PathEngine.graphFind2(PathEngine.java:205)
at org.apache.jena.sparql.path.eval.PathEngine.graphFind(PathEngine.java:189)
at org.apache.jena.sparql.path.eval.PathEngine.graphFind(PathEngine.java:171)
at org.apache.jena.sparql.path.eval.PathEngine.doOne(PathEngine.java:92)
at org.apache.jena.sparql.path.eval.PathEvaluator.visit(PathEvaluator.java:57)
at org.apache.jena.sparql.path.P_Link.visit(P_Link.java:37)
at org.apache.jena.sparql.path.eval.PathEval.eval$(PathEval.java:68)
at org.apache.jena.sparql.path.eval.PathEval.eval$(PathEval.java:74)
at org.apache.jena.sparql.path.eval.PathEngine.eval(PathEngine.java:75)
at org.apache.jena.sparql.path.eval.PathEngineSPARQL.ALP_1(PathEngineSPARQL.java:119)
at org.apache.jena.sparql.path.eval.PathEngineSPARQL.doZeroOrMore(PathEngineSPARQL.java:92)
at org.apache.jena.sparql.path.eval.PathEvaluator.visit(PathEvaluator.java:115)
at org.apache.jena.sparql.path.P_ZeroOrMore1.visit(P_ZeroOrMore1.java:43)
at org.apache.jena.sparql.path.eval.PathEval.eval$(PathEval.java:68)
at org.apache.jena.sparql.path.eval.PathEval.eval$(PathEval.java:74)
at org.apache.jena.sparql.path.eval.PathEval.eval(PathEval.java:37)
at org.apache.jena.sparql.path.PathLib.evalGroundedPath(PathLib.java:166)
at org.apache.jena.sparql.path.PathLib.execTriplePath(PathLib.java:133)
at org.apache.jena.sparql.path.PathLib.execTriplePath(PathLib.java:108)
at org.apache.jena.sparql.engine.iterator.QueryIterPath.nextStage(QueryIterPath.java:47)
at org.apache.jena.sparql.engine.iterator.QueryIterRepeatApply.makeNextStage(QueryIterRepeatApply.java:108)
at org.apache.jena.sparql.engine.iterator.QueryIterRepeatApply.hasNextBinding(QueryIterRepeatApply.java:65)
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at org.apache.jena.atlas.iterator.Iter$2.hasNext(Iter.java:265)
at org.apache.jena.atlas.iterator.RepeatApplyIterator.hasNext(RepeatApplyIterator.java:45)
at org.apache.jena.tdb.solver.SolverLib$IterAbortable.hasNext(SolverLib.java:195)
at org.apache.jena.atlas.iterator.Iter$2.hasNext(Iter.java:265)
at org.apache.jena.sparql.engine.iterator.QueryIterPlainWrapper.hasNextBinding(QueryIterPlainWrapper.java:53)
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at org.apache.jena.sparql.engine.iterator.QueryIterRepeatApply.makeNextStage(QueryIterRepeatApply.java:101)
at org.apache.jena.sparql.engine.iterator.QueryIterRepeatApply.hasNextBinding(QueryIterRepeatApply.java:65)
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at org.apache.jena.sparql.engine.iterator.QueryIterRepeatApply.makeNextStage(QueryIterRepeatApply.java:101)
at org.apache.jena.sparql.engine.iterator.QueryIterRepeatApply.hasNextBinding(QueryIterRepeatApply.java:65)
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at org.apache.jena.sparql.engine.iterator.QueryIterGroup$1.initializeIterator(QueryIterGroup.java:86)
at org.apache.jena.atlas.iterator.IteratorDelayedInitialization.init(IteratorDelayedInitialization.java:40)
at org.apache.jena.atlas.iterator.IteratorDelayedInitialization.hasNext(IteratorDelayedInitialization.java:50)
at org.apache.jena.sparql.engine.iterator.QueryIterPlainWrapper.hasNextBinding(QueryIterPlainWrapper.java:53)
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at org.apache.jena.sparql.engine.iterator.QueryIterProcessBinding.hasNextBinding(QueryIterProcessBinding.java:66)
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at org.apache.jena.sparql.engine.iterator.QueryIterConvert.hasNextBinding(QueryIterConvert.java:58)
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at org.apache.jena.sparql.engine.iterator.QueryIterDistinct.getInputNextUnseen(QueryIterDistinct.java:104)
at org.apache.jena.sparql.engine.iterator.QueryIterDistinct.hasNextBinding(QueryIterDistinct.java:70)
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at java.util.Iterator.forEachRemaining(Iterator.java:115)
at org.apache.jena.atlas.data.DataBag.addAll(DataBag.java:94)
at org.apache.jena.sparql.engine.iterator.QueryIterSort$SortedBindingIterator.initializeIterator(QueryIterSort.java:84)
at org.apache.jena.atlas.iterator.IteratorDelayedInitialization.init(IteratorDelayedInitialization.java:40)
at org.apache.jena.atlas.iterator.IteratorDelayedInitialization.hasNext(IteratorDelayedInitialization.java:50)
at org.apache.jena.sparql.engine.iterator.QueryIterPlainWrapper.hasNextBinding(QueryIterPlainWrapper.java:53)
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at org.apache.jena.sparql.engine.iterator.QueryIteratorWrapper.hasNextBinding(QueryIteratorWrapper.java:39)
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at org.apache.jena.sparql.engine.iterator.QueryIteratorWrapper.hasNextBinding(QueryIteratorWrapper.java:39)
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:114)
at org.apache.jena.sparql.engine.ResultSetStream.hasNext(ResultSetStream.java:74)
at org.apache.jena.sparql.engine.ResultSetCheckCondition.hasNext(ResultSetCheckCondition.java:55)
Post by George News
[... snip ...]

George News
2017-02-27 19:08:07 UTC
More information... I have also noticed that if I don't close the
transaction (dataset.end()), it works. But are the transactions then
garbage collected?

What is the difference, internally for Jena, between using UNION and
not using it? The other option is to use a rewindable ResultSet, which
I would rather avoid: if that is the solution, I will go through all my
ResultSets twice, once for retrieval (creating the rewindable copy) and
again for analysing the data.
Post by George News
[... snip ...]

Andy Seaborne
2017-02-27 23:12:44 UTC
Unreadable.

-------------------------

Probably you are passing the result stream out of the transaction.

Reading a ResultSet requires reading the dataset, so the calls to
hasNext()/next() must happen inside the transaction.
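As a rough sketch of that pattern (assuming 'dataset' is the TDB-backed
Dataset and 'query' the parsed Query from the code above), the ResultSet
is consumed entirely while the read transaction is still open:

    dataset.begin(ReadWrite.READ);
    try (QueryExecution qExec = QueryExecutionFactory.create(query, dataset)) {
        ResultSet rs = qExec.execSelect();
        while (rs.hasNext()) {              // hasNext()/next() still read the TDB store,
            QuerySolution sol = rs.next();  // so they must run before dataset.end()
            // ... process sol here ...
        }
    } finally {
        dataset.end();
    }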

Andy
Post by George News
[... snip ...]
George News
2017-02-27 23:58:40 UTC
But the funny thing is that in both cases (with and without UNION) the code is the same. I know I close the transaction, but I don't understand why it works in one case and not in the other.

I would like to ask another thing: with @requestscope, is the read transaction automatically closed when the REST processing finishes, or is it mandatory to call dataset.end()?

I don't know why the example code is unreadable; it displayed fine for me.

Thanks a lot
Jorge

Sent from jlanza_lumia820

[... snip ...]
Andy Seaborne
2017-02-28 09:51:31 UTC
What people on the list receive is:

https://lists.apache.org/api/source.lua/8d17e5299cbbe585e7d45ff8ce156021162a9e16fe77818d8f7cff6c@%3Cusers.jena.apache.org%3E

Looks like HTML to text conversion.
Post by George News
But the funny thing is that in both cases (with and without UNION) the code is the same. I know I close the transaction, but I don't understand why it works in one case and not in the other.
Probably you close the transaction before the result set is fully
consumed.

Some work is done eagerly in execSelect, so internally the use of the
dataset can finish early.

A UNION has two branches that are calculated separately.

It depends on the real query.

Andy
[... snip ...]
a***@virginia.edu
2017-02-28 12:13:51 UTC
Try using a service like GitHub Gist. Is it possible that only one of the legs of the query has results in it? Have you confirmed that _neither_ of the legs, executed separately, shows the problem?

ajs6f
Post by Andy Seaborne
[... snip ...]
George News
2017-02-28 15:07:08 UTC
Post by a***@virginia.edu
Try using a service like Github Gist.
I guess you mean Gist, for posting code. I can also use Pastebin next time.
Post by a***@virginia.edu
Is it possible that only one of
the legs of the query has results in it? Have you confirmed that
_neither_ of the legs executed separately shows the problem?
Yes. As Andy said, the UNION branches are executed separately. So the
problem seems to be that one branch works while the other does not.
Post by a***@virginia.edu
[... snip ...]
Sandor Kopacsi
2017-02-28 17:21:47 UTC
Dear List-members!

I would like to delete the default graph from TDB using the web
interface of Fuseki, but for some reason it doesn't work.

In the config file of Fuseki the serviceUpdate is allowed, and I can
perform the CLEAR DEFAULT and DROP DEFAULT SPARQL updates successfully,
but the default graph is still there, and contains triples.

When I run Fuseki in memory with the same settings and the same updates
as above, I can delete the default graph without any problem.

What is wrong?

Thanks and best regards,
Sandor
--
Dr. Sandor Kopacsi
IT Software Designer

Vienna University Computer Center
A. Soroka
2017-02-28 18:10:19 UTC
Do you have the default graph set up as the union graph?

---
A. Soroka
The University of Virginia Library
Post by Sandor Kopacsi
[... snip ...]
Sandor Kopacsi
2017-03-01 10:25:38 UTC
Thank you very much, that was the problem.

I had not changed this default setting in the config file so far:

<#dataset> rdf:type tdb:DatasetTDB ;
tdb:location "/var/www/fuseki/tdb" ;
tdb:unionDefaultGraph true ;

I have set it to false (or, better, commented it out), and now the
DROP DEFAULT SPARQL update works as expected.

Thanks again.

Best regards,
Sandor
Post by A. Soroka
[... snip ...]
--
Dr. Sandor Kopacsi
IT Software Designer

Vienna University Computer Center
Universitätsstraße 7 (NIG)
A-1010 Vienna

Phone: +43-1-4277-14176
Mobile: +43-664-60277-14176
George News
2017-02-28 15:05:21 UTC
Ok. So the list is text based ;) Fine for next time.
Post by Andy Seaborne
Probably you close the transaction before the result set is fully consumed.
Some work is done eagerly in execSelect, so internally the use of the dataset can finish early.
A UNION has two branches that are calculated separately.
I was assuming that. In the end I have wrapped the dataset in my own
ResultSetClosable class, so that I can call dataset.end() when the
ResultSet is closed. It works this way; it is not the most elegant
solution, but it works.

In this sense, one option for including in Jena could be to enable the
option to create a QueryExecution with a transaction included, in order
to avoid issues like that.
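As a rough sketch only - ResultSetClosable is the poster's own class, so
this shape is an assumption rather than Jena API - such a wrapper could
look like:

    import org.apache.jena.query.Dataset;
    import org.apache.jena.query.QueryExecution;
    import org.apache.jena.query.ResultSet;

    // Hypothetical sketch: ends the read transaction when the wrapper is closed.
    public class ResultSetClosable implements AutoCloseable {
        private final ResultSet resultSet;
        private final QueryExecution qExec;
        private final Dataset dataset;

        public ResultSetClosable(ResultSet resultSet, QueryExecution qExec, Dataset dataset) {
            this.resultSet = resultSet;
            this.qExec = qExec;
            this.dataset = dataset;
        }

        public ResultSet getResultSet() {
            return resultSet;
        }

        @Override
        public void close() {
            qExec.close();   // close the query execution first
            dataset.end();   // then end the read transaction the caller began
        }
    }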
[... snip ...]
A. Soroka
2017-02-28 15:12:06 UTC
Have you looked at:

https://jena.apache.org/documentation/rdfconnection/
https://jena.apache.org/documentation/txn/
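As a rough sketch of what the Txn helper from the second page looks like
in this situation (dataset and queryString are the placeholders from the
earlier code):

    // (uses org.apache.jena.system.Txn and org.apache.jena.query.*)
    List<QuerySolution> rows = Txn.calculateRead(dataset, () -> {
        try (QueryExecution qExec = QueryExecutionFactory.create(queryString, dataset)) {
            // materialise inside the read transaction; the list is safe to use afterwards
            return ResultSetFormatter.toList(qExec.execSelect());
        }
    });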

---
A. Soroka
The University of Virginia Library
Post by George News
In this sense, one option for including in Jena could be to enable the
option to create a QueryExecution with transaction included, in order to
avoid some issues like that.
George News
2017-02-28 15:42:20 UTC
Post by A. Soroka
https://jena.apache.org/documentation/rdfconnection/
https://jena.apache.org/documentation/txn/
I didn't know about the new Java 8 interface. Pretty cool ;)

BTW, in https://jena.apache.org/documentation/txn/transactions_api.html I
think there is an error in the second piece of code: isn't it missing a
dataset.end() to finish the previously opened transaction? Or are read
transactions closed automatically?
[... snip ...]
A. Soroka
2017-02-28 15:44:40 UTC
I think the assumption in that example is just that you might not be done with the transaction, but Andy can clarify if needed.

---
A. Soroka
The University of Virginia Library
[... snip ...]
Andy Seaborne
2017-02-28 15:59:20 UTC
Post by George News
[... snip ...]
In this sense, one option for including in Jena could be to enable the
option to create a QueryExecution with a transaction included, in order
to avoid issues like that.
That's what ResultSetFormatter.toList or ResultSetFactory.copyResults do.
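For example, a sketch of copying the results inside the transaction,
based on the executeSparql method quoted earlier; the copy is detached
from the dataset:

    dataset.begin(ReadWrite.READ);
    ResultSetRewindable copy;
    try (QueryExecution qExec = QueryExecutionFactory.create(query, getModel())) {
        copy = ResultSetFactory.copyResults(qExec.execSelect());   // every row is pulled while the transaction is open
    } finally {
        dataset.end();
    }
    // 'copy' is now detached from TDB and can be iterated (and reset) after the transaction has ended.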

Andy
[... snip ...]
George News
2017-02-28 16:19:37 UTC
[... snip ...]
Post by George News
In this sense, one option for including in Jena could be to enable the
option to create a QueryExecution with transaction included, in order to
avoid some issues like that.
That's what ResultSetFormatter.toList or ResultSetFactory.copyResults do.
+1, but I don't want to iterate twice over the same data. Although, in
the end, I think it would be the easiest way to do it and avoid further
issues ;)
Andy
Rob Vesse
2017-02-28 16:35:29 UTC
What you need to remember is that query execution uses streaming, lazy
evaluation. Each time you call hasNext()/next(), the minimum amount of
work needed to compute the next answer, if any, is done. When you
convert the ResultSet into a list, you do all of that work up front,
once. When you then iterate over the copied data, it is just a static
list of previously computed results. Iterating over a static list has
very little computational overhead; the downside is that it may have a
large memory overhead if you have extremely large results. This is a
standard space-time trade-off, and you have to decide what makes sense
in your application's context.
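To make the trade-off concrete, a sketch of the two styles, reusing the
dataset/query placeholders from earlier in the thread:

    // Streaming: each hasNext()/next() does the minimum work, so the read transaction must stay open.
    dataset.begin(ReadWrite.READ);
    try (QueryExecution qExec = QueryExecutionFactory.create(query, dataset)) {
        qExec.execSelect().forEachRemaining(sol -> { /* handle each row inside the transaction */ });
    } finally {
        dataset.end();
    }

    // Materialised: all the work is done up front; afterwards it is a plain in-memory list (costs memory).
    List<QuerySolution> rows;
    dataset.begin(ReadWrite.READ);
    try (QueryExecution qExec = QueryExecutionFactory.create(query, dataset)) {
        rows = ResultSetFormatter.toList(qExec.execSelect());
    } finally {
        dataset.end();
    }
    rows.forEach(sol -> { /* safe to use outside the transaction */ });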

Rob
[... snip ...]
Post by George News
In this sense, one option for including in Jena could be to enable the
option to create a QueryExecution with transaction included, in order to
avoid some issues like that.
That's what ResultSetFormatter.toList or ResultSetFactory.copyResults do.
+1 but I don't want to iterate twice over the same list of data.
Although at the end I think it would be the easiest way to do it and
avoid extra issues ;)
Andy
George News
2017-02-28 17:26:07 UTC
Post by Rob Vesse
[... snip ...]
Then there is no difference between ResultSetFormatter.toList and
ResultSetFactory.copyResults, is there?

Sorry for the big thread.

Jorge
[... snip ...]
Andy Seaborne
2017-02-28 19:37:49 UTC
Post by George News
[... snip ...]
Then there is no difference between ResultSetFormatter.toList and
ResultSetFactory.copyResults, is there?
Different return types; both disconnect the results from the dataset.

One creates a List<QuerySolution>, the other returns a new
ResultSetRewindable.

It is better not to pass the results out - instead, pass in some code to
handle the results (e.g. using a Java 8 lambda).
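A rough sketch of that pass-the-code-in style (the executeSelect helper
and its name are assumptions, not an existing Jena API):

    // Hypothetical helper: the caller's code runs against each row while the read
    // transaction is still open, so no ResultSet ever leaves the transaction.
    void executeSelect(Dataset dataset, String queryString, java.util.function.Consumer<QuerySolution> handler) {
        dataset.begin(ReadWrite.READ);
        try (QueryExecution qExec = QueryExecutionFactory.create(queryString, dataset)) {
            qExec.execSelect().forEachRemaining(handler);
        } finally {
            dataset.end();
        }
    }

    // Usage, e.g. for the statistics query:
    // executeSelect(dataset, queryString,
    //     sol -> statistics.put("devices", sol.get("count_devices").asLiteral().getInt()));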

Andy
[... snip ...]