Ignacio Tripodi
2016-03-20 17:16:18 UTC
Hello,
I was wondering if you had any minimum hardware suggestions for a
Jena/Fuseki Linux deployment, based on the number of triples used. Is there
a rough guideline for how much RAM should be available in production, as a
function of the size of the imported RDF file (currently less than 2 GB),
the number of concurrent requests, etc.?
The main use for this will be wildcard text searches using the Lucene
full-text index (basically, unfiltered queries against the inverted index).
No SPARQL Update is needed. The other resource-intensive operations would be
refreshing the RDF data monthly, followed by rebuilding the indexes. The test
deployment on my 2012 MacBook runs queries on the order of tens of ms
(unless it's been idle for a while, in which case the first query is usually
on the order of hundreds of ms for some reason), so I imagine the hardware
requirements can't be that stringent. If it helps, I had to increase my
Java heap size to 3072 MB.
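For reference, this is roughly how I set the heap. This is a sketch assuming the stock `fuseki-server` startup script, which honors the `JVM_ARGS` environment variable; the dataset path and service name here are placeholders, not my actual configuration:

```shell
# Raise the JVM max heap from the script's default before launching Fuseki.
export JVM_ARGS="-Xmx3072M"

# Launch against an existing TDB database (paths are illustrative).
./fuseki-server --loc=/path/to/tdb /dataset
```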
Thanks for any feedback you could provide!