 * [#Configure 4store configuration]
 * [#load Load performance]
 * [#allieload Allie upload]
 * [#pdbjload PDBJ upload]
 * [#uniprotload UniProt upload]
 * [#ddbjload DDBJ upload]
 * [#Sparql SPARQL query performance]
 * [#alliequery Allie query performance]
 * [#pdbjquery PDBJ query performance]
 * [#uniprotquery UniProt query performance]
 * [#ddbjquery DDBJ query performance]

=== 4store configuration === #Configure

{{{
$ cd $4STORE_HOME/bin
$ ./4s-backend-setup allie
$ ./4s-backend allie
$ ./4s-import -v allie --format ntriples datapath --model http://myURI.com
}}}

Configuration considerations (refer to [http://4store.org/]):[[BR]]
The cluster and segment counts are specified at setup time:[[BR]]

{{{
4s-backend-setup --node 0 --cluster 1 --segments 4 demo
}}}

The number of segments should be a power of 2, and parallelisation depends on the segmentation. As a rule of thumb, try the power of 2 closest to twice the number of physical CPUs or CPU cores on the system; depending on the workload, fewer or more segments may work better.

=== Load performance === #load

=== Allie upload === #allieload

Approach 1: default setting (2 segments), about 12 minutes.[[BR]]
Approach 2: 8 segments, about 13 minutes.[[BR]]
The segment setting made little difference here.

=== PDBJ upload === #pdbjload

Over 4 days (4.45 days).

=== UniProt upload === #uniprotload

=== DDBJ upload === #ddbjload

=== SPARQL query performance === #Sparql

We tested query performance by executing the whole query mix (the composed query sequence) five times against each SPARQL endpoint, then taking the average time cost of each query.

=== Allie query performance === #alliequery

=== PDBJ query performance === #pdbjquery

=== UniProt query performance === #uniprotquery

=== DDBJ query performance === #ddbjquery
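
The rule of thumb for choosing `--segments` (a power of 2 close to twice the core count) can be sketched as a small shell helper. The `suggest_segments` function is ours for illustration and is not part of the 4store tooling:

{{{
#!/bin/sh
# suggest_segments CORES
# Print the power of 2 nearest to twice the given CPU core count,
# as a starting point for 4s-backend-setup --segments.
suggest_segments() {
    target=$(( $1 * 2 ))
    n=1
    # Grow n to the first power of 2 >= target.
    while [ "$n" -lt "$target" ]; do
        n=$(( n * 2 ))
    done
    half=$(( n / 2 ))
    # Pick whichever of n and n/2 is closer to target (prefer n on ties).
    if [ $(( n - target )) -gt $(( target - half )) ]; then
        n=$half
    fi
    echo "$n"
}

suggest_segments 4    # a 4-core machine: prints 8
}}}

As the text above notes, this only gives a starting point; the best segment count for a given workload may be higher or lower.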
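
The query-mix timing methodology described above can be sketched in shell. The `ENDPOINT` URL, the function names, and the use of GNU `date +%s%N` for millisecond resolution are all assumptions for illustration, not part of the original benchmark setup:

{{{
#!/bin/sh
# Time single queries against a SPARQL endpoint (e.g. one served by 4s-httpd)
# and average the results of repeated runs.
ENDPOINT="${ENDPOINT:-http://localhost:8080/sparql/}"   # assumed endpoint URL

time_query_ms() {
    # POST one SPARQL query file and print the wall-clock milliseconds.
    start=$(date +%s%N)
    curl -s -o /dev/null --data-urlencode "query@$1" "$ENDPOINT"
    end=$(date +%s%N)
    echo $(( (end - start) / 1000000 ))
}

average() {
    # Integer average of the millisecond timings given as arguments.
    total=0
    count=0
    for ms in "$@"; do
        total=$(( total + ms ))
        count=$(( count + 1 ))
    done
    echo $(( total / count ))
}
}}}

For each query in the mix, one would collect five timings with `time_query_ms` and pass them to `average` to obtain the per-query average reported in the sections below.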