Changes between Version 33 and Version 34 of ~FederatedBench

Timestamp:
2015/01/09 17:42:42 (10 years ago)
Author:
wu
Comment:

--

Legend:

Unmodified
Added
Removed
Modified
  • ~FederatedBench

    v33 v34  
    17 17    * [#approach Method]
    18 18
    19        * Result&Conclusion[wiki:result => Result&Conclusion]
       19     * Result[wiki:result => Result]
       20
       21     * [#conclusion Conclusion]
    20 22
    21 23
     
    36 38 We use five real biological SPARQL endpoints and design five basic queries, varying the number of endpoints that are actually queried, the number of triple patterns (from 4 to 9), and the number of results (from 5 to 11,000). Queries 3 and 5 are additionally rewritten with a "LIMIT 100" clause. To keep the server and network environment stable, we execute each query sequentially for all engines and repeat the run five times; we then discard the largest value and average the remaining four. To measure performance when users issue SPARQL 1.1 federated queries directly against an endpoint instead of through a federated query engine, we rewrite all five queries with SERVICE keywords, swap the order of the two SERVICE clauses, and execute the query on one of the five endpoints (see the sketch after this diff).
    37 39
       40 === Conclusion ===#conclusion
       41 1. Although many SPARQL endpoints now support SPARQL 1.1 queries, they cannot take the place of federated query engines.
       42 2. FedX performs well in both ease of use and response time.
       43 3. All of these systems can finish a light query.
       44 4. Neither FedX nor ADERIS needs pre-computed statistics, which makes them easy to use. However, ADERIS queries the predicate information of all the endpoints on the fly, which incurs a large cost.
       45 5. SPLENDID performs better than every engine except FedX, but it needs pre-computed predicate and other statistics.
       46 6. ANAPSID is the only engine built on a non-Java platform. It shows some weakness in parsing queries, which prevents its performance from being measured accurately.
     47
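
For illustration, here is a minimal sketch of the kind of SERVICE rewrite described in the methodology above. The prefixes, endpoint URLs, and triple patterns are placeholders and are not the actual benchmark queries; only the overall shape (two SERVICE clauses whose order can be swapped, optionally capped with "LIMIT 100") follows the description.

{{{
PREFIX up:   <http://purl.uniprot.org/core/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?protein ?label
WHERE {
  # Remote pattern evaluated by the first endpoint (placeholder URL)
  SERVICE <http://endpointA.example.org/sparql> {
    ?protein a up:Protein ;
             up:encodedBy ?gene .
  }
  # Remote pattern evaluated by the second endpoint; swapping the order of
  # the two SERVICE clauses yields the second query variant described above
  SERVICE <http://endpointB.example.org/sparql> {
    ?gene rdfs:label ?label .
  }
}
LIMIT 100
}}}

Since a plain SPARQL 1.1 endpoint typically evaluates SERVICE clauses in the order they are written, the two orderings can differ in the number of intermediate bindings shipped between endpoints, which is presumably what swapping the clauses is meant to probe.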