Changes between Version 33 and Version 34 of ~FederatedBench
- Timestamp:
- 2015/01/09 17:42:42 (10 years ago)
Legend:
- Unmodified
- Added
- Removed
- Modified
Changes to the table of contents:
 * [#approach Method] (unchanged)
 * [wiki:result Result&Conclusion] changed to [wiki:result Result]
 * [#conclusion Conclusion] (added)

Unchanged context (Method):

We use five real biological SPARQL endpoints and designed five basic queries, varying the number of endpoints actually queried, the number of triple patterns (from 4 to 9), and the number of results (from 5 to 11000). We also rewrite queries 3 and 5 with a "LIMIT 100" clause. To keep the server and network environment stable, we execute each query sequentially for all engines and repeat this five times; we then remove the biggest value and take the average of the other four. To test the performance when users issue federated SPARQL 1.1 queries directly against an endpoint instead of using a federated query engine, we rewrite all five queries with SERVICE keywords, change the order of the two SERVICE clauses, and execute the query on one of the five endpoints.

Added section:

=== Conclusion ===#conclusion
 1. Although many SPARQL endpoints now support SPARQL 1.1 queries, they cannot take the place of federated query engines.
 2. FedX shows good performance, both in ease of use and in response time.
 3. All of these systems can finish a light query.
 4. Neither FedX nor ADERIS needs pre-computed statistics, which makes them easy to use. However, ADERIS queries the predicate information of all the endpoints on the fly, which incurs a large cost.
 5. SPLENDID shows the best performance after FedX, but needs pre-computed predicates and other statistics.
 6. ANAPSID is the only engine built on a non-Java platform. It shows some weakness in parsing queries, which means its performance cannot be measured well.
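The timing methodology above (execute each query five times, discard the largest measurement, and average the remaining four) can be sketched as follows. This is a minimal illustration of the aggregation rule only; the function name and sample values are our own, not part of the benchmark.

```python
def aggregate_runtime(samples):
    """Drop the single largest of five measurements and average the rest.

    This mirrors the benchmark's rule: repeat a query five times, remove
    the biggest value, and take the mean of the other four.
    """
    if len(samples) != 5:
        raise ValueError("expected five repeated measurements")
    trimmed = sorted(samples)[:-1]  # remove the biggest value
    return sum(trimmed) / len(trimmed)

# Example (hypothetical runtimes in seconds): the outlier run (90) is
# discarded, so the result is (11 + 12 + 13 + 14) / 4 = 12.5.
print(aggregate_runtime([12, 11, 90, 13, 14]))  # -> 12.5
```

Dropping only the maximum (rather than both extremes) matches the stated procedure and guards against a single slow run caused by a transient network hiccup.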