SEAGENT is a framework that facilitates the development of multi-agent systems in the Semantic Web environment.
WoDQA is a federated linked data query engine that can be used to query datasets in the Linked Open Data cloud or in enterprise linked data clouds.
The Ege University Linked Open Data Portal, initiated by the SEAGENT research group, is the first Linked Open Data effort in Turkey.

To evaluate WoDQA with the FedBench Cross Domain and Life Sciences queries, we developed a test module that executes the queries through WoDQA. To use this module, follow the steps below. First, add the following dependency and repository definitions to your pom.xml:

    <dependencies>
        <dependency>
            <groupId>Seagent</groupId>
            <artifactId>VOIDExtractor</artifactId>
            <version>0.0.1-20130415</version>
        </dependency>
    </dependencies>
    <repositories>
        <repository>
            <id>seagent</id>
            <name>Seagent Repository</name>
            <url>http://seagent.ege.edu.tr/etmen/snapshots</url>
        </repository>
    </repositories>

  • Extract VOIDExtractor-0.0.1-20130415.jar into a folder (e.g. jar xf VOIDExtractor-0.0.1-20130415.jar). If you added the dependency via Maven, you can find the jar in your local repository under ~/.m2/repository/Seagent/VOIDExtractor/.
  • Create the /allVoids/cleansed folder hierarchy in your project and copy the "09datasets, ..., 73datasets" folders extracted from VOIDExtractor-0.0.1-20130415.jar into the /allVoids/cleansed directory.
  • Load the FedBench datasets into your local RDF server and expose each one as a SPARQL endpoint. Then replace the value of the "void:sparqlEndpoint" property in each VoID document with the endpoint you opened; see the sketch after the table below.
  • The mapping between VoID documents, dataset names, and local SPARQL endpoints is shown below:

VoID document   Dataset      SPARQL endpoint
datasets64      DBpedia      http://localhost:7000/sparql/
datasets65      LinkedMDB    http://localhost:2500/sparql/
datasets66      NYTimes      http://localhost:9000/sparql/
datasets67      KEGG         http://localhost:4000/sparql/
datasets68      ChEBI        http://localhost:3000/sparql/
datasets69      DrugBank     http://localhost:8000/sparql/
datasets70      GeoNames     http://localhost:2000/sparql/
datasets71      SwDogFood    http://localhost:5500/sparql/
datasets72      Jamendo      http://localhost:5000/sparql/
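
Editing each VoID document by hand is error-prone, so the replacement can be scripted. The sketch below is a minimal example, assuming Apache Jena (with the Jena 2.x package names of this period) is on the classpath and the VoID documents are serialized as Turtle; the file path in main is hypothetical, so point it at your actual documents under /allVoids/cleansed:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.List;

import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;
import com.hp.hpl.jena.rdf.model.Resource;

public class VoidEndpointRewriter {

    private static final String VOID_NS = "http://rdfs.org/ns/void#";

    // Replaces every void:sparqlEndpoint value in the given VoID document
    // with the local endpoint URL.
    public static void rewrite(String voidFile, String localEndpoint) throws Exception {
        Model model = ModelFactory.createDefaultModel();
        model.read(new FileInputStream(voidFile), null, "TURTLE");

        Property sparqlEndpoint = model.createProperty(VOID_NS, "sparqlEndpoint");
        Resource endpoint = model.createResource(localEndpoint);

        // Collect the dataset resources first so the model is not modified while iterating.
        List<Resource> datasets = model.listSubjectsWithProperty(sparqlEndpoint).toList();
        for (Resource dataset : datasets) {
            dataset.removeAll(sparqlEndpoint);             // drop the published endpoint
            dataset.addProperty(sparqlEndpoint, endpoint); // point at the local one
        }
        model.write(new FileOutputStream(voidFile), "TURTLE");
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical path and endpoint, matching the datasets64/DBpedia row above.
        rewrite("allVoids/cleansed/73datasets/datasets64", "http://localhost:7000/sparql/");
    }
}
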
  • Now you can evaluate a FedBench query with the sample code below:

// initialize the environment
EvaluationTest.beforeClass();
// create the evaluation test object
EvaluationTest evaluationTest = new EvaluationTest();
// evaluate the queries you want...
evaluationTest.crossDomainQuery4Test();
// tear down the environment
EvaluationTest.afterClass();

  • You can evaluate all of the FedBench queries by extending EvaluationTest and running the subclass as a JUnit test; a plain-Java runner sketch follows the class below:

public class ExtendedEvaluationTest extends EvaluationTest {
    // No body is needed: every @Test method inherited from EvaluationTest
    // is executed when this class is run as a JUnit test.
}
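
If you prefer to trigger the suite from plain Java instead of an IDE or build tool, a small runner can delegate to JUnit. The sketch below assumes JUnit 4 is on the classpath; the class name RunAllFedBenchQueries is only illustrative:

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public class RunAllFedBenchQueries {
    public static void main(String[] args) {
        // Runs every @Test method inherited by ExtendedEvaluationTest.
        Result result = JUnitCore.runClasses(ExtendedEvaluationTest.class);
        for (Failure failure : result.getFailures()) {
            System.out.println(failure);
        }
        System.out.println("Queries run: " + result.getRunCount()
                + ", failures: " + result.getFailureCount());
    }
}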

Sample evaluation results are listed below:

1-a) The table below compares FedX, SPLENDID, and WoDQA by query evaluation time until the first result is returned:

Query    FedX   SPLENDID   WoDQA
CD-1       15        110      63
CD-2      330         80      49
CD-3      109        103      88
CD-4      100         85      72
CD-5       97        101      79
CD-6      281      10000     172
CD-7      324       3000      42
LS-1       47        200      20
LS-2       16        400      24
LS-3     1470      20000     399
LS-4        1        800      51
LS-5      480      21000     217
LS-6       34       1000      44
LS-7      481      20000     183

1-b) A chart of the first-result evaluation times is shown below:

2-a) The table below compares FedX, SPLENDID, and WoDQA by the number of requests sent to the endpoints during the query execution phase:

Query    FedX   SPLENDID   WoDQA
CD-1        7         26       2
CD-2        2          2       2
CD-3       23          2       2
CD-4       38          4       2
CD-5       18          2       2
CD-6      185         10       5
CD-7      138          5       2
LS-1        1          1       1
LS-2       18         26       6
LS-3     2059          2       2
LS-4        3          2       2
LS-5      458          8       4
LS-6       45          8       2
LS-7      485          4       2

2-b) A chart of the request counts is shown below:

3-a) The table below compares FedX, SPLENDID, and WoDQA by the number of datasets selected for each query:

Query    FedX   SPLENDID   WoDQA
CD-1        9          9       2
CD-2        2          2       2
CD-3        8          2       2
CD-4        8          3       2
CD-5        8          2       2
CD-6        6          6       3
CD-7        8          2       2
LS-1        1          1       1
LS-2        9          9       4
LS-3        8          2       2
LS-4        2          2       2
LS-5        5          5       3
LS-6        5          5       2
LS-7        3          3       2

3-b) A chart of the selected dataset counts is shown below:

4-a) The table below compares FedX, SPLENDID, and WoDQA by the number of ASK queries sent during the dataset selection phase:

Query    FedX   SPLENDID   WoDQA
CD-1       18         27       8
CD-2       27         10       8
CD-3       45          2       4
CD-4       36          5       7
CD-5       36          1       3
CD-6       36          1      14
CD-7       36          1       3
LS-1       18          0       2
LS-2       18         27       4
LS-3       45          1       4
LS-4       63          2       7
LS-5       54          1      14
LS-6       45          2      12
LS-7       45          1       7

4-b) A chart of the ASK query counts is shown below:

5-a) The table below lists WoDQA's dataset selection times as the dataset count increases (WoDQA-n denotes a federation of n candidate datasets):

Query   WoDQA-9   WoDQA-20   WoDQA-30   WoDQA-40   WoDQA-50   WoDQA-60   WoDQA-73
CD-1        133        998       1485       1622       1725       1800       1906
CD-2         99       1017       1356       1418       1545       2049       1772
CD-3         72        148        331        341        350        376        393
CD-4        104        482       1215       2356       2916       3102       4129
CD-5         75        149        247        255        256        299        323
CD-6        131       2764       3750       6738       8810       9476      11731
CD-7         46        252        250        258        329        327        469
LS-1         20         34        126        126        124        132        127
LS-2         53        140        151        184        184        204        254
LS-3         46         64        161        165        173        204        226
LS-4         79        128        128        134        133        134        152
LS-5        120       1099       1335       1347       1591       1935       2828
LS-6        115       1106       1341       1358       1621       1938       2712
LS-7         74         87         92         92         84         82        403

5-b) A chart of the dataset selection times for increasing dataset counts is shown below: