DBToaster
Scala Code Generation
Warning: This API is subject to changes in future releases.

Note: Compiling and running queries with the Scala backend requires the Scala compiler. Please refer to Installation for more details.

1. Compiling and running a query

DBToaster generates a JAR file for a query when using the -l scala and -c <file> switches:

$> cat examples/queries/simple/rst.sql
CREATE STREAM R(A int, B int)
  FROM FILE 'examples/data/simple/r.dat' LINE DELIMITED csv;
CREATE STREAM S(B int, C int)
  FROM FILE 'examples/data/simple/s.dat' LINE DELIMITED csv;
CREATE STREAM T(C int, D int)
  FROM FILE 'examples/data/simple/t.dat' LINE DELIMITED csv;
SELECT sum(A*D) AS AtimesD FROM R,S,T WHERE R.B=S.B AND S.C=T.C;

$> bin/dbtoaster -c test.jar -l scala examples/queries/simple/rst.sql

The command above compiles the query to test.jar, which can be run as follows:

$> java -classpath "test.jar:lib/dbt_scala/*" ddbt.gen.Dbtoaster
Java 1.7.0_45, Scala 2.10.3
Time: 0.008s (30/0)
ATIMESD: 306

After processing all insertions and deletions, the final result is printed.

Note for Windows users: When running compiled Scala programs under Cygwin or directly under Windows, use Windows-style classpath separators (i.e., semicolons). For instance:

$> java -cp ".\test.jar;.\lib\dbt_scala\akka-actor_2.10-2.2.3.jar;.\lib\dbt_scala\dbtoaster_2.10-2.1-lms.jar;.\lib\dbt_scala\scala-library-2.10.2.jar;.\lib\dbt_scala\config-1.0.2.jar" ddbt.gen.Dbtoaster
Java 1.7.0_65, Scala 2.10.2
Time: 0.002s (30/0)
ATIMESD: 306

2. Scala API Guide

In the previous example, we used the standard main function to test the query. To use the query in a real application, however, the query class has to be driven from within that application.

The following listing shows a simple example application that communicates with the query class. The communication between the application and the query class is handled using Akka.

package org.dbtoaster

import ddbt.gen._
import ddbt.lib.Messages._
import akka.actor._

object ExampleApp {
  def main(args: Array[String]) {
    val system = ActorSystem("mySystem")
    val q = system.actorOf(Props[Dbtoaster], "Query")

    // Send events
    q ! TupleEvent(0, TupleInsert, "R", List(5L, 2L))
    q ! TupleEvent(1, TupleInsert, "S", List(2L, 3L))
    q ! TupleEvent(2, TupleInsert, "T", List(3L, 4L))

    // Retrieve result
    val to = akka.util.Timeout(1L << 42)
    val result = scala.concurrent.Await.result(
      akka.pattern.ask(q, EndOfStream)(to), to.duration
    ).asInstanceOf[(StreamStat, List[Any])]
    println("Result: " + result._2(0))
    system.shutdown
  }
}

This example first creates an ActorSystem and then launches the query actor. The events are sent to the query actor using TupleEvent messages with the following structure:

Argument           Comment
ord : Int          Order number of the event.
op : TupleOp       TupleInsert for an insertion, TupleDelete for a deletion.
stream : String    Name of the stream as it appears in the SQL file.
data : List[Any]   The values of the tuple being inserted into or deleted from the stream.
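To make the role of these fields concrete, the following self-contained sketch mimics the message shape with simplified stand-in definitions (the real ones live in ddbt.lib.Messages and are more elaborate) and dispatches events on the op and stream fields, as the query actor's receive method does:

```scala
object MessageSketch {
  // Simplified stand-ins for the definitions in ddbt.lib.Messages.
  sealed trait TupleOp
  case object TupleInsert extends TupleOp
  case object TupleDelete extends TupleOp
  case class TupleEvent(ord: Int, op: TupleOp, stream: String, data: List[Any])

  // Toy "query": the net number of tuples currently in stream R.
  // Dispatch happens on (op, stream), mirroring the generated receive method.
  def countR(events: Seq[TupleEvent]): Long =
    events.foldLeft(0L) { (n, e) =>
      (e.op, e.stream) match {
        case (TupleInsert, "R") => n + 1
        case (TupleDelete, "R") => n - 1
        case _                  => n // events for other streams ignored in this sketch
      }
    }
}
```

In the real API these messages are not handled locally but sent to the query actor, e.g. q ! TupleEvent(0, TupleInsert, "R", List(5L, 2L)).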

To retrieve the final result, an EndOfStream message is sent to the query actor. Alternatively, the intermediate result of a query can be retrieved using a GetSnapshot message with the following structure:

Argument           Comment
view : List[Int]   Indices of the maps of which a snapshot should be taken.

Assuming that the query has been compiled to rst.jar (analogous to test.jar above) and the example code has been saved as example.scala, the application can be compiled with:

$> scalac -classpath "rst.jar:lib/dbt_scala/*" -d example.jar example.scala

It can then be launched with the following command:

$> java -classpath "rst.jar:lib/dbt_scala/*:example.jar" org.dbtoaster.ExampleApp
Result: 20

3. Generated Code Reference

The Scala code generator generates a single file containing an object and an actor for a query. Both of them are called Query by default if no other name has been specified using the -n switch.

The code generated for the previous example looks as follows:

package ddbt.gen

import ddbt.lib._
...

// Query object used for standalone binaries
object Query {
  import Helper._
  def execute(args: Array[String], f: List[Any] => Unit) = ...
  def main(args: Array[String]) {
    execute(args, (res: List[Any]) => {
      println("ATIMESD:\n" + M3Map.toStr(res(0)) + "\n")
    })
  }
}

// Query actor
class Query extends Actor {
  import ddbt.lib.Messages._
  import ddbt.lib.Functions._

  // Maps/singletons that hold intermediate results
  var ATIMESD = 0L
  val ATIMESD_mT1 = M3Map.make[Long, Long]();
  ...

  // Triggers
  def onAddR(r_a: Long, r_b: Long) {
    ATIMESD += (ATIMESD_mR1.get(r_b) * r_a);
    ATIMESD_mT1_mR1.slice(0, r_b).foreach { (k1, v1) =>
      val atimesd_mtt_c = k1._2;
      ATIMESD_mT1.add(atimesd_mtt_c, (v1 * r_a));
    }
    ATIMESD_mS1.add(r_b, r_a);
  }
  def onDelR(r_a: Long, r_b: Long) { ... }
  ...
  def onDelT(t_c: Long, t_d: Long) { ... }
  def onSystemReady() { }
  ...

  def receive = {
    case TupleEvent(ord, TupleInsert, "R", List(v0: Long, v1: Long)) =>
      if (t1 > 0 && (tN & 127) == 0) {
        val t = System.nanoTime
        if (t > t1) { t1 = t; tS = 1; context.become(receive_skip) } else tN += 1
      } else tN += 1;
      onAddR(v0, v1)
    ...
    case StreamInit(timeout) =>
      onSystemReady(); t0 = System.nanoTime
      if (timeout > 0) t1 = t0 + timeout * 1000000L
    case EndOfStream | GetSnapshot(_) =>
      t1 = System.nanoTime
      sender ! (StreamStat(t1 - t0, tN, tS), List(ATIMESD))
  }
}

3.1. The query object

The query object contains the code used by the standalone binary to execute the query. Its execute method reads from the input streams specified in the query file and sends them to the query actor. The main method calls execute and prints the result when all tuples have been processed.

3.2. The query actor

The actual query processor lives in the query actor. Events such as tuple insertions and deletions are communicated to the actor via actor messages, as described previously. The receive method routes events to the appropriate trigger method: for every stream R, there is an insertion trigger onAddR and a deletion trigger onDelR. These trigger methods are responsible for updating the intermediate result, which is held in the map and singleton data structures declared at the top of the actor.
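To illustrate the trigger mechanics, here is a self-contained sketch (independent of the generated code; it uses plain mutable maps instead of M3Map, and its map names are made up for this example) that incrementally maintains sum(A*D) for the R, S, T query from Section 1:

```scala
import scala.collection.mutable

// Sketch of incremental maintenance for
//   SELECT sum(A*D) FROM R,S,T WHERE R.B=S.B AND S.C=T.C
// Only the insertion triggers are shown; deletions would subtract symmetrically.
object RstSketch {
  var result = 0L
  val sumAByB = mutable.Map[Long, Long]().withDefaultValue(0L) // per B: sum of A over R
  val sumDByC = mutable.Map[Long, Long]().withDefaultValue(0L) // per C: sum of D over T
  val sPairs  = mutable.ArrayBuffer[(Long, Long)]()            // (B, C) pairs seen on S
  // Pre-joined auxiliaries, playing the role of the generated ATIMESD_m* maps:
  val joinedDByB = mutable.Map[Long, Long]().withDefaultValue(0L) // per B: sum of D reachable via S
  val joinedAByC = mutable.Map[Long, Long]().withDefaultValue(0L) // per C: sum of A reachable via S

  def onAddR(a: Long, b: Long): Unit = {
    result += a * joinedDByB(b)                             // new joins: a times all matching D
    for ((sb, sc) <- sPairs if sb == b) joinedAByC(sc) += a // keep auxiliaries consistent
    sumAByB(b) += a
  }

  def onAddS(b: Long, c: Long): Unit = {
    result += sumAByB(b) * sumDByC(c) // the new pair links all matching A and D values
    joinedDByB(b) += sumDByC(c)
    joinedAByC(c) += sumAByB(b)
    sPairs += ((b, c))
  }

  def onAddT(c: Long, d: Long): Unit = {
    result += joinedAByC(c) * d
    for ((sb, sc) <- sPairs if sc == c) joinedDByB(sb) += d
    sumDByC(c) += d
  }
}
```

Replaying the three insertions from the example application (R(5,2), S(2,3), T(3,4)) leaves result at 20, matching the output shown earlier; each trigger does constant work per matching S pair instead of recomputing the join.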

The onSystemReady trigger is responsible for loading static information (CREATE TABLE statements in the query file) before the actual processing begins.

The EndOfStream message is sent from the event source when it is exhausted. The query actor replies to this message with the current processing statistics (processing time, number of tuples processed, number of tuples skipped) and one or more query results.

The GetSnapshot message can be used by an application to access the intermediate result. The query actor replies to this message with the current processing statistics and the results that the message asks for.

The whole process is guarded by a timeout. If the timeout is reached, the actor stops processing tuples.

3.3. Partial materialization

Some of the work involved in maintaining the result of a query can be saved by materializing it only partially during updates and producing the full result only when it is requested (i.e., when all tuples have been processed). This behaviour is especially desirable when the results are queried less often than they are updated; it can be enabled through the -F EXPRESSIVE-TLQS command-line flag.

Below is an example of a query where partial materialization is indeed beneficial (this query can be found as examples/queries/simple/r_lift_of_count.sql in the DBToaster download).

CREATE STREAM R(A int, B int)
  FROM FILE 'examples/data/tiny/r.dat' LINE DELIMITED csv ();

SELECT r2.C FROM (
  SELECT r1.A, COUNT(*) AS C FROM R r1 GROUP BY r1.A
) r2;

When this query is compiled with -F EXPRESSIVE-TLQS, the generated code contains functions that compute the top-level results on demand:

def COUNT() = {
  val mCOUNT = M3Map.make[Long, Long]()
  val agg1 = M3Map.temp[Long, Long]()
  COUNT_1_E1_1.foreach { (r2_a, v1) =>
    val l1 = COUNT_1_E1_1.get(r2_a);
    agg1.add(l1, (if (v1 != 0) 1L else 0L));
  }
  agg1.foreach { (r2_c, v2) => mCOUNT.add(r2_c, v2) }
  mCOUNT
}
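The effect of this deferral can be sketched without any DBToaster dependencies. In the sketch below (names are illustrative, not the generated ones), each insertion into R only updates the per-group count, which corresponds to the base map the generated code maintains eagerly; the top-level result, a histogram of the group counts, is computed only when asked for, in the manner of the generated COUNT() function:

```scala
import scala.collection.mutable

object LazyCountSketch {
  // Maintained eagerly on every insert into R: COUNT(*) per value of A.
  val countByA = mutable.Map[Long, Long]().withDefaultValue(0L)

  def onAddR(a: Long, b: Long): Unit = countByA(a) += 1

  // Materialized only on request: for each count value (the query's r2.C),
  // how many groups currently have that count.
  def count(): Map[Long, Long] = {
    val m = mutable.Map[Long, Long]().withDefaultValue(0L)
    for ((_, c) <- countByA if c != 0) m(c) += 1
    m.toMap
  }
}
```

With updates far more frequent than reads, the per-insert cost stays constant while the (linear) histogram pass is paid only at read time, which is exactly the trade-off -F EXPRESSIVE-TLQS targets.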