DuBStep Checkpoint 1

In this project, you will implement a simple SQL query evaluator with support for Select, Project, Join, Bag Union, and Aggregate operations.  You will be given a set of data files and schema information, and you will be expected to evaluate multiple SELECT queries over those data files.

Your code is expected to evaluate the SELECT statements on the provided data and produce output in a standardized form. Your code will be evaluated for both correctness and performance (in comparison to a naive evaluator based on iterators and nested-loop joins).

Parsing SQL

A parser converts a human-readable string into a structured representation of the program (or query) that the string describes. A fork of the open-source JSQLParser SQL parser will be provided for your use.  The JAR may be downloaded from:

https://odin.cse.buffalo.edu/software/jsqlparser/jsqlparser.jar

And documentation for the fork is available at

https://odin.cse.buffalo.edu/software/jsqlparser

You are not required to use this parser (i.e., you may write your own if you like). However, we will be testing your code on SQL that is guaranteed to parse with JSqlParser.

Basic use of the parser requires a java.io.Reader or java.io.InputStream from which the data to be parsed can be read (for example, a java.io.FileReader). Let's assume you've created one already (of either type) and called it inputFile.

CCJSqlParser parser = new CCJSqlParser(inputFile);
Statement statement;
while((statement = parser.Statement()) != null){
  // `statement` now has one of the several 
  // implementations of the Statement interface
}
// End-of-file.  Exit!

At this point, you'll need to figure out what kind of statement you're dealing with. For this project, we'll be working with Select and CreateTable. There are two ways to do this: Visitor classes, or the instanceof operator. We strongly recommend using instanceof:

if(statement instanceof Select) {
  Select selectStatement = (Select)statement;
  // handle the select
} else if(statement instanceof CreateTable) {
  // and so forth
}
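
Once you know you have a Select, you will typically walk its parse tree to find the select items, the FROM item, and the WHERE expression. The sketch below assumes the fork exposes the usual JSqlParser select classes (SelectBody, PlainSelect, SelectExpressionItem, and so on); check the fork's documentation for the exact class and method names.

import net.sf.jsqlparser.expression.Expression;
import net.sf.jsqlparser.schema.Table;
import net.sf.jsqlparser.statement.select.FromItem;
import net.sf.jsqlparser.statement.select.PlainSelect;
import net.sf.jsqlparser.statement.select.Select;
import net.sf.jsqlparser.statement.select.SelectBody;
import net.sf.jsqlparser.statement.select.SelectExpressionItem;

public class SelectInspector {
  // Walk a parsed SELECT statement and pull out the pieces a simple evaluator needs.
  public static void inspect(Select selectStatement) {
    SelectBody body = selectStatement.getSelectBody();
    if (body instanceof PlainSelect) {              // a single SELECT ... FROM ... WHERE ...
      PlainSelect ps = (PlainSelect) body;
      for (Object item : ps.getSelectItems()) {     // entries of the select list
        if (item instanceof SelectExpressionItem) {
          Expression e = ((SelectExpressionItem) item).getExpression();
          // evaluate e once per tuple (see the Expressions section below)
        }
        // other SelectItem implementations cover "*" and "table.*"
      }
      FromItem from = ps.getFromItem();             // a Table, a subselect, etc.
      if (from instanceof Table) {
        String tableName = ((Table) from).getName();
      }
      Expression where = ps.getWhere();             // null when there is no WHERE clause
      // ps.getJoins() and the group-by/order-by accessors follow the same pattern
    }
    // other SelectBody implementations represent set operations such as UNION ALL
  }
}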

Expressions

JSQLParser includes an interface called Expression that represents a primitive-valued expression parse tree.  UB's JSQLParser fork includes a class called Eval that can be used to evaluate Expression objects. To use the Eval class, you will need to define a method for dereferencing Column objects.  For example, if I have a Map called tupleSchema that maps column names to their positions in the tuple, and an ArrayList called tuple that contains the tuple currently being evaluated, I might write:

public PrimitiveValue eval(Column x){
  int colID = tupleSchema.get(x.getName());
  return tuple.get(colID);
}

After doing this, you can use Eval.eval() to evaluate any expression in the context of tuple.
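
For concreteness, here is one way the pieces might fit together: a small evaluator class that extends Eval, carries the current tuple and its schema, and supplies the column-dereferencing method shown above. The package for Eval and the exception its methods may throw are assumptions here; consult the fork's documentation linked above.

import java.util.ArrayList;
import java.util.Map;

import net.sf.jsqlparser.eval.Eval;                // assumed package; check the fork's docs
import net.sf.jsqlparser.expression.PrimitiveValue;
import net.sf.jsqlparser.schema.Column;

public class TupleEval extends Eval {
  private final Map<String, Integer> tupleSchema;  // column name -> position in the tuple
  private final ArrayList<PrimitiveValue> tuple;   // the row currently being evaluated

  public TupleEval(Map<String, Integer> tupleSchema, ArrayList<PrimitiveValue> tuple) {
    this.tupleSchema = tupleSchema;
    this.tuple = tuple;
  }

  // Dereference a column by looking up its position in the current tuple.
  @Override
  public PrimitiveValue eval(Column x) {
    int colID = tupleSchema.get(x.getName());
    return tuple.get(colID);
  }
}

With such a class in hand, evaluating a WHERE clause or a SELECT expression for a row amounts to calling eval(expression) on an instance built from that row; note that the call may declare a checked exception (such as SQLException) in the fork.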

Source Data

Because you are implementing a query evaluator and not a full database engine, there will not be any tables -- at least not in the traditional sense of persistent objects that can be updated and modified. Instead, you will be given a table schema and a pipe-delimited data file containing the table's instance. To keep things simple, we will use the CREATE TABLE statement to define a relation's schema. To reiterate, a CREATE TABLE statement appears in the query file only to give you a schema. You do not need to allocate any resources for the table in reaction to a CREATE TABLE statement -- simply save the schema that you are given for later use. The SQL types (and their corresponding Java types) used in this project are as follows:

SQL Type | Java Equivalent
string   | StringValue
varchar  | StringValue
char     | StringValue
int      | LongValue
decimal  | DoubleValue
date     | DateValue
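
For example, the schema carried by a CreateTable statement might be saved along these lines. The sketch assumes the fork exposes the standard JSqlParser CreateTable accessors (getTable(), getColumnDefinitions(), getColDataType()); the SchemaCatalog class itself is purely illustrative.

import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

import net.sf.jsqlparser.statement.create.table.ColumnDefinition;
import net.sf.jsqlparser.statement.create.table.CreateTable;

public class SchemaCatalog {
  // table name -> (column name -> declared SQL type), preserving declaration order
  private final Map<String, LinkedHashMap<String, String>> schemas = new HashMap<>();

  // Remember the schema carried by a CREATE TABLE statement for later use.
  public void register(CreateTable ct) {
    LinkedHashMap<String, String> columns = new LinkedHashMap<>();
    for (Object def : ct.getColumnDefinitions()) {   // list may be raw in older parser versions
      ColumnDefinition cd = (ColumnDefinition) def;
      columns.put(cd.getColumnName(), cd.getColDataType().getDataType().toLowerCase());
    }
    schemas.put(ct.getTable().getName(), columns);
  }

  public LinkedHashMap<String, String> lookup(String tableName) {
    return schemas.get(tableName);
  }
}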

In addition to the schema, you will be given a data directory containing multiple data files whose names correspond to the table names given in the CREATE TABLE statements. For example, let's say that you see the following statement in your query file:

CREATE TABLE R(A int, B int, C int);

That means that the data directory contains a data file called 'R.dat' that might look like this:

1|1|5
1|2|6
2|3|7

Each line of text (see java.io.BufferedReader.readLine()) corresponds to one row of data. Fields within a row are delimited by the vertical pipe ('|') character.  Integers and floats are stored in a form recognized by Java’s Long.parseLong() and Double.parseDouble() methods. Dates are stored in YYYY-MM-DD form, where YYYY is the 4-digit year, MM is the 2-digit month number, and DD is the 2-digit day of the month. Strings are stored unescaped and unquoted, and are guaranteed to contain no vertical pipe characters.
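
Putting the schema and the file format together, a loader for one table's .dat file might look like the sketch below. It assumes that the fork's value classes (LongValue, DoubleValue, StringValue, DateValue) implement the PrimitiveValue type used above, and that their constructors accept the raw field text; some stock JSqlParser constructors expect quoted literals, so verify against the fork before relying on this.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;

import net.sf.jsqlparser.expression.DateValue;
import net.sf.jsqlparser.expression.DoubleValue;
import net.sf.jsqlparser.expression.LongValue;
import net.sf.jsqlparser.expression.PrimitiveValue;
import net.sf.jsqlparser.expression.StringValue;

public class TableLoader {

  // columnTypes maps column name -> declared SQL type, in declaration order
  public static List<ArrayList<PrimitiveValue>> load(
      String datFile, LinkedHashMap<String, String> columnTypes) throws IOException {
    List<ArrayList<PrimitiveValue>> rows = new ArrayList<>();
    try (BufferedReader in = new BufferedReader(new FileReader(datFile))) {
      String line;
      while ((line = in.readLine()) != null) {
        String[] fields = line.split("\\|", -1);   // -1 keeps trailing empty fields
        ArrayList<PrimitiveValue> row = new ArrayList<>();
        int i = 0;
        for (String type : columnTypes.values()) {
          row.add(parseField(fields[i++], type));
        }
        rows.add(row);
      }
    }
    return rows;
  }

  private static PrimitiveValue parseField(String field, String type) {
    switch (type.toLowerCase()) {
      case "int":
        return new LongValue(field);     // the String constructor parses the text internally
      case "decimal":
        return new DoubleValue(field);
      case "date":
        // Some JSqlParser versions expect a quoted 'YYYY-MM-DD' literal here;
        // adjust to match the fork's DateValue constructor.
        return new DateValue(field);
      default:                           // string, varchar, char
        // Likewise, StringValue may expect a quoted argument in some versions.
        return new StringValue(field);
    }
  }
}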

Queries

Your code is expected to support both aggregate and non-aggregate queries built from the operations listed above (selection, projection, join, bag union, and aggregation).  Keep in mind that this is only a minimum requirement.

Output

Your code is expected to output query results in the same format as the input data: one row per line, with field values separated by vertical pipes ('|').
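
For instance, if you represent result tuples as lists of the fork's PrimitiveValue objects, printing a row might look like the sketch below. The toRawString() call is an assumption about the fork's API (StringValue.toString() in stock JSqlParser adds quotes), so check which accessor the fork actually provides for the unquoted text of a value.

import java.util.List;

import net.sf.jsqlparser.expression.PrimitiveValue;

public class ResultPrinter {
  // Print one result row in the same pipe-delimited form as the input files.
  public static void printRow(List<PrimitiveValue> row) {
    StringBuilder line = new StringBuilder();
    for (int i = 0; i < row.size(); i++) {
      if (i > 0) { line.append('|'); }
      line.append(row.get(i).toRawString());   // assumed accessor; see note above
    }
    System.out.println(line);
  }
}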

Example Queries and Data

These are only examples.  Your code will be expected to handle these queries, as well as others.

The SQL implementation used by TPC-H differs in a few subtle ways from the implementation used by JSqlParser, so minor structural rewrites to the queries in the TPC-H specification document will be required.

Queries that conform to the specifications for this project include: Q1, Q3, Q5, Q6, Q8*, Q9, Q10, Q12*, Q14*, Q15*, Q19* (Asterisks mean that the query doesn't meet the spec as written, but can easily be rewritten into one that does)

Code Submission

As before, all .java files in the src directory at the root of your repository will be compiled (and linked against JSQLParser). Also as before, the class dubstep.Main will be invoked with a --data flag naming the directory that contains the .dat files, followed by the SQL file(s) to evaluate.

For example:

$> ls data
R.dat
S.dat
T.dat
$> cat data/R.dat
1|1|5
1|2|6
2|3|7
$> cat query.sql
CREATE TABLE R(A int, B int, C int)
SELECT B, C FROM R WHERE A = 1
$> java -cp build:jsqlparser.jar dubstep.Main --data data query.sql
1|5
2|6

Once again, the data directory contains files named <table name>.dat, where <table name> is the name used in a CREATE TABLE statement. Notice that the effect of a CREATE TABLE statement is not to create a new file, but simply to link the given schema to an existing .dat file. These files use the vertical pipe ('|') as a field delimiter and newlines ('\n') as record delimiters.
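
A hypothetical skeleton for dubstep.Main that accepts this invocation is sketched below; the argument handling is inferred from the example above, and the body of the parsing loop is left for your evaluator to fill in.

package dubstep;

import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;

import net.sf.jsqlparser.parser.CCJSqlParser;   // assumed package; check the fork's docs
import net.sf.jsqlparser.statement.Statement;

public class Main {
  public static void main(String[] args) throws Exception {
    String dataDir = null;
    List<String> queryFiles = new ArrayList<>();

    // Accept "--data <dir>" followed by one or more SQL files, as in the example above.
    for (int i = 0; i < args.length; i++) {
      if ("--data".equals(args[i])) {
        dataDir = args[++i];
      } else {
        queryFiles.add(args[i]);
      }
    }

    for (String queryFile : queryFiles) {
      CCJSqlParser parser = new CCJSqlParser(new FileReader(queryFile));
      Statement statement;
      while ((statement = parser.Statement()) != null) {
        // dispatch on the statement type (Select, CreateTable, ...) and
        // evaluate it against the .dat files found under dataDir
      }
    }
  }
}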

The testing environment is configured with the Oracle JDK, version 1.8.

Grading

Your code will be subjected to a sequence of test cases, most of which are provided in the project code (though different data will be used). The NBA queries (in the examples given above) and the TPC-H queries (under the constraints listed above) are both fair game. For TPC-H, an SF 0.1 (100 MB) dataset will be used. Time constraints are based on the reference implementation for Checkpoint 1.

Query           | Max Credit | Fast Time (full credit) | Slow Time (75% credit) | Cutoff Time (50% credit) | Reference Time
NBA Q1,Q2,Q3,Q4 | 1 point    | 5 s                     | 15 s                   | 30 s                     | 1.5-1.8 s
TPCH Q1, Q6     | 1 point    | 5 s                     | 15 s                   | 30 s                     | 1.4-1.5 s
TPCH Q3         | 2 points   | 200 s                   | 300 s                  | 420 s                    | 170 s
TPCH Q12        | 2 points   | 30 s                    | 60 s                   | 90 s                     | 16 s

Producing the correct result on the test cluster and beating the fast time for a query will earn you full credit for that query. Beating only the slow or cutoff time will earn you 75% or 50% credit, respectively. Your query will be terminated if it runs slower than the cutoff time. The runtime of the reference implementation is given for comparison. Your overall project grade will be the total of your scores on the individual components.

Additionally, there will be a per-query leader-board for all groups who manage to get full credit on the overall assignment. Good luck.


This page last updated 2024-12-09 13:28:25 -0500